CR Drost (kentox) wrote,

Good god...

When statistical mechanics is Done Right™, it looks like Absolute. Fucking. Magic.

I'm serious. When I become a professor, I am going to push, lobby, and bully the faculty into letting me teach the statistical mechanics course. I am going to start the course on the first day with a wizard hat¹, and open with a one-hour lecture that will blow people's minds. It will go like this. (Yes, I have tried delivering this whole lecture into a mirror, timing myself to make sure that it fits into a 1-hour time slot.)


- - -


Welcome to stat mech, I'm Chris Drost, I'll be your professor. Here is the syllabus; read it on your own damn time. Today, I want to blow your mind, completely and totally. Hence, the wizard hat.

We start from two assumptions: first off, that you have these crazy notions of 'distinguishability' in your head, such that two systems which are microscopically different in every way might, to you, seem indistinguishable. Two cups of coffee, for example. Plausible, amirite? We call the things which you *can* distinguish 'macrostates'. And we call all of the actual states of the particles 'microstates'.

Now, the natural first question is to ask, 'how many?' -- how many microstates are there for your macrostate? We call the answer the 'multiplicity' of your macrostate. So, going back to the cup of coffee, what we *can* distinguish is, say, volume and temperature, and total number of particles, and maybe sugar content. What we *can't* distinguish is the N particles -- which each have a 3d position vector and which each have a 3d momentum vector. 6N variables, where N is of the order of 10^22 or so.

Now, just speaking hypothetically, let's say you've got two systems which you want to consider together. System 1 is in macrostate M1 -- whatever that is -- which has multiplicity W1. System 2, likewise, is in macrostate M2, with multiplicity W2. What's the multiplicity of the coupling, (M1, M2)?

Well, for each microstate in M1, there must be W2 different microstates in M2 that could still lead to the state (M1, M2). So we have W1 * W2 different microstates for (M1, M2). Multiplicities multiply, dudes. It's not very surprising that they do, but we don't like things that multiply. We like things that add.
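
(An aside for the programmers in the room: here's a little Python sketch of that counting argument -- my own toy illustration, with made-up coin systems and a made-up 'multiplicity' helper -- which brute-forces the count and confirms that the joint multiplicity is the product.)

    from itertools import product

    # Toy systems: a "microstate" is the full head/tail sequence of some coins,
    # the "macrostate" is the only thing you can see -- the total number of heads.
    def multiplicity(n_coins, n_heads):
        """Brute-force count of microstates compatible with the macrostate."""
        return sum(1 for micro in product((0, 1), repeat=n_coins)
                   if sum(micro) == n_heads)

    W1 = multiplicity(6, 2)   # system 1: macrostate M1 = "2 heads out of 6"
    W2 = multiplicity(8, 5)   # system 2: macrostate M2 = "5 heads out of 8"

    # Joint system: count every pair of microstates compatible with (M1, M2).
    W_joint = sum(1 for m1 in product((0, 1), repeat=6)
                    for m2 in product((0, 1), repeat=8)
                    if sum(m1) == 2 and sum(m2) == 5)

    print(W1, W2, W1 * W2, W_joint)   # 15 56 840 840 -- multiplicities multiply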

So we define what we'll now call the 'entropy,' as S = k log W. It is the order of magnitude of the multiplicity, times a special scaling constant. I hear your complaints: 'Oh, we learned dS = dQ/T in thermodynamics class!' -- don't worry, we'll get around to it.

The important thing here is that you get rid of these ideas that entropy is "the measure of disorder" or something like that. Sure, you tend to be finicky, so the states which you describe as "ordered" tend to have very low multiplicities -- whereas "disordered" is everything that's not "ordered," so it has a very high multiplicity. There are comparatively few ways for my shirts to be hung up on hangers in my closet, versus the ways for them to be in a big heap on my floor. But I want you to get the idea that entropy is just a way of counting multiplicity -- the order of magnitude, in fact, of that multiplicity.

Hammer that into your mind: Entropy is a phenomenon that occurs when certain microstates all "look the same" to you, even though those microstates are all very distinct. Entropy is just a way of counting how many of these microstates there are within the macrostate.

I said that there were two assumptions, and I've tried to expound the first one, which is that there are lots of microstates which you say are 'all the same,' but which are fundamentally different. My second assumption is that nature doesn't care. (*writes NATURE DOESN'T CARE in big bold letters on a chalkboard.*) We assume that nature doesn't care about what you consider the same, or different. This means that the sorts of little microscopic changes that are always happening in this world-in-constant-motion tend to *randomize* the microstate, much like picking a microstate at random for your viewing pleasure. It also means that the macrostate with the highest multiplicity is just the most likely one that you'll observe -- just because it has the highest multiplicity.
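
(Same aside, continued: here's a quick Python sketch -- my own toy model, a thousand coins jostled completely at random -- showing that a randomly chosen microstate almost always lands near the maximum-multiplicity macrostate, just because that's where nearly all the microstates are.)

    import random

    # "Nature doesn't care": shuffle 1000 coins completely at random (a random
    # microstate) and look at the only thing we can see, the number of heads.
    N, trials = 1000, 2000
    draws = [sum(random.randint(0, 1) for _ in range(N)) for _ in range(trials)]

    # The maximum-multiplicity macrostate is N/2 heads; essentially every random
    # microstate lands within a few percent of it, just because there are so many.
    near_max = sum(1 for heads in draws if abs(heads - N // 2) < 50)
    print(near_max / trials)   # ~= 1.0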

Now, you see that I am wearing a wizard hat. I plan on doing magic around now. So let's just watch.

Let's take our system 1 and system 2, which have energies E1 and E2, particle numbers N1 and N2, and volumes V1 and V2. We put them into thermal contact -- they can't exchange particles or volumes, but we put some metal plate between them that's a good conductor of heat. What do our assumptions say will happen after the changes randomize their microstates?

Well, we have some multiplicity W = W1(E1, N1, V1) * W2(E2, N2, V2). We go looking for the maximum by taking a derivative and setting it equal to zero: (∂W/∂E1) = 0, with constant N1, N2, V1, and V2 -- but E2 can change, subject to conservation of energy: E1 + E2 = E = constant. We get that:
∂W/∂E1 = (∂W1/∂E1) W2 + W1 (∂W2/∂E2)(∂E2/∂E1) = 0.
We also have that ∂E2/∂E1 = -1, from our constraint that E1 + E2 = E. (E2 = E - E1.) So we get that:
(1/W1)(∂W1/∂E1) = (1/W2)(∂W2/∂E2)
How's about that? We suddenly derive that, after they jostle around a bit, there will be some quantity that will be the same on either side -- a system property for each system, because only 1's appear on the left hand side of that equation, and only 2's appear on the right. And it is 1/W ∂W/∂E (= ∂S/∂E, to within the constant k), keeping N and V constant. Can you say temperature?

Unfortunately, temperature is not very well-defined on its own; thermodynamics and statistical mechanics have to supply the definition. But inspired by Carnot and the others who defined entropy by dS = dQ / T, we define the 'thermodynamic temperature' T as:
1/T = (∂S/∂E)_{N,V}
What we've just proven, though, is that for this definition of temperature, if you let two systems exchange energy with each other, they'll do so until they come to the same temperature. Useful stuff!
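
(If you want to watch that happen numerically, here's a small Python sketch. It's my own toy illustration, and it borrows the oscillator-counting formula W(q, N) = (q + N - 1 choose N - 1) that shows up near the end of this lecture; the point is just that the most probable split of the energy is precisely the one where ∂S/∂E matches on the two sides.)

    from math import comb, log

    # Two toy systems sharing q_total energy quanta; the counting formula is the
    # quantum oscillator count that reappears at the end of this lecture.
    N1, N2, q_total = 300, 200, 100

    def S(q, N):
        """Entropy in units of k: log of the multiplicity (q + N - 1 choose N - 1)."""
        return log(comb(q + N - 1, N - 1))

    # Scan every way of splitting the energy; the most likely split maximizes S1 + S2.
    splits = [(q1, S(q1, N1) + S(q_total - q1, N2)) for q1 in range(q_total + 1)]
    q1_star = max(splits, key=lambda s: s[1])[0]

    # At that split, the finite-difference dS/dE is (very nearly) equal on both
    # sides -- the two systems have come to the same temperature.
    dS1_dE = S(q1_star + 1, N1) - S(q1_star, N1)
    dS2_dE = S(q_total - q1_star + 1, N2) - S(q_total - q1_star, N2)
    print(q1_star, dS1_dE, dS2_dE)   # roughly 60, and the two slopes agree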

We can play the same trick by now letting the thermal-contact plate move back and forth. The derivatives are exactly the same, except this time we find that (∂S/∂V)_{E,N} is the same on both sides. With a little dimensional analysis (S has units of joules / kelvin; pressure has units of joules / m^3), we have that:
(∂S/∂V)_{E,N} = P/T
And for this definition of pressure, the pressures on both sides of our divider must be equal.

And finally, when we start to let them exchange particles, we define the 'chemical potential' μ as the amount of energy that it's worth to add a particle to the system, and we get that the chemical potential is the same on both sides, as defined by:
(∂S/∂N)_{E,V} = −μ/T
... where the minus sign will be explained shortly.

In total, then, we've got
dS = (1/T) dE + (P/T) dV − (μ/T) dN
Or, solving for dE:
dE = T dS − P dV + μ dN
This is called the 'thermodynamic identity.'
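
(That last rearrangement is just algebra, but if you'd like a machine to grind it out, here's a tiny sympy sketch of it -- nothing deep, just solving the dS relation for dE.)

    import sympy as sp

    T, P, mu, dE, dS, dV, dN = sp.symbols('T P mu dE dS dV dN')

    # dS = (1/T) dE + (P/T) dV - (mu/T) dN, solved for dE:
    relation = sp.Eq(dS, dE / T + (P / T) * dV - (mu / T) * dN)
    print(sp.expand(sp.solve(relation, dE)[0]))   # T*dS - P*dV + dN*mu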

Now you see why the minus sign is correct: it makes μ into the amount of energy which you get when you add a particle (dN = 1). You might wonder if there should have been a minus sign also for pressure, but you can quickly convince yourself that there isn't -- when a gas expands, for example, it usually has to push against something, and that tends to take energy. (Even if you pull the wall away yourself, the particles that bounce against a receding wall tend to come back with lower velocities, so dE should be negative when dV is positive.)
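
(Here's that parenthetical as a throwaway 1-D calculation, with made-up numbers: a particle that bounces elastically off a wall receding at speed u comes back at speed v − 2u, so its energy drops whenever the volume grows.)

    # 1-D elastic bounce off a wall that is receding at speed u: reflect in the
    # wall's frame, transform back, and the particle returns at speed v - 2u.
    m, v, u = 1.0, 10.0, 0.5
    v_after = v - 2 * u
    dE = 0.5 * m * v_after**2 - 0.5 * m * v**2
    print(dE)   # -9.5: dE is negative when dV is positive, so no extra minus sign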

Okay, so that's an example of how the theory lets us define pressure, temperature, and entropy sensibly. Now let's even go further, and do two concrete examples, so that we're REALLY doing magic.

First, the ideal gas. We start by doing our multiplicity calculation, integrating over all of the 3N momentum components and all of the 3N position components. Along the way, we're going to partition each position-momentum pair into boxes of size h, so that we keep W dimensionless. (Hi, quantum mechanics! What are you doing here?) It will not affect anything except the dimension of W, and the classical theorist has no reason to use Planck's constant for h, but let's be pedantic and semi-classical anyway.

We have that:
W = h^{-3N} ∫_{Lp} d^{3N}p ∫_{Lx} d^{3N}x
... where we need to define the limits of integration Lx and Lp for our integrals. Our constraints for the x integral are that all of the N particles must be within some volume V, so that ∫ dx ∫ dy ∫ dz = V for any one of them. Putting all of the 3N dimensions together, the integral over the Lx limits must be just V^N.

We have a different constraint for Lp, because we have a fixed total energy E, so Σ_i p_i^2/(2m) = E. But that's just a volume integral over a 3N-dimensional ball with radius sqrt(2mE). We write the answer to that as just C_{3N} R^{3N}, where C_n is the volume of the unit n-ball [C_n = π^{n/2} / Γ(n/2 + 1)].
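
(That unit-ball formula is easy to sanity-check in Python -- a quick sketch, with a function name of my own invention.)

    from math import pi, gamma

    def unit_ball_volume(n):
        """C_n = pi^(n/2) / Gamma(n/2 + 1), the volume of the unit n-ball."""
        return pi ** (n / 2) / gamma(n / 2 + 1)

    print(unit_ball_volume(2))    # 3.14159... = pi, the area of the unit disk
    print(unit_ball_volume(3))    # 4.18879... = 4*pi/3, the familiar sphere volume
    print(unit_ball_volume(30))   # ~2e-5: high-dimensional unit balls are tiny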

So we have that:
W = C_{3N} (2mE)^{3N/2} V^N / h^{3N}
And this means that:
S = N k log V + 3/2 N k log E + constants
Now, start taking partial derivatives! Use our definitions for P and T!

So, first off,
1/T = (∂S/∂E)_{N,V} = (3/2) N k / E

E = (3/2) N k T
You've seen that formula before, but here we've derived it from our two little assumptions: that you see macrostates, and that the macrostate with the highest multiplicity wins because nature doesn't care what you think. Magic, I tells ya!

But if that doesn't leave you breathless, the derivative with respect to V is:
P/T = (∂S/∂V)_{E,N} = N k / V

P V = N k T
No way? Yes way! I just derived the ideal gas law with those two assumptions and two simple 3N-dimensional hypervolume integrals!
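
(If you don't trust my partial derivatives, here's a short sympy sketch -- my own check, with the constants dropped since they don't survive differentiation -- that grinds them out.)

    import sympy as sp

    N, k, E, V, T = sp.symbols('N k E V T', positive=True)

    # Entropy of the ideal gas, dropping the E- and V-independent constants:
    S = N * k * sp.log(V) + sp.Rational(3, 2) * N * k * sp.log(E)

    inv_T = sp.diff(S, E)        # 1/T = (dS/dE)_{N,V}
    P_over_T = sp.diff(S, V)     # P/T = (dS/dV)_{E,N}

    print(sp.solve(sp.Eq(inv_T, 1 / T), E)[0])   # 3*N*T*k/2  ->  E = (3/2) N k T
    print(sp.simplify(P_over_T * T * V))         # N*T*k      ->  P V = N k T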

And did you think that was impressive? Let me also finish by doing the specific heat of metals for you, right here, right now.

We model N particles as 3N harmonic oscillators -- one in each dimension. We assume that the particles are cool enough that they're basically stuck in their own potential wells, unable to affect their neighbors much, or to be affected similarly. We start with the same principle, that:
W = h^{-3N} ∫_{Lp} d^{3N}p ∫_{Lx} d^{3N}x
However, this time, each dimension is tangled up by the fact that:
Σ_i [ (1/2) k x_i^2 + (1/2) m v_i^2 ] = E = constant
Which we "circularize" by using the angular frequency ω = sqrt(k/m), into:
2E/m = Σ_i [ (ω x_i)^2 + v_i^2 ]

W = m^{3N} (hω)^{-3N} ∫ d^{3N}(ωx) ∫ d^{3N}v
This is a 6N-dimensional hypersphere with radius sqrt(2E/m), and if you expand C6N, you get:
W = (2πE/(hω))^{3N} / (3N)!

S = 3 N k log E + constants
The first quantity in parentheses up there, 2πE/(hω) = E/(ħω), is just the number of energy quanta. Roughly speaking, this is also the quantum mechanical result, which is that if there are Y energy quanta, then there are (Y + N - 1) choose (N - 1) different ways to spread them over N oscillators.
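
(You can watch the classical and quantum counts agree, too: here's a small Python sketch of my own, with M = 3N oscillators and the energy written as Y quanta, that computes both entropies and shows them converging once Y is much bigger than M.)

    from math import comb, lgamma, log

    # Classical count: W = (E / (hbar*omega))^M / M!  with M = 3N oscillators,
    # written in terms of the number of quanta Y = E / (hbar*omega).
    def log_W_classical(Y, M):
        return M * log(Y) - lgamma(M + 1)

    # Quantum count: Y quanta spread over M oscillators, (Y + M - 1 choose M - 1).
    def log_W_quantum(Y, M):
        return log(comb(Y + M - 1, M - 1))

    M = 300   # 3N oscillators for N = 100 atoms
    for Y in (10**3, 10**4, 10**5, 10**6):
        print(Y, round(log_W_classical(Y, M), 1), round(log_W_quantum(Y, M), 1))
    # The two entropies (logs of W) converge once Y >> M, i.e. at high energy.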

Most importantly, you can see that:
1/T = ∂S/∂E = 3 N k / E

E = 3 N k T
This means that solids which fit our description should have specific heats on the order of 3R, where R is the gas constant. R = 8.314472 J/(mol K), so 3R ~= 25 J/(mol K). Let's look on Wikipedia at the molar heat capacities of the lighter solid elements, in J/(mol K), and see how close they come:
Lithium: 24.9 J/(mol K)
Beryllium: 16.4
Boron: 11.1
Carbon: 8.5 (graphite), 6.1 (diamond)
Sodium: 28.2
Magnesium: 24.9
Aluminium: 24.2
Silicon: 19.8
Sulfur: 22.8
Calcium: 25.9
Scandium: 25.5
Titanium: 25.1
Chromium: 23.4
Manganese: 26.3
Et cetera. Especially among metals, the approximation holds very well. (Personal note: And we didn't need no steenking Drude-Sommerfeld model to get the result, either!)
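
(For the record, here is that comparison as a Python sketch -- nothing but the numbers quoted above and a bit of arithmetic.)

    # Dulong-Petit check against the molar heat capacities listed above, in J/(mol K).
    R = 8.314472
    dulong_petit = 3 * R   # ~24.94 J/(mol K)

    measured = {'Li': 24.9, 'Be': 16.4, 'B': 11.1, 'Na': 28.2, 'Mg': 24.9,
                'Al': 24.2, 'Si': 19.8, 'S': 22.8, 'Ca': 25.9, 'Sc': 25.5,
                'Ti': 25.1, 'Cr': 23.4, 'Mn': 26.3}

    for element, c in measured.items():
        deviation = 100 * (c - dulong_petit) / dulong_petit
        print(f"{element:2s} {c:5.1f}  {deviation:+6.1f}% from 3R")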

Footnotes
¹ Not that wizard hat.