Evosci.org - Evolving Science and Engineering

Brief note on Fluctuation Dissipation Theorem (Quick Proof)

(Original article written Jan 2022; minor updates Aug 2023.)

Update of 19 Aug 2023 below.

People ask me my opinion of the Feynman Ratchet and Pawl and the general idea of rectifying random thermal noise. This is already covered by the titular theorem, which dates back to the early 20th century (Einstein et al.), and by Szilard and Brillouin in their discussions of the Maxwell Demon; references can be found here (refs. 14-17, Paper on Heat Engines etc.) and in a very early paper of mine from 1997 ("Work in Constant Entropy Systems", page 112). I said as much at an AAAS conference in San Diego in 2016.

Let me explain the Fluctuation Dissipation Theorem: on the atomic scale, matter is in random thermal fluctuation. This is simply heat energy, seen at the particle level as Brownian Motion (after the botanist Robert Brown, who noticed pollen grains moving randomly in water in the early 1800s). Kinetic Energy (the energy of movement) is continually interchanged with Potential Energy (the energy of displacement in a "force field", like gravity or an electrostatic field).

The energy of the system is constant; this is just a statement of the Conservation of Energy. Indeed, energy is called "a conserved constant of the motion" related to "symmetry in time", as you'll find out if you study graduate-level physics. Colloquially put: the constants of physics don't appear to have changed over time, and if you lift a brick (giving it potential energy), then drop it a thousand years later, you don't get more kinetic energy out!

Ergo, regarding the FD Theorem: if a particle (up to "meso-scale", say microns) is given a jolt by the particles around it, making it momentarily more energetic than the average energy of the other particles (the Fluctuation), it will soon give that energy back up through further collisions (frictional processes with the other particles). This is the "Dissipation".

So there you have it: a particle's energy can "Fluctuate" away from the ensemble average in one instant and "Dissipate" back to the ensemble in the next.

Now there are some fanciful ideas passing around (and wasting research resources and publication space**) that these fluctuations can be rectified to give excess power at equilibrium. Nope. One idea is to use a "nanoscale graphene membrane" which forms a capacitor. The Principle of Virtual Work (see Feynman Lectures, Vol. 1) allows us to relate the energy of movement of the membrane to the change in electrical energy of the capacitor, i.e.,

       (** Claiming that one can achieve net energy transfer from one equilibrium system to another at equilibrium, without violating the 2nd Law, is like a bank employee working in the vault, removing the riches without permission, taking them home, and then denying the theft! An extraordinary lack of comprehension, or knowledge, of the Laws. Peer review was napping. It's a simple matter of fact.)

And a little bit of the mechanical energy of the graphene system is diverted into electrical energy, shifting a little charge and modulating the voltage on the plates (the two become "coupled"). This doesn't violate the Fluctuation Dissipation Theorem: if the size of the little mechanical graphene system is commensurate with the size of the system it interfaces, it too will fluctuate (and dissipate!). You'll see this effect up to "meso-scale" (say pollen grains, on the order of microns in size)...
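The coupling can be sketched with a toy model (my construction, not the actual graphene device): a parallel-plate capacitor held at constant charge, whose gap x fluctuates. Virtual work says the change in stored electrical energy for a small displacement dx equals the electrostatic force times dx. All numbers below (area, charge, gap) are illustrative assumptions.

```python
# Toy model of the mechanical/electrical coupling via virtual work.
# Parallel-plate capacitor at constant charge Q, gap x:
#   C(x) = eps0 * A / x,  U(x) = Q^2 / (2 C(x)) = Q^2 x / (2 eps0 A)
# so a small change dx in the gap shifts the stored electrical energy
# by F * dx, where F is the electrostatic force between the plates.

EPS0 = 8.854e-12           # vacuum permittivity, F/m
A = 1e-12                  # plate area, m^2 (illustrative micron-scale value)
Q = 1e-15                  # stored charge, C (illustrative)

def stored_energy(x):
    """Electrical energy of the capacitor at constant charge, gap x (m)."""
    C = EPS0 * A / x
    return Q**2 / (2 * C)

x = 100e-9                 # 100 nm gap (assumed)
dx = 1e-12                 # 1 pm virtual displacement
dU = stored_energy(x + dx) - stored_energy(x)
F = Q**2 / (2 * EPS0 * A)  # plate force magnitude at constant charge

# Virtual work: the change in electrical energy equals F * dx
print(f"dU = {dU:.3e} J, F*dx = {F * dx:.3e} J")
assert abs(dU - F * dx) / (F * dx) < 1e-6
```

So mechanical motion of the membrane and electrical energy on the plates are two faces of one energy budget, which is all the "coupling" amounts to.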

But here's the rub... You'll interface your device to some macroscopic circuit, which will have large capacitance (or mass, etc.), and your fluctuation will tend to zero. Furthermore, your electrical load will fluctuate too with noise (Johnson-Nyquist noise), and the net power your fluctuating element can deliver to the load exactly cancels the power the load's own noise delivers back! (Hint, or cluebat: look at the case of maximum power transfer, that's right, Rsource = Rload, and now look at their noise temperatures: all the same temperature at equilibrium, right??? Any other ratio gives a fraction, but a fraction of zero is still zero!)
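That cancellation can be checked numerically. A minimal sketch using the standard Johnson-Nyquist formula (no particular device assumed): two resistors at the same temperature each drive noise power into the other through the same voltage divider, and the two transfers are identical for any resistance ratio, so the net transfer is zero.

```python
# Detailed balance between two resistors at the same temperature T,
# connected over a bandwidth df.  Each resistor's Johnson-Nyquist noise
# (mean-square EMF 4*k*T*R*df) drives current through the other; the
# power each delivers to the other is the same for ANY resistance ratio.

K_B = 1.380649e-23         # Boltzmann constant, J/K

def power_1_to_2(R1, R2, T, df):
    """Power delivered by R1's noise source into R2 (watts)."""
    v_sq = 4 * K_B * T * R1 * df       # mean-square noise EMF of R1
    return v_sq * R2 / (R1 + R2)**2    # dissipated in R2 via voltage divider

T, df = 300.0, 1e9
for R1, R2 in [(50.0, 50.0), (50.0, 5000.0), (10.0, 1e8)]:
    p12 = power_1_to_2(R1, R2, T, df)
    p21 = power_1_to_2(R2, R1, T, df)
    print(f"R1={R1:g}, R2={R2:g}: P12={p12:.3e} W, net={p12 - p21:.1e} W")
    assert abs(p12 - p21) < 1e-25      # net power transfer is zero
```

Mismatched resistances just scale both directions of transfer by the same fraction, which is the "fraction of zero is still zero" point.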

In case you haven't got it yet: a diode can be represented as two one-way resistors in parallel, one high, one low. As the quick proof link shows, NO combination of resistors with the noise source allows any net power transfer if they're all at the same temperature. Rectification is a red herring.

You'll further delude yourself that you can put a battery in series, just to bias some diodes you hope will rectify the fluctuating voltage from your nano-graphene, and insist that you're delivering useful work, overlooking the fact that the battery and biased diodes form an AMPLIFIER. (It's got nothing to do with the diodes being forward biased; in the reverse-biased regime they look like transresistance devices, a "parametric amplifier": the small changes in current that come from your graphene capacitor get converted into large voltage swings. This modulates the reverse-biased diode's capacitance, and ultimately the change in charge comes from the battery.) Yes my friends, you made an electret microphone. Your fancy-schmancy graphene nanoscale thingymajig outputs a signal that is a proxy for the amount of thermal fluctuation. Or maybe, if you're not using any tacit form of amplification, you picked up non-thermal vibration in your lab and simply built a microphone.

Update 19 Aug 2023

There's a spurious argument that it has something to do with the amount of time the circuit is left conducting. You see... the argument above about RMS voltages and power between the systems presumes a long-term averaging effect. Yeah, riiiighhhtttt. Apparently, when the diode is conducting you're meant to switch it off for a bit. This is bogus: the diode is already doing the switching by conducting one way, and presumably transferring power one way to the load.

Let's run with this nonsense about some "low-frequency effect" (surely switching involves high frequencies???) and the switching, as if it could ever be true: the mean-square noise voltage of the equivalent circuit for the graphene "voltage source" is 4kTRΔf (see Quick Proof), where kT is the thermodynamics stuff, Δf is the bandwidth and R the resistance. The power into a matched load of the same R would be kTΔf. Taking the highest frequency in the graphene lattice to be of the order of 10^13 Hz (speed of sound divided by inter-atomic distance, some 10^3 m/s / 10^-10 m, of that order), then around room temperature, 300 K, we might make R of the order of 10^7 to 10^8 ohms to give enough voltage to forward bias those diodes. The power into the load would be of the order of tens of nW. BUT we forget! We must do the special boondoggle of switching or whatnot to get this "low frequency effect" (huh?! switching?!); well, that limits the bandwidth, and instead of using the full spectrum we use a fraction. So say we are limited to about 10^9 Hz with the electronics: we are already a factor of 10^-4 down on the full bandwidth of the noise, and the power would be 10^-4 less.
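A back-of-envelope script for those numbers (all values are order-of-magnitude assumptions from the text: T = 300 K, ~10^13 Hz lattice bandwidth, R ~ 10^8 ohms, matched-load available power kTΔf):

```python
# Order-of-magnitude check of the noise-harvesting estimate above.

K_B = 1.380649e-23            # Boltzmann constant, J/K
T = 300.0                     # room temperature, K
df_full = 1e13                # ~ lattice bandwidth: (1e3 m/s) / (1e-10 m)
df_elec = 1e9                 # what GHz-class switching electronics can use
R = 1e8                       # assumed source resistance, ohms

# RMS open-circuit noise voltage over the full bandwidth
v_rms = (4 * K_B * T * R * df_full) ** 0.5
print(f"v_rms = {v_rms:.2f} V")            # a few volts: enough to bias a diode

# Available power into a matched load: P = kT * df
p_full = K_B * T * df_full
p_elec = K_B * T * df_elec
print(f"P(full bandwidth) = {p_full * 1e9:.1f} nW")   # tens of nW
print(f"P(1 GHz slice)    = {p_elec * 1e12:.2f} pW")  # 1e-4 of the above
```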

So piss poor power anyway (I mock, I jest).



So how does all of this relate to Sheehan and Capek's ideas? IT DOESN'T: THE ABOVE ISN'T SERIOUS SCIENCE. I believe (as stated very early in the 1997 paper) that the key is phase changes (conference slide show). These fluctuations are able to do "microscopic work" across a phase boundary; the energy barrier to this must be similar to the thermal energy, for the system to reach thermodynamic equilibrium in a reasonable time (and not sit in meta-stable equilibrium, like diamond -> graphite with its large activation energy). The process should be reversible (implied, too, by having a small energy barrier relative to the thermal energy).

Microscopic work, as opposed to "macroscopic" work, can convert any amount of microscopic heat flow into microscopic work as the temperature difference tends to zero. The Carnot Theorem is not violated. I was given a clue to this, and set on my way, while deriving a figure of merit for a novel water desalination idea I had in 1995/6, and by Dr John Cartlidge, a Physical Chemist (of the now-defunct chemistry department of City University London; way to go UK, gutting your pure science facilities), in 1997, as we talked and he gave me some knowledge "from an old guy" about the steam injectors used on the old steam engines: "Remi, did you know that heat can flow uphill? After the steam has done work and cooled, it is injected under pressure back into the boiler, which may be at 120 °C". It dawned on me that the key to this was the phase change.

Now, a phase is a macroscopic volume of matter with differing properties. It is as though the microscopic work processes have been "magnified" and expressed in the latent energy between the phases. A heat engine can then be run between the macroscopic phases, because they have macroscopic differences in properties. Then the high-energy phase needs to be rendered spontaneously unstable (I call this a "phase changing catalyst" for 1st order systems; 2nd order systems naturally randomise but need a coupling step to deliver more energy out than was put in).

If one has knowledge of thermodynamics: there is no way for a single substance to convert heat energy from one reservoir solely into work by any cycling of the intensive, extensive or spatial co-ordinates of the working substance (say, a plant diagram). These figures (A, B) from my Paper on Heat Engines illustrate how the substance needs to "jump off the trajectories" (adiabatics, isothermals, etc.) it is confined to, by looking like a different substance for part of its cycle; in short, a change in the Gibbs Free Energy: a phase transition. This rules out diode rectification demons too: it is tempting to think that the reverse-biased P-N junction supplies some energy barrier (hence a 1st order phase change), but there is no spontaneous phase change to release the latent heat as useful work at the anode:-

      Or take a vacuum tube whose cathode has a higher Richardson constant than the anode: it can be worked out, from the mean free path and thermal velocity of the gas, that the electron gas thermalises very quickly (a fraction of a microsecond), so we are in equilibrium-thermodynamics territory. The cathode may have a bigger space-charge region, but for these electrons to "condense" onto the anode at the same temperature as the cathode, the chemical potential of the gas must exceed that of the electrons at the anode. This cannot happen at constant temperature, so the anode must be cooler than the cathode. Thus no spontaneous phase change can happen at the anode at the same temperature as the cathode.
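The "fraction of a microsecond" figure can be sanity-checked with a hedged order-of-magnitude estimate (assumed values: room-temperature electron gas, mean free path of order the tube dimension, say 1 cm):

```python
# Order-of-magnitude thermalisation estimate for the electron gas in a tube.

K_B = 1.380649e-23      # Boltzmann constant, J/K
M_E = 9.109e-31         # electron mass, kg
T = 300.0               # assumed temperature, K

# Mean thermal speed of the electron gas, v ~ sqrt(3kT/m)
v_th = (3 * K_B * T / M_E) ** 0.5
mfp = 1e-2              # assumed mean free path ~ 1 cm (tube scale)
tau = mfp / v_th        # time between collisions / electrode encounters

print(f"thermal speed  ~ {v_th:.2e} m/s")
print(f"collision time ~ {tau:.2e} s")   # ~1e-7 s: a fraction of a microsecond
```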


In Sheehan's thermal rectifier case, he really has a 2-phase system: H2 and dissociated hydrogen. He has to run the system at elevated temperature so that the H2 bond becomes labile. It then becomes a system in thermal equilibrium that can do microscopic work across the phases at constant temperature. Spontaneous phase change can happen when the dissociated hydrogen "condenses" at one surface into molecular hydrogen, liberating the latent heat. It is similar to the water desalination idea.

Underlying this is the notion of phase changes as sorting processes. 1st order phase changes directly sort the molecules into those above and below some energy barrier; 2nd order phase changes do the inverse sorting problem. Please refer back to the main thermo-electric conversion page. Implicit in this is the notion of what Sheehan calls "Maxwell Zombies", not "Demons": no computer is required to record the state of the sorting element, nature just "does it". If there is no storage of information and no irreversible computation, then there is no rejection of kT ln(2) of heat per bit of data, as per the Landauer principle. (I briefly knew Rolf Landauer, via P. T. Landsberg and one of my old teachers, Farooq Abdullah, before his death in the late 1990s; both gave encouragement to my nascent ideas.)
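For scale, the Landauer bound mentioned above is easy to evaluate at room temperature:

```python
# Landauer bound: erasing one bit at temperature T must reject at least
# k*T*ln(2) of heat to the environment.

import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
T = 300.0              # room temperature, K

landauer_J = K_B * T * math.log(2)
print(f"kT ln 2 at 300 K = {landauer_J:.3e} J per bit")  # ~2.87e-21 J
```

Tiny, but nonzero; the point above is that if nothing is recorded and erased, even this cost never arises.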


Main Page