entropy – One Universe at a Time – Brian Koberlein – https://briankoberlein.com

Time’s Arrow – Fri, 12 Jun 2015 – https://briankoberlein.com/2015/06/12/times-arrow/

In physics, events are often time symmetric, so why does time so clearly seem to have a specific direction?

Yesterday I talked about how time can be symmetrical in physics. For example, a video of billiard balls colliding looks the same whether played forwards or backwards. This would seem to contradict our everyday experience that time flows ever onward in one direction. We can remember yesterday, but not tomorrow, and if we break our favorite coffee mug we can’t simply unbreak it. This unidirectional nature of events is called the arrow of time, and it’s a bit of a mystery.

In classical Newtonian physics, interactions between simple particles are perfectly time symmetric. The direction of time appears through thermodynamics. For example, if you had a room full of air, with all the air molecules bouncing around, it is very unlikely that all the molecules would at one point clump together in one corner of the room. It’s theoretically possible that all the air molecules happen to have the right trajectory to reach the corner at about the same time, but it’s extremely unlikely. On the other hand, if you started with a pressurized container of air in the corner and then released the air, the molecules would almost certainly spread evenly throughout the room given a bit of time. If you think of these examples as time-reversed siblings of each other, you can see that both are possible, but one is far more probable than the other.

We can express this difference in probability in terms of entropy. The pressure, temperature and volume of the gas in the room are known as its state. These are determined by the positions and speeds of all the air molecules, which collectively make up the microstate of the gas (the state of all the microscopic particles). For a given state of the gas, there are lots of ways the atoms could be moving and bouncing around. As long as the average motion of all the atoms is about the same, then the pressure, temperature and volume of the gas will be the same. This means there are lots of equivalent microstates for a given state of the gas. The more microstates for a given state, the greater the entropy of the gas, and the more likely the gas will be found in that state. So the arrow of time can be stated as the direction of increasing entropy. This is often expressed as the second law of thermodynamics, which states that the entropy of a closed system can never decrease.
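
A toy calculation makes the probability gap concrete. This is an illustrative sketch (the molecule count is made up for tractability, not a figure from the post); it counts the ways N molecules can split between the two halves of a room:

```python
import math

# Toy model: each of N gas molecules independently sits in either the
# left or right half of a room with probability 1/2. (N is tiny here;
# a real room holds something like 10^25 molecules.)
N = 100

# Probability that every molecule happens to be in the left half at once:
p_all_left = 0.5 ** N

# The number of microstates with exactly k molecules on the left is C(N, k).
w_clumped = math.comb(N, 0)      # all in one half: a single microstate
w_spread = math.comb(N, N // 2)  # evenly spread: the most microstates

print(f"P(all {N} molecules in one half) = {p_all_left:.3e}")
print(f"microstates, clumped: {w_clumped}; spread: {float(w_spread):.3e}")
```

Even with only 100 molecules, the evenly spread state has about 10^29 times as many microstates as the clumped one; with realistic molecule counts the imbalance is beyond astronomical.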

In quantum theory the arrow of time can be expressed in other ways. In the simple Copenhagen interpretation of quantum theory, a quantum object is in a probabilistic state defined by a wavefunction, which then collapses into a definite state when observed. This collapse of the wavefunction is not reversible, and thus there is a single direction to time. In many ways the Copenhagen interpretation is overly simplistic, but the idea holds in other interpretations as well. For example, quantum systems left to themselves become more entangled over time. So another way to express the arrow of time is to say that it is in the direction of increasing entanglement.

Of course none of this addresses our most direct experience of time’s arrow, which is that we seem to have a conscious experience of the unidirectional flow of time. It’s so deeply ingrained in our personal experience that we intuitively feel that events occur at a specific “now,” even though relativity shows there is no universal present moment. Just why we have such a strong experience of the arrow of time isn’t clear.

But given time, we might be able to figure it out.

Boltzmann’s Brain – Mon, 02 Jun 2014 – https://briankoberlein.com/2014/06/02/boltzmanns-brain/

Ludwig Boltzmann was a physicist who developed statistical mechanics, which connects Newtonian physics of particles to thermodynamics.  Boltzmann’s kinetic theory not only explained how heat, work and energy are connected, it also gave a clear definition of entropy. While this revolutionized our understanding of everything from heat to the universe, it also led Boltzmann to a rather puzzling idea known as a Boltzmann brain.

The pressure, temperature and volume of a gas is known as the state of the gas. Since these are determined by the positions and speeds of all the atoms or molecules in the gas, Boltzmann called these the microstate of the gas (the state of all the microscopic particles). For a given state of the gas, there are lots of ways the atoms could be moving and bouncing around. As long as the average motion of all the atoms is about the same, then the pressure, temperature and volume of the gas will be the same. This means there are lots of equivalent microstates for a given state of the gas. Basically what Boltzmann found was that the entropy of a system in a particular state depends on the number of equivalent microstates that state has.

This explains why entropy within a system increases.  Odds are, any physical system you have will tend toward a state with more microstates, since a state with few microstates (low entropy) is statistically much less likely to happen. But of course the catch is that statistically improbable is not the same as impossible.  Boltzmann supposed that if the universe were a vast sea of particles, it would be possible for particles to come together to form the state of your conscious brain, just as it could come together into the universe we see around us.  But which is more likely?

It is kind of like the classic example of monkeys banging on typewriters (or astrophysicists on laptops).  Let them bang around randomly for long enough, and there is a chance they will type out the complete works of the Library of Congress.  Of course it is far more likely that they will bang out a single novel like To Kill a Mockingbird than the entire collection. In the same way, if the universe is a collection of microstates, then it is more likely to find itself in a conscious state that thinks it is in a universe than to be the entire universe itself.  That is, a Boltzmann brain is more probable than a universe.
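
To see just how lopsided those odds are, here is a back-of-the-envelope sketch. The keyboard size and character counts are rough assumptions, not figures from the post:

```python
import math

def log10_probability(length: int, keys: int = 27) -> float:
    """log10 of the chance of randomly typing a specific text of `length`
    characters on a `keys`-key keyboard (26 letters plus a space bar)."""
    return -length * math.log10(keys)

novel_chars = 100_000           # rough length of a short novel
library_chars = 10_000_000_000  # rough length of an entire library

print(f"one novel:     10^{log10_probability(novel_chars):,.0f}")
print(f"whole library: 10^{log10_probability(library_chars):,.0f}")
```

Both probabilities are absurdly small, but the library is smaller by billions of orders of magnitude, which is the whole point of the analogy: the small fluctuation is overwhelmingly more likely than the large one.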

Just to be clear, this should not be seen as convincing evidence that you are a brain in a vat, or that we are all living in a virtual world. The idea of a Boltzmann brain is much like the idea of Schrodinger’s cat.  Both are examples of physical models taken to their extreme to find weaknesses in the model. In the case of Boltzmann brains, one flaw is the assumption that the universe is simply a collection of microstates.  We now know that the universe began as a low entropy state of high density and temperature (aka the big bang). It then progressed via the laws of physics into atoms, stars, solar systems and a rocky little world where living things evolved over billions of years.  Your brain and the Library of Congress are not random states, but what hydrogen does over 13.8 billion years.

At each stage in the history of the universe, the overall entropy has increased.  Pockets of lower entropy such as living organisms are only possible because entropy increases elsewhere, driven by energy sources such as the Sun.  In the same way a refrigerator can make things cold (lowering their entropy), but it must use energy to do so, and it creates more waste heat than it removes from the fridge. Overall entropy still increases. This, by the way, means the next time someone uses thermodynamics to deny evolution, you should point out that by the same argument their refrigerator shouldn’t exist.

[Figure: A basic diagram of eternal inflation.]

Of course there are those who argue that the ordered-universe solution to the Boltzmann brain is simply kicking the can down the road.  While it is true that the early universe was a low entropy state, that doesn’t explain why it was a low entropy state. One proposed solution is early cosmic inflation, the kind of signal BICEP2 hopes to have found. While inflation can solve the low entropy problem, it can also allow the Boltzmann brain to reappear.  That’s because there are versions of inflation where regions of the “multiverse” are inflating all the time.  In this model our universe just happened to arise out of a local inflationary fluctuation. But if that’s the case, what is to prevent a Boltzmann brain from arising from a smaller fluctuation, and which is more likely?

All of this is pretty speculative, so it’s important not to take the idea too literally. What makes the Boltzmann brain idea interesting is that it helps us examine the most bizarre and puzzling aspects of our physical theories.

It’s enough to baffle anyone’s brain.

Dying of the Light – Mon, 31 Mar 2014 – https://briankoberlein.com/2014/03/31/dying-light/

Part 6 in the equations series. Boltzmann opens our eyes to a world where the warmth of our morning coffee forces us to confront our own mortality.

I enjoy a good cup of coffee in the morning.  It’s particularly nice on a cool morning, sipping the coffee while the heat of the cup warms my hands.  Life is good.

Of course that’s always how it works.  The coffee is initially warmer than my hands, so as I hold the cup the heat flows from the coffee into my hands.  But why does it always work that way?  It seems rather obvious.  That’s just what heat does.  If you put something hot next to something cold, the heat will flow from the hot object to the cold object until the two have reached the same temperature.  This is why my coffee cools to room temperature over time.  But this property of heat has some interesting consequences.  Consequences that may determine the fate of the universe.

In the 1700s, heat was thought to be caused by a kind of fluid known as caloric.  It is where the term calorie comes from.  A hot object was thought to possess a lot of caloric.  A basic property of caloric was that it tended to spread out as much as possible.  So if you placed something cold (with not much caloric) against something hot (with lots of caloric), the caloric would flow from the hot object to the cold until it was evenly spread out.  Thus the hot object loses caloric and cools, while the cold object gains caloric and warms.

This idea isn’t bad as basic theories go, but there were some things it couldn’t explain.  One was friction: if you rub your hands together, they get warmer.  Does that mean the motion of your hands somehow draws caloric to them?  Another was exothermic reactions.  You can set a cold log on fire, and it produces a lot of heat.  So where was all the caloric if the log was cold?

Then in 1845 James Joule demonstrated that heat was a form of energy.  It was known that heat could be used to do mechanical work (such as with a steam engine), but Joule showed that you could also convert mechanical work into heat.  Heat and work were therefore two types of energy.  Because of his research, the modern unit of energy is the Joule.  This connection between heat and mechanical work soon led to the development of three basic laws of thermodynamics (heat behavior).

The first law is simply that energy in all its forms is conserved.  This can be heat energy, mechanical work, energy of chemical reactions, etc.  Energy can move from one form to another, but it can’t be created or destroyed.  The second law states that things will always move toward thermodynamic equilibrium.  Stated simply, it means heat will always flow from hot to cold.  The third law is stated in a number of ways, but basically says there is a limit to how cool something can be, and that limit is known as absolute zero.

Of these rules, the second law of thermodynamics is perhaps the most interesting and misunderstood.  We know that heat flows from hot to cold, but why?  If I set a room-temperature cup of coffee on the table, why doesn’t the coffee spontaneously get warmer by cooling the cup?  If the cup cooled and put all that heat into the coffee, you could have piping hot coffee in an ice cold cup, and energy would still be conserved.  Why does that never happen?  Likewise, if you set a cup of hot coffee on the table, why does it cool down?  If the coffee kept all its heat, energy would be conserved.  Why does the heat always flow from the hot coffee to the surrounding cool air?

Regardless of the mechanism, the second law of thermodynamics has some specific consequences.  One has to do with using heat to do work.  Suppose you wanted to make a steam engine.  There are lots of ways to do this, but they all boil down to a basic process.  For example: heat a volume of steam, let it expand doing mechanical work, let the steam cool, then compress it back to its original volume and heat it again.

From the first law of thermodynamics you are not creating energy, you are simply transforming heat energy into work energy.  But from the second law of thermodynamics, you can’t convert all of the heat to work.  In the example above, you have to let the expanded steam cool before you compress it back to its original volume.  If you don’t let it cool, then it would take just as much energy to compress the steam as you got by letting it expand.  But when you let the steam cool, the heat it releases is just wasted energy.  To get work from your steam engine, some of your energy is wasted.

This wasted energy is known as entropy.  So the second law of thermodynamics says that you can convert heat into work, but you can’t convert all of the heat to work.  Some of the original heat will become entropy.  This is true for more than simple steam engines.  For this reason, the second law of thermodynamics is often stated as the fact that you can never do something with 100% efficiency, or more formally “the entropy of a system can never decrease.”
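
The limit on how much heat an engine can turn into work has a textbook form, the Carnot efficiency. A minimal sketch, with illustrative reservoir temperatures (not values from the post):

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum fraction of input heat an engine can convert to work,
    given hot and cold reservoir temperatures in kelvin."""
    return 1.0 - t_cold_k / t_hot_k

# Steam at 500 K exhausting to 300 K room-temperature surroundings:
eta = carnot_efficiency(500.0, 300.0)
print(f"best possible efficiency: {eta:.0%}")
# Even a perfect, frictionless engine wastes the rest as exhaust heat.
```

Notice that efficiency only reaches 100% when the cold reservoir is at absolute zero, which the third law forbids; some waste is unavoidable.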

Because of this, entropy  is often expressed as the unusable part of a system, and this leads to a lot of misconceptions.  For example, there are those who state that the second law means that evolution can’t be true.  If the “unusable” portion of a system always increases, then it is surely impossible for simple cells to become complex humans.  That would be true if it were not for the Sun.  The Sun creates lots of usable energy for life on Earth, as well as lots of entropy.  Life can evolve on Earth because of the Sun.  The total entropy of the Sun, Earth and its living organisms continues to increase over time, even as life evolves.  The same argument could be made for a refrigerator.  If heat flows from hot to cold, then how can a fridge get cooler?  The answer is that the heat engine of the fridge uses energy to compress a fluid causing it to heat up.  It lets the compressed fluid cool to room temperature, then it expands the fluid causing it to cool, which it uses to cool the interior of the fridge.  A refrigerator can move heat from cold to hot, but it must use energy to do so, and it creates more waste heat than it removes from the fridge.  Entropy still increases.  The next time someone uses thermodynamics to deny evolution, explain to them that by the same argument their refrigerator shouldn’t exist.

Of course this definition for entropy is a bit nebulous.  The problem is that when entropy was first defined there wasn’t a good understanding of how materials are made of atoms and molecules.  Throughout most of the 1800s, the “atomist” view of matter was controversial.  In 1808, John Dalton demonstrated that materials were made of varying ratios of chemical elements, and proposed an atomic theory of matter.  While this theory was widely accepted by chemists, it was less accepted by physicists and those who studied thermodynamics.

Then in the late 1800s Ludwig Boltzmann developed a kinetic theory of gases.  He proposed that the properties of a gas, such as its temperature and pressure, were due to the motion and interactions of atoms and molecules.  This had several advantages.  For example, the hotter a gas, the faster the atoms and molecules would bounce around, therefore temperature was a measure of the kinetic (moving) energy of the atoms.  The pressure of a gas is due to the atoms and molecules bouncing off the walls of the container.  If the gas is heated, the atoms move faster and bounce off the container walls harder and more frequently.  This explains why the pressure of an enclosed gas increases when you heat it.

Boltzmann’s kinetic theory not only explained how heat, work and energy are connected, it also gave a clear definition of entropy.  The pressure, temperature and volume of a gas is known as the state of the gas.  Since these are determined by the positions and speeds of all the atoms or molecules in the gas, Boltzmann called these the microstate of the gas (the state of all the microscopic particles).  For a given state of the gas, there are lots of ways the atoms could be moving and bouncing around.  As long as the average motion of all the atoms is about the same, then the pressure, temperature and volume of the gas will be the same.  This means there are lots of equivalent microstates for a given state of the gas.

Boltzmann proposed a connection between the entropy of a system and the number of equivalent microstates, expressed in his famous equation S = k log W.  In the equation, S is the entropy of the system, k is a number known as Boltzmann’s constant, W is the number of equivalent microstates, and log represents the natural logarithm.  What the equation says is that the entropy of a system in a particular state depends on the number of equivalent microstates that state has.
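
The relation is easy to evaluate numerically. A sketch using the modern SI value of Boltzmann's constant; the sample value of ln W is an assumption of convenience (of order Avogadro's number, as for a mole of gas), not a figure from the post:

```python
K_BOLTZMANN = 1.380649e-23  # J/K, exact in the 2019 SI redefinition

def boltzmann_entropy(ln_w: float) -> float:
    """S = k ln W; takes ln W directly, since W itself is far too
    large to represent as a floating-point number."""
    return K_BOLTZMANN * ln_w

# For a mole of gas, ln W is of order Avogadro's number, so the
# entropy comes out at everyday scales (a few joules per kelvin):
print(f"S = {boltzmann_entropy(6.022e23):.2f} J/K")
```

The striking thing is the compression: a W with roughly 10^23 digits collapses, through the logarithm and the tiny constant k, into an entropy you could measure in a classroom.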

But how do equivalent microstates relate to heat flowing from hot to cold?  Imagine an ice cube in a cup of warm water.  The water molecules in the ice cube are frozen in a crystal structure.  This structure is pretty rigid, so there aren’t a lot of ways for the water molecules to move.  This means the number of equivalent microstates is rather small.  As the ice melts the crystal structure breaks down, and the water molecules are much more free to move.  This means there are many more equivalent microstates for water than for ice.  So heat flows into the ice, which increases the number of equivalent microstates, so the entropy of the system increases.  The second law of thermodynamics applies both ways.

This has a very clear consequence for the universe.  In the earliest moments of the universe, immediately after the big bang, the number of possible states that could describe the universe was likely very small.  This means the entropy of the universe was very low.  Since the second law of thermodynamics says entropy can never decrease (but can increase), over time the entropy of the universe has increased, and its entropy will continue to increase.  But a consequence of this is that every cosmic process does what it does at the cost of increasing the entropy of the universe.  Gravity can coalesce clouds of hydrogen and helium into stars, but some heat energy will be wasted.  Stars can fuse hydrogen into heavier elements, but they do so by releasing light and heat into the cosmos.  Some of that light and heat may warm planets.  Life can use that light and heat to evolve, but the star will eventually use up its useful energy.  Some stars will explode, and new stars form from the ashes, but none of this is perfectly efficient.  The entropy of the universe will continue to increase.  The stars will cool, the universe will expand.  Eventually even the black holes will radiate away their mass into a vast, dark and cold universe.  Entropy cannot decrease.  The second law of thermodynamics means that there will come a time when the light of the last star fades.  The dying of the light.

The second law also says that heat flows from hot to cold.  The warm cup of coffee in my hands tells me not only that life is good, but that life is short.  The physics that drives the heat of the coffee into my hands also drives me, you, the Sun, and the universe toward their inevitable end.

This end is known as the heat death of the universe.  There is still debate as to whether it is an accurate description of the fate of the universe.  There’s still a great deal we don’t understand about entropy, much less the universe as a whole.  But it is a real possibility.  The universe has a beginning, and it could well have an end.

Sometimes what we discover about the universe can be unsettling, even terrifying.  The universe is massive, complex, and subtle.  It is easy to look upon its majesty and despair.

Or we can stand together on our small planet, and look out into the night in wonder.  We can recognize that we few, we happy few, have a true understanding of what the universe is.

The universe is a wondrous thing.

And it is ours to explore.

Memory Hole – Sun, 30 Mar 2014 – https://briankoberlein.com/2014/03/30/memory-hole/

Part 5 of the equations series. Got something to hide? Toss your secrets into a black hole, and no one will ever know. Or will they?

You’ve just committed the perfect crime.  No one saw you do the crime, and you left no trace.  The perfect crime.  The only way anyone could prove you did it is by finding the journal of your master plan.  Get rid of the journal, and you are scot-free.  Of course you can’t simply toss the journal in the trash.  Someone might find it.  So maybe you should rip it to pieces and then toss it in the trash.  That would be better, but someone could take the pieces and carefully put them back together, and your crime would be revealed.  Maybe you should burn the journal.  Surely that would destroy it.  That would probably be good enough, but if someone observed the ash and smoke very carefully, and made really precise measurements they might be able to figure out where all of it came from and reconstruct the information in the book.  That’s very unlikely to happen, but this journal is the only thing standing between you and the perfect crime.  You want to be absolutely, 100% certain that the information it contains is permanently destroyed.  How do you get the job done?

This hypothetical story highlights a very real question in physics.  Is it possible to permanently destroy information?  Or is information, like mass-energy and charge, conserved?  The question is important because it strikes at the very heart of what science is.  Through science we develop theories about how the universe works.  These theories describe certain aspects of the universe.  In other words they contain information about the universe.  Our theories are not perfect, but as we learn more about the universe, we develop better theories, which contain more and more accurate information about the universe.  Presumably the universe is driven by a set of ultimate physical laws, and if we can figure out what those are, then we could in principle know everything there is to know about the universe.  If this is true, then anything that happens in the universe contains a particular amount of information.  For example, the motion of the Earth around the Sun depends on their masses, the distance between them, their gravitational attraction, and so on.  All of that information tells us what the Earth and Sun are doing.

Scientists generally assume information is conserved for two reasons.  The first is a principle known as determinism.  If you throw a baseball in a particular direction at a particular speed, you can figure out where it’s going to land.  Just determine the initial speed and direction of the ball, then use the laws of physics to predict what its motion will be.  The ball doesn’t have any choice in the matter.  Once it leaves your hand it will land in a particular spot.  Its motion is determined by the physical laws of the universe.  Everything in the universe is driven by these physical laws, so if we have an accurate description of what is happening right now, we can always predict what will happen later.  The future is determined by the present.

The second principle is known as reversibility.  Given the speed and direction of the ball as it hits the ground, we can use physics to trace its motion backwards to know where it came from.  By observing the ball now, we can know from where the ball was thrown.  The same applies for everything in the universe.  By observing the universe today we can know what happened billions of years ago.  The present is determined by the past.

These two principles are just a precise way of saying the universe is predictable, but it also means information must be conserved.  If the state of the present universe is determined by the past, then the past must have contained all the information of the present universe.  Likewise, if the future is determined by the present, then the present must contain all the information of the future universe.  If the universe is predictable, then information must be conserved.

Now you might be wondering about quantum mechanics.  All that weird physics about atoms and such.  Isn’t the point of quantum mechanics that things aren’t predictable?  Not quite.  In quantum mechanics, individual outcomes might not be predictable, but the odds of those outcomes are predictable.  It’s kind of like a casino.  They don’t know which particular players will win or lose, but they know very precisely what percentage will lose, so the casino will always make money.  The baseball example was one of classical, everyday determinism.  To include quantum mechanics we need a more general, probabilistic determinism known as quantum determinism, but the result is still the same.  Information is conserved.
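
The casino analogy is easy to simulate. A sketch with made-up numbers loosely modeled on an even-money roulette bet (18 winning pockets out of 38); the point is that single outcomes are random while the aggregate is sharply predictable:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def house_take(n_bets: int, p_win: float = 18 / 38) -> float:
    """Fraction of the stakes the house keeps over n_bets even-money bets."""
    wins = sum(1 for _ in range(n_bets) if random.random() < p_win)
    losses = n_bets - wins
    return (losses - wins) / n_bets

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} bets: house keeps {house_take(n):+.3%}")
# No single bet is predictable, yet the long-run rate converges on 2/38,
# about 5.3% -- a statistical law as reliable as any deterministic one.
```

Quantum determinism works the same way: the wavefunction's probabilities evolve exactly, even though each individual measurement does not.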

So it looks like you’re in trouble.  Since information is conserved, there is no way for you to destroy that incriminating journal.  You can make the information very difficult to find, but you can’t permanently erase it.  But being an evil genius, you have an idea.  You’ll simply chuck the journal into a black hole.  After all, nothing can escape a black hole, so once you’ve tossed it in, no one can ever get it back.  All that incriminating evidence destroyed forever.  The perfect crime.

Well, maybe…

It seems like a good idea.  According to Einstein’s theory of general relativity, a black hole has only three basic properties: mass, charge and rotation.  If you know those three things, you know everything there is to know about a black hole.  So if you toss your journal into a black hole, all those plots and plans of the perfect crime are reduced to mass, rotation and charge.  All of the information in the journal has been destroyed.

But Einstein didn’t account for quantum mechanics in his theory.  Through quantum mechanics, things can escape a black hole.

One of the fundamental principles of quantum theory is known as the uncertainty principle.  Basically, the uncertainty principle states that there is a limit to what you can know about an object.  This limit is not simply due to a lack of good measurements.  It is an absolute uncertainty built into the fabric of the universe.  This leads to some very strange phenomena.  For example, suppose you put a marble in a small box.  Seal up the box and the marble can’t get out, right?  According to the uncertainty principle, there’s a small chance that it could get out.  If the marble is in the box, then you know exactly where it is, but you’re not allowed to know exactly where it is, only probably where it is.  So there’s a very small chance that you may return to find the marble has escaped.  Strange as this seems, it is a very real effect known as quantum tunneling.  For things like marbles the odds of it happening are so low that they are essentially zero, but for atoms and electrons it happens all the time.  Your computer wouldn’t work and the Sun wouldn’t shine without it.
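
The marble-versus-electron gap can be estimated with the standard WKB approximation for a rectangular barrier, T ≈ e^(−2κL). The barrier height, width, and particle energies below are illustrative choices, not values from the post:

```python
import math

HBAR = 1.0546e-34       # J*s, reduced Planck constant
M_ELECTRON = 9.109e-31  # kg
EV = 1.602e-19          # joules per electron-volt

def tunneling_probability(mass_kg: float, barrier_ev: float,
                          energy_ev: float, width_m: float) -> float:
    """Crude WKB estimate T ~ exp(-2*kappa*L) for a rectangular barrier."""
    kappa = math.sqrt(2 * mass_kg * (barrier_ev - energy_ev) * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

# An electron with 0.5 eV of energy hitting a 1 eV, 1 nm-wide barrier:
print(f"electron: T ~ {tunneling_probability(M_ELECTRON, 1.0, 0.5, 1e-9):.1e}")
# For a 5 g marble against a centimeter-wide, 1 J barrier the exponent
# is around -10^31, so T underflows to exactly 0.0 on any computer.
```

The electron tunnels with odds good enough to matter in every transistor, while the marble's probability is zero for all practical (and computational) purposes.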

A black hole is basically a gravitational box.  Anything put into a black hole should be trapped, but because of the uncertainty principle things can escape.  Over time, the mass and energy of the black hole will escape, and it will radiate away through a process known as Hawking radiation (named after Stephen Hawking). Through the uncertainty principle black holes gradually radiate away.
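
How long does this radiating take? The standard back-of-the-envelope estimate is t = 5120π G²M³ / (ħc⁴). A sketch with rounded constants:

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2, gravitational constant
HBAR = 1.0546e-34  # J*s, reduced Planck constant
C = 2.998e8        # m/s, speed of light
M_SUN = 1.989e30   # kg, solar mass
YEAR = 3.156e7     # seconds per year

def evaporation_time_s(mass_kg: float) -> float:
    """Hawking evaporation estimate t = 5120*pi*G^2*M^3 / (hbar*c^4)."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)

print(f"solar-mass black hole: ~{evaporation_time_s(M_SUN) / YEAR:.0e} years")
```

The answer is of order 10^67 years, so while Hawking radiation makes black holes mortal in principle, the universe (currently about 1.4 × 10^10 years old) is nowhere near old enough for a stellar-mass one to have evaporated.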

But this means that the information of a black hole is more than just mass, charge and rotation.  It must also contain the information of all the particles it will radiate away.  So just how much information does a black hole contain?  The answer is given by the Bekenstein-Hawking equation, S = kc³A/4ħG.  Here S is the information (entropy) of the black hole, c is the speed of light, ħ (an h with a bar through the top) is a number known as the reduced Planck’s constant and relates to the uncertainty principle, k is a number known as Boltzmann’s constant, G is Newton’s gravitational constant, and A is the area of the black hole’s event horizon, which is just another way to measure its mass.  What the equation says is that the information contained within a black hole is proportional to its surface area.  If you toss something into a black hole, you increase the mass of the black hole, which increases the information contained in the black hole.  So tossing your journal into a black hole doesn’t make the information disappear.
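
For a non-rotating, uncharged black hole the horizon area follows from the Schwarzschild radius, so the Bekenstein-Hawking equation can be evaluated directly. A sketch with rounded constants:

```python
import math

G = 6.674e-11       # m^3 kg^-1 s^-2, gravitational constant
C = 2.998e8         # m/s, speed of light
HBAR = 1.0546e-34   # J*s, reduced Planck constant
K_B = 1.380649e-23  # J/K, Boltzmann's constant
M_SUN = 1.989e30    # kg, solar mass

def bh_entropy(mass_kg: float) -> float:
    """Bekenstein-Hawking entropy S = k*c^3*A / (4*G*hbar)."""
    r_s = 2 * G * mass_kg / C**2   # Schwarzschild radius
    area = 4 * math.pi * r_s**2    # event horizon area
    return K_B * C**3 * area / (4 * G * HBAR)

s = bh_entropy(M_SUN)
print(f"solar-mass black hole: S ~ {s:.1e} J/K ({s / K_B:.1e} in units of k)")
```

Because the area grows with the square of the mass, every journal (or anything else) you throw in strictly increases the black hole's information content, exactly as the post argues.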

But there is one last piece to this puzzle, and it’s a doozy.  The Bekenstein-Hawking equation states that the amount of information you toss into a black hole is the same as the amount of information a black hole contains.  But according to our understanding of gravity, Hawking radiation is perfectly random.  So the black hole will eventually release the right amount of information, but not the same information.  This means that information tossed into a black hole really is destroyed.  But according to quantum theory, the black hole must somehow retain the information of what is tossed into it.  This means Hawking radiation is not random, and the information is not destroyed.  This contradiction is known as the black hole information paradox, and we don’t yet know how to solve it.  Most scientists think quantum mechanics is probably right, but we can’t prove it yet.

So toss your journal into a black hole, and you may have committed your perfect crime…or not.

Tomorrow:  The end of the series.  Boltzmann opens our eyes to a world where the warmth of our morning coffee forces us to confront our own mortality.

Black Holes No More? Not Quite. https://briankoberlein.com/2014/01/30/black-holes-quite/ https://briankoberlein.com/2014/01/30/black-holes-quite/#respond Thu, 30 Jan 2014 21:56:33 +0000 https://briankoberlein.com/?p=1132

News has spread that Stephen Hawking has declared there are no black holes. That's not quite what Hawking said. Instead, Hawking proposes a radical new solution to the firewall paradox.

The post Black Holes No More? Not Quite. appeared first on One Universe at a Time.


This post was originally written for Universe Today.

Nature News has announced that there are no black holes.  This claim is made by none other than Stephen Hawking, so does this mean black holes are no more?  It depends on whether Hawking’s new idea is right, and on what you mean by a black hole.  The claim is based on a new paper by Hawking arguing that the event horizon of a black hole doesn’t exist.

The event horizon of a black hole is basically the point of no return when approaching a black hole.  In Einstein’s theory of general relativity, the event horizon is where space and time are so warped by gravity that you can never escape.  Cross the event horizon and you can only move inward, never outward.  The problem with a one-way event horizon is that it leads to what is known as the information paradox.
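For a non-rotating black hole, the location of that point of no return is the Schwarzschild radius, r = 2GM/c².  A quick sketch (not from the original post; constants rounded):

```python
# Schwarzschild radius r_s = 2*G*M/c**2 -- the event horizon radius of a
# non-rotating black hole (SI units, rounded constants).
G = 6.674e-11   # gravitational constant (m^3 kg^-1 s^-2)
c = 2.998e8     # speed of light (m/s)

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c**2

sun = 1.989e30    # mass of the Sun (kg)
earth = 5.972e24  # mass of the Earth (kg)
print(f"Sun:   {schwarzschild_radius(sun) / 1000:.1f} km")   # about 3 km
print(f"Earth: {schwarzschild_radius(earth) * 1000:.1f} mm") # about 9 mm
```

Squeeze the Sun inside a 3 km sphere, or the Earth inside a marble, and each would become a black hole.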

Professor Stephen Hawking during a zero-gravity flight. Image credit: Zero G.

The information paradox has its origin in thermodynamics, specifically the second law of thermodynamics.  In its simplest form it can be summarized as “heat flows from hot objects to cold objects”.  But the law is more useful when it is expressed in terms of entropy.  In this way it is stated as “the entropy of a system can never decrease.”  Many people interpret entropy as the level of disorder in a system, or the unusable part of a system.  That would mean things must always become less useful over time.  But entropy is really about the amount of information you need to describe a system.  An ordered system (say, marbles evenly spaced in a grid) is easy to describe because the objects have simple relations to each other.  A disordered system (marbles randomly scattered), on the other hand, takes more information to describe, because there isn’t a simple pattern to them.  So when the second law says that entropy can never decrease, it is saying that the physical information of a system cannot decrease.  In other words, information cannot be destroyed.
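This “description length” view of entropy can be illustrated with a compressor, which exploits patterns: an ordered arrangement of marbles compresses to a short description, while a disordered one does not.  A toy sketch (compressed size standing in for description length; the marble encoding here is just an illustration):

```python
# Entropy as description length: an ordered arrangement of "marbles"
# compresses far better than a disordered one, because a compressor
# can exploit its simple repeating pattern.
import random
import zlib

n = 10_000
ordered = bytes([0, 1] * (n // 2))  # marbles in a regular alternating pattern
random.seed(42)
disordered = bytes(random.randint(0, 1) for _ in range(n))  # marbles scattered at random

print(len(zlib.compress(ordered)))     # tiny: the pattern is easy to describe
print(len(zlib.compress(disordered)))  # much larger: no pattern to exploit
```

The same number of marbles, but the scattered arrangement needs far more information to pin down, i.e. it has higher entropy.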

The problem with event horizons is that you could toss an object (with a great deal of entropy) into a black hole, and the entropy would simply go away.  In other words, the entropy of the universe would get smaller, which would violate the second law of thermodynamics.  Of course this doesn’t take into account quantum effects, specifically what is known as Hawking radiation, which Stephen Hawking first proposed in 1974.

The original idea of Hawking radiation stems from the uncertainty principle in quantum theory.  In quantum theory there are limits to what can be known about an object.  For example, you cannot know an object’s exact energy.  Because of this uncertainty, the energy of a system can fluctuate spontaneously, so long as its average remains constant.  What Hawking demonstrated is that near the event horizon of a black hole, pairs of particles can appear, where one particle becomes trapped within the event horizon (reducing the black hole’s mass slightly) while the other escapes as radiation (carrying away a bit of the black hole’s energy).
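Hawking’s calculation gives this radiation a blackbody temperature inversely proportional to the black hole’s mass, T = ħc³/(8πGMk).  A quick numerical sketch (not from the original post; rounded SI constants):

```python
# Hawking temperature T = hbar*c**3 / (8*pi*G*M*k_B) of a black hole's
# radiation (SI units, rounded constants).
import math

hbar = 1.055e-34   # reduced Planck constant (J*s)
c = 2.998e8        # speed of light (m/s)
G = 6.674e-11      # gravitational constant (m^3 kg^-1 s^-2)
k_B = 1.381e-23    # Boltzmann constant (J/K)

def hawking_temperature(mass_kg):
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

solar_mass = 1.989e30  # kg
print(f"{hawking_temperature(solar_mass):.1e} K")  # roughly 6e-8 K
```

A solar-mass black hole is billions of times colder than the cosmic microwave background, so today it absorbs far more energy than it radiates; only much smaller black holes would be evaporating in any practical sense.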

Hawking radiation near an event horizon. Credit: NAU.

Because these quantum particles appear in pairs, they are “entangled” (connected in a quantum way).  This doesn’t matter much, unless you want Hawking radiation to radiate the information contained within the black hole.  In Hawking’s original formulation, the particles appeared randomly, so the radiation emanating from the black hole was purely random.  Thus Hawking radiation would not allow you to recover any trapped information.

To allow Hawking radiation to carry information out of the black hole, the entangled connection between particle pairs must be broken at the event horizon, so that the escaping particle can instead be entangled with the information-carrying matter within the black hole.  This breaking of the original entanglement would make the escaping particles appear as an intense “firewall” at the surface of the event horizon.  This would mean that anything falling toward the black hole wouldn’t make it into the black hole.  Instead it would be vaporized by Hawking radiation when it reached the event horizon.  It would seem then that either the physical information of an object is lost when it falls into a black hole (information paradox) or objects are vaporized before entering a black hole (firewall paradox).

In this new paper, Hawking proposes a different approach.  He argues that rather than gravity warping space and time into a sharp event horizon, the quantum fluctuations of Hawking radiation create a layer of turbulence in that region.  So instead of a sharp event horizon, a black hole would have an apparent horizon that looks like an event horizon but allows information to leak out.  Hawking argues that the turbulence would scramble the information leaving a black hole so thoroughly that it is effectively irrecoverable.

If Stephen Hawking is right, then it could solve the information/firewall paradox that has plagued theoretical physics.  Black holes would still exist in the astrophysical sense (the one in the center of our galaxy isn’t going anywhere), but they would lack event horizons.  It should be stressed that Hawking’s paper hasn’t been peer reviewed, and it is a bit lacking in details.  It is more a presentation of an idea than a detailed solution to the paradox.  Further research will be needed to determine whether this idea is the solution we’ve been looking for.
