physics – One Universe at a Time – Brian Koberlein – https://briankoberlein.com

The Magic Rock – https://briankoberlein.com/2017/07/16/the-magic-rock/ – Sun, 16 Jul 2017

There's a magic rock in France. It defines our standard of mass, and scientists would like to get rid of it.

There is a magic rock in St. Cloud, France. It’s made not of stone, but of a metallic alloy that’s 90% platinum and 10% iridium, and it’s magic not through some supernatural force, but because scientists have declared it to have a mass of exactly 1 kilogram. Now many scientists would like to get rid of it. 

Our civilization is built upon a system of measurement standards. If two people want to trade, they have to agree on what a pound is. If you pay a contractor to build a 100-foot-tall building, you have to agree on the length of a foot. Throughout history humans have had standards of measurement, often dictated by governmental decree. But since the early 1800s there has been a quest to create a truly universal standard of measurement. This became the metric system, which was further standardized as the Système international d’unités (SI) in 1960. The SI standard has become the basis for measurement across the globe. It defines the physical units we use to measure things. Even in the United States, quantities like the foot and pound are defined in terms of SI units.

The most common SI units are those of the meter (length), second (time) and kilogram (mass). In the 1800s these were based upon the physical characteristics of Earth. A meter was defined by declaring the circumference of Earth to be 40,000 kilometers. A second was defined by declaring the length of an average day to be 24 hours long. A kilogram was defined as the mass of a liter (1000 cubic centimeters) of water. While these definitions initially worked well, as our measurements became more precise things became problematic. As measurement of the Earth’s circumference improved, the length of a meter would necessarily change. Since the volume of a liter is defined in terms of length, the mass of a kilogram likewise shifted. Precise measurements of Earth’s rotation showed that the length of a day varied, so even the second wasn’t entirely fixed.

There are two ways to define a set of units that don’t vary. One is by defining a particular object to be an exact standard, and the other is to define units in terms of universal physical constants. The meter and second are now defined using the latter method. For example, in Einstein’s theory of relativity, the speed of light in a vacuum is always the same. No matter where you are in the universe, or how you are moving through space, the speed of light never changes. It is an absolute physical constant. This has been verified through numerous experiments, and in 1983 it was given an exact value. By definition, the speed of light is 299,792,458 meters per second. By defining this value, we also defined the length of a meter. Since the speed of light is a constant, if you know how long a second is, you know the length of a meter.

Emission spectrum of a high pressure sodium lamp. Credit: Chris Heilman

The length of a second is also defined in terms of light. By the 1960s we had developed atomic clocks based upon cesium 133. Like all elements, cesium 133 emits light at specific frequencies. Light is emitted from an atom when an electron moves from a higher energy quantum state to a lower one, and under the right conditions those frequencies are always the same. One particular emission from cesium 133 comes from a hyperfine transition of its ground state, and it is used to regulate an atomic clock the way the swing of a pendulum regulates a grandfather clock. In 1967 the frequency of light emitted by this hyperfine transition was defined to be 9,192,631,770 Hz. By measuring the frequency, you know the length of a second.
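
To see how these defined constants pin down the units, here is a minimal sketch in Python. Both numbers are exact by definition; nothing in it is measured.

    # The two defined constants that fix the second and the meter.
    CESIUM_HZ = 9_192_631_770      # cycles of the cesium-133 hyperfine transition in one second
    SPEED_OF_LIGHT = 299_792_458   # meters per second, exact since 1983

    # One second is the time needed to count 9,192,631,770 cesium cycles.
    # One meter is the distance light travels in 1/299,792,458 of that second.
    cycles_per_meter = CESIUM_HZ / SPEED_OF_LIGHT
    print(f"Light crosses one meter in about {cycles_per_meter:.2f} cesium cycles")

Run it and you get roughly 30.66 cycles: a meter is simply however far light gets while a cesium clock ticks off that many oscillations.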

Since the meter and second are based upon physical constants, they don’t change. They can also be measured anywhere in the universe. If an alien civilization wanted to know what units we use, we could send them a radio message with the definition for meters and seconds, and the aliens could recreate those units. But since 1889, the kilogram has been defined by a specific chunk of metal known as the International Prototype of the Kilogram (IPK). If the aliens wanted to know the mass of a kilogram, they would have to make a trip to France.

The relative mass change of kilogram copies over time. Credit: Greg L at English Wikipedia

Besides the necessary road trip for aliens, there is a big problem with using a magic rock as the standard kilogram. Since the mass of the IPK is exact by definition, it cannot change under any circumstances. If someone were to shave off a bit of metal, it would still be one kilogram by definition. Shaving the IPK down a bit wouldn’t make the kilogram lighter, it would make everything else in the world a bit heavier. Of course, that doesn’t make any sense. Shaving down a bit of metal in France doesn’t make the Statue of Liberty weigh more. The problem is with our definition of mass. And in a sense this kind of thing actually happens. In addition to the official prototype kilogram, there are official copies all over the world. By comparing the copies to the IPK, we can determine the stability of its mass. This has only been done a few times over the years, but on average the mass of the copies has increased slightly compared to the IPK. Either the official kilogram is getting lighter, or the copies are getting heavier.

The standard kilogram hasn’t been replaced by a physical constant because we haven’t been able to measure them with enough precision. The obvious physical constant for mass would be the universal constant of gravity G. But gravity is a weak force, and measuring G is difficult. So far we’ve only measured it to about one part in 10,000, which isn’t nearly accurate enough to define mass. But there is another constant we could use, and it’s known as the Planck constant.

The Planck constant lies at the heart of quantum theory. It was first introduced by Max Planck in his study of light. When objects are heated, they emit light, and the color of that light depends upon the temperature of the object. This is known as blackbody radiation. According to classical theory, most of the light emitted should have very short wavelengths, but experimentally this wasn’t the case. Planck demonstrated that light must be emitted in discrete chunks of energy proportional to a small constant h, which we now call Planck’s constant. As our understanding of quantum theory grew, the Planck constant played a role not just in quantization, but in quantum descriptions of energy and momentum. In SI units, h has units of kg·m²/s. If the Planck constant is defined to have an exact value, then the kilogram would be defined in terms of Planck’s constant as well as the meter and second.

In principle it’s a good idea, but it can only be done if we can measure the constant accurately enough. In 2014 the Conférence Générale des Poids et Mesures (CGPM) decided that before such a definition could occur, three independent measurements of the Planck constant would need to be made, each with an accuracy of 50 parts per billion, and one accurate to 20 parts per billion. By June of this year, three experiments had been done with uncertainties smaller than 20 parts per billion. The CGPM meets again in 2018, where it is expected they will officially define the Planck constant to be exactly 6.626069934 × 10⁻³⁴ kg·m²/s. When that happens the prototype kilogram will no longer be a magic rock, but simply a part of scientific history.
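
Here is a rough sketch of the bookkeeping, using the value quoted above as if it were already exact. Once h, the speed of light, and the cesium frequency are fixed numbers, a kilogram follows from E = mc² and E = hν: it is the mass whose energy equals a definite number of cesium-frequency photons. The script is illustrative only.

    # Treat these as exact defined values (the h value is the one quoted in the post).
    h = 6.626069934e-34        # Planck constant, kg·m²/s
    c = 299_792_458            # speed of light, m/s
    nu_cs = 9_192_631_770      # cesium-133 hyperfine frequency, Hz

    E_one_kilogram = 1.0 * c**2        # energy equivalent of 1 kg, in joules
    E_one_photon = h * nu_cs           # energy of one photon at the cesium frequency

    print(f"1 kg is equivalent to {E_one_kilogram / E_one_photon:.4e} cesium-frequency photons")

No one would ever weigh something by counting photons; in practice the link is made with devices like the Kibble balance, which compares mechanical power to electrical power measured in terms of h. But the point stands: fix h and the two other constants, and the kilogram is tied to something any laboratory (or alien) can reproduce.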

And the aliens won’t have to make that road trip after all.

Doomsday Scenario – https://briankoberlein.com/2017/03/14/doomsday-scenario/ – Tue, 14 Mar 2017

Could the Universe collapse and destroy everything? Probably not.

Humans are mortal. Not just as individuals, but also as a species. We can defend against many of the existential dangers to humanity. Threats such as global warming and pollution are well understood, and we can take steps to address them if we have the will. Even cosmic threats such as a civilization ending impact can be mitigated given time. But what about a deeper cosmic threat? What if the Universe could destroy not only our planet, but the entire galaxy, and what if we could never see it coming? 

Recently there’s been buzz about an idea known as the false vacuum scenario, and it’s terrifying to think of.

Usually a physical system will try to get to the lowest energy state it can, releasing that energy in some form. In classical physics, if a system reaches a state of low energy it will remain there even if a lower energy state is possible. Imagine a ball rolling into a small valley on the side of a mountain. If the ball could get out of the valley it would roll even farther down the mountain. But the ball has no way to get out of the valley, so it will remain there indefinitely.

However in quantum mechanics this isn’t the case. If a quantum system reaches a state of low energy, it might remain there for a time, but it won’t remain there forever. Because of an effect known as quantum tunneling, a quantum system can break out of its little valley and head toward an even lower energy state. Given enough time, a quantum system will eventually reach the lowest energy state possible.

The observed mass of the Higgs boson supports the idea that the Universe is in a metastable state. Credit: Wikipedia

Our Universe is a quantum system, so one of the big questions is whether it happens to be stable and in the lowest energy state, or in a higher energy state and only metastable. In the standard model of particle physics, this question can be answered by the masses of the Higgs boson and the top quark. These two masses can be used to determine if the vacuum state of the electroweak force is stable or metastable. Current observations point to it being metastable, which means the current state of the Universe might be temporary. If so, the Universe could collapse into a lower energy state at any time. If it does, then everything in the Universe would be destroyed. And there would be no way to see it coming. We would just exist one moment, and dissolve into quantum chaos the next.

But how likely is such a scenario? It’s tempting to argue that since the Universe has existed just fine for nearly 14 billion years, it will probably exist for billions more. But that’s not how probability works. If you toss a fair coin ten times and each time comes up heads, that doesn’t mean it will likely come up heads the next ten times. The odds of each toss are 50/50, and just because you got lucky the first ten times doesn’t mean you will on toss eleven. However there is also the possibility that your coin isn’t fair, in which case you would expect to keep seeing heads. So if you get heads ten times in a row, what are the odds that the coin is fair?

The more likely the doomsday scenario, the less likely it is that Earth would have formed so late.

We can use this idea to estimate the likelihood of the false vacuum scenario. We live in a Universe that is about 14 billion years old, and Earth formed when the Universe was about 9 billion years old. If the false vacuum scenario were highly likely, then the odds of our planet forming so late in the game would be tiny. The more stable the Universe is likely to be, the more probable a late-forming Earth is. As with the coin toss, the fact that we live on a planet that formed only 5 billion years ago means the odds of cosmic destruction must be quite small. Doing the math, the rate of such a catastrophe works out to less than about one per 1.1 billion years.
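
Here is a toy version of that selection-effect argument, not the actual calculation from the Tegmark and Bostrom paper. Suppose vacuum decay is a random (Poisson) process with some characteristic timescale, and suppose habitable planets would otherwise form at a steady rate between 1 and 14 billion years after the big bang. Observers can only find themselves in regions that haven't decayed, so observed formation times get weighted by the survival probability.

    import numpy as np

    # Toy selection-effect model: how likely is a formation time of 9 Gyr or later,
    # given that the region must survive long enough for observers to exist?
    t = np.linspace(1.0, 14.0, 10_000)            # planet formation time, in Gyr
    for tau in [1.0, 3.0, 10.0, 100.0]:           # candidate decay timescales, in Gyr
        weight = np.exp(-t / tau)                 # probability the region survives to time t
        p_late = weight[t >= 9.0].sum() / weight.sum()
        print(f"decay timescale {tau:6.1f} Gyr -> P(form after 9 Gyr) = {p_late:.3f}")

If the decay timescale were as short as a billion years, finding ourselves on a planet that formed 9 billion years in would be a wild fluke; only long timescales make our late arrival unremarkable.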

So even if the Universe is metastable (and we still don’t know for sure) it is at least very, very stable. There are lots of other existential threats that are more likely, and we would do well to focus on them. If we rise to the challenge there is still plenty of time to explore the stars.

Paper: Max Tegmark and Nick Bostrom. Is a doomsday catastrophe likely? Nature 438, 754 (2005)

Starry Fate – https://briankoberlein.com/2017/02/12/starry-fate/ – Sun, 12 Feb 2017

Quantum entanglement might be strange, but it doesn't decide the outcome hundreds of years in advance.

Our fate is written in the stars, so the old stories go. It makes for thrilling drama, but it isn’t the way the Universe works. But there’s an interesting effect of quantum mechanics that might leave an opening for a starry fate, so a team of researchers decided to test the idea. 

The idea stems from a subtle effect of quantum physics demonstrated by the Einstein-Podolsky-Rosen (EPR) experiment. One of the basic properties of quantum objects is that their behavior isn’t predetermined. The statistical behavior of a quantum system is governed by the laws of quantum theory, but the specific outcome of a particular measurement is indefinite until it’s actually performed. This behavior manifests itself in things such as particle-wave duality, where photons and electrons can sometimes behave like particles and sometimes like waves.

One of the more subtle effects related to this property is known as entanglement, when two quantum objects have some kind of connection that allows you to gain information about object A by only interacting with object B. As a basic example, suppose I took a pair of shoes and sent one shoe to my brother in Cleveland, and the other to my sister in Albuquerque. Knowing what a prankster I am, when my sister opens the package and finds a left shoe, she immediately knows her brother was sent the right one. The fact that shoes come in pairs means they are an “entangled” system.

The difference between shoes and quantum entanglement is that the shoes already had a destined outcome. When I mailed the shoes days earlier, the die was already cast. Even if I didn’t know which shoe I sent to my brother and sister, I definitely sent one or the other, and there was always a particular shoe in each box. My sister couldn’t have opened the box to find a slipper. But with quantum entanglement, slippers are possible. In the quantum world, it would be like mailing the boxes where all I know is that they form a pair. It could be shoes, gloves or socks, and neither I nor my siblings would know what the boxes contain until one of them opens a box. But the moment my brother opens the box and finds a right-handed glove, he immediately knows our dear sister will be receiving its left-handed mate.

If all of this sounds really strange, you’re not alone. Even quantum physicists find it strange, and they have confirmed the effect countless times. It’s such a strange thing that some have argued that quantum objects must have some kind of secret information that lets them know what to do. We may not know what the outcome might be, but the two quantum objects do.

The key to doing the EPR experiment is to ensure that entangled objects are measured in a random way. That way the system is truly indefinite until one of the objects is measured. This is usually done by letting a random number generator decide the measurement after the experiment has begun.  But if you really want to be picky, you could argue that while the experiment is being set up, there is plenty of time for the system to know what is going on. Technically, the experiment, random generator, and scientist are all “entangled” as a single system, so the outcome may be pre-biased. What looks like a random choice made after the experiment started may not actually be random. This is known as the setting independence problem.

A light cone diagram showing the range of influence possible for the cosmic EPR experiment. Credit: Johannes Handsteiner, et al.

To address this issue, the team used distant stars to roll the dice for their experiment. Rather than using a local random generator, the team took real-time observations of two stars. One star is about 600 light years away, and the other is about 1,900 light years away. They took observations of each star at particular wavelengths to ensure the light wasn’t influenced by local effects such as Earth’s atmosphere, and used the observations as their random number generator. It would take hundreds of years for the quantum objects of the experiment to become entangled with these distant stars, so this addresses the setting independence problem. What they found was that Bell’s inequality was violated in their experiment, just as it is in similar experiments, meaning that the system can’t have any hidden information to bias the outcome. So once again the EPR experiment shows there aren’t any hidden variables within the system.
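
For a sense of what "Bell's inequality was violated" means, here is the commonly used CHSH form of the inequality, sketched in Python. For a spin-singlet pair, quantum mechanics predicts the correlation between measurements along directions a and b is E(a, b) = -cos(a - b). Any local hidden-variable theory must keep the combination S below 2 in magnitude; quantum mechanics can reach 2√2. (The angles below are the textbook optimal choice, not the settings dictated by the starlight in the actual experiment.)

    import numpy as np

    # Quantum correlation for a spin-singlet pair measured along directions a and b.
    def E(a, b):
        return -np.cos(a - b)

    # Textbook optimal measurement angles, in radians.
    a, a2 = 0.0, np.pi / 2
    b, b2 = np.pi / 4, 3 * np.pi / 4

    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(f"S = {S:.3f}   (local hidden variables require |S| <= 2)")

The script prints S of about -2.828, comfortably outside the range of -2 to 2 that any local hidden-variable model can produce, which is the kind of violation the cosmic Bell test observed.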

Now it is true that this new experiment doesn’t fully solve the independence problem. Perhaps the experiment, the scientists, and the entire region of stars within hundreds of light years conspired to ensure the system had inside information. That’s theoretically possible, but the information would have had to be in place at least 600 years ago. As the authors note, the experiment would have to have been given its insider information around the time the Gutenberg Bible was being printed.

So we can safely assume that there aren’t any hidden variables within the system, and quantum theory acts just as we’d expect.

Paper: Johannes Handsteiner, et al. Cosmic Bell Test: Measurement Settings from Milky Way Stars. arXiv:1611.06985 [quant-ph] (2017)

Violet Sky – https://briankoberlein.com/2017/01/17/violet-sky/ – Tue, 17 Jan 2017

You might know why the sky is blue, but why isn't the sky violet?

Why is the sky blue? It’s a common question asked by children, and the simple answer is that blue light is scattered by our atmosphere more than red light, hence the blue sky. That’s basically true, but then why don’t we see a violet sky?

The blue sky we observe depends upon two factors: how sunlight interacts with Earth’s atmosphere, and how our eyes perceive that light.

When light interacts with our atmosphere it can scatter, similar to the way one billiard ball can collide with another, making them go off in different directions. The main form of atmospheric scattering is known as Rayleigh scattering. If you imagine photons bouncing off molecules of air, that’s a rough approximation. But photons and air molecules aren’t billiard balls, so there are differences. One of these is that the amount of scattering depends upon the wavelength (or color) of the light. The shorter the wavelength, the more the light scatters. Since the rainbow of colors going from red to violet corresponds with wavelengths of light going from long to short, the shorter blue wavelengths are scattered more. So our sky appears blue because of all the scattered blue light. This is also the reason why sunsets can appear red. Blue light is scattered away, leaving a reddish looking sunset.
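
The standard result for Rayleigh scattering is that the scattered intensity scales as 1/λ⁴, so a modest difference in wavelength makes a big difference in scattering. A quick calculation, with representative wavelengths for each color:

    # Rayleigh scattering strength goes as 1 / wavelength^4.
    # Relative scattering for a few colors, normalized to red light at 650 nm.
    wavelengths_nm = {"violet": 400, "blue": 450, "green": 550, "red": 650}

    red = wavelengths_nm["red"]
    for color, wl in wavelengths_nm.items():
        print(f"{color:6s} ({wl} nm): {(red / wl) ** 4:.1f} times the scattering of red")

Violet light at 400 nm scatters roughly seven times more strongly than red light at 650 nm, and blue at 450 nm about four times more, which sets up the puzzle in the next paragraph.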

But if that’s the case, why isn’t the sky violet? Sure, blue light is scattered more than red or green, but violet light has an even shorter wavelength, so violet should be scattered more than blue. Shouldn’t the sky appear violet, or at least a violet-blue? It turns out our sky is violet, but it appears blue because of the way our eyes work.

Color sensitivity of the cones and rods of the human eye. Credit: Wikipedia

We don’t see individual wavelengths. Instead, the retinas of our eyes have three types of color sensitive cells known as cones. One type is most sensitive to red wavelengths, while the other two are most sensitive to green and blue wavelengths. When we look at something, the strength of the signal from each type of cone allows our brains to determine the colors we see. These colors roughly correspond to the actual wavelengths we see, but there are subtle differences. While each type of cone has its peak sensitivity at red, green, or blue, they also detect light of other colors. Light with “blue” wavelengths stimulates the blue cones the most, but it also stimulates the red and green cones just a little bit. If it really was blue light that was scattered most, then we’d see the sky as a slightly greenish blue.

We don’t see the greenish hue, however, because of the sky’s violet light. Violet is scattered most by Earth’s atmosphere, but the blue cones in our eyes aren’t as sensitive to it. While our red cones aren’t good at seeing blue or violet light, they are a bit more sensitive to violet than our green cones. If only violet wavelengths were scattered, then we would see violet light with a reddish tinge. But when you combine the blue and violet light of the sky, the greenish tinge of blue and reddish tinge of violet are about the same, and wash out. So what we see is a pale blue sky.

As far as wavelengths go, Earth’s sky really is a bluish violet. But because of our eyes we see it as pale blue.

Antimatter Astronomy – https://briankoberlein.com/2017/01/02/antimatter-astronomy/ – Mon, 02 Jan 2017

Matter and antimatter emit the same spectra of light. So how do we know that distant galaxies aren't made of antimatter?

In astronomy we study distant galaxies by the light they emit. Just as the stars of a galaxy glow bright from the heat of their fusing cores, so too does much of the gas and dust at different wavelengths. The pattern of wavelengths we observe tells us much about a galaxy, because atoms and molecules emit specific patterns of light. Their optical fingerprint tells us the chemical composition of stars and galaxies, among other things. It’s generally thought that distant galaxies are made of matter, just like our own solar system, but recently it’s been demonstrated that anti-hydrogen emits the same type of light as regular hydrogen. In principle, a galaxy of antimatter would emit the same type of light as a similar galaxy of matter, so how do we know that a distant galaxy really is made of matter? 

The basic difference between matter and antimatter is charge. Atoms of matter are made of positively charged nuclei surrounded by negatively charged electrons, while antimatter consists of negatively charged nuclei surrounded by positively charged positrons (anti-electrons). In all of our interactions, both in the lab and when we’ve sent probes to other planets, things are made of matter. So we can assume that most of the things we see in the Universe are also made of matter.

However, when we create matter from energy in the lab, it is always produced in pairs. We can, for example, create protons in a particle accelerator, but we also create an equal amount of anti-protons. This is due to a symmetry between matter and antimatter, and it leads to a problem in cosmology. In the early Universe, when the intense energy of the big bang produced matter, did it also produce an equal amount of antimatter? If so, why do we see a Universe that’s dominated by matter? The most common explanation is that there is a subtle difference between matter and antimatter. This difference wouldn’t normally be noticed, but on a cosmic scale it means the big bang produced more matter than antimatter.

But suppose the Universe does have an equal amount of matter and antimatter, but early on the two were clumped into different regions. While our corner of the Universe is dominated by matter, perhaps there are distant galaxies or clusters of galaxies that are dominated by antimatter. Since the spectrum of light from matter and antimatter is the same, a distant antimatter galaxy would look the same to us as if it were made of matter. Since we can’t travel to distant galaxies directly to prove they’re made of matter, how can we be sure antimatter galaxies don’t exist?

One clue comes from the way matter and antimatter interact. Although both behave much the same on their own, when matter and antimatter collide they can annihilate each other to produce intense gamma rays. Although the vast regions between galaxies are mostly empty, they aren’t complete vacuums. Small amounts of gas and dust drift between galaxies, creating an intergalactic wind. If a galaxy were made of antimatter, any small amounts of matter from the intergalactic wind would annihilate with antimatter on the outer edges of the galaxy and produce gamma rays. If some galaxies were matter and some antimatter, we would expect to see gamma ray emissions in the regions between them. We don’t see that. Not between our Milky Way and other nearby galaxies, and not between more distant galaxies. Since our region of space is dominated by matter, we can reasonably assume that other galaxies are matter as well.
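
The gamma rays in question have a characteristic energy scale set by E = mc². As a rough illustration (the numbers below are standard particle masses, and real proton-antiproton annihilation proceeds messily through pions rather than straight to two photons):

    # Energy released per annihilation, from E = m c^2, expressed in MeV.
    c = 299_792_458            # m/s
    eV = 1.602176634e-19       # joules per electronvolt
    m_electron = 9.109e-31     # kg
    m_proton = 1.673e-27       # kg

    for name, m in [("electron-positron", m_electron), ("proton-antiproton", m_proton)]:
        E_MeV = 2 * m * c**2 / eV / 1e6     # total rest-mass energy of the pair
        print(f"{name:18s}: about {E_MeV:7.1f} MeV per annihilation")

Electron-positron annihilation gives a pair of 511 keV photons, and proton-antiproton annihilation releases nearly 1.9 GeV, so the boundary between a matter region and an antimatter region would glow at energies gamma-ray telescopes are built to detect.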

It’s still possible that our visible universe just happens to be matter dominated. There may be other regions beyond the visible universe that are dominated by antimatter, and they’re simply too far away for us to see. That’s one possible solution to the matter-antimatter cosmology problem. But that would be an odd coincidence given the scale of the visible universe.

So there might be distant antimatter galaxies in the Universe, but we can be confident that the galaxies we do see are made of matter just like us.

Through The Looking Glass – https://briankoberlein.com/2016/12/20/through-the-looking-glass/ – Tue, 20 Dec 2016

Light from anti-hydrogen has been observed for the first time.

Hydrogen is the most abundant element in the Universe. It consists of a single proton paired with an electron. Since the proton and electron are bound together, the electron must reside in particular energy states. When the electron transitions from a higher energy state to a lower one, it releases light with a specific color. Each energy transition for the electron corresponds to a particular color, and together they form the emission spectrum of hydrogen. All atoms have nuclei of protons (and usually neutrons) bound to electrons, and the resulting emission spectra allow us to determine what makes up distant objects. It’s one of the most powerful tools in astronomy.
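
For hydrogen, those colors can be computed directly from the Rydberg formula, 1/λ = R(1/n₁² − 1/n₂²). A short script for the visible (Balmer) lines, where the electron drops down to the second energy level:

    # Visible (Balmer) emission lines of hydrogen from the Rydberg formula:
    # 1/lambda = R * (1/n_low^2 - 1/n_high^2), with n_low = 2.
    R = 1.0968e7   # Rydberg constant for hydrogen, in 1/m

    for n_high in range(3, 7):
        inv_wavelength = R * (1 / 2**2 - 1 / n_high**2)
        print(f"n = {n_high} -> 2 : {1e9 / inv_wavelength:.1f} nm")

The script reproduces the familiar red, blue-green, and violet hydrogen lines near 656, 486, 434, and 410 nm, the same fingerprint astronomers look for in the spectra of stars and galaxies.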

One of the ways we distinguish particles is by their charge. Protons are positively charged, while electrons are negatively charged. In 1932 Carl David Anderson discovered a particle that had the same mass as an electron, but with a positive charge. Later it was discovered that protons also had a twin with the same mass, but negatively charged. Such charge-reversed particles came to be known as antimatter. We found that matter and antimatter could annihilate each other to produce intense gamma ray light, and intense energy could create pairs of particles consisting of one matter and one antimatter particle.

In principle, from basic antimatter particles we could create anti-atoms and anti-molecules. If an anti-proton is bound to a positron (anti-electron) it would create anti-hydrogen. According to our understanding of physics, anti-atoms should be identical to their matter twins except for the reversal of their charges. The positrons of anti-hydrogen should be quantized into the same specific energy states as electrons in hydrogen, and when they transition between energy states they should create the same emission spectrum as hydrogen. At least that’s the theory. Proving it is a much harder challenge.

Although we’ve been creating antimatter in the lab since the 1930s, it wasn’t until 1995 that we were finally able to create anti-hydrogen. That’s because the antimatter we create tends to form with lots of kinetic energy, and getting a positron and anti-proton to slow down enough to bind together is difficult. Once they do form anti-hydrogen we face a second challenge, specifically that anti-hydrogen, like hydrogen, is electrically neutral. When particles are charged we can easily push them around with electric and magnetic fields. It’s harder to contain neutral antimatter.

But recently we’ve been able to create and hold hundreds of anti-hydrogen atoms for more than 15 minutes. That might not seem like much, but it means we can finally start doing some experiments on anti-hydrogen. In a recent experiment, anti-hydrogen atoms were bombarded with laser light. The positrons of those atoms absorbed some of the light, putting them in excited states. After a bit the positrons transitioned back to a lower energy state and released light of their own. It’s the first case of light being emitted by anti-atoms in the lab. The team that achieved this also measured one emission line of anti-hydrogen, and found that it was the same as that of regular hydrogen.

Although this is only a basic first test of light from anti-hydrogen, it’s a pretty significant achievement. As we’re able to create and store more anti-hydrogen for longer periods of time, we’ll finally be able to test whether anti-hydrogen has subtle differences in its spectrum that points towards new physics through the looking glass.

Paper: M. Ahmadi, et al. Observation of the 1S–2S transition in trapped antihydrogen. Nature (2016) doi:10.1038/nature21040

Doing The Wave – https://briankoberlein.com/2016/12/04/doing-the-wave/ – Sun, 04 Dec 2016

The pilot wave model of quantum theory is an interesting idea, but it won't save the EMDrive.

There has been a lot of digital ink spilled over the recent paper on the reactionless thrust device known as the EMDrive. While it’s clear that a working EM Drive would violate well established scientific theories, what isn’t clear is how such a violation might be resolved. Some have argued that the thrust could be an effect of Unruh radiation, but the authors of the new paper argue instead for a variation on quantum theory known as the pilot wave model. 

One of the central features of quantum theory is its counter-intuitive behavior often called particle-wave duality. Depending on the situation, quantum objects can have characteristics of a wave or characteristics of a particle. This is due to the inherent limitations on what we can know about quanta. In the usual Copenhagen interpretation of quantum theory, an object is defined by its wavefunction. The wavefunction describes the probability of finding a particle in a particular location. The object is in an indefinite, probabilistic state described by the wavefunction until it is observed. When it is observed, the wavefunction collapses, and the object becomes a definite particle with a definite location.

While the Copenhagen interpretation is not the best way to visualize quantum objects, it captures the basic idea that quanta are local, but can be in an indefinite state. This differs from classical physics (such as Newtonian theory), where objects are both local and definite. We can know, for example, where a baseball is and what it is doing at any given time.

The pilot wave model handles quantum indeterminacy a different way. Rather than a single wavefunction, quanta consist of a particle that is guided by a corresponding wave (the pilot wave). Since the position of the particle is determined by the pilot wave, it can exhibit the wavelike behavior we see experimentally. In pilot wave theory, objects are definite, but nonlocal. Since the pilot wave model gives the same predictions as the Copenhagen approach, you might think it’s just a matter of personal preference. Either maintain locality at the cost of definiteness, or keep things definite by allowing nonlocality. But there’s a catch.

Although the two approaches seem the same, they have very different assumptions about the nature of reality. Traditional quantum mechanics argues that the limits of quantum theory are physical limits. That is, quantum theory tells us everything that can be known about a quantum system. Pilot wave theory argues that quantum theory doesn’t tell us everything. Thus, there are “hidden variables” within the system that quantum experiments can’t reveal. In the early days of quantum theory this was a matter of some debate; however, both theoretical arguments and experiments such as the EPR experiment seemed to show that hidden variables couldn’t exist. So, except for a few proponents like David Bohm, the pilot wave model faded from popularity. But in recent years it’s been demonstrated that the arguments against hidden variables aren’t as strong as we once thought. This, combined with research showing that small droplets of silicone oil can exhibit pilot wave behavior, has brought pilot waves back into play.

How does this connect to the latest EM Drive research? In a desperate attempt to demonstrate that the EM Drive doesn’t violate physics after all, the authors spend a considerable amount of time arguing that the effect could be explained by pilot waves. Basically they argue that not only is pilot wave theory valid for quantum theory, but that pilot waves are the result of background quantum fluctuations known as zero point energy. Through pilot waves the drive can tap into the vacuum energy of the Universe, thus saving physics! To my mind it’s a rather convoluted and weak argument. The pilot wave model of quantum theory is interesting and worth exploring, but using it as a way to get around basic physics is weak tea. Trying to cobble together a theoretical way in which it could work has no value without the experimental data to back it up.

At the very core of the EM Drive debate is whether it works or not, so the researchers would be best served by demonstrating clearly that the effect is real. While they have made some interesting first steps, they still have a long way to go.

Paper: Harris, D.M., et al. Visualization of hydrodynamic pilot-wave phenomena, J. Vis. (2016) DOI 10.1007/s12650-016-0383-5

Jury Of One’s Peers – https://briankoberlein.com/2016/11/25/jury-ones-peers/ – Fri, 25 Nov 2016

The EM drive has finally passed peer review. What now?

The reactionless thruster known as the EM Drive has stirred heated debate over the past few years. If successful it could provide a new and powerful method to take our spacecraft to the stars, but it has faced harsh criticism because the drive seems to violate the most fundamental laws of physics. One of the biggest criticisms has been that the work wasn’t submitted for peer review, and until that happens it shouldn’t be taken seriously. Well, this week that milestone was reached with a peer-reviewed paper. The EM Drive has officially passed peer review. 

It’s important to note that passing peer review means that experts have found the methodology of the experiments reasonable. It doesn’t guarantee that the results are valid, as we’ve seen with other peer-reviewed research such as BICEP2. But this milestone shouldn’t be downplayed either. With this new paper we now have a clear overview of the experimental setup and its results. This is a big step toward determining whether the effect is real or an odd set of secondary effects. That said, what does the research actually say?

The basic idea of the EM Drive is an asymmetrical cavity where microwaves are bounced around inside. Since the microwaves are trapped inside the cavity, there is no propellant or emitted electromagnetic radiation to push the device in a particular direction, so standard physics says there should be no thrust on the device. And yet, for reasons even the researchers can’t explain, the EM Drive does appear to experience thrust when activated. The main criticism has focused on the fact that this device heats up when operated, and this could warm the surrounding air, producing a small thrust. In this new work the device was tested in a near vacuum, eliminating a major criticism.

The relation of thrust to power for the EM Drive. Credit: Smith, et al.

What the researchers found was that the device appears to produce a thrust of 1.2 ± 0.1 millinewtons per kilowatt of power in a vacuum, which is similar to the thrust seen in air. By comparison, ion drives can provide a much larger 60 millinewtons per kilowatt. But ion drives require fuel, which adds mass and limits range. A functioning EM drive would only require electric power, which could be generated by solar panels. An optimized engine would also likely be even more efficient, which could bring it into the thrust range of an ion drive.
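
To get a feel for those thrust-to-power numbers, here is a back-of-the-envelope comparison. The spacecraft mass and available power are made-up illustrative values, not figures from the paper, and the ion drive would of course also be spending propellant mass the whole time.

    # Rough scale of the reported thrust figures for a hypothetical spacecraft.
    power_kw = 10.0        # assumed available electrical power, kW
    mass_kg = 1000.0       # assumed spacecraft mass, kg

    for name, mn_per_kw in [("EM Drive (reported)", 1.2), ("ion drive", 60.0)]:
        thrust_n = mn_per_kw * 1e-3 * power_kw          # newtons
        accel = thrust_n / mass_kg                      # m/s^2
        dv_per_year = accel * 365.25 * 86400            # m/s of velocity change per year
        print(f"{name:20s}: {thrust_n * 1e3:6.1f} mN, about {dv_per_year:6.0f} m/s per year")

Even the reported EM Drive figure would only add a few hundred meters per second of velocity per year for this toy spacecraft; the claim matters not because the thrust is large, but because any thrust at all shouldn't be there.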

While all of this is interesting and exciting, there are still reasons to be skeptical. As the authors point out, even this latest vacuum test doesn’t eliminate all the sources of error. Things such as thermal expansion of the device could account for the results, for example. Now that the paper is officially out, other possible error sources are likely to be raised. There’s also the fact that there’s no clear indication of how such a drive can work. While the lack of theoretical explanation isn’t a deal breaker (if it works, it works), it remains a big puzzle to be solved.  The fact remains that experiments that seem to violate fundamental physics are almost always wrong in the end.

I’ve been pretty critical of this experiment from the get go, and I remain highly skeptical. However, even as a skeptic I have to admit the work is valid research. This is how science is done if you want to get it right. Do experiments, submit them to peer review, get feedback, and reevaluate. For their next trick the researchers would like to try the experiment in space. I admit that’s an experiment I’d like to see.

Paper: Harold White, et al. Measurement of Impulsive Thrust from a Closed Radio-Frequency Cavity in Vacuum. Journal of Propulsion and Power. DOI: 10.2514/1.B36120 (2016)

The Born Identity – https://briankoberlein.com/2016/11/16/the-born-identity/ – Wed, 16 Nov 2016

The Born rule is a fundamental assumption of quantum theory. But could it be wrong?

Quantum theory is probabilistic by nature. Because of the fuzzy effects of quantum indeterminacy, the equations of quantum mechanics can’t tell us exactly what an object is doing, but only what the likely outcome will be when we interact with it. This probability is determined by the Born rule (named after physicist Max Born). The rule has various forms, but in the most common approach it means that squaring the wavefunction of an object yields the probability of a particular outcome. The Born rule works extraordinarily well, making quantum theory the most accurate scientific theory we have, but it is also an assumption. It’s a postulate of quantum theory rather than being derived formally from the model. So what if it’s wrong?

Even if it is wrong on some level, the great success of quantum physics demonstrates that it certainly works in most cases. But we scientists love to test our assumptions even when they work, so there have been attempts to disprove the Born rule. One approach looked at a triple-slit experiment, which is a variation of the famous double-slit experiment.

Interference patterns from a double slit experiment. Credit: Pieter Kuiper

In the double slit experiment, quantum objects such as photons or electrons are beamed through two closely spaced slits. Since we don’t know which slit each object passes through, the possibilities overlap to produce an interference pattern rather than two sharp lines. According to the Born rule, even when we run the experiment one object at a time, the probability distribution of each object follows this pattern. This is exactly what we see experimentally, making it an excellent demonstration of quantum theory.

The triple slit experiment uses three small openings instead of two. While it seems like a trivial change, if done correctly it allows for secondary interactions that could in principle violate the Born rule. Basically, if you just square the total wavefunction of the three slits, you get one probability distribution. If you calculate the secondary interactions you get a different distribution. The difference is extremely small, but in 2010 the experiment was performed, and found the Born rule held within experimental limits.
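
The quantity being tested has a neat closed form. If probabilities come from squaring summed amplitudes, as the Born rule says, then a three-slit pattern contains no genuine three-path interference: the combination below (often called the Sorkin parameter) is identically zero, no matter what the amplitudes are. A small numerical check, using made-up random amplitudes:

    import numpy as np

    rng = np.random.default_rng(0)

    def born_probability(*amplitudes):
        # Born rule: probability is the squared magnitude of the summed amplitudes.
        return abs(sum(amplitudes)) ** 2

    # Random complex amplitudes for the three slits at one point on the screen.
    a, b, c = rng.normal(size=3) + 1j * rng.normal(size=3)

    kappa = (born_probability(a, b, c)
             - born_probability(a, b) - born_probability(a, c) - born_probability(b, c)
             + born_probability(a) + born_probability(b) + born_probability(c))
    print(f"Sorkin parameter: {kappa:.2e}")   # zero, up to floating-point rounding

A measured value of this parameter that differs significantly from zero would be a violation of the Born rule, and that is what the 2010 experiment looked for.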

While this would seem to confirm the Born rule, the precision of the experiment was only about 1 part in 100, which isn’t very high. Unfortunately, even getting that level of precision is difficult with the triple-slit experiment. That’s because the test only works if the wavelength of the light used is less than the width of the slits. But now a new paper proposes a different approach that might yield even greater precision.

This new approach is a double slit experiment with a twist. Rather than simply letting an object pass through the two slits, an extra level is introduced to shift objects from one slit to the other. This can be done for either slit or both, even without knowing which slit the object passes through. According to the Born rule, any such shift should have no effect on the outcome. If the shift affects the outcome, then the Born rule is violated. Doing this kind of shift in a real experiment will be tricky, but it’s not limited by the wavelengths of the objects, so potentially it would be much more precise than previous experiments.

Given the power of the Born rule thus far, I wouldn’t bet on seeing a violation. But this kind of experiment is a win-win. Either the Born rule continues to reign, or we discover a subtle violation that could lead to a better understanding of things like quantum gravity.

Paper: Sinha, U., et al. Ruling Out Multi-Order Interference in Quantum Mechanics. Science 329, 418-421 (2010).

Paper: James Q. Quach. Which-way double slit experiments and Born rule violation. arXiv:1610.06401 (2016).

Close Enough – https://briankoberlein.com/2016/11/07/close-enough/ – Mon, 07 Nov 2016

To make a black hole, do we have to squeeze mass all the way to its limit, or do we just have to get close enough?

A black hole is an object that has gravitationally collapsed under its own weight. It could be formed from the remains of a dead star, a dense central region of a galaxy, or perhaps even a small fluctuation in the early dense moments of the cosmos. Regardless of the cause, the trick is to compress a large enough mass into a small enough volume. In other words, if the density of matter is high enough, it will collapse into a black hole.

The critical size for a given mass, known as the Schwarzschild radius, is pretty easy to calculate for a non-rotating black hole. It turns out to be R = 2GM/c², where G is Newton’s gravitational constant, c is the speed of light, and M is the mass. Compress the mass into a sphere of that radius, and you get a black hole. At least that’s how the story is told. Technically, if you compress a mass into a sphere of that volume, then it already is a black hole. But is there a minimum volume you could reach so that the mass is fated to become a black hole? Do you have to actively squeeze the mass all the way to a black hole, or can you squeeze it to a point and let nature take its course?
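
The formula is simple enough to evaluate directly. A quick sketch for two familiar masses:

    # Schwarzschild radius R = 2 G M / c^2 for a couple of familiar masses.
    G = 6.674e-11          # gravitational constant, m^3 / (kg s^2)
    c = 299_792_458        # speed of light, m/s

    def schwarzschild_radius(mass_kg):
        return 2 * G * mass_kg / c**2

    for name, mass in [("Sun", 1.989e30), ("Earth", 5.972e24)]:
        print(f"{name:5s}: R = {schwarzschild_radius(mass):.3e} m")

The Sun would have to be squeezed to a radius of about 3 kilometers, and Earth to about 9 millimeters. The question above is whether you have to do all of that squeezing yourself.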

This turns out to be a very interesting question. If we simply compress a certain amount of matter into an ever smaller volume, the matter itself will try to push back. As matter is squeezed, it heats up, so eventually our matter would heat to the point of vaporizing, and the gas pressure would try to oppose us. Squeeze hard enough and the nuclei of the material will start fusing, which would heat the mass further and generate more pressure. Squeeze even harder and the matter will eventually reach a point where the electrons of the material are moving at nearly the speed of light, and the quantum pressure of electrons will push against you. This is what happens within a white dwarf star. But there’s a limit to how strongly electron pressure can push back, known as the Chandrasekhar limit. If we squeeze matter harder than that, the electrons and nuclei of the material will collapse together, forming a sea of neutrons.

Since fast moving neutrons occupy less space than fast moving electrons, for a time the mass gets easier to compress. But eventually the neutrons start approaching the speed of light, and push against each other in much the same way as the electrons did. This neutron pressure is what keeps neutron stars from collapsing on themselves. As with electrons, there’s a limit to how strongly neutrons can push back, known as the Tolman-Oppenheimer-Volkoff (TOV) limit. Squeeze the mass beyond that limit, and the neutrons will collapse into each other.

According to our current understanding of physics, beyond the TOV limit the matter will collapse into a black hole. The observed upper mass of neutron stars is about twice the mass of our Sun. Such neutron stars are about 20 kilometers in diameter, while the Schwarzschild radius for such a mass is about 6 kilometers. This would imply that if we squeezed mass into a radius about 1.7 times larger than the Schwarzschild radius, then it’s doomed to become a black hole.

But what if the TOV limit isn’t the last line of defense against a black hole? What if the quarks that make up protons and neutrons behave in ways we don’t expect at really high densities, or what if quarks are made of something even more fundamental with an even stronger limit? Is it possible that something could oppose our squeezing? Could it create so much pressure that a black hole is impossible to form?

It turns out the answer is no, and the reason is because of relativity. One of the key aspects of relativity is that energy and mass are related. Mass can be converted into energy, and energy can be converted into mass. When matter generates pressure to oppose our squeezing, that pressure has a certain energy, and that energy has a gravitational weight just like mass. So the more strongly matter pushes against us, the more gravity helps us. This is a game of diminishing returns, and there is a point where no matter how strongly the mass opposes us, gravity is even stronger. This limit is known as the Buchdahl limit. If the mass is spherical and of uniform density, this limit is 9/8 times the Schwarzschild radius. Squeeze past that point, and nothing can oppose the eventual formation of a black hole. There are more general calculations that don’t assume uniform density, but the end result is similar. So it turns out we don’t have to squeeze a ball of mass all the way to its Schwarzschild radius to make a black hole. We just have to get within about 10% of that radius, and the mass will collapse into a black hole on its own.
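
Putting the numbers from this post side by side makes the point. Using the rough neutron-star figures quoted above (two solar masses, about a 10 kilometer radius):

    # Compactness comparison: neutron star versus the Buchdahl limit.
    G = 6.674e-11          # m^3 / (kg s^2)
    c = 299_792_458        # m/s
    M_sun = 1.989e30       # kg

    M = 2 * M_sun                    # roughly the heaviest observed neutron stars
    R_s = 2 * G * M / c**2           # Schwarzschild radius for that mass
    R_neutron_star = 10e3            # ~10 km radius (20 km diameter)

    print(f"Schwarzschild radius:   {R_s / 1e3:.1f} km")
    print(f"Neutron star radius:    {R_neutron_star / R_s:.2f} x Schwarzschild radius")
    print(f"Buchdahl limit:         {9 / 8:.3f} x Schwarzschild radius")

A maximum-mass neutron star already sits at about 1.7 Schwarzschild radii, while the Buchdahl limit says collapse becomes unavoidable at 1.125. The known physics of neutron matter gives up well before gravity runs out of patience.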

While this is a fun theoretical game, it is an excellent example of why black holes exist.

Quantum Teleportation Across The Dark Web – https://briankoberlein.com/2016/10/30/quantum-teleportation-across-dark-web/ – Sun, 30 Oct 2016

Quantum teleportation has been achieved over current internet infrastructure.

Quantum teleportation brings to mind Star Trek’s transporter, where crew members are disassembled in one location to be reassembled in another. Real quantum teleportation is a much more subtle effect where information is transferred between entangled quantum states. It’s a quantum trick that could give us the ultimate in secure communication. While quantum teleportation experiments have been performed countless times in the lab, doing it in the real world has proved a bit more challenging. But a recent experiment using a dark fibre portion of the internet has brought quantum teleportation one step closer to real world applications. 

The backbone of the internet is a network of optical fibre. Everything from your bank transactions to pictures of your cat travels as beams of light through this fibre network. However there is much more fibre that has been laid than is currently used. This unused portion of the network is known as dark fibre. Other than not being currently used, the dark fibre network has the same properties as the web we currently use. This new experiment used a bit of this dark web in Calgary to teleport a photon state under real world conditions.

The basic process of quantum teleportation begins with two objects (in this case photons) that are quantumly entangled. This basically means the states of these two objects are connected in such a way that a measurement of one object affects the state of the other. For quantum teleportation, one of these entangled objects is measured in combination with the object to be “teleported” (another photon). The result of this measurement is then sent to the other location, where a similar combined measurement is made. Since the entangled objects are part of both measurements, quantum information can be “teleported.” This might seem like an awkward way to send information, but it makes for a great way to keep your messages secret. Using this method, Alice can basically encrypt a message using the entangled objects and send the encrypted message to Bob, who can then make his own measurement of the entangled state to decode the message.
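
For the curious, here is a toy state-vector simulation of the standard one-qubit teleportation circuit. It is a sketch of the textbook protocol, not a model of the fibre experiment: qubit 0 holds the state to be teleported, and qubits 1 and 2 are the entangled pair, held by Alice and Bob respectively.

    import numpy as np

    rng = np.random.default_rng(1)

    # Single-qubit gates and projectors.
    I = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    P0 = np.diag([1, 0]).astype(complex)
    P1 = np.diag([0, 1]).astype(complex)

    # A random state to teleport, and the shared Bell pair (|00> + |11>)/sqrt(2).
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    state = np.kron(psi, bell)          # full 3-qubit state, index = q0*4 + q1*2 + q2

    # Alice entangles her two qubits: CNOT (control 0, target 1), then H on qubit 0.
    cnot01 = np.kron(P0, np.kron(I, I)) + np.kron(P1, np.kron(X, I))
    h0 = np.kron(H, np.kron(I, I))
    state = h0 @ (cnot01 @ state)

    # Alice measures qubits 0 and 1 and gets two classical bits.
    probs = [sum(abs(state[m0*4 + m1*2 + q2])**2 for q2 in (0, 1))
             for m0 in (0, 1) for m1 in (0, 1)]
    m0, m1 = divmod(int(rng.choice(4, p=probs)), 2)

    # Bob's qubit after the measurement, plus the corrections the two bits dictate.
    bob = np.array([state[m0*4 + m1*2 + 0], state[m0*4 + m1*2 + 1]])
    bob /= np.linalg.norm(bob)
    if m1:
        bob = X @ bob
    if m0:
        bob = Z @ bob

    print("Alice's classical bits:", m0, m1)
    print("Bob now holds the original state:", np.allclose(bob, psi))

Bob ends up with the original state, but only after receiving Alice's two ordinary classical bits, which is why teleportation can't send information faster than light and why the protocol always needs a classical channel alongside the quantum one.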

Using dark fibre to teleport photons. Credit: Raju Valivarthi, et al.

This new experiment used a variation of this method using three observers rather than two. Using the Bob and Alice analogy, Bob and Alice each make measurements of an entangled state and a photon, about 8 kilometers from each other. Their results are then sent to Charlie, who combines the two results to achieve quantum teleportation. This method assures that the experiment extends beyond a single lab location, and it was done using existing dark fibre and wavelengths of light commonly used in current fibre internet.

Overall the experiment demonstrates that quantum teleportation can be used as a way to encrypt messages over the web. The next big challenge will be to find a way to make it practical enough for everyone to use.

Paper: Raju Valivarthi, et al. Quantum teleportation across a metropolitan fibre network. Nature Photonics 10, 676–680 (2016) DOI:10.1038/nphoton.2016.180

Why It Takes A Big Rocket To Reach Mars – https://briankoberlein.com/2016/10/17/takes-big-rocket-reach-mars/ – Mon, 17 Oct 2016

SpaceX's Mars rocket will be huge. It will have to be to reach Mars.

SpaceX has announced its Interplanetary Transport System (ITS), with the goal of sending humans to Mars. While many questions remain about how such a mission will be achieved, one thing that’s very clear is that the ITS will be the biggest rocket ever constructed. It has to be. Basic physics requires it.

The ITS is designed to produce more than 13 million kilograms-force of thrust at sea level (roughly 128 million newtons), compared to the roughly 3.5 million kilograms-force (about 34 million newtons) of the Saturn V rockets used to send Americans to the Moon, all while standing only about 10% taller. Such a big increase in thrust versus weight is necessary, because it determines not only how much mass you can lift into Earth orbit, but whether you can get that mass all the way to Mars.

Delta-V needed to reach Mars. Credit: Wikipedia user Wolfkeeper

It all comes down to delta-V, or how much you can change the velocity of your rocket. When it comes to reaching Earth orbit, bigger is better. The SpaceX ITS should be capable of lifting up to 550 tonnes of payload into low Earth orbit, compared to the 140 tonnes of the Saturn V. This is necessary because a trip to Mars isn’t a few-day trip to the Moon. It will require a larger crew and significantly more food and resources.

Once in Earth orbit, getting to Mars will require even more rocket power to provide what is known as delta-V. This is the amount of speed a spacecraft needs to gain or lose to reach its destination. It takes much more delta-V to reach the surface of Mars than it does the surface of the Moon. To reach Mars you not only have to overcome Earth’s gravity, you have to overcome the Sun’s pull as you travel toward Mars. You also have to account for the fact that the orbital speed of Mars is slower than the orbital speed of Earth. Finally you have to overcome the gravity of Mars to land softly on its surface. All of this adds to the total amount of needed delta-V. To meet this need SpaceX plans to refuel the ITS in Earth orbit with a second launch.
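
The reason delta-V translates into sheer rocket size is the Tsiolkovsky rocket equation: the ratio of a rocket's fueled mass to its empty mass grows exponentially with the delta-V you ask of it, m₀/m₁ = exp(Δv / vₑ). The exhaust velocity and delta-V figures below are rough illustrative assumptions, not SpaceX's numbers.

    import math

    # Tsiolkovsky rocket equation: mass ratio needed for a given delta-v.
    v_exhaust = 3300.0     # m/s, a rough figure for a modern chemical rocket engine

    for label, delta_v in [("reach low Earth orbit", 9_400.0),
                           ("LEO plus Mars transfer and landing", 15_000.0)]:
        mass_ratio = math.exp(delta_v / v_exhaust)
        print(f"{label:35s}: fueled mass ~ {mass_ratio:4.0f} x empty mass")

Because the relationship is exponential, asking a single stage to do the whole job quickly becomes absurd, which is exactly why the ITS plan splits the delta-V across staging and that second refueling launch.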

There are ways to minimize your delta-V requirements for an interplanetary mission. One way is to make a close flyby of a different planet. Basically, if you approach a planet in the direction of its orbit (coming up from behind, if you will), then the gravity between the planet and your spacecraft will cause the spacecraft to speed up at the cost of slowing down the planet by a tiny, tiny amount. Making a flyby in the opposite direction can cause your spacecraft to slow down. This costs you nothing in terms of fuel, but takes time because you need to orbit the Sun in just the right way. It’s a common trick used for robotic spacecraft, where we use a flyby of Earth to reach Mars or Jupiter, or a flyby of Jupiter to reach the outer solar system.

A Hohmann orbit between Earth and Mars. Image by the author.

Flybys are cheap and easy for space probes, but they can add years to the time it takes to reach your destination. That’s a big problem for a crewed mission. So the alternative is to look at optimized orbital trajectories. For example, about every two years the positions of Earth and Mars are ideally suited so that a trip needs much less delta-V. This was actually discovered in 1925 by Walter Hohmann, who proposed a trajectory now known as the Hohmann transfer orbit. You could, for example, build a large spacecraft in such an orbit and use it as a shuttle between Earth and Mars. Such an idea was used in the book and movie The Martian.
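
Here is a sketch of the numbers behind a Hohmann transfer, using the vis-viva equation v² = μ(2/r − 1/a). It treats the orbits of Earth and Mars as circular and ignores the planets' own gravity wells, so these are just the two heliocentric burns, not a full mission budget.

    import math

    mu_sun = 1.327e20        # Sun's gravitational parameter, m^3/s^2
    r_earth = 1.496e11       # radius of Earth's orbit, m
    r_mars = 2.279e11        # radius of Mars' orbit, m
    a_transfer = (r_earth + r_mars) / 2     # semi-major axis of the transfer ellipse

    def vis_viva(r, a):
        return math.sqrt(mu_sun * (2 / r - 1 / a))

    dv_depart = vis_viva(r_earth, a_transfer) - vis_viva(r_earth, r_earth)
    dv_arrive = vis_viva(r_mars, r_mars) - vis_viva(r_mars, a_transfer)
    trip_time = math.pi * math.sqrt(a_transfer**3 / mu_sun)

    print(f"burn leaving Earth's orbit: {dv_depart / 1000:.2f} km/s")
    print(f"burn matching Mars' orbit:  {dv_arrive / 1000:.2f} km/s")
    print(f"one-way trip time:          {trip_time / 86400:.0f} days")

The two burns come to roughly 3 km/s each and the coast lasts about 260 days, which is why launch opportunities that line up this cheap transfer only come around every 26 months or so.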

There are other useful tricks, such as using a planet’s atmosphere to “aerobrake” a spacecraft, significantly reducing the delta-V it needs once it reaches the planet. Since both Earth and Mars have atmospheres this can be used for landing spacecraft. You can also modify the flyby method by thrusting your spacecraft just as it makes its closest approach, in what is known as an Oberth maneuver (another trick used in The Martian). But these will only take you so far. To reach the surface of Mars in a reasonable time, any rocket will require more delta-V than we’ve ever had, which is why the ITS has to be so big.

The one up-side of all this is that once SpaceX, Blue Origin, or NASA builds a rocket with enough power to send humans to Mars, lots of other destinations open up as well. The delta-V requirements to reach the asteroids, Jupiter or Saturn aren’t significantly different. If we can land on Mars, we can reach the moons of Jupiter, or even start mining asteroids.

Mars is not only an awesome destination, it is also a gateway to the solar system.
