Computation – One Universe at a Time
Brian Koberlein – https://briankoberlein.com

Cosmic Cryptology
Mon, 30 Nov 2015 – https://briankoberlein.com/2015/11/30/cosmic-cryptology/

If you want to keep information hidden, you’ll probably want to encrypt it. We do this all the time for things like credit card transactions, the data on your phone, and even this website. Encryption is a way to ensure that only the intended recipient can get access to your information. That is, unless someone is able to crack the code.

One of the more common methods of encryption is known as public key encryption, where a large random number is entered into a key generator algorithm to create a pair of public and private keys. The public key can be used to encrypt a message which can only be decrypted with the private key. As long as the private key is kept private, this works pretty well. The one catch is that you need a large random number, and ideally it needs to be truly random. If someone could predict your random number, they could generate the same public and private keys, and you’re out of luck.

But often “random” numbers are only pseudo-random. They look like random numbers, but use a particular algorithm to simulate randomness. To get better random numbers, you can use thermal fluctuations in your computer, or noise in weather data. Or, as in the case of a new paper, data from the cosmic microwave background. It might seem like the CMB is a really bad choice. After all, it can be seen by everyone, so if you use CMB data to create a random number, why can’t someone else get the same number? But it turns out that’s not a problem.
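
To make the distinction concrete, here’s a quick Python sketch (purely illustrative): a seeded pseudo-random generator hands out exactly the same “random” bits every time, while the operating system’s entropy pool, which draws on hardware and timing noise, does not.

import random
import secrets

# A pseudo-random generator is just an algorithm: the same seed
# always produces the same sequence of "random" bits.
rng1 = random.Random(42)
rng2 = random.Random(42)
print(rng1.getrandbits(128) == rng2.getrandbits(128))   # True: completely predictable

# The secrets module draws on the operating system's entropy pool,
# so its output is not reproducible.
print(secrets.randbits(128))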

The basic idea is to take a patch of sky and measure the distribution of energy from the CMB, specifically what’s known as the power spectrum. That spectrum is then compared to the theoretical ideal, and the difference is used to generate a random number. Even if someone measured exactly the same patch of sky, their own measurement noise means they wouldn’t get the exact same result, and therefore wouldn’t get the same number. While the authors use the CMB as an example, they point out a similar method could be used to generate random numbers from the 21 centimeter line, supernova remnants, radio galaxies, and other astrophysical phenomena. All you need is a basic radio telescope, and you have a random number generator.
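
Here’s a rough sketch of the idea in Python. To be clear, the toy model spectrum, the noise level, and the hashing step are illustrative assumptions on my part, not the procedure from the paper; the point is just that residuals dominated by your own measurement noise make a reasonable source of random bits.

import hashlib
import numpy as np

def random_bits_from_spectrum(measured, theoretical):
    """Hash the residual between a measured and a theoretical power
    spectrum into 256 random bits."""
    residual = np.asarray(measured) - np.asarray(theoretical)
    return hashlib.sha256(residual.tobytes()).hexdigest()

# Toy example: a made-up "theoretical" spectrum plus measurement noise.
ell = np.arange(2, 2000, dtype=float)
theory = 1000.0 / (1.0 + (ell / 200.0) ** 2)              # stand-in for a model spectrum
measured = theory + np.random.normal(0.0, 5.0, ell.size)  # noise unique to this measurement

print(random_bits_from_spectrum(measured, theory))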

It’s not likely that this astrophysical method is any better than what we use now. Thermal variations and weather patterns are pretty random as it is. But it’s an interesting idea to use the secrets of the universe to keep your own secrets.

Paper: Jeffrey S. Lee and Gerald B. Cleaver. The Cosmic Microwave Background Radiation Power Spectrum as a Random Bit Generator for Symmetric and Asymmetric-Key Cryptography. arXiv:1511.02511 [cs.CR] (2015)

More Power
Fri, 19 Dec 2014 – https://briankoberlein.com/2014/12/19/power/

This month I’ve upgraded my home computer. My new desktop has a faster processor, double the storage space, and quadruple the RAM of my venerable old laptop. I don’t upgrade very often, so when it happens there’s a very noticeable uptick in computing power. It’s something we’ve become rather accustomed to. With each new phone, computer or tablet we have more power at our fingertips. This consequence of Moore’s law has also revolutionized the way we do astronomy.

The silicon revolution is what allowed deep space probes to exist. A round trip signal to the Moon takes about 2 seconds, but a round trip signal to Jupiter takes well over an hour, which is simply too long to control a spacecraft in real time. Spacecraft need to be partly autonomous, and they need to be able to store data for later transmission back to Earth.
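
A quick back-of-the-envelope check of those delays (the distances are rough values and vary with orbital positions):

C = 299_792_458          # speed of light, m/s

def round_trip_seconds(distance_m):
    return 2.0 * distance_m / C

moon = 384_400e3         # average Earth-Moon distance, m
jupiter = 630e9          # rough Earth-Jupiter distance near opposition, m

print(f"Moon:    {round_trip_seconds(moon):.1f} seconds round trip")
print(f"Jupiter: {round_trip_seconds(jupiter) / 60:.0f} minutes round trip")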

The Voyager spacecraft of the 1970s had about 32 kilobytes of storage, which is less than a singing birthday card these days.  By the 1990s, Mars Pathfinder had landed on Mars with about 64 megabytes of storage. Now New Horizons races toward Pluto with 8 gigabytes of memory. This kind of storage is absolutely necessary for New Horizons, since it will fly by Pluto so quickly that all of its data will have to be stored on board until it can slowly be radioed back to Earth from the edge of our solar system.
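
The downlink side shows why that on-board storage matters. From that distance New Horizons can only send data back at a few kilobits per second (the 2 kbit/s figure below is my rough assumption for illustration), so emptying an 8 gigabyte recorder takes on the order of a year:

data_bits = 8 * 8e9      # an 8 gigabyte recorder, in bits
rate_bps = 2_000         # assumed downlink rate in bits per second (illustrative)

days = data_bits / rate_bps / 86_400
print(f"About {days:.0f} days of continuous downlink")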

It took megabytes of data to produce this image. Credit: NASA

Processing power has also grown tremendously over the years, which has allowed us to analyze more data. In the early 1990s, the COBE satellite pushed the envelope of astronomical data gathering when it collected about 46 megabytes of data per day. Over its lifetime COBE gathered nearly 300 gigabytes of data, necessary to measure the small fluctuations of the cosmic microwave background with precision. In contrast, the Planck satellite gathered data on the order of terabytes.

As storage size and computing power continue to rise, so will demands for more data and deeper analysis. The universe is a very big place, and there are lots of things to study.

Carbon Chain
Mon, 27 Oct 2014 – https://briankoberlein.com/2014/10/27/carbon-chain/

One of the common ways we can map the distribution of matter in a galaxy is by observing the light emitted by neutral hydrogen. This works pretty well because hydrogen is the most abundant element in the universe, and its emission lines are pretty distinctive. But for distant galaxies hydrogen emissions aren’t very bright. To observe them you need really long exposure times, and that limits the number of galaxies you can observe. One alternative is to look at the emissions of carbon instead.

Carbon isn’t nearly as common as hydrogen, but its emission lines are brighter, particularly for distant galaxies where redshift is a factor. By mapping the distribution of carbon we can get an idea of the distribution of hydrogen. Of course this relies upon certain assumptions. For example, it’s generally thought that carbon and hydrogen are evenly mixed in a galaxy, so if you find lots of carbon there should also be lots of hydrogen.
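
In its crudest form that assumption is just a scaling: measure the carbon, multiply by an assumed hydrogen-to-carbon ratio. The numbers in this little sketch are placeholders, not values from any survey or from the paper below:

import numpy as np

# Toy map of carbon emission across a galaxy (arbitrary units).
carbon_map = np.random.rand(64, 64)

# Assumed hydrogen-to-carbon conversion factor. This is a placeholder;
# the real ratio depends on metallicity and has to be calibrated.
h_to_c = 3.0e3

hydrogen_map = carbon_map * h_to_c   # inferred hydrogen distribution
print(hydrogen_map.sum())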

Now a new paper introduces an approach that greatly increases the precision of this method, using computer simulations of galaxies and comparing them to the observed distribution of carbon. In the paper, the authors used a simulated observation of carbon emissions from the ALMA radio telescope array, then ran hydrodynamic simulations to determine the distribution of hydrogen. They found that 80% of the hydrogen in a galaxy could be mapped through carbon observations with significantly shorter exposure times.

The paper demonstrates that by combining observations and simulations we can probe young galaxies in more detail. This is particularly useful in studying galactic evolution. Now we’ll have to see how it works in the real world.

Paper: M. Tomassetti, et al. Atomic carbon as a powerful tracer of molecular gas in the high-redshift Universe: perspectives for ALMA. MNRAS Letters, doi: 10.1093/mnrasl/slu137 (2014)

Reboot
Fri, 09 May 2014 – https://briankoberlein.com/2014/05/09/reboot/

Video: http://youtu.be/SY0bKE10ZDM

One of the challenges faced by astrophysicists is that you can’t repeat your experiments.  If you observe a supernova explosion, you can’t put the star back together and watch it explode again. We can watch other stars explode, and from these combined observations we can gain a deeper understanding of just how stars explode, but a single star explodes only once.  With cosmology, that poses a particular challenge because we only have one observable universe. Not only can’t we repeat the experiment, we only have one experiment to observe.  What we can do, however, is simulate the universe and see how it compares to the real one.

Recently a team did just that, making the most extensive computational simulation of the universe thus far.  The results were published in Nature, but you can see a summary of the simulation in the video.  The team started with an initial state representing the universe when it was only 12 million years old (before any stars or galaxies had formed) and 350 million light years wide. They then simulated cosmic evolution over 13 billion years.  This included not only the effects of gravity, dark matter and dark energy, but also effects such as active galactic nuclei and the enrichment of elements.
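
At its heart, any simulation like this repeatedly computes gravity on a set of mass elements and steps them forward in time. Here’s a deliberately tiny, gravity-only sketch of that loop (a handful of particles, direct summation, no gas physics, no cosmic expansion), just to show the shape of the computation; the code behind the paper is vastly more sophisticated.

import numpy as np

G, dt, n_steps = 1.0, 1e-3, 1000   # toy units, not physical ones
n = 50
pos = np.random.randn(n, 3)
vel = np.zeros((n, 3))
mass = np.ones(n)
soft = 0.1                         # softening length avoids infinite forces

def accel(pos):
    # Direct-summation gravity: O(n^2), fine for a toy problem.
    d = pos[None, :, :] - pos[:, None, :]                 # displacement from i to j
    inv_r3 = ((d ** 2).sum(-1) + soft ** 2) ** -1.5
    np.fill_diagonal(inv_r3, 0.0)
    return G * (d * (mass * inv_r3)[:, :, None]).sum(axis=1)

a = accel(pos)
for _ in range(n_steps):           # leapfrog (kick-drift-kick) integration
    vel += 0.5 * dt * a
    pos += dt * vel
    a = accel(pos)
    vel += 0.5 * dt * a

print(pos.std(axis=0))             # how spread out the particles end up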

The simulation produces a range of galaxy types consistent with our own universe, as well as a cosmic structure that matches our own. The simulation wasn’t perfect, and some discrepancies with our universe appeared, such as the formation of low mass galaxies earlier than is observed in our universe. Still, it is a clear demonstration that the ΛCDM cosmic model (that of a universe with matter, dark matter and dark energy) is an accurate model of our universe.

Paper: M. Vogelsberger, et al. Properties of galaxies reproduced by a hydrodynamic simulation. Nature 509, 177–182 (2014)

Sim Universe
Sat, 26 Apr 2014 – https://briankoberlein.com/2014/04/26/sim-universe/

As computers have grown ever more powerful, astronomers and astrophysicists have increasingly used computers to model the complex systems they study. This can range from modeling the motions of planetary bodies in our solar system, to simulating the convection of plasma in the depths of a star. Perhaps the most ambitious computer modeling project, however, is the Millennium Project at the Max Planck Institute.

The Millennium Project is an effort to model the entire universe computationally. The initial model simulated a cubic region of space about 2 billion light years across. In this volume were about 10 billion clumps of dark matter, each clump being about a billion solar masses. To this were added about 20 million galaxies.

One of the things that stands out in the resulting map is that the galaxies are not uniformly distributed. Instead, they fall into clumps, with tendrils or filaments connecting them. This clumping pattern is seen in the real universe as well.

Since then the project has done a larger simulation, Millennium XXL, which used about 300 billion dark matter clumps. There are other universe simulations, such as Horizon Run 3, and the DEUS project (which bills itself as the first “full universe” simulation). Each of these has strengths and weaknesses, and it’s important to keep in mind that these are not simulations of the universe, but of a universe.

Where these models are useful is in making statistical comparisons between the results seen in the simulations and actual observations of the universe. For example, the “clumpy” nature of galaxies I mentioned above. Just how clumpy these galaxies are depends on how much dark matter and dark energy there is in the universe. We can measure the distribution of galaxies in the universe and determine how “clumpy” they are statistically. By comparing this with similar calculations done with the simulated models, we can see how well they agree. This helps us determine if our models for dark matter and dark energy agree with reality.
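
One standard way to put a number on that “clumpiness” is the two-point correlation function: compare how many galaxy pairs you find at a given separation to what a purely random distribution would give. Here’s a minimal sketch using the simple DD/RR - 1 estimator on made-up points; the same statistic can be computed for a simulated catalog and an observed one and then compared.

import numpy as np
from scipy.spatial import cKDTree

def pair_counts(points, bins):
    """Cumulative pair counts within each radius, differenced into bins."""
    tree = cKDTree(points)
    cum = np.array([tree.count_neighbors(tree, r) for r in bins])
    return np.diff(cum)

rng = np.random.default_rng(1)
box = 100.0
galaxies = rng.uniform(0.0, box, size=(2000, 3))   # stand-in galaxy catalog
randoms = rng.uniform(0.0, box, size=(2000, 3))    # random catalog, same volume

bins = np.linspace(1.0, 20.0, 11)
dd = pair_counts(galaxies, bins)
rr = pair_counts(randoms, bins)

# With a real (or simulated) clustered catalog this shows excess pairs
# at small separations; for the uniform toy points above it is near zero.
xi = dd / rr - 1.0
print(xi)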

The Millennium project has taken this one step further, by creating the Millennium Run Observatory. This is a virtual observatory where you can “observe” different aspects of the Millennium simulation just as you would with real telescopes. This means you can also compare results more directly. You can, for example, compare a survey of galaxy observations with a simulated survey in the Millennium “universe”.

Universe simulation is still in its early stages, but it could prove to be a useful tool in studying cosmological models as we gather more and more observational data.

Order and Chaos
Sat, 21 Sep 2013 – https://briankoberlein.com/2013/09/21/order-and-chaos/

A recurring theme in computational astrophysics (and physics in general) is the concept of chaos.  While many aspects of the universe are ordered and predictable, other aspects are quite chaotic.  Often things lie at a fine line between the two.

A good example of this can be seen in galactic motion.  On the one hand things are quite regular.  At a broad level stars move in a generally circular path around the galactic center.  This is analogous to our solar system, where planets move in (roughly) circular orbits around the sun.  This makes it easy to make a rough model of our galaxy as a fairly uniform disk of stars.

Of course when we look more closely things are not so simple.  For one, our galaxy is not a uniform disk of stars; its stars are clustered into spiral arms.  (Why this is the case is a topic for a future post.)  Then there is the motion of individual stars and star clusters themselves.

It turns out the motion of stars can be approximately described by a fairly simple differential equation called the Hénon-Heiles equation.  Unfortunately the solutions to this equation are chaotic.  In other words, a solution is extremely sensitive to a star’s initial position and velocity.  Determining precise measurements of a star’s position and velocity can be quite a challenge.  So usually we have to look at general properties of the solutions rather than finding a particular solution.

Poincaré map of solutions to the Hénon-Heiles equation.

The good news is that the Hénon-Heiles equation has long been studied by mathematicians, so we actually know a great deal about it.  One common way to look at general solutions is to plot what is known as a Poincaré map of solutions.  I’ve plotted one for the Hénon-Heiles equation here.  A Poincaré map helps you determine certain aspects of a star’s motion.  For example, in the figure above you can see that the range is bounded to a particular region.  So we know the star won’t just wander off.  We can also see regions where the motion tends to cluster.  So even though we don’t know the exact motion of a star, we know its general motion.
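
If you want to play with this yourself, a Poincaré map like the one above can be generated in a few lines: integrate the Hénon-Heiles equations of motion and record (y, p_y) every time the orbit crosses x = 0 moving in the positive direction. A minimal sketch:

import numpy as np
from scipy.integrate import solve_ivp

def henon_heiles(t, s):
    x, y, px, py = s
    return [px, py, -x - 2.0 * x * y, -y - x * x + y * y]

def crossing(t, s):
    return s[0]            # event fires when x = 0
crossing.direction = 1     # only count crossings with positive x velocity

# Choose initial conditions on the x = 0 plane at a fixed energy E.
E = 1.0 / 8.0
y0, py0 = 0.1, 0.1
px0 = np.sqrt(2.0 * E - py0**2 - y0**2 + (2.0 / 3.0) * y0**3)

sol = solve_ivp(henon_heiles, (0.0, 1000.0), [0.0, y0, px0, py0],
                events=crossing, rtol=1e-9, atol=1e-12, max_step=0.1)

section = sol.y_events[0]          # state vector at each crossing
print(section[:, [1, 3]])          # the (y, p_y) points of the Poincaré map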

Problems like this can’t be solved well analytically, so it is an area where computational methods shine.

The Error of My Ways
Sun, 15 Sep 2013 – https://briankoberlein.com/2013/09/15/the-error-of-my-ways/

One of the challenges of computational astrophysics is knowing how far off your answer is from the right one.  Whenever you approach a problem computationally, small errors can creep in.  This is similar to using 3.14 for pi, or 0.33 for 1/3.  Since computers store values with finite precision, they are usually a bit off from the exact value.  You might think the answer is just to carry values out to more decimal places.  While that can help, the more precise your values, the more computing power it takes to calculate your problem.  Often you need to strike a balance between precision and computing time/cost.
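
You can see this finite-precision effect directly in Python:

print(0.1 + 0.2)    # 0.30000000000000004, not exactly 0.3
print(1 / 3)        # 0.3333333333333333, a finite approximation of 1/3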

A bigger challenge is making sure your calculation errors don’t build up over time.  Most computational approaches are iterative.  So if you want to, say, calculate the motion of a satellite, you might calculate where it will be after 20 seconds, then use that answer to calculate the position 20 seconds after that, and so on.  If your answer is a meter too far off at the first step, it could be two meters off at the second step, and so on.  Your error can build with each iteration, causing what is known as error drift.

So how do you prevent error drift?  One way is to look at conserved quantities (what we call invariants).  To use our satellite example, we know that its energy and angular momentum are constants over time.  So as you calculate the motion of the satellite with each iteration, you also calculate the energy and angular momentum.  If you have error drift, then the invariants will change over time, and you know there’s a problem.
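
Here’s a minimal sketch of that bookkeeping for a satellite in a simple Kepler orbit (a leapfrog integrator in arbitrary units): at every step the energy and angular momentum are also computed, and at the end we check how far they have drifted from their initial values.

import numpy as np

GM, dt, steps = 1.0, 1e-3, 200_000
r = np.array([1.0, 0.0])          # position
v = np.array([0.0, 1.1])          # velocity (slightly elliptical orbit)

def energy(r, v):
    return 0.5 * v @ v - GM / np.linalg.norm(r)

def ang_mom(r, v):
    return r[0] * v[1] - r[1] * v[0]

E0, L0 = energy(r, v), ang_mom(r, v)

a = -GM * r / np.linalg.norm(r) ** 3
for _ in range(steps):             # leapfrog (kick-drift-kick)
    v += 0.5 * dt * a
    r += dt * v
    a = -GM * r / np.linalg.norm(r) ** 3
    v += 0.5 * dt * a

print("relative energy drift:          ", abs(energy(r, v) - E0) / abs(E0))
print("relative angular momentum drift:", abs(ang_mom(r, v) - L0) / abs(L0))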

Invariants also give you a way to test the accuracy of your calculations.  The less your invariants change, the better your accuracy.  As a test, I’ve plotted the error in energy (blue) and angular momentum (red) for a simple satellite problem using two different methods.  In the first example, energy is held constant while the angular momentum is allowed to vary.  This gives an error of only a few parts in 100 billion, which is not too shabby.  In the second example, both invariants are allowed to change, but they are compared to invariants in other coordinate systems and minimized.  This gives errors of a few parts in 1000 trillion, which is even better.

There are lots of different computational approaches, and each has its advantages and disadvantages.  The trick is understanding those strengths and weaknesses.  It’s about using the right tool for the right job.

Building a Better Star
Tue, 10 Sep 2013 – https://briankoberlein.com/2013/09/10/building-a-better-star/

In an earlier post I talked about a very simple model of a star, consisting simply of a mass of hydrogen and helium held together by gravity. The simple model ignored some important stellar properties, such as the fact that stars radiate light and undergo nuclear fusion in their cores. So it wasn’t surprising that our model predicted a core temperature for the sun that was too cool by a factor of 100. But even this simple model demonstrated that the temperature and pressure inside the sun are high enough for nuclear fusion to occur.

So how could we revise our model?  It turns out we can calculate the rate of energy produced by hydrogen fusion given a particular temperature, density, etc. So we can calculate the rate at which energy is produced at a given depth and include that in our model. We also have to account for the rate at which a star radiates energy away, and here we reach a bit of a snag. The mechanism by which energy (light) can escape a star depends on the conditions in its interior. In a low mass star, energy is mainly produced in the core by a reaction chain known as the p-p chain, and most of that energy leaves the core radiatively. In higher mass stars a more complex type of fusion occurs (known as the CNO cycle). This reaction is more powerful than the low mass process, which can make the center of a star very hot. As a result, energy leaves the core of a high mass star by convection.

Interior temperature from a computational model of the Sun.

Our sun is on the low mass side of this divide: its energy comes almost entirely from the p-p chain, and its deep interior is radiative. Its outer layers, however, carry energy by convection. So to make a model that accounts for energy production and transport, we have to calculate both regions and then match them up.

This kind of matched model was first done by Martin Schwarzschild in 1958. He had to do it by hand, which was a huge challenge. With a computer it is a bit easier: calculate the radiative region and the convective region separately, and then match them where they meet. In the figure above you can see the resulting temperature graph.

As you can see, this time we get a core temperature pretty close to the accepted value of about 15 million K. So this is a pretty good rough model for the sun’s interior.
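
That kind of estimate is easy to reproduce on a modern machine. The sketch below is not the matched radiative/convective model described above; it’s an even simpler stand-in (an n = 3 polytrope, found by integrating the Lane-Emden equation), but it lands in the same ballpark of roughly ten million kelvin for the sun’s central temperature.

import numpy as np
from scipy.integrate import solve_ivp

# Lane-Emden equation for a polytrope of index n:
#   theta'' = -theta**n - (2/xi) * theta'
def lane_emden(xi, s, n=3):
    theta, dtheta = s
    return [dtheta, -theta**n - 2.0 * dtheta / xi]

def surface(xi, s):
    return s[0]              # the surface is where theta drops to zero
surface.terminal = True

sol = solve_ivp(lane_emden, (1e-6, 20.0), [1.0, 0.0],
                events=surface, rtol=1e-10, atol=1e-12)
xi1 = sol.t_events[0][0]                 # dimensionless radius of the surface
dtheta1 = sol.y_events[0][0][1]          # slope of theta at the surface

# Solar values in SI units, and an assumed mean molecular weight.
G, kB, mH = 6.674e-11, 1.381e-23, 1.673e-27
M, R, mu = 1.989e30, 6.957e8, 0.6

rho_mean = M / (4.0 / 3.0 * np.pi * R**3)
rho_c = rho_mean * xi1 / (3.0 * abs(dtheta1))                 # central density
P_c = G * M**2 / (4.0 * np.pi * 4.0 * dtheta1**2 * R**4)      # central pressure (n = 3)
T_c = P_c * mu * mH / (rho_c * kB)                            # ideal gas temperature

print(f"Estimated central temperature: {T_c:.2e} K")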
