Some things never change. Physicists call them the constants of nature. Such quantities as the velocity of light, c, Newton’s constant of gravitation, G, and the mass of the electron, mₑ, are assumed to be the same at all places and times in the universe. They form the scaffolding around which the theories of physics are erected, and they define the fabric of our universe. Physics has progressed by making ever more accurate measurements of their values.

And yet, remarkably, no one has ever successfully predicted or explained any of the constants. Physicists have no idea why constants take the special numerical values that they do (given the choice of units). In SI units, c is 299,792,458; G is 6.673 × 10⁻¹¹; and mₑ is 9.10938188 × 10⁻³¹—numbers that follow no discernible pattern. The only thread running through the values is that if many of them were even slightly different, complex atomic structures such as living beings would not be possible. The desire to explain the constants has been one of the driving forces behind efforts to develop a complete unified description of nature, or “theory of everything.” Physicists have hoped that such a theory would show that each of the constants of nature could have only one logically possible value. It would reveal an underlying order to the seeming arbitrariness of nature.

In recent years, however, the status of the constants has grown more muddied, not less. Researchers have found that the best candidate for a theory of everything, the variant of string theory called M-theory, is self-consistent only if the universe has more than four dimensions of space and time—as many as seven more. One implication is that the constants we observe may not, in fact, be the truly fundamental ones. Those live in the full higher-dimensional space, and we see only their three-dimensional “shadows.”

Meanwhile physicists have also come to appreciate that the values of many of the constants may be the result of mere happenstance, acquired during random events and elementary particle processes early in the history of the universe. In fact, string theory allows for a vast number—10⁵⁰⁰—of possible “worlds” with different self-consistent sets of laws and constants. Thus far researchers have no idea why our combination was selected. Continued study may reduce the number of logically possible worlds to just one, but we have to remain open to the unnerving possibility that our known universe is but one of many—a part of a multiverse—and that different parts of the multiverse exhibit different solutions to the theory, our observed laws of nature being merely one edition of many systems of local bylaws.

No further explanation would then be possible for many of our numerical constants other than that they constitute a rare combination that permits consciousness to evolve. Our observable universe could be one of many isolated oases surrounded by an infinity of lifeless space—a surreal place where different forces of nature hold sway and particles such as electrons or structures such as carbon atoms and DNA molecules could be impossibilities. If you tried to venture into that outside world, you would cease to be.

Thus, string theory gives with the right hand and takes with the left. It was devised in part to explain the seemingly arbitrary values of the physical constants, and the basic equations of the theory contain few arbitrary parameters. Yet so far string theory offers no explanation for the observed values of the constants.

A Ruler You Can Trust

Indeed, the word “constant” may be a misnomer. Our constants could vary both in time and in space. If the extra dimensions of space were to change in size, the “constants” in our three-dimensional world would change with them. If we looked far enough out in space, we might begin to see regions where the “constants” have settled into different values. Ever since the 1930s researchers have speculated that the constants may not be constant. String theory gives this idea a theoretical plausibility and makes it all the more important for observers to search for deviations from constancy.

Such experiments are challenging. The first problem is that the laboratory apparatus itself may be sensitive to changes in the constants. The size of all atoms could be increasing, but if the ruler you are using to measure them is getting longer, too, you would never be able to tell. Experimenters routinely assume that their reference standards—rulers, masses, clocks—are fixed, but they cannot do so when testing the constants. They must focus on constants that have no units—they are pure numbers—so their values are the same irrespective of the units system. An example is the ratio of two masses, such as the proton mass to the electron mass.

One ratio of particular interest combines the velocity of light, c, the electric charge on a single electron, e, Planck’s constant, h, and the so-called vacuum permittivity, ε₀. This famous quantity, alpha (α) = e²/2ε₀hc, called the fine-structure constant, was first introduced in 1916 by Arnold Sommerfeld, a pioneer in applying the theory of quantum mechanics to electromagnetism. It quantifies the relativistic (c) and quantum (h) qualities of electromagnetic (e) interactions involving charged particles in empty space (ε₀). Measured to be equal to 1/137.03599976, or approximately 1/137, α has endowed the number 137 with a legendary status among physicists (it usually opens the combination locks on their briefcases).
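As a sanity check on the formula, here is a minimal sketch in Python that plugs rounded SI values for e, ε₀, h and c into the definition above; the inputs are approximate, so this is an illustration rather than a precision calculation.

```python
# Quick check that alpha = e^2 / (2 * epsilon_0 * h * c) comes out near 1/137.
# Approximate SI values, rounded; not a precision calculation.
e    = 1.602176634e-19   # elementary charge, in coulombs
eps0 = 8.8541878128e-12  # vacuum permittivity, in farads per meter
h    = 6.62607015e-34    # Planck's constant, in joule-seconds
c    = 299792458.0       # speed of light, in meters per second

alpha = e**2 / (2 * eps0 * h * c)
print(alpha, 1 / alpha)  # prints roughly 0.0072974 and 137.036
```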

If α had a different value, all sorts of vital features of the world around us would change. If the value were lower, the density of solid atomic matter would fall (in proportion to α³), molecular bonds would break at lower temperatures (α²), and the number of stable elements in the periodic table could increase (1/α). If α were too big, small atomic nuclei could not exist, because the electrical repulsion of their protons would overwhelm the strong nuclear force binding them together. A value as big as 0.1 would blow carbon apart.
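A few lines of toy arithmetic make the quoted sensitivities concrete. The proportionalities (α³, α² and 1/α) are the ones stated above; the 1 percent shift in α is an arbitrary choice used only for illustration.

```python
# Translate a hypothetical 1 percent drop in alpha into fractional changes of the
# quantities mentioned in the text. The scalings are taken from the text; the size
# of the shift is purely illustrative.
shift = -0.01                              # fractional change in alpha (made up)
density_change   = (1 + shift)**3 - 1      # solid-matter density scales as alpha^3
bond_temp_change = (1 + shift)**2 - 1      # bond-breaking temperature scales as alpha^2
elements_change  = 1 / (1 + shift) - 1     # number of stable elements scales as 1/alpha
print(f"density {density_change:+.1%}, bond temperature {bond_temp_change:+.1%}, "
      f"stable elements {elements_change:+.1%}")
```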

The nuclear reactions in stars are especially sensitive to α. For fusion to occur, a star’s gravity must produce temperatures high enough to force nuclei together despite their tendency to repel one another. If α exceeded 0.1, fusion would be impossible (unless other parameters, such as the electron-to-proton mass ratio, were adjusted to compensate). A shift of just 4 percent in α would alter the energy levels in the nucleus of carbon to such an extent that the production of this element by stars would shut down.

Nuclear Proliferation

The second experimental problem, less easily solved, is that measuring changes in the constants requires high-precision equipment that remains stable long enough to register any changes. Even atomic clocks can detect drifts in the fine-structure constant only over days or, at most, years. If α changed by more than four parts in 10¹⁵ over a three-year period, the best clocks would see it. None have. That may sound like an impressive confirmation of constancy, but three years is a cosmic eyeblink. Slow but substantial changes during the long history of the universe would have gone unnoticed.

Fortunately, physicists have found other tests. During the 1970s scientists at the French atomic energy commission noticed something peculiar about the isotopic composition of ore from a uranium mine in Oklo, Gabon: it looked like the waste products of a nuclear reactor. About two billion years ago Oklo must have been the site of a natural reactor.

Credit: Alison Kendall; John K. Webb

In 1976 the late Alexander Shlyakhter of the Petersburg Nuclear Physics Institute in Russia and of Harvard University noticed that the ability of a natural reactor to function depends crucially on the precise energy of a particular state of the samarium nucleus that facilitates the capture of neutrons. And that energy depends sensitively on the value of α. So if the fine-structure constant had been slightly different, no chain reaction could have occurred. One did occur, however, which implies that the constant has not changed by more than one part in 10⁸ over the past two billion years. (Physicists continue to debate the exact quantitative results because of the inevitable uncertainties about the conditions inside the natural reactor.)

In 1962 P. James E. Peebles and Robert Dicke of Princeton University first applied similar principles to meteorites: the abundance ratios arising from the radioactive decay of different isotopes in these ancient rocks depend on α. The most sensitive constraint involves the beta decay of rhenium into osmium. According to work by Keith Olive of the University of Minnesota, Maxim Pospelov of the University of Victoria in British Columbia and their colleagues, at the time the rocks formed, α was within two parts in 10⁶ of its current value. This result is less precise than the Oklo data but goes back further in time, to the origin of the solar system 4.6 billion years ago.
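To put the clock, Oklo and meteorite bounds on a common footing, each can be converted into an average fractional drift rate per year. The back-of-the-envelope sketch below uses only the numbers quoted above and assumes, purely for comparison, that any change is spread evenly over the relevant time span.

```python
# Convert the quoted limits on changes in alpha into average drift rates per year.
# Rough arithmetic for comparison only; a real change need not be uniform in time.
bounds = {
    "atomic clocks": (4e-15, 3.0),        # 4 parts in 10^15 over 3 years
    "Oklo reactor":  (1e-8, 2.0e9),       # 1 part in 10^8 over 2 billion years
    "meteorites":    (2e-6, 4.6e9),       # 2 parts in 10^6 over 4.6 billion years
}
for name, (limit, years) in bounds.items():
    print(f"{name:13s} < {limit / years:.1e} per year on average")
```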

Credit: Alison Kendall

To probe possible changes over even longer time spans, researchers must look to the heavens. Light takes billions of years to reach our telescopes from distant astronomical sources. It carries a snapshot of the laws and constants of physics at the time when it started its journey or encountered material en route.

Line Editing

Astronomy first entered the constants story soon after the discovery of quasars in 1963. The idea was simple. Quasars had just been discovered and identified as bright sources of light located at huge distances from Earth. Because the path of light from a quasar to us is so long, it inevitably intersects the gaseous outskirts of young galaxies. That gas absorbs the quasar light at particular frequencies, imprinting a bar code of narrow lines onto the quasar spectrum [see box above].

Whenever gas absorbs light, electrons within the atoms jump from a low energy state to a higher one. These energy levels are determined by how tightly the atomic nucleus holds the electrons, which depends on the strength of the electromagnetic force between them—and therefore on the fine-structure constant. If the constant was different at the time when the light was absorbed or in the particular region of the universe where it happened, then the energy required to lift the electrons would differ from that required today in lab experiments, and the wavelengths of the transitions seen in the spectra would differ. The way in which the wavelengths change depends critically on the orbital configuration of the electrons. For a given change in α, some wavelengths shrink, whereas others increase. The complex pattern of effects is hard to mimic by data-calibration errors, which makes the test astonishingly powerful.

Before we began our work two decades ago, attempts to perform the measurement had suffered from two limitations. First, lab researchers had not measured the wavelengths of many of the relevant spectral lines with sufficient precision. Ironically, scientists used to know more about the spectra of quasars billions of light-years away than about the spectra of samples here on Earth. We needed some high-precision lab measurements against which to compare the quasar spectra, so we persuaded experimenters to undertake them. Initial measurements were done by Anne Thorne and Juliet Pickering of Imperial College London, followed by groups led by the late Sveneric Johansson of Lund Observatory in Sweden, Ulf Griesmann of the National Institute of Standards and Technology, and Rainer Kling, now at the Karlsruhe Institute of Technology in Germany.

The second problem was that previous observers had used so-called alkali-doublet absorption lines—pairs of absorption lines arising from the same gas, such as carbon or silicon. They compared the spacing between these lines in quasar spectra with lab measurements. This method, however, failed to take advantage of one particular phenomenon: a change in α shifts not just the spacing of atomic energy levels relative to the lowest energy level, or ground state, but also the position of the ground state itself. In fact, this second effect is even stronger than the first. Consequently, the highest precision observers achieved was only about one part in 10⁴.

In 1999 one of us (Webb) and Victor V. Flambaum of the University of New South Wales in Sydney came up with a method to take both effects into account. The result was a breakthrough: it meant 10 times higher sensitivity. Moreover, the method allows different species (for instance, magnesium and iron) to be compared, which allows additional cross-checks. Putting this idea into practice took complicated numerical calculations to establish exactly how the observed wavelengths depend on α in all different atom types. Combined with modern telescopes and detectors, the new approach, known as the many-multiplet method, has enabled us to test the constancy of α with unprecedented precision.
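The sketch below illustrates the principle behind the method under simplifying assumptions. Each transition's rest-frame frequency is modeled as ω = ω₀ + qx, where x = (α/α₀)² − 1 is roughly twice the fractional change in α and the sensitivity coefficient q differs in size and sign from line to line; a one-parameter fit over several lines then recovers Δα/α. The q values and the mock measurements are invented for illustration, not drawn from our analysis.

```python
# Minimal sketch of the many-multiplet idea: fit a single value of
# x = (alpha/alpha0)**2 - 1 from the frequency shifts of several transitions whose
# sensitivities q differ in size and sign. All numbers below are illustrative.
import numpy as np

q = np.array([+210.0, -1300.0, +1500.0, +25.0])      # sensitivities in cm^-1 (made up)
true_dalpha = -5e-6                                   # pretend fractional change in alpha
x_true = 2 * true_dalpha                              # x ~ 2 * (dalpha/alpha) for small changes
noise = np.array([0.0005, -0.0008, 0.0006, 0.0002])   # mock measurement errors, cm^-1
shift = q * x_true + noise                            # "observed" omega - omega0 per line

x_hat = np.dot(q, shift) / np.dot(q, q)               # one-parameter least squares
print(f"recovered dalpha/alpha ~ {x_hat / 2:+.1e}")   # close to the input -5e-6
```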

Changing Minds

When embarking on this project, we anticipated establishing that the value of the fine-structure constant long ago was the same as it is today; our contribution would simply be higher precision. To our surprise, the first results, in 1999, showed small but statistically significant differences. Further data confirmed this finding. Based on a total of 128 quasar absorption lines, we found an average increase in α of close to six parts in a million over the past six billion to 12 billion years.

Extraordinary claims require extraordinary evidence, so our immediate thoughts turned to potential problems with the data or the analysis methods. These uncertainties can be classified into two types: systematic and random. Random uncertainties are easier to understand; they are just that—random. They differ for each individual measurement but average out to be close to zero over a large sample. Systematic uncertainties, which do not average out, are harder to deal with. They are endemic in astronomy. Lab experimenters can alter their instrumental setup to minimize them, but astronomers cannot change the universe, and so they are forced to accept that all their methods of gathering data have an irremovable bias. For example, any survey of galaxies will tend to overrepresent bright galaxies because they are easier to see. Identifying and neutralizing these biases presents a constant challenge.
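The distinction can be made concrete with a short simulation, shown below with arbitrary numbers: averaging beats down random scatter in proportion to one over the square root of the sample size, but a fixed, unrecognized bias survives no matter how many measurements are added.

```python
# Random errors average away as the sample grows; a systematic offset does not.
# The parameters here are arbitrary and chosen only to illustrate the point.
import numpy as np

rng = np.random.default_rng(1)
true_value = 0.0
random_sigma = 1.0        # per-measurement random error
systematic_bias = 0.3     # a fixed offset present in every measurement

for n in (10, 100, 10_000):
    data = true_value + systematic_bias + rng.normal(0.0, random_sigma, size=n)
    print(f"n={n:6d}  sample mean = {data.mean():+.3f}  "
          f"(random part ~ {random_sigma / np.sqrt(n):.3f}, bias stays ~ {systematic_bias})")
```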

Credit: Scientific American; Source: John K. Webb

The first one we looked for was a distortion of the wavelength scale against which the quasar spectral lines were measured. Such a distortion might conceivably be introduced, for example, during the processing of the quasar data from their raw form at the telescope into a calibrated spectrum. Although a simple linear stretching or compression of the wavelength scale could not precisely mimic a change in α, even an imprecise mimicry might be enough to explain our results. To test for problems of this kind, we substituted calibration data for the quasar data and analyzed them, pretending they were quasar data. This experiment ruled out simple distortion errors with high confidence.

For more than two years we raised one potential bias after another, only to rule each out after detailed investigation as too small an effect. So far we have identified just one potentially serious source of bias. It concerns the absorption lines produced by the element magnesium. Each of the three stable isotopes of magnesium absorbs light of a different wavelength, but the three wavelengths are very close to one another, and quasar spectroscopy generally sees the three lines blended as one. Based on lab measurements of the relative abundances of the three isotopes, researchers infer the contribution of each. If these abundances in the young universe differed substantially—as might have happened if the stars that spilled magnesium into their galaxies were, on average, heavier than their counterparts today—those differences could simulate a change in α.
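A toy calculation shows why the isotope mix matters: the blended line's effective wavelength is an abundance-weighted average of the three isotopic components, so changing the mix drags the blend slightly, mimicking a wavelength shift. The isotope offsets below are placeholder values of roughly the right order of magnitude, not laboratory numbers, and the "early universe" mix is hypothetical.

```python
# Toy model of the magnesium-isotope bias: shift the isotope abundances and see how
# far the abundance-weighted centroid of the blended line moves. Offsets and the
# alternative abundance mix are illustrative placeholders, not measured values.
import numpy as np

base = 2796.35                                 # Mg II line, angstroms (approximate)
iso_offset = np.array([0.000, 0.002, 0.004])   # 24Mg, 25Mg, 26Mg offsets, angstroms (made up)

terrestrial = np.array([0.79, 0.10, 0.11])     # roughly the present-day isotope mix
heavy_rich  = np.array([0.60, 0.18, 0.22])     # hypothetical early-universe mix

blend_now  = np.dot(terrestrial, base + iso_offset)
blend_then = np.dot(heavy_rich,  base + iso_offset)
print(f"apparent shift of the blend: {blend_then - blend_now:+.4f} angstrom")
```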

By mid-2010 we had completed the analysis of a large amount of new data from the Very Large Telescope (VLT) operated by the European Southern Observatory and obtained 153 new measurements. All the data our group had previously analyzed had come from the Keck telescopes on Mauna Kea in Hawaii. For these new VLT data, everything was different—the telescopes, the spectrograph, the detectors and the software used for the initial stages of the data analysis. These VLT data therefore provided a beautiful cross-check on our results from the Keck telescopes.

We thought it was possible that the new data would show no change in α at all or that they would show the same effect the Keck data did—with α appearing smaller at higher redshifts. What we actually found was truly astonishing and, if correct, will revolutionize some of our most fundamental concepts in physics.

The new VLT data showed not a smaller value of α at high redshift but a larger value, larger by just about the same amount as the Keck value is smaller. How can this be? Our immediate thought was that we were seeing evidence for systematic problems in both data sets. Add the Keck and VLT samples together, and to a good approximation, the combined sample shows no change in α with redshift. Problem solved. The constants are really constant after all.

But if that is the explanation, it requires two different systematic effects, one for each telescope, such that both effects are, independently, of the same magnitude but opposite sign. This is not impossible, although so far we have not managed to identify what this unknown pair of systematic effects could be.

We have discovered another curiosity, however. The Keck data cover a largish portion of the sky in the Northern Hemisphere, large enough to ask whether there is any “preferred direction” for the change in α seen with that sample. Put another way: Could it be that α changes not with redshift but with position on the sky? A simple analysis identified one particular direction for which that might be the case. Surprisingly, when the VLT data are analyzed independently, the same direction pops up. The VLT is in Chile and, on average, points to a very different part of the universe than the Keck telescopes do. Another coincidence? Possibly, but that now makes two coincidences.

What happens when we merge the old Keck and the new VLT samples? The result is positively intriguing: the directional dependence becomes highly significant. Deriving such a result by chance appears to be extremely unlikely. If the result is a fluke, we might expect a subset of the data to be generating a rogue result. With this in mind, we devised a simple test to iteratively reduce the sample, discarding one point at a time, to see how much data we needed to eliminate before the apparent spatial dependence of α vanished. We found that we needed to throw away half the data before the chance probability dropped to a sufficiently unimpressive level! Again, perhaps this is a fluke. Despite extensive attempts, however, we have yet to find a combination of systematic effects in the data that could mimic a spatial dependence. Alpha appears to change spatially—across, perhaps, the entire observable universe. Any change with time is smaller and is currently below our detection sensitivity.
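For readers curious about how a preferred direction is quantified, the sketch below fits the simplest possible model, a constant offset plus a dipole, to synthetic measurements scattered over the sky. Because the dipole term is linear in the Cartesian components of the preferred direction, ordinary least squares suffices; a real analysis weights each point by its measurement error and assesses significance far more carefully.

```python
# Fit dalpha/alpha = m + n . d over random sight lines n (unit vectors), where d is
# the dipole vector; its length is the amplitude and its direction is the preferred
# direction on the sky. The mock data are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = rng.normal(size=(200, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)            # random unit sight lines

true_dipole = 1e-5 * np.array([0.5, -0.3, 0.8])          # made-up amplitude and direction
y = n @ true_dipole + 2e-7 + rng.normal(0.0, 3e-6, 200)  # signal + offset + noise

X = np.hstack([np.ones((200, 1)), n])                    # columns: offset, dipole x, y, z
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
m, d = coef[0], coef[1:]
print(f"offset ~ {m:.1e}, dipole amplitude ~ {np.linalg.norm(d):.1e}")
print("recovered direction (unit vector):", d / np.linalg.norm(d))
```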

Reforming the Laws

If our findings prove to be right, the consequences are enormous, though only partially explored. Until quite recently, all attempts to evaluate what happens to the universe if the fine-structure constant changes were unsatisfactory. They amounted to nothing more than assuming that α became a variable in the same formulas that had been derived assuming it was a constant. This is a dubious practice. If α varies, then its effects must conserve energy and momentum, and they must influence the gravitational field in the universe. In 1982 Jacob D. Bekenstein of the Hebrew University of Jerusalem was the first to generalize the laws of electromagnetism to handle inconstant constants rigorously. Bekenstein’s theory elevates α from a mere number to a so-called scalar field, a dynamic ingredient of nature. His theory did not include gravity, however. Almost 20 years ago one of us (Barrow), with João Magueijo of Imperial College London, and Håvard B. Sandvik, then also at Imperial, extended it to do so.

Credit: Alison Kendall, Richard Sword

This theory makes appealingly simple predictions. Variations in α of a few parts per million should have a completely negligible effect on the expansion of the universe. That is because electromagnetism is much weaker than gravity on cosmic scales. But although changes in the fine-structure constant do not affect the expansion of the universe significantly, the expansion affects α. Changes to α are driven by imbalances between the electric field energy and magnetic field energy. During the first tens of thousands of years of cosmic history, radiation dominated over charged particles and kept the electric and magnetic fields in balance. As the universe expanded, radiation thinned out, and matter became the dominant constituent of the cosmos. The electric and magnetic energies became unequal, and α started to increase very slowly, growing as the logarithm of time. About six billion years ago dark energy took over and accelerated the expansion, making it difficult for all physical influences to propagate through space. So α became nearly constant again.
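The history just described can be encoded in a deliberately crude toy model, sketched below: no drift while radiation dominates, logarithmic growth during matter domination, and a freeze once the accelerated expansion begins. The epochs and the overall amplitude are rough illustrative choices, not outputs of the full theory.

```python
# Toy time line for delta-alpha/alpha: zero in the radiation era, growing as the
# logarithm of time in the matter era, frozen after dark energy takes over.
# Epochs and amplitude are rough, illustrative numbers.
import numpy as np

t = np.logspace(3, 10.14, 400)       # cosmic time in years, up to roughly the present
t_eq, t_accel = 5e4, 8e9             # matter domination begins; acceleration begins (approx.)
amp = 1e-6                           # arbitrary normalization of the drift

growth = np.where(t < t_eq, 0.0,
                  np.where(t < t_accel, np.log(t / t_eq), np.log(t_accel / t_eq)))
dalpha = amp * growth

for ti, di in zip(t[::100], dalpha[::100]):
    print(f"t = {ti:9.2e} yr   delta-alpha/alpha = {di:.2e}")
```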

This predicted pattern was consistent with our earlier data from the Keck telescopes, which seemed to indicate that a redshift dependence of α might be linked to a time variation. But the VLT data throw a large wrench in the works. If the Keck data are right and if the VLT data are also right, even if time variation does take place, it must be small compared with the spatial variation we may now be seeing.

Alpha Is Just the Beginning

Any theory worthy of consideration does not merely reproduce observations; it must make novel predictions. The above theory suggests that varying the fine-structure constant makes objects fall differently. Galileo predicted that bodies in a vacuum fall at the same rate no matter what they are made of—an idea known as the weak equivalence principle, which was famously demonstrated when Apollo 15 astronaut David Scott dropped a feather and a hammer and saw them hit the lunar dirt at the same time. But if α varies, that principle no longer holds exactly. The variations generate a force on all charged particles. The more protons an atom has in its nucleus, the more strongly it will feel this force. If our quasar observations are correct, then the accelerations of different materials differ by about one part in 10¹⁴—too small to see in the lab by a factor of about 100 but large enough to show up in planned missions such as STEP (Satellite Test of the Equivalence Principle).

So where does this flurry of activity leave science as far as α is concerned? We and many others are eager to either confirm or disprove that α varies at the level claimed. Interestingly, the holdup on new results is not for lack of astronomical data. The Keck and VLT archives already contain plenty of quasar spectra waiting for analysis. The European Southern Observatory has built a new instrument, called ESPRESSO, with higher calibration precision for new measurements of α in the early universe.

If there are plenty of data available with new and better data on the way, what else do we need? The existing measurements took an enormous effort and a great deal of time. The analysis of a single quasar spectrum is time-consuming and complicated, requires considerable expertise, and involves human—that is, subjective—decision making. To solve this bottleneck, one of us (Webb) worked with Matthew Bainbridge, then a doctoral student at the University of New South Wales, to develop a new artificial-intelligence procedure capable of carrying out fully automated analyses of quasar spectra, producing results that are more objective and reproducible than those of a human doing the same job. This AI analysis requires an enormous number of calculations, and the work is now under way on supercomputers in Australia and Britain.

The main focus is on α rather than on the other constants of nature simply because, for α, it is possible to build up a large statistical sample of measurements, mapping the laws of physics throughout the distant cosmos in greater detail. If α is susceptible to change, however, other constants should vary as well, making the inner workings of nature more fickle than scientists ever suspected.