
The Amazing Theory of (Almost) Everything

IN 1973, physicist Steven Weinberg gave a talk in Aix-en-Provence, France. It was there, according to Weinberg, that he first used the term "standard model" to describe the nascent description of the fundamental constituents of the universe and their interactions. Fifty years on, the standard model of particle physics is a stunningly accurate picture of what everything is made of and how it all works to produce reality.

Practically everything, anyway. Because although the 50th anniversary is well worth celebrating, it is impossible to ignore the fact that the theory is incomplete. It doesn't explain gravity, or why we have so much matter in the universe and so little antimatter. And it says nothing about the so-called dark matter and dark energy postulated to explain why the cosmos behaves in certain ways. This is why physicists are casting around for clues that could lead us to a better theory. But which, if any, will deliver an upgrade to the standard model? How do we find the deluxe version?

In the following article, we tell the story of how this amazing theory of (almost) everything was put together, introduce a new version of the typical standard model visualisation and let six of today's leading physicists explain how they think we will finally discover a more complete picture of reality.

How we built the standard model

The finest theory in physics was gradually assembled over many decades by scientists across the world

LOOK closely enough and almost everything we know of in the universe boils down to a handful of elementary particles. These entities constitute individual threads of the scientific masterpiece that is the standard model of particle physics, our current best picture of matter and its workings.

Its roots lie in the quantum revolution of the early 20th century, when the classical, common-sense notion that everything is predictable was unceremoniously thrown out. By contrast, the development of the standard model was anything but a revolution. Instead, it was more like the gradual forming of a new order, constructed piece by piece by dozens of physicists across decades. Many expected the new order to fail. But it didn't. In fact, the standard model has survived every test we have thrown at it, including attempts to create new particles or to find new forces that it doesn't predict (see "How to break the standard model"). So how, exactly, did physicists working throughout the 20th century come up with such an unbreakable framework? This is the story of the most successful theory we have ever devised.

Life was simpler in the 1920s. As far as anyone knew, the only elementary particles were photons, which made up light; protons, found in the centres of atoms; and electrons, which orbited around atomic nuclei. It was a simple picture, but a troublingly immutable one. According to quantum physics, which arose in that decade, there was no way for these particles to be created or destroyed. Yet, for example, when you shine a torch, electrons in the torch seemingly create and spew out photons. It was physicist Paul Dirac who realised that the potential for light exists everywhere, in the form of an underlying field.
This field is so weak as to be invisible, but given the right energy – if it interacts with an electron, say – it will develop a spike, or excitation, which we observe as a photon. Dirac called this theory quantum electrodynamics (QED). It was the first quantum field theory – one in which invisible fields, not particles, are the ultimate fundamental entities. Such theories would come to be the bread and butter of particle physics; the standard model itself is merely a more elaborate one. Successful as they have been, however, they are hard to stomach. When a field isn't excited into particles proper, it is frustratingly unobservable – a background murmuring of not-quite-somethings that physicists have come to call "virtual particles".

Within a few years of Dirac's QED, other theorists had extended the field concept to electrons and protons. This allowed physicist Enrico Fermi to explain how radioactive materials can emit electrons when their atomic nuclei don't actually contain any. The unchanging picture of subatomic matter was gone. Building on work by theorist Wolfgang Pauli, Fermi showed that electrons could be created, so long as sprightly, neutral particles called neutrinos are created simultaneously. The journal Nature rejected Fermi's paper on this, saying "it contained speculations too remote from reality to be of interest". Yet the neutrino, experimentally discovered in the 1950s, was only one of many strange new particles that physicists would have to come to terms with.

THE first addition, courtesy of British experimentalist James Chadwick in 1932, was the neutron, a neutral particle with around the same mass as the proton. Across the Atlantic, a young US physicist called Carl Anderson had spotted some backwards-curving tracks in cloud chamber photographs of cosmic rays. These turned out to be something Dirac had predicted: the first antimatter particle, the positron.

At this point, the world of particle physics still just about made sense. There were atoms, made of protons, neutrons and electrons; there was light, or photons; and there were positrons, or antimatter. But in 1936, in partnership with his colleague Seth Neddermeyer, Anderson found something no one expected: a seemingly heavier version of the electron called the muon.

This unexpected muon was the least of theorists' worries, though. A few years earlier, J. Robert Oppenheimer, an intense, straight-shooting physicist, had unearthed a rather large problem with QED. He realised there were infinitely many ways an electron could emit and reabsorb virtual photons – meaning its energy ought to be infinite, which made no sense. Soon, the infinities were cropping up everywhere. The theory was "in a hell of a way", said Oppenheimer. The arrival of the second world war didn't help matters. When that conflict was over, three different methods to constrain the infinities came to light: two in the US, devised by Julian Schwinger and Richard Feynman, and one in Japan, conceived by Sin-Itiro Tomonaga. In the end, mathematician Freeman Dyson realised they all amounted to the same thing. "It came bursting into my consciousness, like an explosion," he later recalled.

DYSON'S unified method was called renormalisation. It works by ignoring the lion's share of the electron's energy, which isn't measurable in an experiment. Calculate only what can be measured and the total energy is no longer infinite. It was a ridiculously simple idea, and for a time it worked.
By the 1950s, two other forces of nature beyond the electromagnetic force at the heart of QED were coming under increasing scrutiny: the weak force, which is behind Fermi's radioactive decay and neutrinos, and the strong force, which holds protons and neutrons together in atomic nuclei. For these forces, renormalisation just didn't work.

This time, the crucial insight came from Chen-Ning "Frank" Yang, a mild-mannered Chinese theorist who had relocated to the US. He had been dwelling on symmetries in particle interactions, a property implying that certain quantities are preserved when a system undergoes a transformation of some sort (see "Upside down, back to front", below). For instance, when photons interact with electrons, the equations naturally balance without the electron charge needing to vary. In this sense, the raison d'être of photons is to conserve the electron charge. We call this charge symmetry.

Working with his colleague Robert Mills, Yang introduced the idea that force-carrying particles similar to photons must exist in order to conserve other particle qualities. The trouble was, he couldn't make any definite predictions – much to the annoyance of Pauli, who heckled Yang so much during a presentation that the latter fell silent and nervously sat down. Oppenheimer, chairing the seminar, came to his defence, declaring "we should let Frank proceed". It was a good job they did. By showing how the concept of symmetry in QED (a renormalisable theory) can, in principle, be applied elsewhere, Yang had laid the groundwork for a new class of theories that would form the backbone of the standard model.

IN THE 1960s, Steven Weinberg, together with fellow physicists Sheldon Glashow and Abdus Salam, created a renormalisable field theory that encompassed both QED and the weak force. It predicted that the weak interactions of neutrinos and other particles are mediated by three new particles, known as the W+, W- and Z bosons. These are what we call the carriers of the weak force.

Meanwhile, physicists including Murray Gell-Mann were fashioning a new theory of the strong force. This theory, known as quantum chromodynamics, showed that protons and neutrons are made up of smaller entities known as quarks, whose interactions are governed by yet more new particles called gluons, the carriers of the strong force.

By now, with particle colliders becoming a standard tool for discovery, the list of known particles was rapidly proliferating. Most of these were composite, formed of quarks in different combinations. Although the quarks in protons and neutrons come in either "up" or "down" flavours, more flavours were needed to account for the diversity of other composites.

PHYSICISTS eventually settled on there being six quarks across three generations of elementary matter particles. In the first generation were the up and down quarks, along with the electron and the electron's associated neutrino. The second and third generations appeared to be progressively heavier copies of the first. To this day, no one knows why, when it comes to elementary particles, nature is so profligate. "If I could remember the names of all these particles," said Fermi, "I would have been a botanist."

There was just one more particle to complete the set: the Higgs boson, the particle that fixes the mass of all the others. Hypothesised back in 1964 by theorist Peter Higgs, among others, it was only in the early 1970s that it came to be taken seriously. It was discovered another 40 years after that.
"It is nice to be right about something sometimes," said Higgs after it was found in 2012.

With the theorised Higgs admitted to the stable, Weinberg delivered his 1973 talk introducing the name for the collection of entities that govern the most fundamental aspects of reality: the standard model of particle physics. The various patches of research had been stitched into a complete account of all the elementary particles, which, to this day, has never been contradicted by any experimental data. "The theory we now have is an integral work of art," Glashow wrote a few years later, "the patchwork quilt has become a tapestry."

Jon Cartwright

How to break the standard model

Six leading researchers describe what they believe are the most promising routes to new physics
Collisions at the energy frontier

Jon Butterworth
University College London

IT IS always risky to bet against the standard model of particle physics. Historically, most people who have done so have lost. But over the next decade and a half, the Large Hadron Collider (LHC) will continue smashing protons together and examining the messy aftermath. And it is there, within the details of these collisions, that I will be looking most closely for evidence of new physics beyond the standard model.

I work on what we call the "energy frontier", where concentrating a lot of energy into colliding subatomic particles can give us access to new physics. This works in two related ways. Firstly, if there is a new, heavy particle out there, then we might be able to make it if we have enough energy. Secondly, the highest-energy colliders are, in a sense, the highest-resolution microscopes. As we get to higher energies, the resolution with which we can probe the structure of matter increases.

We have pretty much maxed out on the energy we can get to, but we still plan to record around 10 billion more collisions. And more is better. Determining whether we have some physics beyond the standard model is like trying to determine whether a dice is fair or not. Six rolls of the dice will tell you next to nothing, but 6 million will give you a good idea (a toy numerical illustration appears at the end of this section).

A plethora of different things can happen when we collide protons at the LHC. Multiple Higgs bosons, W and Z bosons, very high-energy photons and jets of hadrons arising from scattered quarks and gluons can all be produced. So far, we know the distribution of these is about that expected of the standard model. But in some cases, that is a very approximate "about". Some important types of collision predicted by the standard model – for example, the production of pairs of Higgs bosons – are so rare that they haven't yet been seen. Many ideas for physics beyond the standard model make predictions for things that could easily hide under that "about". Over the next several years, we will make measurements that could, and I hope will, flush them out.

This effort requires a couple of things to happen. We experimentalists need to make more measurements to quantify how well the standard model is really doing, and they must be as independent of the theory as possible. We can't avoid making theoretical assumptions sometimes; we just have to minimise their impact. On the theory side, the predictions need to increase in precision. We don't directly measure the Higgs mass, for example, but infer it from particles produced in a collision. Making more precise predictions, which can be compared directly against experiments, means we can make better inferences about the underlying processes.

Both of these are already happening. I look forward in the next few years to heated exchanges about the level of agreement, or disagreement, between theory and experiment, along the lines of recent discussions about the anomalous "magnetic moment" of the muon (see "Muons behaving oddly"), for example.

Even if everything we measure agrees with the standard model, that will still be important. It will mean we have established that physics beyond the standard model lies far above the Higgs mass and hence beyond the capabilities of current particle colliders. That may not help us much in understanding dark matter or other questions the standard model leaves unanswered. But we don't get to choose nature; all we can do is explore it to the best of our ability.
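To put numbers on the dice analogy, here is a toy statistical sketch in Python. It illustrates the statistics only and is not anything from the LHC analysis stack; the loading on face six, the random seed and the sample sizes are all invented for the example.

```python
# Toy illustration: detecting a small bias in a die requires many rolls,
# just as detecting a small deviation from the standard model requires
# many recorded collisions.
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(seed=1)

# A slightly loaded die: face six comes up 17.5% of the time
# instead of the fair 16.7% (an assumed, illustrative bias).
probs = np.array([0.165, 0.165, 0.165, 0.165, 0.165, 0.175])

for n_rolls in (6, 6_000, 6_000_000):
    rolls = rng.choice(6, size=n_rolls, p=probs)
    counts = np.bincount(rolls, minlength=6)
    stat, pvalue = chisquare(counts)  # test against a fair (uniform) die
    print(f"{n_rolls:>9} rolls: p-value for 'fair die' = {pvalue:.3g}")
```

With six rolls the test has essentially no power; even 6000 rolls usually can't expose the loading, but 6 million make it unmistakable, with the p-value collapsing towards zero. The same logic is what makes billions of recorded collisions valuable.

Hunting cosmological chameleons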
Clare Burrage
University of Nottingham, UK

THE standard model fails to account for 95 per cent of the contents of the universe. Cosmologists split this unknown portion of the cosmos into 27 per cent dark matter, which clumps together under gravity, and 68 per cent dark energy, which causes the expansion of the universe to accelerate. We have a plethora of theories for what dark matter could be. Dark energy, on the other hand, remains more mysterious. I have chosen to look for a new force, carried by a new particle, that could explain dark energy.

We expect particles associated with the acceleration of the expansion of the universe to have two properties. Firstly, they should be very light, and secondly, they should mediate a "fifth" force – beyond gravity, electromagnetism and the strong and weak forces – that acts across cosmological distances. Many precise experiments have looked for light particles and long-range fifth forces without seeing them. Over the past few decades, however, we have realised that the environment might affect the behaviour of a fifth force. In particular, if the particle that transmits the fifth force can change its mass as the density of its environment changes, the force can act only over short distances in dense regions, but over long ranges in less dense ones. The ability to change their properties to evade detection has earned these proposed particles the name "chameleons".

This doesn't mean they are impossible to detect; we just have to design our experiments carefully. Some objects can be so small that the chameleon field doesn't have enough room inside them to change its mass, so the behaviour of the force doesn't change either. Because of this, if we drop a tennis ball and an atom side by side, we would expect the atom to fall faster than the ball if this fifth force exists.

Dropping individual atoms is hard to do, but, with new experimental techniques, we can measure how atoms fall extremely precisely. We cool clouds of atoms to make them as still as possible. Then, we shine a laser at them to excite an electron orbiting the nucleus of an atom from one energy level to another. When an atom absorbs a photon from the laser beam, an electron is excited, but the atom also starts moving. At this point, it is possible for the atom to be in a superposition of two states at the same time: one in which it absorbed the photon and is moving, and one in which it didn't and isn't. By pulsing the laser, we can spatially separate these two states and then recombine them. In between the pulses, the atoms fall freely under the influence of gravity and, if it exists, the hypothetical fifth force. We try to detect whether the dark energy force is causing atoms to fall faster than we would expect (a back-of-envelope estimate of the relevant phase shifts appears at the end of this section).

So far, experiments haven't seen any evidence of new forces, constraining our models. Within the next few years, I hope we will detect chameleon particles or rule out this model altogether. Either way, we will be closer to understanding a mysterious chunk of the universe that can't be described by the standard model.
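For a rough sense of scale, the phase difference between the two paths of a standard atom interferometer grows as Δφ = k_eff × a × T², where k_eff is the effective wavevector of the laser pulses, a is the acceleration and T is the time between pulses. The numbers in the sketch below (a rubidium-wavelength laser, 0.1-second pulse spacing and an extra acceleration of one billionth of g) are illustrative assumptions, not the parameters of any particular chameleon experiment.

```python
# Back-of-envelope atom-interferometer phases, using the standard
# result delta_phi = k_eff * a * T^2. All parameter values are assumed.
import math

wavelength = 780e-9                     # laser wavelength, m (rubidium D2 line)
k_eff = 2 * (2 * math.pi / wavelength)  # two-photon effective wavevector, 1/m
T = 0.1                                 # time between laser pulses, s
g = 9.81                                # acceleration due to gravity, m/s^2

phi_gravity = k_eff * g * T**2          # phase accrued from gravity alone
print(f"phase from gravity alone: {phi_gravity:.2e} rad")

delta_a = 1e-9 * g                      # hypothetical fifth-force acceleration
phi_fifth = k_eff * delta_a * T**2      # extra phase it would add
print(f"extra phase from the fifth force: {phi_fifth:.2e} rad")
```

Even an extra acceleration of one part in a billion of g shifts the phase by roughly a milliradian under these assumptions, which is why atom interferometers can place such tight constraints on fifth forces.

Quantum sensors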
Surjeet Rajendran
Johns Hopkins University, Maryland

PARTICLE colliders have had a revolutionary impact on our understanding of the universe, but these wonderful machines aren't the right tool to search for a specific kind of new physics: particles that interact only weakly with the electrons or protons being smashed together. Finding these so-called weakly coupled particles is an important avenue to explore, since much of our observational evidence for physics beyond the standard model is "dark" physics – relating to dark matter and dark energy. This "darkness" means such particles don't interact much with the electrons, protons, neutrons and photons we use in experiments.

To detect weakly interacting particles, not only do sensors need to measure minuscule effects, but they must also be immune to noise. The art of creating technologies that solve these problems is called precision sensing. Explosive developments in this field over the past two decades have led to astonishingly accurate sensors that exploit the wonders of quantum mechanics.

To create an accurate sensor, we need a near-perfect yardstick to use for the measurements. Consider the amazing fact that there are around 10⁸⁰ hydrogen atoms in the observable universe and every one is identical. A quantum sensor exploits this fact to create robust yardsticks that enable high-precision sensing. An example is the use of synchronised atomic clocks, which "tick" based on the behaviour of electrons within certain atoms, to search for dark matter.

A grand scientific success of precision sensing is the LIGO experiment. This uses devices called interferometers to search for gravitational waves, ripples in space-time. Interferometers contain waves, usually of light, that interfere to create patterns which can be studied to reveal forces or fields that have moved through the device. The discoveries of gravitational waves by the LIGO collaboration could be a harbinger of the kind of physics we may go on to discover using other quantum sensors.

Physics is an experimental science, so we aren't going to know what is there without experimental exploration. This is a frustrating fact for the theorist who might want to discover the theory of everything from their armchair. But, sobering as this is, I see it as a call to action: there is new physics out there and our job is to find it.
Rethinking time

Emily Adlam
Chapman University, California

WE TEND to assume that time works in a linear way. Even the standard model is traditional when it comes to how we think of time: some "input" evolves into an "output". Yet many other parts of modern physics, such as Albert Einstein's equations of general relativity, aren't a good fit with this simple time-evolution picture. This is why I believe that, to find physics beyond the standard model, we should move away from the time-evolution paradigm.

There are a few approaches that might replace it. One possibility is a retrocausal perspective, where the future plays some role in "producing" the past. Another is a so-called block universe approach, in which the whole of history co-exists in some way.

We live in a universe in which entropy, a measure of disorder, increases over time. This suggests the universe must have begun in a very special state with very low entropy. Within the standard model, it seems the only prospect for explaining this is to imagine a process producing a large number of possible universes – a multiverse. Since we find ourselves in this universe, the argument goes, we must be in one of the universes with a low-entropy initial state. But if we don't start from a time-evolution picture, there may be a way to explain the special initial state without imagining other universes.

The block universe approach also seems like a promising route to one of the largest problems that the standard model can't solve – unifying gravity and quantum mechanics – since our best theory of gravity, general relativity, seems most at home in a block universe picture.

Excitingly, we may soon have new experimental data. Recently, there have been proposals for experiments testing, for example, whether space-time can enter superpositions. One experiment involves putting two masses into quantum superpositions where each is effectively in two different positions at once, then seeing if this results in quantum entanglement between them. If entanglement is created in that experiment, it probably indicates that gravity itself has quantum properties. This kind of result would certainly represent a significant departure from the intuitive time-evolution picture used in the standard model. My hope is that these experiments will provide a direct window into the nature of time in our universe.
Muons behaving oddly

Alex Keshavarzi
The University of Manchester, UK

FOR the past six years, a team of scientists, myself included, has been searching for new physics using an unfamiliar fundamental particle. It turns out the muon, a heavier cousin of the electron, is very useful for searching for new physics beyond the standard model. Our experiment, based at the Fermi National Accelerator Laboratory (Fermilab), Illinois, measures a property called the muon's "magnetic moment".

In a magnetic field, muons can interact with a sea of virtual particles that pop in and out of existence. Some might be particles we know of, but unknown particles could pop up, too. These interactions cause the muons' spins to wobble, or "precess": the more interactions the muon experiences, the faster it wobbles. Measuring this wobble can indirectly tell us how many virtual particles the muon has interacted with, and the number of forces through which it has done so. By comparing this measurement with the standard model prediction, we can see whether the muon has interacted with new particles or forces.

Back in 2004, a measurement at the Brookhaven National Laboratory, New York, found the muon's magnetic moment was larger than the standard model prediction. It had a precision of 0.5 parts per million – as precise as measuring the length of a stadium to the width of a human hair (a back-of-envelope check of these numbers appears at the end of this section). But higher precision was needed to prove new physics. Part of the Brookhaven experiment was moved more than 4800 kilometres to Fermilab, and the Muon g-2 experiment was born.

Our first result, in 2021, confirmed the Brookhaven finding, and in August 2023 we released another consistent result with a precision of 0.19 parts per million: the most precise measurement ever made at a particle accelerator. If there is new physics beyond the standard model, our measurement is precise enough to confirm it for the first time. But that rests on the standard model prediction itself, which has evolved since 2021 to give different results depending on which method is used to calculate it. Some align more closely with our experiment and therefore suggest there is no new physics. The rest, enticingly, differ from it by more than the threshold needed to claim that first ever discovery of new physics. An international collaboration of theoretical physicists known as the Muon g-2 Theory Initiative, of which I am also a member, is working hard to understand and resolve these differences, hopefully within the next few years.

Whatever the outcome, the Muon g-2 experiment will continue to release new results at even higher precision, with our last batch of data now ready to be analysed. I am excited to see whether our final result, expected in the next couple of years, could irrefutably confirm physics beyond the standard model for the first time.
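Two of the numbers above can be checked with back-of-envelope arithmetic, sketched in Python below. The storage-ring field of roughly 1.45 tesla and the 100-metre stadium are assumptions for illustration; the other values are standard constants.

```python
# Rough checks on the muon g-2 numbers quoted above.
import math

e = 1.602176634e-19     # elementary charge, C
m_mu = 1.883531627e-28  # muon mass, kg
a_mu = 1.16592e-3       # muon anomaly (g-2)/2, approximate measured value
B = 1.45                # storage-ring magnetic field, T (assumed value)

# Anomalous precession frequency omega_a = a_mu * e * B / m_mu, the rate at
# which the muon's spin wobbles relative to its momentum. This simple form
# holds at the experiment's "magic" muon momentum, where electric-field
# corrections cancel.
omega_a = a_mu * e * B / m_mu
print(f"anomalous precession frequency: {omega_a / (2 * math.pi) / 1e3:.0f} kHz")

# The stadium analogy: 0.5 parts per million of a 100-metre stadium.
stadium = 100.0  # m (assumed length)
print(f"0.5 ppm of {stadium:.0f} m = {0.5e-6 * stadium * 1e6:.0f} micrometres")
```

The precession frequency comes out at roughly 229 kilohertz, consistent with the published Muon g-2 value, and 0.5 parts per million of a 100-metre stadium is about 50 micrometres, which is indeed the width of a human hair.

A new paradigm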
Matt Strassler
Harvard University

IN THE 19th century, scientists understood most waves as we understand them today: rhythmic disturbances of a medium, like sound waves through air. Once they realised light was a wave, it seemed obvious it would have a medium, too, which they called the luminiferous aether. Earth's motion through the aether was expected to cause light's speed to change slightly depending on its direction of travel, just as sound's speed does for moving observers. In an experiment in 1887, physicists Albert Michelson and Edward Morley found no such dependence. This "null result" puzzled physicists until Albert Einstein proposed an explanation in 1905: that the speed of light never changes.

Ever since then, particle physics has been driven by a sequence of puzzles and clues that has led scientists from one discovery to the next (see "How we built the standard model"). Unfortunately, this trend seems to have ended in 2012 with the discovery of the Higgs boson. I view the absence of further discoveries at the Large Hadron Collider (LHC) as potentially among the most important null results in the history of physics, comparable with the Michelson-Morley measurement.

Just as the aether result violated simple, general reasoning about waves, the LHC's null result also goes against certain basic reasoning about quantum fields. Quantum field theory seems to imply, and experiments seem to confirm, that it is highly unusual to have a Higgs field and Higgs boson that exist entirely on their own. There ought to be a whole set of additional fields and particles, of which the Higgs field and Higgs boson are just the first. Some of them should be observable at the LHC. So far, the collider has seen nothing of the kind. While its search is far from over, many physicists like me have been wondering if the reasoning is misguided. Perhaps, as with Michelson-Morley, we might be seeing the first signs of a paradigm breaking down.

Calculations concerning the Higgs field rely on three assumptions: that quantum field theory is valid; that it can be used without having to account for phenomena at energies vastly larger than those probed directly by the LHC; and that events in the universe's distant past don't affect the calculations. Although no one has a reasonable alternative to quantum field theory yet, it behoves us to question the other assumptions.

One option involves hidden symmetries relating traits of both known and unknown particles. These could lead to "magic zeroes", causing effects that would destabilise a lone Higgs field to be much smaller than expected. Or quantum gravity may impose unknown constraints on quantum field theories. It might be that only a few quantum field theories are consistent with gravity, and that in those resembling the standard model, the LHC's null result is automatic.

Another possibility is that the universe's history has left us with a world where this null result was necessary for our existence. The simplest version of this idea suggests our universe is part of a mostly uninhabitable multiverse, in which the rare habitable places are described by very unusual quantum field theories. While similar reasoning potentially explains why the cosmological constant that drives the universe's accelerating expansion is very small (see "Hunting cosmological chameleons"), it fails here, merely replacing one puzzle with another. Another idea is that the universe passed through many phase transitions, at each stage being described by a more and more unusual quantum field theory. When it reached the most unusual phase, the transitions stopped, leaving us with a surprising universe. Yet this proposal, which was made explicit through an idea known as the "relaxion", has difficulty solving the cosmological constant problem.

All of these ideas seem rather desperate, indicative of a moment when the concepts underlying particle physics may need to be rethought. If so, particle physicists face a tremendous challenge. The path forwards is unclear, making it difficult to guess which experimental efforts are worthwhile. Perhaps classic particle physics methods are no longer the answer, but what to replace or enhance them with is anyone's guess. Physicists of the 1890s faced such obstacles, too. But they were lucky. Cathode rays and radioactivity opened up entirely new classes of experiments that could reveal the structure of atoms and the properties of particles moving at near light speed. Whether today's physicists will be so lucky remains entirely unclear.

WHO INVENTED THE STANDARD MODEL?

There was no single moment when the standard model came into being, so the date of its anniversary is up for debate. A meeting in 2018 celebrating 50 years of the standard model took its genesis as physicist Steven Weinberg's 1967 paper "A model of leptons". The term was introduced in the scientific literature in 1975, where it was mentioned in passing, suggesting it was already in circulation. Weinberg said he first used the term at a talk in 1973 in Aix-en-Provence, France. But even he expressed doubts about its origin. "I think I'm the one who gave it that name," he said in 2010, before swiftly adding: "I've never been quite sure about that."

UPSIDE DOWN, BACK TO FRONT

The standard model was guided by a principle proven by mathematician Emmy Noether in 1918: that every symmetry implies a conservation law. The symmetry of physics in time implies energy is conserved, for instance, and symmetry in space implies momentum is conserved (a worked example appears at the end of this box). This proved indispensable to early particle physicists (see main story), who spotted plenty of new symmetries. From these, they inferred conservation laws that revealed which interactions were possible. A symmetry in the behaviour of electrons and neutrinos, for instance, led to electromagnetism and the weak nuclear force coming together in electroweak theory.

Taking symmetry much further, we get "supersymmetry", an extension to the standard model that says every particle has a heavier superpartner. So far, there is no evidence for this.
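To see Noether's principle in action, here is the textbook one-line check for time symmetry, for a single coordinate q(t) with a generic Lagrangian L(q, q̇) that has no explicit time dependence (a schematic example, not anything specific to the standard model):

$$
E = \dot q\,\frac{\partial L}{\partial \dot q} - L,
\qquad
\frac{\mathrm{d}E}{\mathrm{d}t}
= \dot q \left( \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q} \right)
= 0,
$$

where the bracketed term vanishes by the Euler-Lagrange equation of motion. Because the laws encoded in L look the same at every instant, the energy E cannot change: symmetry in time forces conservation of energy.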
THE STANDARD MODEL REIMAGINED

The standard model of particle physics is often illustrated as a simple grid showing the 17 basic particles (above). But an alternative way of visualising it reveals the complex rules that govern how the particles and forces interact.

The conventional grid shows three generations of quarks (which feel the strong force) and leptons (which don't). Then there are the bosons that mediate three of the fundamental forces of nature – the strong and weak nuclear forces and electromagnetism. But it doesn't give us the full picture. For one, there are parts missing, like the fact that most particles can occur in two forms of a property called handedness: right-handed and left-handed. It also tells us nothing about which particles feel which forces. And there are mysteries it glosses over, too, like the fact that there are no right-handed neutrinos, at least none that we know of.

"The standard grid, as lovely as it is, looks finished," says Chris Quigg, a theoretical physicist at the Fermi National Accelerator Laboratory in Illinois. "But the standard model is not finished." Quigg thought we needed a new way to visualise the theory that reflected its messiness. In 2005, he came up with his answer: the double simplex (below).

Made of two pyramids linked by the Higgs boson, one half represents left-handed particles and the other right-handed ones. Each pyramid vertex groups generations of related particles together. By using lines of different colours, the double simplex shows how particles interact through different forces. The lack of right-handed neutrinos is reflected in the smaller number of particles in the right-handed pyramid. Here, too, you can see there are no weak interactions inside the right-handed shell, meaning that right-handed quarks and leptons can't switch between different flavours or generations (see main story).

"The difference between left-handed and right-handed particles is one of the big mysteries of the standard model," says Quigg. In the usual grid, these complexities are hidden; the double simplex celebrates them.
