Argonne's Aurora will accelerate particle physics discoveries at CERN

July 22, 2021 – The U.S. Department of Energy’s (DOE) Argonne National Laboratory will house one of the nation’s first exascale supercomputers when Aurora arrives in 2022. To prepare codes for the architecture and scale of the system, 15 research teams are taking part in the Aurora Early Science Program through the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science user facility. With access to pre-production time on the supercomputer, these researchers will be among the first in the world to use an exascale machine for science.

Early philosophers first formulated the idea of the atom around the fifth century BCE. And just when we thought we had understood its basic structure – protons, neutrons and electrons – theories and technologies emerged to prove us wrong. It turns out that there are still more fundamental particles, like quarks, bound together by aptly named gluons.

Physicists discovered many of these and other particles in the huge beasts of machinery we call colliders, helping to develop what we know today as the Standard Model of physics. But there are questions that keep nagging: is there something even more fundamental? Is the Standard Model all there is?

Determined to find out, the high-energy physics community is working to integrate ever larger colliders and more sophisticated detectors with exascale computing systems. Among those physicists is Walter Hopkins, an assistant physicist at Argonne National Laboratory and a collaborator on the ATLAS experiment at CERN’s Large Hadron Collider (LHC), near Geneva, Switzerland.

In collaboration with researchers from Argonne and Lawrence Berkeley National Laboratory, Hopkins is leading an Aurora Early Science Program project through the ALCF to prepare software used in LHC simulations for exascale computing architectures, including Argonne’s forthcoming exascale machine, Aurora. Capable of a billion billion calculations per second, Aurora sits at the frontier of supercomputing and is well matched to the gargantuan scale of the next particle physics challenge.

The project was started several years ago by Argonne emeritus physicist James Proudfoot, who saw the distinct advantages exascale computing offered in enhancing the impact of such complex science.

Aligning codes with a new architecture

Collisions produced in the LHC are recorded by one of several large detectors. The one the team is focusing on, ATLAS, witnesses billions of particle interactions every second and the signatures of new particles those collisions create in their wake.

One type of code the team is focusing on, called an event generator, simulates the underlying physical processes that occur at interaction points in the 17-mile-circumference collider ring. Aligning the physics produced by the software with that of the Standard Model helps researchers accurately simulate collisions and predict the types, trajectories, and energies of the residual particles.
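As a rough illustration of what an event generator does, the toy Python sketch below samples a simple two-body decay, assigning each simulated event a direction and an energy. It is only a conceptual sketch under simplified assumptions; production generators such as MadGraph evaluate full Standard Model matrix elements and are vastly more involved.

```python
# Toy "event generator": isotropic two-body decay of a heavy particle at rest
# into two massless daughters. Real generators sample full matrix elements;
# this sketch only illustrates producing simulated trajectories and energies
# event by event.
import numpy as np

rng = np.random.default_rng(seed=42)

def generate_events(n_events, parent_mass_gev=91.2):
    """Return (energy, px, py, pz) for one daughter of each simulated decay."""
    energy = np.full(n_events, parent_mass_gev / 2.0)  # each daughter carries half the mass
    cos_theta = rng.uniform(-1.0, 1.0, n_events)       # isotropic polar angle
    phi = rng.uniform(0.0, 2.0 * np.pi, n_events)      # isotropic azimuthal angle
    sin_theta = np.sqrt(1.0 - cos_theta**2)
    px = energy * sin_theta * np.cos(phi)
    py = energy * sin_theta * np.sin(phi)
    pz = energy * cos_theta
    return energy, px, py, pz

e, px, py, pz = generate_events(1_000_000)
print(f"mean daughter energy: {e.mean():.1f} GeV")
```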

Detecting physics in this way creates a mountain of data and requires equally significant computational time. And now CERN is upping the ante as it prepares to increase the LHC’s luminosity, enabling more particle interactions and a 20-fold increase in data output.

While the team looks to Aurora to handle this increase in its simulation needs, moving to the machine is not without challenges.

Workers inside ATLAS, one of several primary detectors at CERN’s Large Hadron Collider. ATLAS witnesses a billion particle interactions every second and the signatures of new particles created in proton-proton collisions at near light speed. (Picture: CERN)

Until recently, event generators ran on CPUs (central processing units). Although fast, a CPU can typically perform only a handful of operations at a time.

Aurora will feature both CPUs and GPUs (graphics processing units), the choice of gamers around the world. GPUs can handle many operations by breaking them up into thousands of smaller tasks spread across many cores, the engines that drive both types of units.
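The sketch below illustrates that data-parallel style in Python: one vectorized expression applied to millions of simulated particles at once, rather than a loop over each one. The GPU variant is only a hedged aside; it assumes the CuPy library, which mirrors much of NumPy’s interface, and is not drawn from the team’s actual code base.

```python
# Data-parallel style: one elementwise expression over millions of simulated
# particles at once. On a GPU this kind of work maps naturally onto thousands
# of cores, each handling a small chunk of the array.
import numpy as np

def transverse_momentum(px, py):
    # Single vectorized operation over the whole array.
    return np.sqrt(px**2 + py**2)

px = np.random.standard_normal(10_000_000)
py = np.random.standard_normal(10_000_000)
pt = transverse_momentum(px, py)
print(f"mean pT of toy sample: {pt.mean():.3f}")

# GPU variant (assumes a CUDA-capable GPU and the cupy package; illustrative only):
# import cupy as cp
# pt_gpu = cp.sqrt(cp.asarray(px)**2 + cp.asarray(py)**2)
```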

But it takes a lot of effort to move CPU-based simulations to GPUs efficiently, Hopkins notes. So making this move to prepare for both Aurora and the onslaught of new LHC data presents several challenges, which have become a central part of the team’s goal.

“We want to be able to use Aurora to help us meet these challenges,” says Hopkins, “but that forces us to study computing architectures that are new to us and our code base. For example, we are focusing on a generator used in ATLAS, called MadGraph, to run on GPUs, which are more parallel and have different memory management requirements.”

MadGraph, a particle interaction simulation code written by an international team of high-energy physics theorists, meets the simulation needs of the LHC.

Simulation and AI support experimental work

The LHC has played an important role in bringing the Standard Model’s predictions to fruition. Most famously, the Standard Model predicted the existence of the Higgs boson, which imparts mass to fundamental particles; ATLAS and its counterpart detector, CMS, confirmed the Higgs boson’s existence in 2012.

But, as is often the case in science, big discoveries can lead to more substantial questions, many of which the Standard Model cannot answer. Why does the Higgs have the mass it does? What is dark matter?

“The reason for this very large LHC upgrade is that we hope to find that needle in the haystack, that we will find an anomaly in the dataset that offers a hint of physics beyond the Standard Model,” says Hopkins.

A combination of computing power, simulation, experiment, and artificial intelligence (AI) will greatly aid this research by providing accuracy in both prediction and identification.

When the ATLAS detector witnesses these particle collisions, for example, it records them as electronic signals. These are reconstructed as pixel-like bursts of energy that can correspond to the passage of an electron.

“But just like in AI, where the canonical example identifies cats and dogs in images, we have algorithms that identify and reconstruct those electronic signals into electrons, protons, and other things,” says Taylor Childers, a computer scientist at the ALCF and a member of the team.

Data reconstructed from real collision events is then compared with simulated data to look for differences in the patterns. This is where the accuracy of the physics models comes in. If the models perform well and the real and simulated data still do not match, you keep measuring and ruling out anomalies until it is likely that you have found that needle, the something that doesn’t fit the Standard Model.
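The sketch below is a schematic version of that comparison step: histogram an observable in “real” and simulated events and quantify how compatible the two shapes are. The chi-square test stands in for the far more sophisticated statistical treatment an actual LHC analysis uses, and the data here are entirely synthetic.

```python
# Schematic data-vs-simulation comparison: bin an observable in measured and
# simulated events, then quantify how compatible the two distributions are.
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(0)
observed = rng.exponential(scale=50.0, size=100_000)   # stand-in for measured events
simulated = rng.exponential(scale=50.0, size=100_000)  # stand-in for the Standard Model prediction

bins = np.linspace(0, 300, 31)
obs_counts, _ = np.histogram(observed, bins=bins)
sim_counts, _ = np.histogram(simulated, bins=bins)

# Scale the simulation to the observed total, then test shape compatibility.
sim_scaled = sim_counts * obs_counts.sum() / sim_counts.sum()
stat, p_value = chisquare(obs_counts, f_exp=sim_scaled)
print(f"chi2 = {stat:.1f}, p-value = {p_value:.3f}")   # a tiny p-value would flag an anomaly
```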

The team is also using AI to quantify uncertainty, to determine the likelihood that they have correctly identified a particle.

Humans are able to identify particles to a limited extent – a few parameters like momentum and position can tell us that a certain particle is an electron. But base that characterization on 10 interrelated parameters, and it’s another story altogether.

“This is where artificial intelligence really shines, especially if these input parameters are correlated, like the momentum of the particles around an electron and the momentum of the electron itself,” says Hopkins. “These correlations are hard to deal with analytically, but because we have so much simulation data, we can teach the AI, and it can tell us: this is an electron with this probability, because I have all this information as input.”
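As a loose sketch of that idea, the Python example below trains a classifier on simulated, correlated features (a candidate’s momentum and the momentum of nearby particles, both invented here for illustration) and then reports the probability that a new candidate is an electron. The toy features and the scikit-learn model are stand-ins, not the ATLAS reconstruction or AI software.

```python
# Sketch of probability-based particle identification: train on abundant
# simulated examples with correlated features, then ask for the probability
# that a new candidate is an electron. Feature model and classifier choice
# are illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 20_000

# Two correlated features: candidate momentum and the momentum summed around it.
electron_p = rng.normal(40, 10, n)                     # "electron" candidates
electron_iso = 0.1 * electron_p + rng.normal(0, 2, n)  # isolated: little nearby momentum
fake_p = rng.normal(35, 15, n)                         # non-electron background
fake_iso = 0.6 * fake_p + rng.normal(0, 5, n)          # surrounded by other particles

X = np.vstack([np.column_stack([electron_p, electron_iso]),
               np.column_stack([fake_p, fake_iso])])
y = np.concatenate([np.ones(n), np.zeros(n)])          # 1 = true electron (from simulation)

model = GradientBoostingClassifier().fit(X, y)

candidate = [[42.0, 5.0]]                              # momentum, nearby momentum (GeV)
print(f"P(electron) = {model.predict_proba(candidate)[0, 1]:.2f}")
```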

Exascale computing and the way forward

In preparation for Aurora, the team continues to work on programming languages for new architectures and on code to run on the Intel hardware that Aurora will use, as well as on hardware from other vendors.

“Part of the R&D we do with our partner Intel is to make sure the hardware does what we expect it to do and does it efficiently,” says Childers. “Having a machine like Aurora will give us lots of computing power and lots of nodes to effectively reduce solution time, especially when we move to the upgraded LHC.”

That solution is an answer to a fundamental question: is there more beyond the Standard Model? It is an answer, Hopkins notes, that could have repercussions unimaginable a hundred years from now.

“Basic research can give us knowledge that can lead to societal transformation, but if we don’t do research, it won’t lead anywhere,” he says.

The ALCF is a DOE Office of Science user facility.

Funding for this project was provided by the DOE Office of Science: Offices of High Energy Physics and Advanced Scientific Computing Research. ATLAS is an international collaboration supported by the DOE.

About the ALCF

The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding across a wide range of disciplines. Supported by the Advanced Scientific Computing Research (ASCR) program of the U.S. Department of Energy’s (DOE) Office of Science, the ALCF is one of two DOE advanced computing facilities dedicated to open science.

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts cutting-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state, and municipal agencies to help them solve their specific problems, advance American scientific leadership, and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.



Source: JOHN SPIZZIRRI, ALCF
