Machine learning proliferates in particle physics

Experiments at the Large Hadron Collider produce about one million gigabytes of data every second. Even after reduction and compression, the data accumulated in just one hour at the LHC is similar to the volume of data that Facebook collects in an entire year.

Fortunately, particle physicists do not have to manage all this data themselves. They team up with a form of artificial intelligence called machine learning, which learns to perform complex analyses on its own.

“Compared to a traditional computer algorithm that we design to perform a specific analysis, we design a machine learning algorithm to figure out on its own how to perform various analyses, which potentially saves us countless hours of design and analysis work,” says Alexander Radovic, a physicist at the College of William & Mary who works on the NOvA neutrino experiment.

Radovic and a group of researchers summarize the current applications and future prospects of machine learning in particle physics in an article published today in Nature.

Sifting through big data

To manage the huge volumes of data produced in modern experiments like those at the LHC, researchers apply what they call “triggers”: dedicated hardware and software that decides in real time what data to keep for analysis and what data to discard.
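To give a feel for the idea, here is a minimal sketch of how a pre-trained machine learning classifier can make the keep-or-discard decision for each event. It is an illustration only, not actual trigger software; the features, model and threshold are hypothetical:

```python
# Minimal sketch of an ML-assisted trigger decision. Illustrative only:
# the classifier, features and threshold are hypothetical, not LHC software.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Toy "events": each row is a small vector of summary features
# (think transverse momentum, vertex displacement); purely synthetic here.
train_features = rng.normal(size=(10_000, 4))
train_labels = (train_features[:, 0] + 0.5 * train_features[:, 1] > 1.0).astype(int)

classifier = GradientBoostingClassifier().fit(train_features, train_labels)

def trigger_decision(event_features, threshold=0.9):
    """Return True to keep the event for analysis, False to discard it."""
    score = classifier.predict_proba(event_features.reshape(1, -1))[0, 1]
    return score >= threshold

incoming_event = rng.normal(size=4)
print("keep event:", trigger_decision(incoming_event))
```

In a real trigger the score typically has to be computed within microseconds to milliseconds, which is why the models deployed online are usually kept deliberately lightweight.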

In LHCb, an experiment that could shed light on why there is so much more matter than antimatter in the universe, machine learning algorithms make at least 70% of those decisions, says LHCb scientist Mike Williams of the Massachusetts Institute of Technology, one of the authors of the Nature review. “Machine learning plays a role in almost every data-related aspect of the experiment, from triggers to the analysis of the remaining data,” he says.

Machine learning has also proven extremely successful in the area of data analysis. The gigantic ATLAS and CMS detectors at the LHC, which enabled the discovery of the Higgs boson, each have millions of sensing elements whose signals must be pieced together to obtain meaningful results.

“These signals make up a complex data space,” says Michael Kagan of the US Department of Energy’s SLAC National Accelerator Laboratory, who works on ATLAS and was also one of the authors of the Nature review. “We need to understand the relationships between them to come to conclusions, for example that a certain particle track in the detector was produced by an electron, a photon or something else.”

Neutrino experiments also benefit from machine learning. NOvA, which is run by the Fermi National Accelerator Laboratory, studies how neutrinos change from one type to another as they pass through the Earth. These neutrino oscillations could potentially reveal the existence of a new type of neutrino that some theories predict to be a dark matter particle. NOvA’s detectors watch for charged particles produced when neutrinos strike the detector material, and machine learning algorithms identify them.

From machine learning to deep learning

Recent developments in machine learning, often referred to as “deep learning”, promise to push particle physics applications even further. Deep learning generally refers to the use of neural networks: computer algorithms whose architecture is inspired by the dense network of neurons in the human brain.

These neural networks teach themselves to perform certain analysis tasks during a training period in which they are shown sample data, such as simulations, and told how well they performed.
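As a concrete illustration of that training loop, here is a minimal sketch in which a small network is shown labeled simulated events and “told how well it performed” through a loss function that it then minimizes. It is a hypothetical example, not code from NOvA or any LHC experiment:

```python
# Minimal sketch of the training loop described above. Hypothetical example,
# not code from NOvA or the LHC experiments: a small network is shown labeled
# "simulated" events and told how well it performed via a loss function.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy simulated data: 8 input features per event, binary label (signal vs background).
features = torch.randn(5000, 8)
labels = (features[:, 0] + features[:, 1] > 0).float().unsqueeze(1)

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.BCEWithLogitsLoss()                 # measures "how well it performed"
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)      # compare predictions to labels
    loss.backward()                              # propagate the error back
    optimizer.step()                             # nudge the weights to do better

print(f"final training loss: {loss.item():.3f}")
```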

Until recently, the success of neural networks was limited because they were very difficult to train, says co-author Kazuhiro Terao, a SLAC researcher working on the MicroBooNE neutrino experiment, which studies neutrino oscillations as part of Fermilab’s Short-Baseline Neutrino Program and will become a component of the future Deep Underground Neutrino Experiment at the Long-Baseline Neutrino Facility. “These difficulties limited us to neural networks that were only a few layers deep,” he says. “Thanks to advances in algorithms and computing hardware, we now know much better how to build and train more capable networks hundreds or thousands of layers deep.”

Many advances in deep learning are spurred by the commercial applications of tech giants and the explosion of data they have generated over the past two decades. “NOvA, for example, uses a neural network inspired by the architecture of GoogLeNet,” says Radovic. “It improved the experiment in a way that otherwise could only have been achieved by collecting 30% more data.”

Fertile ground for innovation

Machine learning algorithms are becoming more sophisticated and refined by the day, opening up unprecedented opportunities for solving particle physics problems.

Many of the new tasks they could be used for are related to computer vision, Kagan says. “It’s similar to facial recognition, except that in particle physics, image features are more abstract and complex than ears and noses.”

Some experiments like NOvA and MicroBooNE produce data that can easily be translated into real images, and AI can be easily used to identify features. In LHC experiments, by contrast, images must first be reconstructed from an obscure set of data generated by millions of sensor elements.

“But even if the data doesn’t look like images, we can still use computer vision methods if we are able to process the data in the right way,” says Radovic.

One area where this approach could be very useful is the analysis of the particle jets produced in large numbers at the LHC. Jets are narrow sprays of particles whose individual trajectories are extremely difficult to separate. Computer vision technology could help identify the characteristics of the jets.
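A minimal sketch of that idea, under the assumption that a jet is binned into a two-dimensional “image” of energy deposits (synthetic data and an arbitrary network here, not an experiment’s actual tagger):

```python
# Illustrative "jet image" sketch. An assumption for this article, not an
# experiment's actual tagger: calorimeter energy deposits are binned into a
# 2D image and passed through a small convolutional network that returns a
# single score per jet (for example, quark-like vs gluon-like).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy batch of 16 single-channel 32x32 "jet images" of energy deposits.
jet_images = torch.rand(16, 1, 32, 32)

cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                  # 32x32 -> 16x16
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                  # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(16 * 8 * 8, 1),         # one raw score per jet
)

scores = torch.sigmoid(cnn(jet_images))
print(scores.shape)                   # torch.Size([16, 1])
```

The convolutional layers play the same role as in ordinary image recognition: they pick out local patterns of energy deposits, analogous to the edges and textures a network would find in photographs.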

Another emerging application of deep learning is the simulation of particle physics data that predicts, for example, what happens in particle collisions at the LHC and can be compared to real data. Simulations like these are usually slow and require immense computing power. AI, on the other hand, could perform simulations much faster, potentially complementing the traditional approach.

“Just a few years ago, no one would have thought that deep neural networks could be trained to ‘hallucinate’ data from random noise,” says Kagan. “While this is very preliminary work, it shows great promise and could help address future data challenges.”
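That quote points toward generative models such as generative adversarial networks. The following is a minimal, self-contained sketch of the idea on a toy one-dimensional distribution, offered as an assumption for illustration rather than any experiment’s fast-simulation code: a generator turns random noise into synthetic samples while a discriminator tries to tell them apart from “real” data.

```python
# Minimal generative sketch, an assumption based on the quote above (it points
# toward generative adversarial networks); not any experiment's simulation code.
# A generator learns to turn random noise into samples resembling a toy 1D
# "physics" distribution, while a discriminator tries to tell real from fake.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(200):
    real = 0.5 * torch.randn(64, 1) + 2.0        # toy "real" data distribution
    fake = generator(torch.randn(64, 4))         # "hallucinated" from noise

    # Discriminator step: label real samples 1 and generated samples 0.
    opt_d.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator call the fakes real.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

print("generated samples:", generator(torch.randn(5, 4)).squeeze().tolist())
```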

Maintaining a healthy skepticism

Despite all the obvious advances, machine learning enthusiasts often face skepticism from their collaborators, in part because machine learning algorithms operate mostly as “black boxes” that provide very little information about how they came to a certain conclusion.

“Skepticism is very healthy,” Williams says. “If you’re using machine learning for triggers that delete data, like we do in LHCb, you have to be extremely careful and set the bar very high.”

Therefore, implementing machine learning in particle physics requires constant efforts to better understand the inner workings of the algorithms and to cross-check against real data whenever possible.

“We should always try to understand what a computer algorithm is doing and always evaluate its outcome,” says Terao. “This is true for all algorithms, not just machine learning. So being skeptical shouldn’t stop progress.”

The rapid progress has some researchers dreaming of what might become possible in the near future. “Today, we mainly use machine learning to find features in our data that can help us answer some of our questions,” says Terao. “A decade from now, machine learning algorithms may be able to independently ask their own questions and recognize when they find new physics.”
