High-energy Physics: Predicting the Emergence of Jets
Jets resulting from particle collisions, like those taking place at the Large Hadron Collider (LHC) housed at CERN near Geneva, Switzerland, are among the most important experimental signatures in high-energy physics. Virtually every final-state, high-energy particle produced will be part of a jet.
“It's actually very rare that these collisions produce a non-strongly interacting particle,” says Stefan Höche, a theoretical physicist at SLAC National Accelerator Laboratory at Stanford University in California, US. “So we have to describe the emergence of jets very precisely. Even if an interesting new particle is created, it will predominantly decay into jets, and we have to discern such decays from an overwhelming background.”
Höche and his colleagues developed and are refining Sherpa, a Monte Carlo event generator that simulates high-energy reactions resulting from particle collision events. Physicists use these simulations to compute how particles interact according to a given theory, and then compare simulated collisions with those measured by experiments at the LHC.
The scientists can make predictions for quarks and gluons, which are the elementary particles of quantum chromodynamics, but they cannot make predictions for jets — at least not based on first principles. That’s where the Monte Carlo comes in, relating the quarks and gluons to the jets in the LHC experiment. Each highly energetic quark or gluon becomes the seed for one or more jets.
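The idea that each energetic quark or gluon seeds a jet of many lower-energy particles can be sketched with a toy Monte Carlo. This is only an illustration of the concept, far simpler than what Sherpa actually computes; the splitting rule, energy cutoff, and numerical values below are made-up assumptions:

```python
import random

def toy_shower(energy, cutoff=5.0):
    """Toy parton shower: recursively split a parton's energy in two
    until every fragment falls below the cutoff (in GeV).  Returns the
    list of final fragment energies seeded by the initial parton.
    Illustration only -- not Sherpa's actual algorithm."""
    if energy < cutoff:
        return [energy]
    z = random.uniform(0.2, 0.8)  # momentum fraction taken by one branch
    return toy_shower(z * energy, cutoff) + toy_shower((1 - z) * energy, cutoff)

random.seed(42)
jet = toy_shower(100.0)  # one 100 GeV parton seeds a whole spray of particles
print(len(jet), "fragments, total energy %.1f GeV" % sum(jet))
```

Energy is conserved at each split, so a single hard parton ends up shared among many soft fragments, which is the qualitative picture behind "each highly energetic quark or gluon becomes the seed for one or more jets."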
In decades past, physicists used simple event generators based on two particles in the final state, which could describe two jets quite well. The massive amount of energy used to produce collisions today, however, results in hundreds or thousands of observed particles, which may form a dozen or more jets. Event generators must take many more quarks and gluons into account to describe this situation precisely, and thus perform much more complicated calculations.
This is what sets Sherpa apart from other event generators. “It can simulate the highest-multiplicity final states,” explains Höche. “Sherpa can also interface with other computer programs, like BlackHat, to incorporate loop processes and combine high-multiplicity with lower-multiplicity processes. This is crucial to get a coherent picture of how jets emerge at the LHC.”
Using Sherpa in combination with a program called OpenLoops, Höche and his colleagues produced a unified description of top-quark-pair production in conjunction with up to two jets. Top-quark production is one of the most important reactions at the LHC, and its precise simulation is mandatory for many experimental measurements. Höche says the Sherpa calculations improve precision by about a factor of two. This may mean experiments lead to a potential new theory of nature twice as fast, bringing us closer, for example, to solving the mystery that surrounds dark matter.
“To make these calculations, as well as tune our models, we need lots of computational power,” says Höche. “The Open Science Grid (OSG) allows us to do these very high particle multiplicity calculations, which helps us reduce the theoretical uncertainties of our predictions. We could produce scientifically relevant results in a very short time with state-of-the-art tools.”
The LHC experiments at CERN, such as ATLAS and CMS, each put their measured collision results into a code framework called Rivet, which interfaces with Sherpa. Höche and his colleagues can then pipe their simulated results directly into Rivet and compare their calculations with experimental measurements. “If we get results that don’t agree, we know our models were wrong,” he says. “It’s important that we get this feedback quickly.”
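The feedback loop described above amounts to comparing binned distributions from simulation and experiment. A minimal sketch of that idea, assuming made-up bin contents and a simple chi-square criterion (Rivet's actual validated analyses are far more sophisticated):

```python
def chi_square(simulated, measured, errors):
    """Sum of squared, error-normalised deviations between two binned
    distributions of equal length.  A large value signals that the
    simulated model disagrees with the measurement."""
    return sum((s - m) ** 2 / e ** 2
               for s, m, e in zip(simulated, measured, errors))

# Hypothetical jet-multiplicity histogram: counts per bin.
sim      = [120.0, 80.0, 30.0, 10.0]   # simulated prediction
data     = [115.0, 85.0, 28.0, 12.0]   # "measured" counts
data_err = [10.7, 9.2, 5.3, 3.5]       # roughly sqrt(N) statistical errors

print("chi2 per bin = %.2f" % (chi_square(sim, data, data_err) / len(sim)))
```

A chi-square per bin near one means the simulation is compatible with the data within its statistical errors; values much larger than one would be the quick "our models were wrong" signal Höche describes.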
Last year, the group had the opportunity to run their codes at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory in California, US. “We implemented MPI and found that it was working great,” says Höche. “But our code is so complicated that it often takes at least 20 minutes just to start up the executable. At high-performance computing centers, short jobs perform better, which is sometimes not practical for the simulations we’re working with.”
Particle-level LHC calculations are therefore running primarily on the OSG. Each prediction needs about 250,000 CPU hours. “We have dozens of predictions to make; the OSG helps us produce these results in as timely a manner as possible.” What would really be ideal in the future, Höche explains, is if OSG had many more small-scale high-performance resources, like 32 or 64 cores. “Or, if we could send the optimization stage of our code to NERSC or other HPC centers, and then send the production stage to OSG.”
Amber Harmon is the US desk editor of iSGTW. This article originally appeared in iSGTW on June 4, 2014.