Machine Learning, But Make It Hardware: Embedded FPGAs for Particle Detectors
By: Julia Gonski
October 22, 2024
We’ve all heard about the AI revolution: machine learning algorithms are better and faster than ever before, and they are everywhere. Those of us working in particle physics research eagerly watch, think, study, and participate. If there’s a way to advance algorithms and computation that allows us to better mine the troves of Large Hadron Collider (LHC) proton collision data for hints of exotic new physics and potentially unlock the secrets of the universe, we want in.
Machine learning is nothing new to the particle physics crowd. The earliest uses of neural networks in high energy physics (HEP) go back to the 1980s, when they enhanced performance at basic collision analysis tasks such as signal-versus-background classification. But working with offline data is only half the story: datasets at today's particle colliders arise from collision byproducts hurtling through sensitive detectors that tell us where, when, and what particles were produced. To turn those particle interactions with matter into datasets, a system of custom microelectronics must be designed and implemented right on the detector.
These so-called readout electronics have the job of doing some basic processing at the source of the data (think amplifying, digitizing, serializing) and sending the collected data stream off the detector for more intensive processing. Just like our offline analyses can benefit from ML, these on-detector algorithms can too. However, the technical requirements of the readout system challenge even the most innovative of today's microelectronics technologies. Proton bunches at the LHC cross every 25 nanoseconds, and the collision byproducts create an intense, high-radiation environment that would quickly destroy a typical CPU or GPU. So, how can we harness the power of machine learning in the challenging environment of on-detector electronics?
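To make that data path concrete, here is a minimal Python sketch of the three front-end stages named above: amplify, digitize, serialize. Every parameter below (the gain, the ADC bit width, the reference voltage) is a made-up illustration, not a value from any real readout chip.

```python
import numpy as np

# Illustrative front-end readout chain: amplify -> digitize -> serialize.
# All parameters here are hypothetical, chosen only to show the data flow.

GAIN = 50.0      # amplifier gain (arbitrary)
ADC_BITS = 8     # digitizer resolution
V_REF = 1.0      # ADC full-scale reference voltage

def amplify(charge_signals):
    """Scale raw sensor charge up into the ADC's input range."""
    return GAIN * charge_signals

def digitize(voltages):
    """Quantize analog voltages into ADC_BITS-bit integer codes."""
    codes = np.clip(voltages / V_REF, 0.0, 1.0) * (2**ADC_BITS - 1)
    return codes.astype(np.uint8)

def serialize(codes):
    """Pack the ADC codes into one bitstream for off-detector transmission."""
    return "".join(f"{int(c):0{ADC_BITS}b}" for c in codes)

# Example: a handful of pixel charge deposits (arbitrary units)
hits = np.array([0.002, 0.011, 0.007])
stream = serialize(digitize(amplify(hits)))
print(stream)  # '000110011000110001011001'
```

On a real detector each of these stages is a dedicated analog or digital circuit block running at the collision rate, not software; the sketch only shows the logical order of operations.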
Consider two core hardware platforms currently used for HEP data processing: the field-programmable gate array (FPGA) and the application-specific integrated circuit (ASIC). FPGAs are great for ML calculations that need to be more power-efficient and faster than a CPU or GPU can offer, but they can't survive LHC cavern radiation doses. ASICs are fast and can be designed for radiation tolerance, and thus are the core technology in today's on-detector readout electronics. But as the name suggests, they are built to run one particular algorithm with only limited adaptations. This doesn't lend itself well to the flexibility required by ML algorithms, whose weights and structure may need updating throughout the experiment's lifetime.
[Image: The 28 nm eFPGA and its test setup.]
Enter the embedded field-programmable gate array, or eFPGA. The eFPGA has a central FPGA fabric that is flexible and configurable, just like a standalone FPGA, but it's "embedded" into an ASIC design. In this way, it's the best of both worlds: the reconfigurability of an FPGA with the efficiency and robustness of an ASIC. What's more, many eFPGA design frameworks are open-source, meaning that aspiring ASIC designers can create custom designs for their needs without expensive licensing fees or substantial engineering support.
At SLAC National Accelerator Laboratory in California, I work alongside a team of researchers to design, fabricate, and test eFPGAs, with the aim of developing the technology to meet the needs of typical HEP applications. Our recent results document the design of two eFPGAs in 130 nm and 28 nm technology nodes. The latter eFPGA had a very small but workable logic capacity, enabling our team to develop a toy algorithm for processing charged-particle signals in a future silicon pixel sensor. Specifically, we leveraged simple ML models to separate high-momentum particle tracks from soft tracks that don't need to be saved, offering a method for data reduction at the source. We were able to synthesize this model to the 28 nm eFPGA and run it on the chip, obtaining model outputs that were 100% accurate with respect to the expectation from simulation. This study gave us two key findings: open-source frameworks for eFPGA designs can be verified in silicon, and the applications for advanced HEP detectors are just beginning.
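To give a flavor of what an at-source track classifier can look like, below is a toy sketch in the spirit of that study, not the published model. The two input features, the weights, and the decision threshold are all hypothetical; a real version would be trained on detector simulation, quantized to fixed point to fit the eFPGA's small logic capacity, and translated to firmware with a tool such as hls4ml.

```python
import numpy as np

# Toy at-source track classifier (illustrative only, not the published model).
# A tiny two-layer network decides, per track candidate, whether to keep the
# hit data (high-momentum track) or drop it (soft track).

# Hypothetical weights; in practice these come from training on simulation.
W1 = np.array([[ 1.5, -2.0],
               [-1.0,  0.5]])   # hidden layer: 2 input features -> 2 units
b1 = np.array([0.1, -0.2])
W2 = np.array([2.0, -1.5])      # output layer: 2 units -> 1 score
b2 = -0.3

def classify_track(features):
    """Return True to keep the track (looks high-momentum), False to drop it."""
    hidden = np.maximum(0.0, features @ W1 + b1)  # ReLU activation
    score = hidden @ W2 + b2
    return score > 0.0

# Two hypothetical track candidates, described by two made-up features each:
print(classify_track(np.array([0.2, 0.9])))  # prints False -> drop (soft)
print(classify_track(np.array([0.9, 0.1])))  # prints True  -> keep (high-pT)
```

Because only tracks flagged as high-momentum are transmitted, even a model this small can shrink the data volume leaving the detector, which is exactly the kind of workload a small eFPGA fabric can host.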
While readout systems in today's LHC detectors are already built and operational, opportunities abound to work the power of eFPGAs into next-generation particle detector designs. The Future Circular Collider (FCC), an ultra-precise electron-positron collider proposed to be hosted at CERN with international collaboration, could use at-source data reduction to reduce the material budget of transmission cables in the sensitive detector regions. Teams working on detectors at the SuperKEKB collider and the Deep Underground Neutrino Experiment (DUNE) are similarly developing interest in eFPGAs to expedite data acquisition and enhance the quality of physics results.
As machine learning and silicon microelectronics continue their impressive trajectories, their overlap in the area of advanced ML hardware platforms will only grow in size and impact. Scientific research can both inform and benefit from these advances. This is where the A3D3 Institute comes in: it can facilitate communication between experts across academia and industry to generate better scientific ideas and translate academic research into practice across the broad ML community. Looking forward, it will be on all of us to keep our front-row seat, making sure the cutting-edge technologies of today are incorporated for bigger and better science tomorrow.
Learn more here: https://arxiv.org/abs/2404.17701