By: Julia Gonski
October 22, 2024

We’ve all heard about the AI revolution: machine learning algorithms are better and faster than ever before, and they are everywhere. Those of us working in particle physics research eagerly watch, think, study, and participate. If there’s a way to advance algorithms and computation that allows us to better mine the troves of Large Hadron Collider (LHC) proton collision data for hints of exotic new physics and potentially unlock the secrets of the universe, we want in.

Machine learning is nothing new to the particle physics crowd. The earliest uses of neural networks in high energy physics (HEP) date back to the 1980s, when they enhanced performance on basic collision analysis tasks such as signal-to-noise classification. But working with offline data is only half the story. Datasets at today's particle colliders arise from collision byproducts hurtling through sensitive detectors that record where, when, and what particles were produced; to turn those particle interactions with matter into datasets, a system of custom microelectronics must be designed and implemented right on the detector.

These so-called readout electronics have the job of doing some basic processing at the source of the data (think amplifying, digitizing, serializing) and sending the collected data stream off the detector for more intensive processing. Just like our offline analyses can benefit from ML, these on-detector algorithms can too. However, the technical requirements of the readout system challenge even the most innovative of today's microelectronics technologies. Protons at the LHC collide every 25 nanoseconds, and the collision byproducts create an intense high-radiation environment that would quickly destroy your typical CPU or GPU. So, how can we harness the power of machine learning in the challenging environment of on-detector electronics?

Consider two core hardware platforms currently used for HEP data processing: the field-programmable gate array (FPGA) and the application-specific integrated circuit (ASIC). FPGAs are great for ML calculations that need to be more power-efficient and faster than a CPU or GPU can offer, but they can't survive LHC cavern radiation doses. ASICs are fast and can be designed for radiation tolerance, and they are thus the core technology of today's on-detector readout electronics. But as the name suggests, an ASIC is built to run a particular algorithm, with limited room for adaptation. This doesn't lend itself well to the flexibility required by ML algorithms, whose weights and structure may need updating throughout the experiment's lifetime.


Image of the 28nm eFPGA and test setup.

Enter the embedded field-programmable gate array, or eFPGA. The eFPGA has a central FPGA fabric that is flexible and configurable, just like a standalone FPGA, but it is "embedded" into an ASIC design. In this way, it's the best of both worlds: the reconfigurability of an FPGA with the efficiency and robustness of an ASIC. What's more, many eFPGA design frameworks are open-source, meaning that aspiring ASIC designers can create custom designs for their needs without expensive licensing fees or substantial engineering support.

At SLAC National Accelerator Laboratory in California, I work alongside a team of researchers to design, fabricate, and test eFPGAs, with the aim of developing the technology to meet the needs of typical HEP applications. Our recent results document the design of two eFPGAs, in 130 nm and 28 nm technology nodes. The latter eFPGA had a small but workable logical capacity, enabling our team to develop a toy algorithm for processing charged-particle signals in a futuristic silicon pixel sensor. Specifically, we leveraged simple ML models to separate high-momentum particle tracks from soft tracks that don't need to be saved, offering a method for data reduction at the source. We were able to synthesize this model to the 28 nm eFPGA and run it on the chip, obtaining model outputs that were 100% accurate with respect to the expectation from simulation. This study gave us two key findings: open-source frameworks for eFPGA designs can be verified in silicon, and the applications for advanced HEP detectors are just beginning.
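To make the idea concrete, here is a minimal sketch of the kind of tiny classifier that can fit in a small eFPGA fabric. This is not the published design: the input features, layer sizes, and training data below are illustrative assumptions only.

```python
# A toy high-pT vs. soft track classifier (illustrative, NOT the published model).
import numpy as np
from tensorflow import keras

n_features = 4  # assumption: a few pixel-cluster features per track stub

model = keras.Sequential([
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(8, activation="relu"),     # tiny hidden layer
    keras.layers.Dense(1, activation="sigmoid"),  # P(high-momentum track)
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Placeholder training data standing in for simulated tracks
X = np.random.rand(1000, n_features).astype("float32")
y = (X.sum(axis=1) > 2.0).astype("float32")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```

In practice, a model like this would be quantized and converted to firmware (for example with a tool such as hls4ml) before being mapped onto the eFPGA fabric.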
While readout systems in today's LHC detectors are already built and operational, opportunities abound to build the power of eFPGAs into next-generation particle detector designs. The Future Circular Collider (FCC), an ultra-precise electron-positron collider proposed to be hosted at CERN with international collaboration, could use at-source data reduction to reduce the material budget of transmission cables in the sensitive detector regions. Teams working on detectors at the SuperKEKB collider and the Deep Underground Neutrino Experiment (DUNE) are similarly developing an interest in eFPGAs to expedite data acquisition and enhance the quality of physics results.

As machine learning and silicon microelectronics continue their impressive trajectories, their overlap in the area of advanced ML hardware platforms will only grow in size and impact. Scientific research can both inform and benefit from these advances. This is where the A3D3 Institute comes in: it can facilitate communication between experts across academia and industry to generate better scientific ideas and translate academic research into practice across the broad ML community. Looking forward, it will be on all of us to keep our front-row seat, making sure the cutting-edge technologies of today are incorporated for bigger and better science tomorrow.

Learn more here: https://arxiv.org/abs/2404.17701

By: Miles Cochran-Branson
July 26, 2024

Following the three-day US-ATLAS conference held at the University of Washington (UW) Seattle campus, students met for a brief tutorial on tools available to US-ATLAS members. After an introduction to computing resources, Yuan-Tang Chou, a postdoc at UW and member of the A3D3 team, gave a presentation on GPU resources and utilization, with a focus on new applications to particle-physics workflows.

Chou gives a brief presentation on applications of the NVIDIA Triton server for deploying models as-a-service.

The presentation focused on deploying models on accelerators such as GPUs "as-a-service" (aaS), using the NVIDIA Triton Inference Server. Chou discussed the merits of heterogeneous computing, the most straightforward way to deploy algorithms, in which the CPU and GPU are both connected on a single node. He noted that for many physics tasks, such as graph-neural-network (GNN) based tracking, flavor tagging, and detector simulation, heterogeneous computing can be "inefficient and very expensive to scale." Offloading expensive tasks to a GPU server could therefore streamline the deployment of the large models important for ATLAS physics.

Sample architecture of as-a-service model deployment from CPU-only client nodes or CPU / GPU client nodes to a GPU server.

After motivating why deploying models as-a-service could be beneficial in physics analysis, Chou gave a brief demo on deploying GNN tracking aaS. This was followed by a hands-on tutorial deploying the resnet50 image recognition deep neural network as-a-service on computing resources at CERN. The tutorial material focused on building the proper model repository structure and configuration for image classification on a GPU server. Students set up a work environment, deployed a backend on the server, and sent an image to the server to be classified.

Students work on deploying a backend on a server and sending images to the server to be classified. 
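For a flavor of what this looks like, here is a minimal sketch of a Triton HTTP client request, loosely following the tutorial's workflow. The server URL, model name, and tensor names and shapes are assumptions for illustration; the actual tutorial configuration may differ.

```python
# Minimal Triton client sketch (illustrative; names and shapes are assumptions).
#
# Expected model-repository layout on the server side, roughly:
#   models/resnet50/config.pbtxt   <- model name, platform, input/output specs
#   models/resnet50/1/model.onnx   <- versioned model file
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# One 224x224 RGB image, preprocessed into NCHW float32 (placeholder data here)
image = np.random.rand(1, 3, 224, 224).astype(np.float32)

inputs = [httpclient.InferInput("input", list(image.shape), "FP32")]
inputs[0].set_data_from_numpy(image)
outputs = [httpclient.InferRequestedOutput("output")]

result = client.infer(model_name="resnet50", inputs=inputs, outputs=outputs)
scores = result.as_numpy("output")  # class scores returned by the server
print("predicted class:", int(scores.argmax()))
```

The client is deliberately lightweight: all of the GPU-side work lives on the server, so a CPU-only analysis node can issue the same request as a GPU-equipped one.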

By the end of the tutorial, students had successfully deployed a backend and received image classifications back from the image recognition model. Interested students were connected with experts currently working on the ongoing development of aaS tools for ATLAS algorithms.

Tutorial Resources: https://hrzhao76.github.io/AthenaTriton/triton-lxplusGPU.html
Developed by Yuan-Tang Chou, Miles Cochran-Branson (UW), Xiangyang Ju (LBNL), and Haoran Zhao (UW)

Written by Miles Cochran-Branson, PhD student at University of Washington

By Angela Tran
May 2024

A3D3 recently engaged in one of the largest U.S. national physics conferences, the American Physical Society's April Meeting in Sacramento. A number of A3D3 students presented talks on institute-related activities.

A3D3 member Ethan Marx from the Massachusetts Institute of Technology brought his passion for the potential of data to enable new discoveries. His APS talk was about applying machine learning (ML) techniques to search for gravitational waves in LIGO-Virgo-KAGRA (LVK) data. He says, "Specifically, we validated our algorithm on archival data from the third LVK observing run. The end goal of this work is to deploy this algorithm in real time in order to issue alerts for electromagnetic astronomers to follow up. This work is a direct implementation of the institute's aim for real-time processing of large datasets." In his research, he enjoys the challenge of applying new cutting-edge statistical and ML techniques. After being part of A3D3 for about three years, he says he likes that the Institute emphasizes the similarities between seemingly distinct fields.

A3D3 member Will Benoit from the University of Minnesota shared his enjoyment of how his field probes the behavior of matter in the most extreme corners of the universe. His APS talk was about demonstrating the production-readiness of a custom AI algorithm for the real-time detection of gravitational waves. He says, "Gravitational waves are one of the messengers in multi-messenger astronomy, and the ability to detect them more quickly will allow other instruments to follow up more quickly." In his two and a half years with A3D3, he has especially appreciated the interdisciplinary collaboration with fellow students working on similar technical problems, and seeing how results from different fields can apply to one another.

A3D3 member Jared Burleson from the University of Illinois at Urbana-Champaign works in experimental high energy particle physics to answer big questions about the fundamental laws of the universe. His APS talk was about the use of machine learning and artificial intelligence for track reconstruction at upgrades to the Large Hadron Collider, which he says "will see a drastic increase in the amount of real-time data gathered. My work aims to utilize AI for processing large data in real time with a focus on discovery in high energy physics." After about a year with A3D3, he says, "I really enjoy being able to connect with other people in my field and outside my field who are interested in data-driven computing solutions to problems."

Links to some of the talks presented at APS by A3D3 members are listed below.

  • D14.1 Jared Burleson, Track reconstruction for the ATLAS Phase-II High-Level Trigger using Graph Neural Networks on FPGAs with detector segmentation and regional processing.
  • G13.1 Will Benoit, A machine-learning pipeline for real-time detection of gravitational waves from compact binary coalescences
  • D03.2 Ethan Marx, A search for binary mergers in archival LIGO data using aframe, a machine learning detection pipeline
  • DD03 Yuan-Tang Chou, NSF HDR ML Anomaly Detection Challenge
  • DD03 Haoran Zhao, Graph Neural Network-based Track finding as a Service with ACTS

By: Dylan Rankin
February 21, 2024

The Reach of High Energy Physics

The A3D3 institute innovates in AI to enable discovery across science, with high energy physics (HEP) as one of its three research thrusts. Within HEP there is a dizzying array of research topics, and the tools physicists must use to study them are equally varied. Some experiments in HEP collide particles traveling near the speed of light and study the aftermath of the collisions to search for hints of new phenomena. Other experiments observe the vast reaches of space to try to understand the forces that shaped our universe. And still others seek to detect and measure incredibly rare interactions of elusive particles with our world that might shed light on the most mysterious known particle, the neutrino. These experiments study our universe from the quantum realm of subatomic particles to the astronomical realm of galaxies and black holes.

The Particle Physics Project Prioritization Panel Report

Once every decade, a group of high energy physicists is charged by the Department of Energy (DOE) with evaluating the projects on the horizon across HEP and charting a fiscally responsible path for the next ten years. This involves a multi-year process and hundreds of studies by the community, but it allows every group within HEP to make the case for the projects and actions they believe to be the most exciting. The resulting report, called the Particle Physics Project Prioritization Panel (P5) report, was released on December 8, 2023, and sets out the goals across the field for the next decade. While members of A3D3 participate in many projects outside of HEP, the P5 report is very important in that it helps guide much of the focus inside HEP for the next 10 years.

A3D3 Members Collaborate on Projects Recognized in Report

One major outcome of the P5 report is its recommendations for how best to utilize the funding expected over the next ten years. While it would be wonderful if there were enough funding for all the great ideas in HEP, in practice some tough decisions must be made. Some projects are deemed more critical or cost-effective than others; some must be prioritized now, and others must be delayed for the future. A3D3 members contributed heavily to the planning process, and many projects that count A3D3 members as leaders were strongly endorsed in the report, demonstrating major support for our work. These projects include the High-Luminosity Large Hadron Collider upgrade, multiple phases of the Deep Underground Neutrino Experiment, the IceCube experiment, and, implicitly, the Laser Interferometer Gravitational-Wave Observatory.

The P5 report is more than just a priority list of HEP projects. It also attempts to take a wide-angle look at the field as a whole and provide guidelines for areas of growth. One of the most overarching callouts in the P5 report is the use of Artificial Intelligence and Machine Learning (AI/ML). AI/ML is obviously a main component of the work in A3D3. Even more so, the ways in which A3D3 is pioneering AI/ML usage were called out as future directions for investment. These include the major A3D3 work on real-time systems like the ATLAS and CMS trigger systems, as well as significant computational work being spearheaded by A3D3 members related to the effective use of emerging hardware. 

The High-Luminosity Large Hadron Collider Upgrade

As the most powerful particle collider ever built, the LHC is capable of producing conditions unlike any other machine on Earth. It made the discovery of the Higgs boson possible and continues to enable many searches for new particles. But in order to continue to push the boundaries of the so-called energy frontier of HEP, the collider and the detectors need to be upgraded to allow us to collect even more data, or luminosity. This upgrade is called the High-Luminosity Large Hadron Collider (HL-LHC), and its successful execution is of the utmost importance according to the P5 report. The contributions of A3D3 members who work within the ATLAS and CMS experiments will be critical to the success of the HL-LHC upgrade.

The Deep Underground Neutrino Experiment

The Deep Underground Neutrino Experiment (DUNE) at Fermi National Accelerator Laboratory will allow incredibly precise measurements of elusive neutrinos. The detector comprises multiple stages, each with its own role to play in enabling these measurements. Multiple future phases of DUNE were a focus of the report, and the endorsement is a testament to the importance of neutrinos in US HEP research for a long time to come.

The IceCube Experiment

The IceCube experiment at the South Pole also seeks to study the properties of neutrinos. However, while the neutrinos at the DUNE detector are produced by the accelerator complex at Fermi National Accelerator Laboratory, those detected by IceCube are produced in space and can have energies even higher than those of particles produced at the LHC. The current IceCube detector instruments a volume of roughly one cubic kilometer, but a proposed IceCube-Gen2 would increase this volume ten-fold to further the study of these astrophysical neutrinos. This upgrade is recommended by the P5 report to unlock the wide set of physics it can enable.

Multi-Messenger Astronomy and the Laser Interferometer Gravitational-Wave Observatory

Although the Laser Interferometer Gravitational-Wave Observatory (LIGO) is not funded by the DOE, the physics that it and other similar gravitational-wave experiments enable was strongly supported in the P5 report. Specifically, the report calls out the emerging field of Multi-Messenger Astronomy (MMA), which seeks to observe astronomical sources through both gravitational waves and electromagnetic signals. This field has largely been born out of the success of the LIGO experiment, and the strong endorsement of MMA from the P5 report represents a strong endorsement of the future of LIGO.

A3D3 Shows Promise of Bridging the Gap Between Hardware and Fundamental Science

It is clear that the work represented in A3D3 is strongly aligned not only with the endorsed experiments but also with the modes of discovery. The report comments that “upgraded detectors and advances in software and computing, including artificial intelligence/machine learning (AI/ML), will enable the experiments to detect rare events with higher efficiency and greater purity.” The connections in A3D3 between the hardware and the fundamental science are intended to facilitate the advances that the P5 report notes.

Much of the importance of this work was demonstrated in studies performed by A3D3 members during the planning process. In addition to work on the experiments above, A3D3 members led a review of the community needs, tools, and resources for AI/ML across HEP [https://arxiv.org/abs/2203.16255], which was a primary resource of its kind. The alignment of the P5 recommendations with the work done by A3D3 members is a strong demonstration of the support in the HEP community for this sort of work.

Finally, one major component of the work in A3D3 is its cross-disciplinary nature. Solutions to problems in HEP are likely to come not only from within the field but through collaboration with other domains. The sharing of tools, problems, and expertise has the potential to unlock solutions across traditional research boundaries. The study of neural computations involved in sensory and motor behavior is seemingly far removed from the trigger systems in CMS and ATLAS. However, both areas of research require solutions for data processing that are capable of extremely high throughput. This connection has been strengthened through A3D3 work to enable ultrafast recurrent neural networks [https://arxiv.org/abs/2207.00559]. The P5 report makes explicit mention of collaborations to take advantage of these sorts of connections and suggests increasing their prevalence. This signals strong support for the sorts of trans-disciplinary connections A3D3 has fostered, both now and in the future.

By: Deep Chatterjee

December 27, 2023

New Orleans, LA – The 37th Conference on Neural Information Processing Systems (NeurIPS) was held in New Orleans from December 10-16, 2023. The Machine Learning and the Physical Sciences Workshop at NeurIPS brought together researchers applying machine learning techniques in physics and developing new techniques based on physical concepts like conservation laws and symmetries. There were 250 accepted papers for the workshop poster session, both in-person and remote. While most papers were related to physics and astronomy, there were several on applications in medical science, materials science, and earth science. A3D3 had a prominent presence at the workshop, with a contributed talk, four papers from A3D3 members, and two papers from the teams of A3D3 Steering Board members.

Elham E Khoda from UW Seattle presented a contributed talk on the implementation of transformers on FPGAs using hls4ml, for low-latency applications such as Level-1 triggers at LHC experiments like ATLAS, and for searches for anomalous signals in gravitational-wave data.

Elham Khoda presents a contributed talk on the implementation of transformers in HLS4ML
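For readers unfamiliar with hls4ml, the basic conversion flow looks roughly like the sketch below. This is a generic example, not the transformer design from the talk; the model file, output directory, and FPGA part number are illustrative assumptions.

```python
# Generic hls4ml flow (illustrative): convert a trained Keras model into an
# HLS firmware project that can be synthesized for an FPGA.
import hls4ml
from tensorflow import keras

model = keras.models.load_model("my_model.h5")  # hypothetical trained model

# Auto-generate a conversion config, then build the HLS model from it
config = hls4ml.utils.config_from_keras_model(model, granularity="model")
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir="hls_project",     # assumption
    part="xcvu13p-flga2577-2-e",  # example Xilinx part; pick your target device
)

hls_model.compile()  # builds a C simulation for quick validation
# y_hls = hls_model.predict(X)  # compare against the floating-point model
```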

Deep Chatterjee from MIT presented a poster on optimizing likelihood-free inference by marginalizing nuisance parameters using self-supervision. [paper link]

Niharika Sravan from Drexel University presented Pythia, a reinforcement learning model that optimizes the search for kilonovae amid numerous contaminant objects in Zwicky Transient Facility data. [paper link]

Anni Li from UCSD presented a poster on Induced Generative Adversarial Particle Transformers, which use induced particle-attention blocks to surpass existing generative models for particle simulation. [paper link]

Deep Chatterjee (left) and Alex Gagliano (right) present posters.

Two posters were presented by the teams of A3D3 Steering Board members. Ashley Villar (along with Alex Gagliano) presented a poster on a convolutional variational autoencoder that estimates the redshift, stellar mass, and star-formation rates of galaxies from multi-band imaging data. [paper link] Nhan Tran (along with C. Xu) presented a poster on a Proximal Policy Optimization (PPO) algorithm for making the proton beam intensity uniform for the Mu2e experiment at Fermilab. [paper link]

By: Katya Govorkova and Yuan-Tang Chou
November 27, 2023

The “AI and the Uncertainty Challenge in Fundamental Physics” workshop was a dynamic experience filled with experts from different areas of applied AI in science. Experts from fundamental science, computer science, and statistics exchanged ideas on how to incorporate uncertainty in AI. The workshop took place at the Sorbonne Center for Artificial Intelligence (SCAI) in Paris and at Institut Pascal, Université Paris-Saclay, in Orsay, France.

Researcher Mark Neubauer (Faculty at the University of Illinois at Urbana-Champaign, A3D3 co-PI) led the discussion in the Uncertainty Quantification session on Tuesday and emphasized the idea of explainable AI.

The session on Wednesday afternoon was specifically dedicated to the FAIR Universe ML challenge, which addresses uncertainties in physics through AI. A3D3, represented by Yuan-Tang Chou (A3D3 Postdoc at the University of Washington) and Katya Govorkova (A3D3 Postdoc at the Massachusetts Institute of Technology), shared valuable insights from organizing an anomaly detection data challenge, emphasizing the necessity of robust frameworks and collaboration. Lessons learned from Katya’s experiences underscored the crucial role challenges play in advancing the field. She also underlined what can be improved in future challenges. 

Katya Govorkova presenting Exploring Data Challenges and Leveraging Codabench: A Practical Journey with unsupervised New Physics detection at 40 MHz.

Yuan-Tang highlighted another ML challenge example from the NSF HDR institutes, beyond particle physics. The challenge used a neuroscience dataset provided by Prof. Dadarlat's lab (Purdue University, A3D3) and asked participants to decode limb trajectories from neural activity with ML.

Yuan-Tang presenting the ML challenge using a neuroscience dataset.

The Codabench platform, noted for its versatility, was highlighted as instrumental in organizing public challenges. The audience showed great interest in hosting and participating in challenges, which were recognized as bridges connecting different disciplines, particularly physics and computer science.

The participants provided many great ideas and suggestions during the one-week workshop. Elham E Khoda (postdoc at the University of Washington, A3D3) gave an excellent summary on the last day, discussing the lessons learned during the week and the next steps to improve the HiggsML Uncertainty Challenge. “We definitely want not only particle physicists to join the ML Challenge,” Elham said. “We also encourage participation from outside the domain, from people who can think differently and come up with innovative ideas.”

By: Patrick McCormack (Postdoc MIT, A3D3)

October 30, 2023

Alongside the first smatterings of autumn leaves, an international assortment of more than 150 physicists, computer scientists, engineers (and more) descended upon Imperial College London (ICL) this past week.  There they enjoyed the crisp weather and the fourth iteration of the Fast Machine Learning for Science (FastML) Workshop, which ran from September 25-28.

The FastML workshop series was born in 2019 as a small and informal workshop focused on High Energy Physics (HEP), but it has since grown to include participants from diverse fields, such as medicine, astrophysics, and statistics.  Unsurprisingly, a workshop centered on this multidisciplinary approach to accelerated machine learning drew the participation of several members of A3D3.  And just as A3D3 revolves around mutual support and cross-disciplinary efforts, participants in the workshop were intrigued to see how the same techniques and algorithms were found in diverse applications across different disciplines.

“The workshop brings together researchers from very different specialties who do not typically have a chance to come together and exchange ideas,” said Fermilab’s Nhan Tran, one of the original FastML organizers.  “Despite this, it was so refreshing to see many amazing talks and enthusiastic discussion from all the workshop participants willing to get out of their comfort zones and expand their research.  I really appreciate that spirit and it makes the workshop series very unique and fun.”

Emphasizing the increased scope of the workshop series, Fermilab’s Kevin Pedro said that he attended the workshop “to learn about new cutting-edge computational techniques that are accelerating ML throughout many scientific fields.”

At the workshop, seven A3D3 trainees gave presentations on their work.  Leading off, Farouk Mokhtar of UCSD and Santosh Parajuli of UIUC presented their work on implementing machine learning models for the LHC experiments CMS and ATLAS, respectively.  Though working on independent efforts, both use graph neural networks to efficiently and scalably reconstruct particles.

Santosh Parajuli presents his work on implementing graph neural networks for Event Filter Tracking in ATLAS.  In keeping with the “Fast” theme of the workshop, he is developing a VHDL implementation of his algorithm to run on FPGAs.

Next up, Patrick McCormack and Jeffrey Krupa, both of MIT, gave talks about two projects that they work on together.  One is a CMS effort to implement GPU acceleration for ML methods via an inference-as-a-service scheme, and the other is an implementation of a Sparse Point Voxel CNN (SPVCNN) for determining clusters of energy in hadron calorimeters for LHC experiments.  According to Krupa, “the SPVCNN algorithm is a first-time use of HCAL depth segmentation in clustering for CMS, and it removes the latency associated with HCAL clustering from reconstruction workflows.”

Jeffrey Krupa presents his work on developing a Sparse Point Voxel CNN for machine-learning-based clustering in hadronic calorimeters.

The last A3D3 talk from the workshop’s first day was given by Duc Hoang, also from MIT, who presented his work on algorithms for the CMS Level-1 Trigger. These algorithms must be able to produce inferences at 40 MHz, so one must balance algorithmic complexity with speed.  Thanks to the efforts of Duc and his collaborators, the algorithms that they have implemented on FPGAs for both bottom quark and tau lepton identification will increase the efficiency of the CMS trigger system for rare processes involving these particles.

During the second day of the workshop, the focus shifted away from the LHC.  The gravitational-wave side of A3D3 was represented by Katya Govorkova and Eric Moreno of MIT.  Katya presented her work on the development of Gravitational Wave Anomalous Knowledge (GWAK), a method for tagging gravitational waves from anomalous sources.  This algorithm is related to the Quasi-Anomalous Knowledge (QUAK) technique that was developed by A3D3 members for application in LHC contexts.  The GWAK algorithm has since evolved and can be used in real time to help distinguish between truly anomalous gravitational waves and meaningless detector glitches.

Katya Govorkova presents “Gravitational Wave Anomalous Knowledge”, or GWAK.  As she was giving the first talk about gravitational waves at the workshop, she covered the basic physics behind the LIGO experiment.
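The core intuition behind autoencoder-based anomaly detection, which underlies approaches like QUAK and GWAK, can be sketched in a few lines. This toy example is not the GWAK implementation (GWAK builds a richer embedding from several autoencoders trained on different signal classes); the snippet length, network sizes, and data below are placeholders.

```python
# Toy autoencoder anomaly scoring: train on "normal" data, then flag inputs
# that reconstruct poorly. Purely illustrative; NOT the GWAK pipeline.
import numpy as np
from tensorflow import keras

timesteps = 200  # assumption: short strain-like snippets

model = keras.Sequential([
    keras.layers.Input(shape=(timesteps,)),
    keras.layers.Dense(32, activation="relu"),  # encoder
    keras.layers.Dense(8, activation="relu"),   # bottleneck
    keras.layers.Dense(32, activation="relu"),  # decoder
    keras.layers.Dense(timesteps),
])
model.compile(optimizer="adam", loss="mse")

# Placeholder "background" data standing in for ordinary detector noise
background = np.random.randn(512, timesteps).astype("float32")
model.fit(background, background, epochs=5, verbose=0)

# A new snippet: large reconstruction error suggests a potential anomaly
candidate = np.random.randn(1, timesteps).astype("float32")
score = float(np.mean((model.predict(candidate, verbose=0) - candidate) ** 2))
print("anomaly score (reconstruction MSE):", score)
```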

Eric’s talk focused on the Machine Learning for Gravitational Waves (ML4GW) package, a suite of tools enabling real-time machine learning for gravitational-wave (GW) experiments such as LIGO.  These tools have accelerated GW detection, parameter estimation for events, noise regression, and anomaly detection via GWAK.

The workshop’s concluding remarks were given by A3D3’s deputy director, MIT professor Phil Harris.  He exhorted the audience with a playful paradox: “In order to go fast, we have to go slow.  By which I mean that designing an algorithm or workflow for the fastest possible performance takes time and careful consideration.”  In his 20-minute talk, he pointed out many of the similarities and common tools being used across disciplines, such as sparsification and quantization of neural networks, hardware-based acceleration using GPUs and FPGAs, and deep learning architectures such as transformers and graph neural networks.  A complete summary of the topics covered in the workshop would be far too long for this article, but the workshop’s timetable, along with links to most presentations, can be found here.
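As a small illustration of one of those shared tools, the sketch below shows magnitude-based pruning (sparsification) of a weight matrix: the smallest weights are zeroed so that sparse-aware hardware or kernels can skip them. This is a toy example with placeholder data, not code from any of the talks.

```python
# Toy magnitude-based pruning: zero out the smallest-magnitude weights.
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Return a copy of `weights` with the smallest-magnitude entries zeroed."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to zero out
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

w = np.random.randn(64, 64).astype(np.float32)  # placeholder weight matrix
w_sparse = magnitude_prune(w, sparsity=0.9)     # keep roughly 10% of weights
print("fraction zeroed:", float((w_sparse == 0).mean()))
```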

“I really enjoyed that the workshop has a very diverse group of participants and talks that I found very inspiring,” said Professor Mia Liu from Purdue, reflecting on the workshop.  “I am learning and thinking of new ways of accelerating science by developing appropriate algorithms and learning methods, in addition to my current research in real-time ML in low latency and high throughput systems.  For example, robust learning methods for embedding of scientific data, that can account for the variance due to the nature of the physical object and the measurement methods etc, is challenging but crucial for broader and long lasting impact of ML on scientific discoveries.”

The workshop also included tutorials on state-of-the-art deployment techniques, such as the hls4ml package for creating firmware implementations of machine learning algorithms, the Intel FPGA AI Suite for deploying algorithms on Intel FPGAs, and the use of Intelligence Processing Units (IPUs) from Graphcore.  There were also informal tours of several of the labs at ICL.

On a lighter note, some workshop participants found time to explore the sights and sounds of London.  A3D3 members Eric and Duc also organized a public lecture by rapper Lupe Fiasco (Wasalu Jaco), who is currently a visiting professor at MIT.  He discussed some of his work with Google on creating TextFX, a suite of tools built on a large language model for exploring relationships between words and generating phonetically, syntactically, or semantically linked phrases.  They also managed to bring in Irving Finkel of the British Museum, who discussed the history of the game of Ur, a shared love of Lupe, Eric, and Duc.

Duc, Eric, Irving Finkel, and Lupe Fiasco discussed the game of Ur in a panel after Lupe’s public lecture on the relationship between rap and large language models.

The workshop proved to be a valuable experience for A3D3 attendees, and I suspect that future iterations will be well attended by our members.  “Attending the workshop was an incredible experience for me,” said Santosh Parajuli.  “The FastML workshop provided a unique platform to learn from experts, exchange ideas, and explore the latest advancements in different fields.  Additionally, I had a chance to share our exciting work on using advanced technology and machine learning to improve how we track particles in high-energy physics, which could help us make big discoveries in the future!”