Tutorials

In this tutorial page, you will find introductions to several machine learning projects across high energy physics, multi-messenger astrophysics, and systems neuroscience, accompanied by corresponding hands-on exercises and demonstrations. The tutorials are designed for newcomers to each project, and also for trainees who have the time to expand their perspectives and draw inspiration from other projects.

Some of these tutorials have been presented at recent conferences and workshops. Additionally, we plan to update this page with more tutorials in the future. Please contact the authors of the tutorials or the members of the A3D3 outreach and education committee if you have any questions about the tutorials or would like to share your feedback on any of them.

HLS4ML

In this tutorial, you will get familiar with the hls4ml library. This library converts pre-trained machine learning models into FPGA firmware, targeting extremely low-latency inference in order to stay within the strict constraints imposed by the CERN particle detectors. You will learn techniques for model compression, including how to reduce the footprint of your model using state-of-the-art techniques such as quantization. Finally, you will learn how to synthesize your model for implementation on the chip. Familiarity with machine learning in Python and Keras is beneficial for participating in this tutorial, but not required.
https://github.com/fastmachinelearning/hls4ml-tutorial
A Zoom recording of a live tutorial session at Snowmass CSS 2022 is available here:
https://indico.cern.ch/event/1176254/
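To give a feel for the quantization step the tutorial covers: hls4ml expresses layer precision with HLS fixed-point types such as ap_fixed<W,I> (W total bits, I integer bits including the sign), and quantization amounts to rounding and saturating values onto that grid. The toy helper below is not part of the hls4ml API; it just mimics that rounding in plain Python:

```python
def quantize_fixed(x, total_bits=8, int_bits=1):
    """Round x to a signed fixed-point grid, roughly mimicking an
    ap_fixed<total_bits, int_bits> type (toy helper, not the hls4ml API)."""
    frac_bits = total_bits - int_bits
    scale = 2 ** frac_bits
    lo = -(2 ** (int_bits - 1))           # most negative representable value
    hi = 2 ** (int_bits - 1) - 1 / scale  # most positive representable value
    return max(lo, min(hi, round(x * scale) / scale))

weights = [0.37, -0.82, 0.051, 1.9]
print([quantize_fixed(w) for w in weights])  # note: 1.9 saturates near 1.0
```

In the actual tutorial flow, precision is set in the hls4ml configuration rather than applied by hand, but the rounding-and-saturation behavior is the same idea.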

ML4GW/HERMES

Real-time gravitational wave astronomy stands to benefit substantially from the adoption of machine learning algorithms, which have demonstrated an ability to model complex signals, even in the presence of considerable noise, with minimal run-time latency and compute requirements. Moreover, many gravitational wave event morphologies and noise sources are well understood and easily simulated, acting as physical priors which can be exploited to regularize training to produce more robust models. However, adoption of production ML systems in this setting has been impeded by a lack of software tools simplifying the development of experimental and deployment pipelines that leverage these priors in a computationally efficient manner. In this demo, we’ll introduce ml4gw and hermes, two libraries for accelerating training and inference of models in the context of gravitational waves, and show how they can be combined with other infrastructure tools to build, evaluate, and deploy a competitive model for detecting binary black hole mergers in real LIGO gravitational strain data.
https://github.com/alecgunny/adass-2023-ml4gw-demo/tree/main
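The "simulation as physical prior" idea above can be sketched in a few lines: build training examples by injecting simulated waveforms into noise. The dependency-free toy below uses a sine-Gaussian burst as an illustrative stand-in for a physical waveform model; none of these names or parameters come from the ml4gw API, which provides real waveform and noise tooling:

```python
import math
import random

def sine_gaussian(t, f0=100.0, q=10.0, amp=5.0):
    """Toy sine-Gaussian burst (illustrative stand-in for a physical
    waveform model such as a binary black hole template)."""
    tau = q / (2 * math.pi * f0)
    return amp * math.exp(-(t / tau) ** 2) * math.sin(2 * math.pi * f0 * t)

def make_example(inject, sample_rate=2048, duration=1.0, seed=0):
    """Return (strain, label): white Gaussian noise, with a simulated
    burst injected at the center of the segment when inject is True."""
    rng = random.Random(seed)
    n = int(sample_rate * duration)
    strain = [rng.gauss(0.0, 1.0) for _ in range(n)]
    if inject:
        for i in range(n):
            t = (i - n // 2) / sample_rate
            strain[i] += sine_gaussian(t)
    return strain, int(inject)

strain, label = make_example(inject=True)
```

Because the injections are cheap to generate on the fly, a detection model can be trained on effectively unlimited labeled examples, which is the regularization the paragraph above refers to.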

A Zoom recording of a live tutorial session at ADASS 2023 is available here:
https://adass2023.lpl.arizona.edu/events/focus-demo-f401

Using GPUs for distributed training

This tutorial gives an overview of using PyTorch Lightning to build and train neural networks. It works through a simple inference problem, measuring the parameters of a line, and also introduces distributed training with PyTorch Lightning.
https://github.com/deepchatterjeeligo/iap-2024/tree/main
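The line-fitting problem at the heart of the tutorial can be stated without any deep learning machinery. The sketch below (not taken from the tutorial repository) recovers the slope and intercept of a noisy line with plain gradient descent on the mean-squared error; the tutorial implements the same idea with PyTorch Lightning so that it scales to distributed training:

```python
import random

def fit_line(xs, ys, lr=0.01, steps=5000):
    """Fit y = m*x + b by gradient descent on the mean-squared error."""
    m, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        grad_m = sum(2 * (m * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (m * x + b - y) for x, y in zip(xs, ys)) / n
        m -= lr * grad_m
        b -= lr * grad_b
    return m, b

rng = random.Random(1)
xs = [i / 10 for i in range(50)]
ys = [2.0 * x + 1.0 + rng.gauss(0, 0.1) for x in xs]  # true line: y = 2x + 1
m, b = fit_line(xs, ys)
```

The recovered `m` and `b` land close to the true values 2 and 1, up to the injected noise.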

This tutorial is part of the Workshop on Basic Computing Services in the Physics Department – subMIT. The workshop includes other helpful tutorials, covering topics such as batch job submission, workflow management, and software management.
https://indico.mit.edu/event/956/

SONIC

SONIC is short for Service for Optimized Network Inference on Co-processors. It is based on the inference-as-a-service paradigm: instead of the usual setup, where co-processors (GPUs, FPGAs, ASICs) are directly connected to the CPUs, as-a-Service computing connects the CPUs and co-processors via networks. Clients only need to communicate with the server and handle the I/O, while the server directs the co-processors to perform the computation. In the CMS software framework (CMSSW), we set up the SONIC workflow to run inference as a service: the clients are deployed in CMSSW to handle the I/O, and an NVIDIA Triton inference server runs inference for machine learning models (as well as classical domain algorithms).

This tutorial is intended to give you basic familiarity with SONIC and machine learning inference as a service. Examples of models, producers, and configurations are discussed.
https://yongbinfeng.gitbook.io/sonictutorial/
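For context on the server side of this workflow: each model served by Triton is described by a small config.pbtxt file in the server's model repository, which defines the contract the clients program against. A minimal, hypothetical example for an ONNX model (the name, shapes, and tensor names are invented for illustration) could look like:

```
name: "my_model"                # hypothetical model name
platform: "onnxruntime_onnx"    # backend used to run the model
max_batch_size: 8
input [
  {
    name: "input"
    data_type: TYPE_FP32
    dims: [ 4 ]
  }
]
output [
  {
    name: "output"
    data_type: TYPE_FP32
    dims: [ 2 ]
  }
]
```

The SONIC clients in CMSSW then only need to send inputs matching this contract over the network and read back the outputs.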

Future tutorials

• ACTS-as-a-Service

ACTS (A Common Tracking Software) is an experiment-independent toolkit for (charged) particle track reconstruction in (high energy) physics experiments.

The ACTS project provides high-level track reconstruction modules that can be used for any tracking detector. The tracking detector geometry description is optimized for efficient navigation and fast extrapolation of tracks. Converters for several common geometry description packages are available. In addition to the algorithmic code, this project also provides an event data model for the description of track parameters and measurements.

ACTS-as-a-Service connects the CPUs and co-processors via networks. An NVIDIA Triton inference server is chosen to run inference and to accept inference requests from ACTS users.

The tutorial is under development.