This page showcases our cutting-edge software projects funded by the A3D3 Institute. Some of these software packages have additional training material, which you can find on our tutorials page.


HLS4ML (High-Level Synthesis for Machine Learning) is a package designed to facilitate machine learning inference on FPGAs. It enables the creation of firmware implementations of machine learning algorithms using high-level synthesis (HLS). By translating models from popular open-source machine learning packages into HLS, HLS4ML offers a solution that can be configured to suit specific use cases.
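A core step in mapping a trained model to an FPGA is moving from floating-point to fixed-point arithmetic. The sketch below is purely illustrative (it does not use the hls4ml API, and the bit widths are arbitrary): it mimics how a weight might be rounded and saturated to a signed 16-bit fixed-point value with 6 integer bits, in the style of an HLS `ap_fixed<16,6>` type.

```python
# Illustrative sketch only, NOT the hls4ml API: quantize a float to a
# signed fixed-point value with `total_bits` bits, `int_bits` of which
# are integer bits (hypothetical widths chosen for the example).

def quantize_fixed(x, total_bits=16, int_bits=6):
    """Round x to the nearest representable fixed-point value, saturating."""
    frac_bits = total_bits - int_bits
    scale = 1 << frac_bits                      # 2**frac_bits
    max_int = (1 << (total_bits - 1)) - 1       # signed saturation bounds
    min_int = -(1 << (total_bits - 1))
    q = max(min_int, min(max_int, round(x * scale)))
    return q / scale

# Example: quantizing a few made-up trained weights.
weights = [0.7231, -1.5008, 0.0049]
quantized = [quantize_fixed(w) for w in weights]
print(quantized)
```

Configuring such precisions per layer (rather than one global setting) is the kind of per-use-case tuning that HLS4ML's configuration exposes.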



NMMA (Nuclear Multi-Messenger Astronomy) is a fully featured Bayesian multi-messenger pipeline targeting joint analyses of gravitational-wave and electromagnetic data (with a focus on the optical). Using bilby, a Bayesian inference library originally developed for gravitational-wave analyses, as its back end, the software can sample these data sets with a variety of samplers. It uses neutron star equations of state based on chiral effective field theory when performing inference, and it can also estimate the Hubble constant.
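To illustrate the kind of Bayesian inference NMMA performs, here is a deliberately simplified toy (it does not use the NMMA or bilby APIs, and all numbers are invented): a grid-based posterior for the Hubble constant H0 from simulated distance/velocity pairs obeying v = H0 * d plus Gaussian noise, under a flat prior.

```python
# Toy sketch only, NOT the NMMA/bilby API: grid-based Bayesian posterior
# for H0 from fabricated (distance [Mpc], velocity [km/s]) data.
import math

data = [(10.0, 705.0), (25.0, 1740.0), (40.0, 2810.0)]  # made-up data
sigma = 50.0                                            # assumed noise level

def log_likelihood(h0):
    """Gaussian log-likelihood for v = h0 * d (up to an additive constant)."""
    return sum(-0.5 * ((v - h0 * d) / sigma) ** 2 for d, v in data)

# Flat prior on a grid from 50 to 90 km/s/Mpc; posterior ∝ likelihood.
grid = [50.0 + 0.1 * i for i in range(400)]
logs = [log_likelihood(h) for h in grid]
peak = max(logs)                                  # subtract for stability
weights = [math.exp(l - peak) for l in logs]
post_mean = sum(h * w for h, w in zip(grid, weights)) / sum(weights)
print(f"posterior mean H0 ≈ {post_mean:.1f} km/s/Mpc")
```

Real analyses replace this grid with stochastic samplers (as bilby provides) because the joint gravitational-wave plus electromagnetic parameter space has far too many dimensions for a grid.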



ML4GW (Machine Learning for Gravitational Waves) provides libraries for integrating ML frameworks into gravitational-wave searches. It includes ML pipelines for denoising gravitational-wave time-series data, for finding transients from both modeled and unmodeled sources (anomaly detection), and for estimating the intrinsic and extrinsic physical parameters of gravitational-wave sources. The repository also includes libraries for astrophysical signal generation, for streamlining training, and for incorporating Inference-as-a-Service along the lines of the SONIC services in CMS.
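As a conceptual stand-in for the learned anomaly detectors mentioned above (this is not the ML4GW API), the simplest form of unmodeled transient finding is flagging samples that deviate strongly from the background noise level of a time series:

```python
# Conceptual sketch only, NOT the ML4GW API: threshold-based transient
# finding on a synthetic noise series with an injected burst.
import math
import random

random.seed(0)
# Synthetic "detector" time series: unit Gaussian noise...
series = [random.gauss(0.0, 1.0) for _ in range(1000)]
# ...plus a short injected transient at samples 500-504.
for i in range(500, 505):
    series[i] += 8.0

mean = sum(series) / len(series)
std = math.sqrt(sum((x - mean) ** 2 for x in series) / len(series))
# Flag samples more than 5 standard deviations from the mean.
triggers = [i for i, x in enumerate(series) if abs(x - mean) / std > 5.0]
print("trigger indices:", triggers)
```

An ML anomaly detector plays the same role but learns what "background" looks like from data, letting it find weaker or more structured transients than a fixed amplitude threshold can.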



SONIC is short for Service for Optimized Network Inference on Co-processors. It is based on inference as a service: instead of the usual arrangement in which the co-processors (GPUs, FPGAs, ASICs) are directly connected to the CPUs, the as-a-service model connects the CPUs and co-processors over the network. With as-a-service computing, clients only need to communicate with the server and handle the I/O, while the server directs the co-processors to do the computing. In the CMS software framework (CMSSW), we set up the SONIC workflow to run inference as a service: the clients are deployed in CMSSW to handle the I/O, and an NVIDIA Triton inference server runs inference for machine-learning models (and also classical domain algorithms).
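The client/server split described above can be sketched in a few lines. This is not the SONIC or Triton client API; it is a minimal stand-in using a local TCP socket and a stub "model" (y = 2x + 1) to show that the client only serializes inputs and reads back results, while the server owns the model and, in the real system, the co-processor hardware.

```python
# Minimal sketch only, NOT the SONIC/Triton API: inference as a service
# over a TCP socket, with a stub model standing in for the co-processor.
import json
import socket
import threading

def serve_one_request(listener):
    """Stand-in 'inference server': receive a JSON request, run the model."""
    conn, _ = listener.accept()
    with conn:
        request = json.loads(conn.recv(4096).decode())
        # Stub model; in SONIC, Triton dispatches this to a GPU/FPGA.
        outputs = [2.0 * x + 1.0 for x in request["inputs"]]
        conn.sendall(json.dumps({"outputs": outputs}).encode())

# Launch the server on an ephemeral local port.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=serve_one_request, args=(listener,), daemon=True).start()

# Client side: handles I/O only and never touches the model code.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(json.dumps({"inputs": [1.0, 2.0, 3.0]}).encode())
    reply = json.loads(client.recv(4096).decode())
print(reply["outputs"])
```

Decoupling clients from the model this way is what lets many CPU clients share a small pool of co-processors, which is the central efficiency argument for the as-a-service design.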
