Contributed Talk

Improved simulation time- and length-scales with Allegro and NequIP


Marc L. Descoteaux
Harvard University
  • TBA
  • TBA

Training and deploying machine-learned interatomic potentials (MLIPs) can require significant effort and non-trivial software infrastructure. In this talk, I will discuss the impact of the recent overhaul of the NequIP code [1] for training and deploying deep equivariant MLIPs, in particular the message-passing NequIP and strictly local Allegro architectures. This software development effort aims to provide a modular and extensible framework that keeps pace with the rapid advances in machine learning for materials and molecules. With support for distributed data-parallel training, archival model packages, torch.compile, custom tensor-product kernels, and more, the code is positioned to enable efficient large-scale training and deployment of MLIPs. We demonstrate the software's capabilities by training foundation-scale Allegro models on the SPICE 2 dataset of organic molecular systems. When deployed with LAMMPS for molecular dynamics simulations on the Frontier and Perlmutter supercomputers, the Allegro models are accelerated by a factor of 5 to 18 and can handle significantly larger system sizes before running out of memory.
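To make two of the features named above concrete, the sketch below shows how torch.compile and distributed data-parallel training fit together in an MLIP-style training loop, using only plain PyTorch. It is an illustration under stated assumptions, not the NequIP code itself: `EnergyModel` is a hypothetical placeholder network, and the actual NequIP infrastructure wraps these steps in its own trainer and configuration system.

```python
# Minimal sketch (illustration only): torch.compile plus
# DistributedDataParallel for MLIP-style training in plain PyTorch.
# `EnergyModel` is a hypothetical placeholder, NOT the actual
# NequIP/Allegro architecture or API.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


class EnergyModel(torch.nn.Module):
    """Toy stand-in that maps per-atom descriptors to a total energy."""

    def __init__(self, n_features: int = 16) -> None:
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(n_features, 64),
            torch.nn.SiLU(),
            torch.nn.Linear(64, 1),
        )

    def forward(self, descriptors: torch.Tensor) -> torch.Tensor:
        # Sum per-atom energies into one total energy.
        return self.net(descriptors).sum()


def main() -> None:
    # torchrun sets RANK/WORLD_SIZE/LOCAL_RANK for each worker process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    device = torch.device(f"cuda:{local_rank}")

    model = EnergyModel().to(device)
    model = torch.compile(model)                 # JIT-compile the forward graph
    model = DDP(model, device_ids=[local_rank])  # gradient all-reduce across GPUs

    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    descriptors = torch.randn(32, 16, device=device)  # fake training batch
    target = torch.zeros((), device=device)           # fake reference energy

    loss = (model(descriptors) - target).pow(2)
    loss.backward()  # DDP synchronizes gradients across ranks here
    opt.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

A script like this would be launched with, e.g., `torchrun --nproc_per_node=4 train.py`, which starts one process per GPU and sets the environment variables that `init_process_group` reads.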

[1] C. W. Tan, M. L. Descoteaux, et al., "High-performance training and inference for deep equivariant interatomic potentials," arXiv preprint, https://arxiv.org/abs/2504.16068 (2025).