PANNA: Properties from Artificial Neural Network Architectures
R Lot and F Pellegrini and Y Shaidu and E Kucukbenli, COMPUTER PHYSICS COMMUNICATIONS, 256, 107402 (2020).
Prediction of material properties from first principles is often a computationally expensive task. Recently, artificial neural networks and other machine learning approaches have been successfully employed to obtain accurate models at low computational cost by leveraging existing example data. Here we present the software package "Properties from Artificial Neural Network Architectures" (PANNA), a comprehensive toolkit for creating neural network models of atomistic systems following the Behler-Parrinello topology. Besides the core routines for neural network training, it includes a data parser, a descriptor builder for the Behler-Parrinello class of symmetry functions, and a force-field generator suitable for integration within molecular dynamics packages. PANNA offers a variety of activation and cost functions and regularization methods, as well as the possibility of using fully-connected networks of custom size for each atomic species. PANNA benefits from the optimization and hardware flexibility of the underlying TensorFlow engine, which allows it to be used on multiple CPU/GPU/TPU systems, making it possible to develop and optimize neural network models based on large datasets.

Program summary
Program title: PANNA - Properties from Artificial Neural Network Architectures
CPC Library link to program files: http://dx.doi.org/10.17632/mcryj6cnnh.1
Licensing provisions: MIT
Programming language: Python, C++
Nature of problem: A workflow for machine learning atomistic properties and interatomic potentials using neural networks.
Solution method: This package first transforms the user-supplied data into pairs of precomputed input (the Behler-Parrinello class of symmetry functions) and target output (energy and forces) for the neural network model. The data are then packed to enable efficient reading.
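The Behler-Parrinello descriptors referenced above encode each atom's local chemical environment as a vector of symmetry-function values. As an illustration only (this is a minimal NumPy sketch, not PANNA's implementation; the function names and example parameters are hypothetical), the standard radial G2 symmetry function sums a Gaussian of each neighbor distance, smoothly damped by a cosine cutoff:

```python
import numpy as np

def cutoff(r, rc):
    """Behler-Parrinello cosine cutoff: decays smoothly to zero at r = rc."""
    return np.where(r < rc, 0.5 * (np.cos(np.pi * r / rc) + 1.0), 0.0)

def g2_descriptor(positions, i, eta, rs, rc):
    """Radial (G2) symmetry function for atom i:
    sum over neighbors j of exp(-eta * (r_ij - rs)^2) * fc(r_ij)."""
    r_ij = np.linalg.norm(positions - positions[i], axis=1)
    mask = np.arange(len(positions)) != i  # exclude the atom itself
    r = r_ij[mask]
    return float(np.sum(np.exp(-eta * (r - rs) ** 2) * cutoff(r, rc)))

# Example: three atoms on a line; eta, rs, rc chosen arbitrarily
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.5, 0.0, 0.0]])
g = g2_descriptor(pos, i=0, eta=0.5, rs=0.0, rc=3.0)
```

In practice a set of such functions with different (eta, rs) pairs, plus angular terms, forms the fixed-size input vector fed to each atomic network.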
A user-friendly interface to TensorFlow is provided to instantiate and train neural network models with varying architectures within the Behler-Parrinello topology and with varying training schedules. Training can be monitored and validated with the provided tools. The derivative of the target output with respect to the input can also be used jointly in training, e.g. when training on both energies and forces. The interface with molecular dynamics codes such as LAMMPS allows the neural network model to be used as an interatomic potential.
Additional comments including restrictions and unusual features: The underlying neural network training engine, TensorFlow, is a prerequisite of PANNA. While a dedicated LAMMPS integration is provided via a patch distributed within PANNA, the network potentials can also be deposited into the OpenKIM database and used with a wide range of molecular dynamics codes. The package allows a different network architecture for each atomic species, with a separate trainability setting for each network layer. It provides tools for exchanging weights between atomic species, as well as the option of building a radial basis function network. The software is parallelized to take advantage of hardware architectures with multiple CPUs/GPUs/TPUs. (C) 2020 Elsevier B.V. All rights reserved.
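In the Behler-Parrinello topology that PANNA implements, the total energy is a sum of per-atom contributions, each produced by a small fully-connected network selected by the atom's species. The sketch below (a hand-rolled NumPy illustration with hypothetical names and weights, not PANNA's TensorFlow models) shows that decomposition:

```python
import numpy as np

def atomic_nn(g, weights, biases):
    """Tiny fully-connected network mapping a descriptor vector to an
    atomic energy (illustrative only; PANNA uses TensorFlow models)."""
    h = np.asarray(g, dtype=float)
    for w, b in zip(weights[:-1], biases[:-1]):
        h = np.tanh(h @ w + b)  # hidden layers with tanh activation
    return (h @ weights[-1] + biases[-1]).item()  # linear output layer

def total_energy(descriptors, species, nets):
    """Behler-Parrinello topology: the total energy is the sum of per-atom
    energies, each evaluated by the network for that atom's species."""
    return sum(atomic_nn(g, *nets[s]) for g, s in zip(descriptors, species))

# Illustrative two-atom system, one species, hand-picked weights
w1 = np.array([[0.1, 0.2], [0.3, 0.4]])
nets = {"H": ([w1, np.array([1.0, 1.0])], [np.zeros(2), 0.0])}
descriptors = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
energy = total_energy(descriptors, ["H", "H"], nets)
```

Because the model maps descriptors to energies differentiably, forces can be obtained as the negative gradient of this sum with respect to atomic positions, which is what makes joint energy-and-force training possible.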