Multi-fidelity neural network-based prediction of tensile strength of high-entropy alloy (FeNiCoCrCu) using molecular dynamics data

NEEK Chowdhury and A Jawad and A Rahman and MJA Khan, JOURNAL OF MOLECULAR MODELING, 31, 214 (2025).

DOI: 10.1007/s00894-025-06439-z

Context
High-entropy alloys (HEAs) are a class of advanced materials with superior mechanical, thermal, and chemical properties. The FeNiCoCrCu HEA is of particular interest for its excellent tensile strength, corrosion resistance, and thermal stability. However, understanding and optimizing the mechanical properties of such alloys is challenging because of their complex structure. Molecular dynamics (MD) is a popular tool for investigating atomic-scale behavior but is computationally costly for large polycrystalline systems. Machine learning approaches have therefore attracted growing interest as surrogate models that deliver accurate predictions at lower computational cost. This study demonstrates the first application of a multi-fidelity physics-informed neural network (MPINN) to predicting the tensile strength of FeNiCoCrCu. A large dataset of tensile strengths for different FeNiCoCrCu compositions is generated and used to train the MPINN. The trained model accurately predicts the tensile strength across compositions, validating the effectiveness of the MD data-enabled MPINN model for predicting material properties.

Methods
This study uses LAMMPS for the molecular dynamics simulations and TensorFlow for building and training the machine learning models. The low-fidelity (LF) and high-fidelity (HF) training data are obtained from MD simulations of small single crystals and large polycrystals, respectively. Simulation systems are created with Atomsk, an embedded-atom method (EAM) potential is used as the force field, and the simulations are visualized with OVITO. The MPINN model exploits both linear and nonlinear relations between the LF and HF data. In TensorFlow, the network is optimized with the Adam optimizer, and L2 regularization is applied to prevent overfitting.
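The composite LF-to-HF structure described above (a linear and a nonlinear branch that both see the input and the low-fidelity prediction, with their outputs summed) can be illustrated with a framework-agnostic NumPy sketch. This is a toy with random weights and hypothetical five-element composition features, not the paper's actual TensorFlow architecture or hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, layers, activation=np.tanh):
    """Forward pass through a small dense network; no activation on the last layer."""
    h = x
    for i, (W, b) in enumerate(layers):
        h = h @ W + b
        if i < len(layers) - 1:
            h = activation(h)
    return h

def init_mlp(sizes):
    """Toy random weights; in the paper these would be trained (Adam, L2 regularization)."""
    return [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

# Hypothetical inputs: 8 alloy compositions as 5 element fractions (Fe, Ni, Co, Cr, Cu).
x = rng.random((8, 5))
x /= x.sum(axis=1, keepdims=True)

# Low-fidelity surrogate, standing in for a network trained on cheap single-crystal MD.
lf_net = init_mlp([5, 16, 1])
y_lf = mlp_forward(x, lf_net)

# High-fidelity stage: linear and nonlinear sub-networks both take (x, y_lf);
# their outputs are summed to form the HF tensile-strength prediction.
xy = np.concatenate([x, y_lf], axis=1)
linear_net = init_mlp([6, 1])             # single linear layer (linear correlation)
nonlinear_net = init_mlp([6, 16, 16, 1])  # deeper branch (nonlinear correlation)
y_hf = mlp_forward(xy, linear_net) + mlp_forward(xy, nonlinear_net)

print(y_hf.shape)  # one tensile-strength prediction per composition
```

In a TensorFlow implementation the same structure would be trained end to end, with `kernel_regularizer` supplying the L2 penalty and the Adam optimizer minimizing the LF and HF losses.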
