Training-efficient density quantum machine learning

B Coyle and S Raj and N Mathur and E Cherrat and N Jain and S Kazdaghli and I Kerenidis, NPJ QUANTUM INFORMATION, 11, 172 (2025).

DOI: 10.1038/s41534-025-01099-6

Quantum machine learning (QML) requires powerful, flexible and efficiently trainable models to solve challenging problems. We introduce density quantum neural networks, a model family that prepares mixtures of trainable unitaries with a distributional constraint over the mixture coefficients. This framework balances expressivity against efficient trainability, especially on quantum hardware. For expressivity, the Hastings-Campbell Mixing lemma transfers the benefits of linear combinations of unitaries to density models with similar performance guarantees but shallower circuits. For trainability, commuting-generator circuits enable the construction of density models with efficiently extractable gradients. The framework connects to several facets of QML, including post-variational and measurement-based learning. In classical settings, density models naturally integrate the mixture-of-experts formalism and offer natural overfitting mitigation. The framework is versatile: we uplift several quantum models into density versions to improve performance, trainability, or both, including Hamming weight-preserving and equivariant models. Extensive numerical experiments validate our findings.
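The core object described in the abstract, a state prepared as a probabilistic mixture of trainable unitaries, can be sketched in a few lines of NumPy. This is a minimal illustrative simulation, not the paper's implementation: the random unitaries, the softmax parameterisation of the mixture weights, and the Pauli-Z readout are all assumptions chosen to make the "distributional constraint over coefficients" concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(dim, rng):
    # Random unitary via QR decomposition of a complex Gaussian matrix,
    # with a phase fix on the diagonal of R.
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

dim = 2          # single qubit, for illustration only
n_branches = 3   # number of unitaries in the mixture

# Trainable pieces (here drawn at random): unitaries U_k and mixture
# logits mapped through a softmax, enforcing the distributional
# constraint p_k >= 0, sum_k p_k = 1.
unitaries = [random_unitary(dim, rng) for _ in range(n_branches)]
logits = rng.normal(size=n_branches)
probs = np.exp(logits) / np.exp(logits).sum()

# Input state |0><0| and the density-model state
#   rho = sum_k p_k U_k rho0 U_k^dagger.
rho0 = np.zeros((dim, dim), dtype=complex)
rho0[0, 0] = 1.0
rho = sum(p * U @ rho0 @ U.conj().T for p, U in zip(probs, unitaries))

# Model output: expectation value of an observable, here Pauli-Z.
Z = np.diag([1.0, -1.0])
output = np.real(np.trace(rho @ Z))
print(output)
```

Because each branch is a unitary applied with probability `p_k`, `rho` remains a valid density matrix (unit trace, positive semidefinite), which is what lets such mixtures be realised on hardware by classically sampling a branch per shot rather than coherently combining circuits.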
