Trade-off between gradient measurement efficiency and expressivity in Quantum Neural Networks


Quantum Neural Networks (QNNs) show promise for solving problems intractable for classical computers, functioning analogously to classical deep learning models but in the quantum domain. However, while classical neural networks benefit from efficient backpropagation for gradient estimation, QNNs face unique challenges due to quantum measurement limitations.

The key obstacle in QNN training lies in gradient estimation. Unlike classical networks, QNNs cannot cache and reuse intermediate values: quantum states collapse upon measurement, preventing an efficient gradient measurement process with the same computational scaling as classical backpropagation. The conventional parameter-shift method requires a number of circuit evaluations proportional to the number of parameters, severely limiting QNN scalability.
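To make the scaling concrete, here is a minimal sketch of the parameter-shift rule for a single-qubit toy circuit, simulated with NumPy rather than measured on hardware. The circuit (RX rotation on |0⟩, measuring ⟨Z⟩) and function names are illustrative choices, not from the paper; the point is that each parameter's gradient costs two separate expectation-value estimates.

```python
import numpy as np

def expval_z(theta):
    """Expectation <Z> after RX(theta) applied to |0>; analytically cos(theta)."""
    rx = np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                   [-1j * np.sin(theta / 2), np.cos(theta / 2)]])
    state = rx @ np.array([1.0, 0.0])
    z = np.array([[1.0, 0.0], [0.0, -1.0]])
    return np.real(state.conj() @ z @ state)

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    """Parameter-shift rule for a gate generated by a Pauli operator:
    d<O>/dtheta = (f(theta + pi/2) - f(theta - pi/2)) / 2.
    Two circuit evaluations per parameter, so the measurement cost
    grows linearly with the number of trainable parameters."""
    return (f(theta + shift) - f(theta - shift)) / 2

theta = 0.7
grad = parameter_shift_grad(expval_z, theta)
# Matches the analytic derivative of cos(theta), i.e. -sin(theta)
```

On real hardware each of the two shifted expectation values is itself estimated from many repeated shots, which is why the per-parameter measurement cost dominates training at scale.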

This research addresses fundamental questions about gradient measurement efficiency in QNNs. The authors identify a critical trade-off between measurement efficiency and expressivity in deep QNNs. More expressive networks require higher measurement costs per parameter, while reducing expressivity to match specific tasks can improve gradient measurement efficiency.

Based on this insight, the researchers propose the Stabilizer-Logical Product Ansatz (SLPA), a novel approach inspired by quantum error correction. The SLPA exploits symmetric circuit structures to achieve optimal gradient estimation, reaching the theoretical upper bound for measurement efficiency given a specific expressivity level.

The SLPA design proves particularly valuable for problems involving symmetry, which are common in quantum chemistry, physics, and machine learning. Numerical experiments demonstrate that the SLPA drastically reduces the number of measurements needed for training while maintaining accuracy and trainability, compared with conventional ansatzes trained via the parameter-shift method.

This work illuminates the theoretical limits and possibilities for efficient QNN training, establishing a framework for understanding the fundamental relationship between gradient measurement efficiency and expressivity. By offering both theoretical insights and practical implementation strategies, this research advances the potential for achieving quantum advantages in variational quantum algorithms and quantum machine learning applications.

Reference: Chinzei, K., Yamano, S., Tran, Q.H. et al. Trade-off between gradient measurement efficiency and expressivity in deep quantum neural networks. npj Quantum Inf 11, 79 (2025). https://doi.org/10.1038/s41534-025-01036-7
