Trainability barriers and opportunities in quantum generative modeling

Figure: The generative modeling framework using quantum circuit Born machines.

npj Quantum Information, Published online: 13 November 2024; doi:10.1038/s41534-024-00902-0

Quantum generative models provide inherently efficient sampling strategies, and thus show promise for achieving an advantage on quantum hardware.
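To make this concrete, a quantum circuit Born machine defines the distribution p_θ(x) = |⟨x|U(θ)|0⟩|² over bitstrings, and sampling from it amounts to measuring the circuit. The following is a minimal sketch, not the authors' implementation; the two-qubit ansatz and parameter choices are invented for brevity:

```python
import numpy as np

# Toy quantum circuit Born machine: p_theta(x) = |<x| U(theta) |0...0>|^2,
# simulated with a plain numpy statevector (illustrative only).

def ry(theta):
    """Single-qubit Y-rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def born_machine_probs(thetas):
    """Two-qubit ansatz: an RY rotation on each qubit, then a CNOT."""
    state = np.zeros(4); state[0] = 1.0          # start in |00>
    u = np.kron(ry(thetas[0]), ry(thetas[1]))    # layer of rotations
    state = CNOT @ (u @ state)                   # entangling gate
    return np.abs(state) ** 2                    # Born-rule probabilities

rng = np.random.default_rng(0)
probs = born_machine_probs(rng.uniform(0, 2 * np.pi, size=2))
samples = rng.choice(4, size=1000, p=probs)      # sampling = measuring
```

The key point is that samples are cheap to draw, while the individual probabilities p_θ(x) generically shrink exponentially with the number of qubits, which is where loss concentration enters.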

In this work, the researchers investigated the barriers to trainability that barren plateaus and exponential loss concentration pose for quantum generative models. They explored the interplay between explicit and implicit models and losses, and showed that training quantum generative models with explicit losses such as the KL divergence leads to a new flavor of barren plateau.
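To illustrate the explicit/implicit distinction, here is a hedged sketch of the KL divergence as an explicit loss (toy numbers, assumed for illustration): evaluating it requires the model probabilities p_θ(x) themselves, which on hardware must be estimated from measurement shots even as those probabilities concentrate.

```python
import numpy as np

def kl_divergence(q, p):
    """Explicit loss: KL(q || p_theta) = sum_x q(x) log(q(x) / p_theta(x)).
    Requires the model's probabilities p themselves, not just samples;
    note it diverges wherever p(x) = 0 but q(x) > 0."""
    q, p = np.asarray(q, float), np.asarray(p, float)
    mask = q > 0                               # 0 * log 0 = 0 convention
    return float(np.sum(q[mask] * np.log(q[mask] / p[mask])))

target = np.array([0.5, 0.0, 0.0, 0.5])        # toy target over 2-bit strings
model  = np.array([0.4, 0.1, 0.1, 0.4])        # toy Born-machine probabilities
loss = kl_divergence(target, model)
```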

In contrast, the implicit Maximum Mean Discrepancy (MMD) loss can be viewed as the expectation value of an observable that is either low-bodied and provably trainable, or global and untrainable, depending on the choice of kernel.
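A corresponding sketch of the MMD as an implicit loss, assuming a Gaussian kernel on integer-encoded outcomes for brevity (kernels acting on the bit vectors themselves are more common in practice): it is estimated from samples alone, and the kernel is exactly the knob that the work ties to trainability.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """k(x, y) = exp(-(x - y)^2 / (2 sigma^2)) on integer-encoded outcomes."""
    d = np.subtract.outer(x, y).astype(float)
    return np.exp(-d ** 2 / (2 * sigma ** 2))

def mmd2(xs, ys, sigma=1.0):
    """Simple (biased) estimator of
    MMD^2 = E[k(x,x')] - 2 E[k(x,y)] + E[k(y,y')], from samples only."""
    return (gaussian_kernel(xs, xs, sigma).mean()
            - 2 * gaussian_kernel(xs, ys, sigma).mean()
            + gaussian_kernel(ys, ys, sigma).mean())

rng = np.random.default_rng(0)
model_samples  = rng.choice(4, size=500, p=[0.4, 0.1, 0.1, 0.4])
target_samples = rng.choice(4, size=500, p=[0.5, 0.0, 0.0, 0.5])
loss = mmd2(model_samples, target_samples)     # no model probabilities needed
```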

In parallel, they found that purely low-bodied implicit losses cannot in general distinguish high-order correlations in the target data, whereas some quantum loss-estimation strategies can.
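A textbook-style parity example (not taken from the paper) shows why low-bodied comparisons can be blind to high-order structure: the uniform distribution over even-parity 3-bit strings has exactly the same one- and two-body marginals as the fully uniform distribution, so any loss built solely from such marginals cannot separate them.

```python
import numpy as np
from itertools import product

n = 3
all_strings = [np.array(b) for b in product((0, 1), repeat=n)]
even_parity = [b for b in all_strings if b.sum() % 2 == 0]

def two_body_marginal(strings, i, j):
    """Marginal of bits (i, j) under the uniform distribution on `strings`."""
    counts = np.zeros((2, 2))
    for b in strings:
        counts[b[i], b[j]] += 1
    return counts / counts.sum()

# All 1- and 2-body marginals coincide, even though the 3-body (parity)
# correlation cleanly separates the two distributions.
for i in range(n):
    for j in range(i + 1, n):
        assert np.allclose(two_body_marginal(all_strings, i, j),
                           two_body_marginal(even_parity, i, j))
```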

They validated their findings by comparing different loss functions for modeling high-energy physics data.
