Optimizing ZX-Diagrams with Deep Reinforcement Learning

ORAL

Abstract

ZX-diagrams are a powerful graphical language for describing quantum processes, with applications in quantum circuit optimization, tensor network simulation, the understanding of quantum error correction algorithms, and more. Their utility relies on the application of local transformation rules. However, it is often non-trivial to find the optimal sequence of transformations to achieve a given task. Here, we bring together ZX-diagrams with reinforcement learning, a machine learning technique designed to discover an optimal sequence of actions in a decision-making problem, and show that our reinforcement learning agent can significantly outperform other optimization techniques such as a greedy strategy and simulated annealing. Encoding the agent's policy with graph neural networks enables generalization to diagrams much larger than, and structurally different from, those seen during training.
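As a rough illustration of the greedy baseline mentioned above, the sketch below uses the open-source PyZX library to repeatedly apply whichever local rewrite rule shrinks the diagram the most. The function greedy_simplify and the particular choice of rewrite rules are illustrative assumptions, not the authors' implementation; their reinforcement learning agent instead selects rewrites with a learned graph-neural-network policy.

import pyzx as zx

RULES = (zx.simplify.spider_simp,   # fuse adjacent spiders of the same color
         zx.simplify.id_simp,       # remove identity spiders
         zx.simplify.pivot_simp,    # pivot rule (graph-like diagrams)
         zx.simplify.lcomp_simp)    # local complementation rule

def greedy_simplify(g):
    # At each step, try every rule family and keep the result with the fewest spiders.
    while True:
        best, best_size = None, g.num_vertices()
        for rule in RULES:
            trial = g.copy()
            rule(trial, quiet=True)
            if trial.num_vertices() < best_size:
                best, best_size = trial, trial.num_vertices()
        if best is None:            # no rule reduces the diagram any further
            return g
        g = best

# Toy example: a random Clifford+T circuit turned into a ZX-diagram.
g = zx.generate.cliffordT(4, 40)
zx.simplify.to_gh(g)                # graph-like form, so pivot/lcomp can find matches
print("spiders before:", g.num_vertices())
g = greedy_simplify(g)
print("spiders after: ", g.num_vertices())

A reinforcement learning agent replaces the hand-crafted "pick the rule that helps most right now" heuristic with a policy that can accept temporarily worse diagrams in exchange for better final results, which is where the reported advantage over greedy search and simulated annealing comes from.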

Presenters

  • Maximilian Nägele

    Max Planck Institute for the Science of Light

Authors

  • Maximilian Nägele

    Max Planck Institute for the Science of Light

  • Florian Marquardt

    Friedrich-Alexander University Erlangen-Nürnberg, Max Planck Institute for the Science of Light