General

This page presents benchmark results, obtained with Maestro, comparing various quantum circuit simulators across multiple metrics. These benchmarks help illustrate the trade-offs in performance and fidelity across backends such as statevector, MPS, and hybrid simulators when integrated via Maestro’s intelligent backend selection system.

Statevector Benchmark

This benchmark compares only statevector-based simulators, isolating their performance without involving backend selection or fallback mechanisms. Statevector simulators compute the full quantum state and are typically the most accurate, but scale poorly beyond ~30 qubits due to exponential memory growth.
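The exponential memory growth mentioned above is easy to make concrete. The sketch below (an illustration, not part of Maestro) estimates the memory a full statevector needs, assuming one complex128 amplitude (16 bytes) per basis state:

```python
# Illustrative sketch: memory needed to hold a full statevector of
# n qubits, assuming one complex128 amplitude (16 bytes) each.
def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

# 16 qubits fit easily in memory ...
assert statevector_bytes(16) == 1_048_576          # 1 MiB
# ... but 30 qubits already need 16 GiB, which is why full
# statevector simulation stops scaling around that point.
assert statevector_bytes(30) == 17_179_869_184     # 16 GiB
```

Each additional qubit doubles the requirement, so adding ten qubits multiplies memory by roughly a thousand.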

The results show that interfacing simulators with Maestro can lead to improved performance. Here we tested 16-qubit circuits at 10,000 shots each. We compare against standard Qiskit, noting that QCSim does not have built-in optimization for multi-shot simulation.
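The multi-shot optimization referred to above can be sketched as follows: once the full statevector is known, its output distribution can be computed once and all 10,000 shots sampled from it, instead of re-simulating the circuit for every shot. This is a hypothetical illustration of the idea, not QCSim's or Maestro's actual code:

```python
import random

# Hypothetical sketch of multi-shot optimization: compute the output
# probability distribution once, then draw all shots from it, rather
# than re-running the simulation per shot.
def sample_shots(probabilities: dict, shots: int, seed: int = 0) -> dict:
    rng = random.Random(seed)
    outcomes = list(probabilities)
    weights = [probabilities[o] for o in outcomes]
    counts = {o: 0 for o in outcomes}
    for outcome in rng.choices(outcomes, weights=weights, k=shots):
        counts[outcome] += 1
    return counts

counts = sample_shots({"00": 0.5, "11": 0.5}, shots=10_000)
assert sum(counts.values()) == 10_000
```

With this approach, the cost of extra shots is just cheap sampling, so 10,000 shots cost little more than one.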

Comparison of statevector simulators

Benchmark of different statevector simulators on 16-qubit circuits, 10,000 shots each.

MPS Benchmark

This benchmark evaluates Matrix Product State (MPS) simulators, which approximate quantum states using tensor network techniques. MPS methods scale better than full statevector approaches in cases of limited entanglement and are well-suited to 1D circuit topologies.
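The improved scaling comes from how an MPS stores the state: one rank-3 tensor per qubit of shape (chi, 2, chi), where chi is the bond dimension, so memory grows linearly in qubit count and quadratically in chi rather than as 2^n. A rough back-of-the-envelope sketch (an illustration, ignoring boundary tensors and overheads):

```python
# Rough sketch of MPS vs. statevector storage. An MPS keeps roughly one
# (chi, 2, chi) tensor per qubit, so amplitude count grows linearly in
# the number of qubits and quadratically in the bond dimension chi.
def mps_amplitudes(n_qubits: int, chi: int) -> int:
    return n_qubits * chi * 2 * chi

def statevector_amplitudes(n_qubits: int) -> int:
    return 2 ** n_qubits

# At 22 qubits with a modest bond dimension, the MPS is far smaller:
assert statevector_amplitudes(22) == 4_194_304
assert mps_amplitudes(22, chi=15) == 9_900
```

The catch is that chi must grow with entanglement, which is why MPS works best for weakly entangled, 1D-like circuits.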

In this benchmark, we compare the timing of running 22-qubit circuits using MPS simulation. Here we can see that Maestro wrapping QCSim performs best, while some performance is lost when interfacing with Qiskit, since the native Python version of Qiskit performs better on its own.

Comparison of MPS simulators

Benchmark of MPS simulators on 22-qubit circuits with limited entanglement, 10,000 shots each.

Quality Comparison

This section compares the fidelity and output distribution quality between simulators on different types of circuits. This shows that result quality is maintained across the various simulation methods.

4-Qubit Circuit

A 4-qubit circuit with 100 randomly selected gates and full measurement of all qubits. For the MPS simulation, these settings are applied:

  • Max bond dimension: 15
  • Singular value truncation threshold: 0.001
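These two settings control the MPS approximation after each two-qubit gate: the singular values on a bond are truncated by discarding those below the threshold and keeping at most the maximum bond dimension. A minimal, hypothetical illustration of that rule (not the actual simulator internals):

```python
# Hypothetical illustration of the two MPS settings above: after a
# two-qubit gate, discard singular values below the truncation
# threshold and keep at most max_bond_dimension of the largest ones.
def truncate_singular_values(svals, max_bond_dimension=15, threshold=0.001):
    kept = [s for s in sorted(svals, reverse=True) if s > threshold]
    return kept[:max_bond_dimension]

svals = [0.9, 0.3, 0.05, 0.0005, 0.0001]
assert truncate_singular_values(svals) == [0.9, 0.3, 0.05]
```

Tighter limits make the simulation faster and smaller at the cost of accuracy, which is exactly what the quality comparison below probes.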

We see that the statistics are all generally close, implying the simulators are producing statistically consistent results.
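One way to make "generally close" concrete (an illustration of the idea, not necessarily the metric used for the figures) is the total variation distance between two measured count dictionaries, which is 0 for identical distributions and 1 for disjoint ones:

```python
# Total variation distance between two shot-count dictionaries:
# half the sum of absolute differences of the empirical probabilities.
def total_variation_distance(counts_a: dict, counts_b: dict) -> float:
    total_a = sum(counts_a.values())
    total_b = sum(counts_b.values())
    outcomes = set(counts_a) | set(counts_b)
    return 0.5 * sum(
        abs(counts_a.get(o, 0) / total_a - counts_b.get(o, 0) / total_b)
        for o in outcomes
    )

a = {"0000": 5_000, "1111": 5_000}
b = {"0000": 4_900, "1111": 5_100}
assert abs(total_variation_distance(a, b) - 0.01) < 1e-12
```

Note that with 10,000 shots, even two runs of the same exact simulator show a small nonzero distance from shot noise alone.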

Output comparison on 4-qubit circuit

Comparison of output distributions on a small 4-qubit circuit under tight approximation limits.

22-Qubit Circuit

A more sizable test: a 22-qubit circuit with 100 gates, where only the first four qubits are measured. This mimics practical use cases in variational quantum algorithms where only part of the register is read out.
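Reading out only part of the register amounts to marginalizing the full output distribution over the unmeasured qubits. A small sketch of that step, assuming bitstrings are written with qubit 0 as the leftmost character (conventions differ between simulators):

```python
# Hypothetical sketch of measuring only the first few qubits of a
# larger register: sum full-bitstring counts over the unmeasured
# qubits to obtain the marginal distribution on the measured ones.
# Assumes qubit 0 is the leftmost character of each bitstring.
def marginalize(counts: dict, n_measured: int) -> dict:
    marginal = {}
    for bitstring, count in counts.items():
        key = bitstring[:n_measured]
        marginal[key] = marginal.get(key, 0) + count
    return marginal

full = {"0000" + "0" * 18: 6_000, "0001" + "0" * 18: 4_000}
assert marginalize(full, 4) == {"0000": 6_000, "0001": 4_000}
```

For a simulator this is also a performance win: only 2^4 = 16 outcomes need to be tracked rather than 2^22.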

For the MPS simulation, these settings are applied:

  • Max bond dimension: 15
  • Singular value truncation threshold: 0.001

Again, we see that the statistics are all generally close, implying the simulators are producing statistically consistent results.

Output comparison on 22-qubit circuit

Comparison of output distributions from simulators on a 22-qubit circuit, showing relative fidelity of approximations.

Timing Comparison

This figure compares the end-to-end simulation runtime across different simulators, capturing the real-world cost of executing a circuit within Maestro. These timings include I/O, circuit translation, execution, and result collection for each backend.

Here we simulate executing 1,000 randomly generated circuits of up to 22 qubits, 10,000 shots each, with qubit counts biased toward the 4–10 range. The final column shows Maestro’s performance with automated simulator selection, which is mostly comparable to the statevector simulation.
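The intuition behind that last result can be sketched with a purely illustrative selection heuristic (Maestro's actual selection logic is not documented here): choose full statevector simulation while it fits in a memory budget, and fall back to MPS beyond that. Since most circuits in this batch are small, such a policy would spend most of its time in the statevector backend:

```python
# Purely illustrative backend-selection heuristic, NOT Maestro's actual
# logic: use statevector while the full state fits a memory budget,
# otherwise fall back to MPS.
def choose_backend(n_qubits: int, memory_budget_bytes: int = 8 * 2**30) -> str:
    statevector_bytes = (2 ** n_qubits) * 16  # complex128 amplitudes
    return "statevector" if statevector_bytes <= memory_budget_bytes else "mps"

assert choose_backend(16) == "statevector"
assert choose_backend(22) == "statevector"   # 64 MiB, well within budget
assert choose_backend(32) == "mps"           # 64 GiB, over budget
```

Under such a policy, every circuit in the 4–10 qubit range resolves to statevector simulation, which would explain timing comparable to the statevector column.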

Runtime comparison across simulators

End-to-end simulation time for batch circuits using various simulator backends.