HPC Integration
Maestro is designed to run natively in high-performance computing (HPC) environments. It integrates with SLURM via a custom SPANK plugin and has been deployed at multiple European HPC centers.
SLURM SPANK Plugin
The Maestro SPANK Plugin (SPANK = Slurm Plug-in Architecture for Node and Job Kontrol) exposes Maestro configuration directly as SLURM command-line options. Users can specify simulation parameters — qubits, shots, simulator type, bond dimension — through standard srun, sbatch, and salloc flags.
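For example, once the plugin is installed (see below) and the maestro binary is on the PATH, a minimal interactive run might look like the following sketch; the circuit file name is a placeholder:

```bash
# Maestro parameters are passed directly as SLURM options;
# the SPANK plugin forwards them to the simulator.
srun --nrqubits=2 --shots=100 --simulator_type=auto maestro bell.qasm
```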
Installation
# Build
mkdir build && cd build
cmake ..
cmake --build .
# Install the Maestro library
sudo cp _deps/maestro-build/libmaestro.so /usr/lib64/
sudo ldconfig
# Install the plugin
sudo cp maestro_spank_plugin.so /usr/lib64/slurm/
Register in /etc/slurm/plugstack.conf:
optional /usr/lib64/slurm/maestro_spank_plugin.so
Configuration Defaults
System-wide defaults and limits can be set in plugstack.conf:
optional /usr/lib64/slurm/maestro_spank_plugin.so nrqubits=10 max_qubits=32 auto_set_qubit_count=1

| Parameter | Description |
|---|---|
| nrqubits | Default number of qubits |
| shots | Default number of shots |
| max_bond_dim | Default max bond dimension (MPS) |
| min_qubits / max_qubits | Allowed qubit range |
| max_shots | Maximum allowed shots |
| max_mbd | Maximum allowed bond dimension |
| auto_set_qubit_count | Auto-detect qubits from QASM (0 or 1) |
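A site that also wants to enforce limits might extend the line with the remaining parameters from the table above; the values here are illustrative, not recommended defaults:

```
optional /usr/lib64/slurm/maestro_spank_plugin.so nrqubits=10 shots=1024 min_qubits=1 max_qubits=32 max_shots=100000 max_mbd=256 auto_set_qubit_count=1
```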
Command-Line Options
Once installed, the following flags are available on srun / sbatch:
| Flag | Description |
|---|---|
| --nrqubits=<int> | Number of qubits to simulate |
| --shots=<int> | Number of measurement shots |
| --simulator_type=<type> | Backend: auto, qcsim, aer, gpu |
| --simulation_type=<type> | Method: auto, statevector, mps, stabilizer, tensor |
| --max_bond_dim=<int> | Maximum bond dimension for MPS |
| --auto-set-qubit-count | Auto-detect qubit count from QASM |
| --expectations | Compute expectation values (requires .obs file) |
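These flags can also be given directly on the command line instead of as #SBATCH directives. For instance, the following one-liner (circuit file name is a placeholder) lets the plugin read the qubit count from the QASM file:

```bash
# Infer the qubit count from the QASM file, then run 1000 shots with the MPS method.
srun --auto-set-qubit-count --shots=1000 --simulation_type=mps --max_bond_dim=16 maestro circuit.qasm
```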
Example: Basic Job
#!/bin/bash
#SBATCH --job-name=maestro_job
#SBATCH --output=slurm-%j.out
#SBATCH --time=00:01:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --mem=4G
#SBATCH --nrqubits=3
#SBATCH --shots=10
#SBATCH --simulator_type=qcsim
#SBATCH --simulation_type=mps
#SBATCH --max_bond_dim=8
srun maestro test.qasm
Example: GPU Simulation
#!/bin/bash
#SBATCH --job-name=maestro_gpu_job
#SBATCH --output=slurm-%j.out
#SBATCH --time=00:05:00
#SBATCH --nodes=1
#SBATCH --mem=8G
#SBATCH --nrqubits=10
#SBATCH --shots=100
#SBATCH --simulator_type=gpu
#SBATCH --simulation_type=statevector
srun maestro test.qasm
Example: Job Arrays (Parallel Batch Processing)
Run many QASM files in parallel using SLURM job arrays:
#!/bin/bash
#SBATCH --job-name=maestro_array
#SBATCH --output=logs/maestro_%A_%a.out
#SBATCH --error=logs/maestro_%A_%a.err
#SBATCH --array=0-2
#SBATCH --nodes=1
#SBATCH --mem=4G
#SBATCH --nrqubits=20
#SBATCH --shots=1000
FILES=(circuits/*.qasm)
INPUT_FILE=${FILES[$SLURM_ARRAY_TASK_ID]}
OUTPUT_FILE="outputs/$(basename ${INPUT_FILE%.*}).json"
mkdir -p logs outputs
echo "Task $SLURM_ARRAY_TASK_ID: Processing $INPUT_FILE"
srun maestro "$INPUT_FILE" "$OUTPUT_FILE"
This is ideal for batch benchmarking or processing heterogeneous circuit sets across HPC nodes.
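One way to submit this, sizing the array to however many circuits are present, is sketched below; it assumes the script above is saved as maestro_array.sh:

```bash
# Create the directories up front: SLURM does not create the logs/ directory
# referenced by #SBATCH --output, so it must exist before the jobs start.
mkdir -p logs outputs
# Size the array to the number of QASM files and submit.
N=$(ls circuits/*.qasm | wc -l)
sbatch --array=0-$((N - 1)) maestro_array.sh
```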
HPC Deployments
Maestro has been deployed and validated at multiple European HPC centers. Full details are available in the Maestro paper (arXiv:2512.04216).
CESGA — CUNQA Platform (Spain)
Maestro is integrated into the CUNQA platform at CESGA as a virtual QPU (vQPU) backend. CUNQA emulates distributed quantum computing (DQC) models inside an HPC environment, and Maestro serves as a pluggable simulator component supporting all three communication modes:
- No-communication: Independent local simulation
- Classical-communication: Classical message passing between vQPUs
- Quantum-communication: Teledata and telegate protocols between vQPUs
This enables users to execute distributed quantum workloads using Maestro’s multi-backend engine and GPU acceleration within the CESGA HPC environment.
LRZ — QDMI Backend (Germany)
At the Leibniz Supercomputing Centre (LRZ), Maestro is integrated via the Quantum Device Management Interface (QDMI), the hardware–software interface of the Munich Quantum Software Stack (MQSS). Maestro appears as a standard QDMI-compatible backend, meaning MQSS users can submit quantum kernels to Maestro through the same interface they use for physical QPU hardware — while Maestro internally selects the optimal simulation strategy.
NPL — Independent Validation (UK)
Maestro was independently benchmarked by the UK National Physical Laboratory (NPL) as part of the M4Q program. Their assessment confirmed Maestro’s suitability for HPC:
"[The] Maestro framework [is] well-suited for HPC environments due to [its] ability to exploit parallelism through multithreading and multiprocessing. Features such as Maestro Auto for batched execution and distributed simulation strategies enable efficient scaling across clusters and reduce overhead compared to single-threaded runs."