Optimizers

Divi provides a set of classical optimizers as composable objects. Each optimizer is instantiated and passed to the algorithm constructor, giving full control over method selection and hyperparameters.

ScipyOptimizer

ScipyOptimizer wraps SciPy’s optimization suite and supports all methods available through scipy.optimize.minimize. The method is selected via the ScipyMethod enum.

from divi.qprog.optimizers import ScipyOptimizer, ScipyMethod

optimizer = ScipyOptimizer(ScipyMethod.COBYLA)

Available Methods

Method        Type                             Best For
L_BFGS_B      Quasi-Newton (gradient-based)    Fast convergence, smooth cost functions
NELDER_MEAD   Simplex (gradient-free)          Noisy, low-dimensional landscapes
COBYLA        Trust-region (gradient-free)     Constrained problems, quick experimentation

Any SciPy method can be used. The three above are the most commonly used with quantum algorithms.

L-BFGS-B with Parameter Shift Gradients

L-BFGS-B (Limited-memory Broyden–Fletcher–Goldfarb–Shanno with Bound constraints) [1] is a quasi-Newton method that leverages gradient information to efficiently converge to a local minimum. In Divi, gradient calculation is performed using the parameter shift rule, a technique well-suited to quantum circuits that allows for analytical gradient computation by evaluating the function at shifted parameter values.

Divi computes these parameter shifts in parallel, significantly reducing wall-clock time for gradient evaluations.
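
As a rough illustration (not Divi's internal implementation), the parameter shift rule evaluates the cost function at parameter values shifted by ±π/2 and combines the results into an exact gradient for Pauli-rotation parameters. The stand-in cost function below plays the role of a circuit expectation value:

import numpy as np

def expectation(params):
    # Stand-in for a parametrized circuit's expectation value.
    return np.cos(params[0]) * np.sin(params[1])

def parameter_shift_gradient(f, params, shift=np.pi / 2):
    # df/dtheta_i = [f(theta + s*e_i) - f(theta - s*e_i)] / (2*sin(s)),
    # exact for Pauli-rotation gates when s = pi/2.
    grad = np.zeros_like(params)
    for i in range(len(params)):
        shifted = params.copy()
        shifted[i] += shift
        plus = f(shifted)
        shifted[i] -= 2 * shift
        minus = f(shifted)
        grad[i] = (plus - minus) / (2 * np.sin(shift))
    return grad

grad = parameter_shift_gradient(expectation, np.array([0.3, 0.7]))

The two shifted evaluations per parameter are independent of one another, which is what allows Divi to dispatch them in parallel.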

Nelder-Mead

Nelder-Mead [2] is a gradient-free, simplex-based optimization algorithm. It is well suited to local optimization in low- to moderate-dimensional spaces. The method iteratively refines a simplex by evaluating cost function values and applying operations such as reflection, expansion, and contraction.
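
To use it in Divi, pass the corresponding ScipyMethod member (NELDER_MEAD, as listed in the table above):

from divi.qprog.optimizers import ScipyOptimizer, ScipyMethod

optimizer = ScipyOptimizer(ScipyMethod.NELDER_MEAD)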

COBYLA

COBYLA (Constrained Optimization BY Linear Approximations) [3] is a gradient-free, local optimization method that supports nonlinear inequality constraints. It constructs successive linear approximations of the objective function and constraints, iteratively refining the solution within a trust region.

COBYLA is also a good default choice when first trying QAOA on a new problem or experimenting, though your mileage may vary.
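
For reference, the snippet below shows how COBYLA handles a nonlinear inequality constraint when called through SciPy directly, outside of Divi; the objective and constraint here are illustrative placeholders, and each constraint function must be non-negative at feasible points:

from scipy.optimize import minimize

def cost(x):
    # Simple smooth objective with unconstrained minimum at (1.0, 2.5).
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

constraints = [
    # COBYLA interprets each "ineq" constraint as g(x) >= 0.
    {"type": "ineq", "fun": lambda x: 3.0 - x[0] - x[1]},
]

# The constrained optimum lies on the boundary x[0] + x[1] = 3,
# near [0.75, 2.25].
result = minimize(cost, x0=[0.0, 0.0], method="COBYLA", constraints=constraints)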

MonteCarloOptimizer

The Monte Carlo [4] optimizer performs stochastic global search by randomly sampling the parameter space and selecting configurations that minimize the target cost function.

from divi.qprog.optimizers import MonteCarloOptimizer

optimizer = MonteCarloOptimizer(n_samples=50)

Monte Carlo optimization is particularly useful when:

  • The optimization landscape is rugged or non-convex
  • Gradients are not available or are unreliable
  • A rough global search is preferred before local refinement
  • You want to use the keep_best_params option to iteratively refine the best parameters found so far
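
A minimal sketch of this kind of stochastic search, assuming nothing about Divi's internals: draw random parameter vectors within bounds and keep the best one found.

import numpy as np

def random_search(cost, bounds, n_samples=50, rng=None):
    # Stochastic global search: sample uniformly within the bounds and
    # keep the parameter vector with the lowest cost.
    rng = rng or np.random.default_rng()
    best_params, best_cost = None, np.inf
    for _ in range(n_samples):
        params = rng.uniform(bounds[:, 0], bounds[:, 1])
        value = cost(params)
        if value < best_cost:
            best_params, best_cost = params, value
    return best_params, best_cost

bounds = np.array([[0.0, 2 * np.pi]] * 4)  # e.g. four rotation angles
params, value = random_search(lambda p: np.sum(np.cos(p)), bounds)

The best parameters from such a search can then seed a local optimizer such as L-BFGS-B or COBYLA for further refinement.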

PymooOptimizer

For evolutionary optimization strategies, Divi integrates with the pymoo library:

from divi.qprog.optimizers import PymooOptimizer

# CMA-ES (Covariance Matrix Adaptation Evolution Strategy)
optimizer = PymooOptimizer("cmaes")

# Differential Evolution
optimizer = PymooOptimizer("de")

Algorithm   Description
CMA-ES      Covariance Matrix Adaptation: effective for moderate-dimensional, non-convex problems
DE          Differential Evolution: population-based, robust for noisy landscapes

These are particularly useful for problems where local, gradient-free methods such as Nelder-Mead get stuck in local minima.
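
To give a flavor of what "population-based" means here, a single differential evolution generation looks roughly like this (a bare-bones sketch, not pymoo's implementation):

import numpy as np

def de_step(population, cost, f=0.8, cr=0.9, rng=None):
    # One generation of differential evolution: for each member, build a
    # mutant from three other members, cross over, and keep the better vector.
    rng = rng or np.random.default_rng()
    n, dim = population.shape
    new_population = population.copy()
    for i in range(n):
        others = [j for j in range(n) if j != i]
        a, b, c = population[rng.choice(others, 3, replace=False)]
        mutant = a + f * (b - c)
        mask = rng.random(dim) < cr
        mask[rng.integers(dim)] = True  # keep at least one mutant component
        trial = np.where(mask, mutant, population[i])
        if cost(trial) < cost(population[i]):
            new_population[i] = trial
    return new_population

pymoo's algorithms layer termination criteria and many algorithmic refinements on top of this basic loop, and PymooOptimizer exposes them through the same optimizer interface as the rest of Divi.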

Usage Example

All optimizers follow the same interface and can be swapped freely:

from divi.qprog import QAOA, GraphProblem
from divi.qprog.optimizers import ScipyOptimizer, ScipyMethod, MonteCarloOptimizer
from divi import ParallelSimulator
import networkx as nx

G = nx.bull_graph()

# Using ScipyOptimizer
qaoa = QAOA(
    problem=G,
    graph_problem=GraphProblem.MAXCUT,
    optimizer=ScipyOptimizer(ScipyMethod.COBYLA),
    max_iterations=10,
    backend=ParallelSimulator(),
)

# Or swap in MonteCarloOptimizer — no other changes needed
qaoa = QAOA(
    problem=G,
    graph_problem=GraphProblem.MAXCUT,
    optimizer=MonteCarloOptimizer(n_samples=30),
    max_iterations=10,
    backend=ParallelSimulator(),
)

References

[1] Zhu, C., Byrd, R. H., Lu, P., & Nocedal, J. (1997). Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization. ACM Transactions on Mathematical Software, 23(4), 550–560.

[2] Nelder, J. A., & Mead, R. (1965). A simplex method for function minimization. The Computer Journal, 7(4), 308–313.

[3] Powell, M. J. D. (1994). A direct search optimization method that models the objective and constraint functions by linear interpolation. In Advances in Optimization and Numerical Analysis (pp. 51–67). Springer.

[4] Kalos, M. H., & Whitlock, P. A. (2008). Monte Carlo Methods (2nd ed.). Wiley-VCH.