Optimizers
Divi provides built-in support for optimizing quantum programs, currently through four distinct methods, each suited to different problem types and user requirements. These methods can be used interchangeably within Divi’s parallel execution framework, allowing users to flexibly explore and fine-tune their solutions.
1. Monte Carlo Optimization
The Monte Carlo [1] method in Divi is a stochastic global optimization approach. It works by randomly sampling the parameter space and selecting configurations that minimize the target cost function. This method is particularly useful when:
- The optimization landscape is rugged or non-convex.
- Gradients are not available or are unreliable.
- A rough global search is preferred before local refinement.
Monte Carlo optimization can help identify promising regions in high-dimensional parameter spaces before applying more refined methods.
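To make the idea concrete, here is a minimal sketch of random-sampling minimization in plain NumPy. It is not Divi's API; the `cost` callable, the sampling bounds, and the sample count are placeholders for whatever circuit evaluation and search range you are working with.

```python
import numpy as np

def monte_carlo_minimize(cost, dim, bounds=(-np.pi, np.pi), n_samples=500, seed=0):
    """Randomly sample the parameter space and keep the lowest-cost point."""
    rng = np.random.default_rng(seed)
    low, high = bounds
    best_params, best_cost = None, np.inf
    for _ in range(n_samples):
        params = rng.uniform(low, high, size=dim)
        value = cost(params)
        if value < best_cost:
            best_params, best_cost = params, value
    return best_params, best_cost

# Toy cost function standing in for a circuit expectation value.
best_x, best_f = monte_carlo_minimize(lambda p: np.sum(np.cos(p) + 0.1 * p**2), dim=4)
```

Because every sample is independent, this kind of search parallelizes naturally, which is what makes it a cheap first pass before a local optimizer takes over.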
2. Nelder-Mead
Nelder-Mead [2] is a gradient-free, simplex-based optimization algorithm. It is ideal for local optimization in low- to moderate-dimensional spaces. The method iteratively refines a simplex (a geometrical figure defined by a set of parameter vectors) by evaluating cost function values and applying operations such as reflection, expansion, and contraction.
Use Nelder-Mead when:
- Your problem is continuous but noisy.
- Gradients are unavailable or expensive to compute.
- You are tuning parameters in a relatively low-dimensional space.
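For reference, a Nelder-Mead run looks like the SciPy sketch below. The mildly noisy quadratic cost is a stand-in for a parameterized-circuit cost function; this illustrates the algorithm itself rather than Divi's interface.

```python
import numpy as np
from scipy.optimize import minimize

# Toy cost function standing in for a parameterized-circuit expectation value.
def cost(params):
    return np.sum((params - 0.5) ** 2) + 0.01 * np.random.randn()  # mildly noisy

x0 = np.zeros(3)  # the initial simplex is built around this starting point
result = minimize(cost, x0, method="Nelder-Mead",
                  options={"xatol": 1e-3, "fatol": 1e-3, "maxiter": 500})
print(result.x, result.fun)
```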
3. L-BFGS-B with Parameter Shift Gradients
L-BFGS-B (Limited-memory Broyden–Fletcher–Goldfarb–Shanno with Bound constraints) [3] is a quasi-Newton method that leverages gradient information to efficiently converge to a local minimum. In Divi, gradient calculation is performed using the parameter shift rule, a technique well-suited to quantum circuits that allows for analytical gradient computation by evaluating the function at shifted parameter values.
Divi computes these parameter shifts in parallel, significantly reducing wall-clock time for gradient evaluations.
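For a gate generated by a Pauli operator, the parameter shift rule gives the exact partial derivative from two shifted circuit evaluations: shift the parameter by +π/2 and −π/2, and halve the difference of the results. The sketch below illustrates the idea and the parallelism; the `evaluate` callable is a placeholder for a circuit execution, and the thread pool stands in for Divi's own scheduler.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parameter_shift_gradient(evaluate, params, shift=np.pi / 2):
    """Estimate the gradient of evaluate(params) via the parameter-shift rule.

    All 2 * len(params) shifted evaluations are independent, so they can be
    dispatched in parallel.
    """
    shifted = []
    for i in range(len(params)):
        plus, minus = params.copy(), params.copy()
        plus[i] += shift
        minus[i] -= shift
        shifted.extend([plus, minus])
    with ThreadPoolExecutor() as pool:
        values = np.array(list(pool.map(evaluate, shifted)))
    return (values[0::2] - values[1::2]) / 2
```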
Use L-BFGS-B when:
- You require fast convergence to a local minimum.
- Your cost function is smooth and differentiable.
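Putting the pieces together, an L-BFGS-B run that consumes a parameter-shift gradient might look like the SciPy sketch below. The cost is a toy sum of cosines (the form a single-qubit expectation takes after an RY rotation, so the π/2 shift rule is exact), and the gradient loop is the sequential version of the parallel sketch above; none of this is Divi's own interface.

```python
import numpy as np
from scipy.optimize import minimize

# Toy cost: sum of cos(theta_i), standing in for a circuit expectation value.
def cost(params):
    return np.sum(np.cos(params))

def grad(params):
    # Parameter-shift gradient: two shifted evaluations per parameter.
    g = np.empty_like(params)
    for i in range(len(params)):
        plus, minus = params.copy(), params.copy()
        plus[i] += np.pi / 2
        minus[i] -= np.pi / 2
        g[i] = (cost(plus) - cost(minus)) / 2
    return g

x0 = np.full(4, 0.3)
bounds = [(-np.pi, np.pi)] * len(x0)  # box constraints handled by L-BFGS-B
result = minimize(cost, x0, method="L-BFGS-B", jac=grad, bounds=bounds)
print(result.x, result.fun)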
4. COBYLA (Constrained Optimization BY Linear Approximations)
COBYLA [4] is a gradient-free local optimization method that, like Nelder-Mead, works without derivatives, but additionally supports nonlinear inequality constraints. It constructs successive linear approximations of the objective function and constraints, iteratively refining the solution within a trust region.
Use COBYLA when:
- Your optimization problem includes constraints.
- Gradients are inaccessible or too noisy.
- You seek a reliable optimizer for low- to moderate-dimensional spaces.
COBYLA is also a good default choice when experimenting with QAOA on a new problem, though your mileage may vary.
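For completeness, a constrained COBYLA run with SciPy looks like the sketch below. The cost function and the norm constraint are toy placeholders rather than Divi-specific objects.

```python
import numpy as np
from scipy.optimize import minimize

# Toy cost standing in for a QAOA-style expectation value.
def cost(params):
    return np.sum(np.cos(params)) + 0.1 * np.sum(params ** 2)

# COBYLA accepts nonlinear inequality constraints of the form g(x) >= 0.
constraints = [{"type": "ineq", "fun": lambda p: 2.0 - np.linalg.norm(p)}]  # keep ||p|| <= 2

x0 = np.full(4, 0.5)
result = minimize(cost, x0, method="COBYLA",
                  constraints=constraints, options={"rhobeg": 0.5, "maxiter": 300})
print(result.x, result.fun)
```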
References
1. Kalos, M. H., & Whitlock, P. A. (2008). Monte Carlo Methods (2nd ed.). Wiley-VCH.
2. Nelder, J. A., & Mead, R. (1965). A simplex method for function minimization. The Computer Journal, 7(4), 308–313.
3. Zhu, C., Byrd, R. H., Lu, P., & Nocedal, J. (1997). Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization. ACM Transactions on Mathematical Software, 23(4), 550–560.
4. Powell, M. J. D. (1994). A direct search optimization method that models the objective and constraint functions by linear interpolation. In Advances in Optimization and Numerical Analysis (pp. 51–67). Springer.