API Reference

Problem definition

OptimaSolver.OptimaProblemType
OptimaProblem{T, F, G}

Gibbs-energy minimisation problem in the form:

minimize    f(n, p)            (e.g. G(n) = nᵀ μ(n,p))
subject to  A n = b            (mass conservation, m × ns)
            n ≥ ε              (positivity)

Fields

  • A: conservation matrix (m × ns), typically integer-valued
  • b: RHS vector (m,)
  • f: objective function (n, p) -> scalar
  • g!: in-place gradient (grad, n, p) -> nothing (∂f/∂n)
  • ns: number of species
  • m: number of conservation equations
  • lb: lower bounds on n (default: fill(ε, ns))
  • ub: upper bounds on n (default: fill(Inf, ns))
  • p: parameter tuple passed through to f and g!
source
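For orientation, the field shapes and callback signatures can be sketched for a hypothetical 2-element, 3-species ideal mixture. The species, A, b, and μ⁰ values below are made up; only the shapes and signatures follow the field list above:

```julia
# Hypothetical 2-element, 3-species system; the numbers are made up.
A = [1.0 0.0 1.0;                 # element-1 content of each species
     0.0 1.0 2.0]                 # element-2 content of each species
b = [1.0, 2.0]                    # total element amounts
p = (μ0 = [-1.0, -2.0, -3.0],)    # parameter tuple: standard potentials μ⁰/RT

# Ideal-mixture objective f(n, p) = Σᵢ nᵢ (μ⁰ᵢ + ln(nᵢ / Σn))
f(n, p) = sum(n[i] * (p.μ0[i] + log(n[i] / sum(n))) for i in eachindex(n))

# In-place gradient: ∂f/∂nᵢ = μ⁰ᵢ + ln(nᵢ / Σn)
# (the Σⱼ nⱼ ∂ln(xⱼ)/∂nᵢ cross terms cancel for an ideal mixture)
function g!(grad, n, p)
    N = sum(n)
    grad .= p.μ0 .+ log.(n ./ N)
    return nothing
end
```

The cancellation noted in the comment is why g! can be written without a loop over species pairs.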
OptimaSolver.OptimaOptionsType
OptimaOptions

Solver hyperparameters.

Fields

  • tol: KKT residual tolerance (default 1e-10)
  • max_iter: maximum Newton iterations (default 300)
  • warm_start: reuse previous (n, y) as initial guess (default true)
  • barrier_init: initial log-barrier weight μ₀ (default 1e-4)
  • barrier_min: minimum barrier weight (default 1e-14)
  • barrier_decay: barrier reduction factor per outer iteration (default 0.1)
  • ls_alpha: Armijo sufficient-decrease parameter (default 1e-4)
  • ls_beta: backtracking contraction factor (default 0.5)
  • ls_max_iter: maximum backtracking steps (default 40)
  • verbose: print iteration log (default false)
  • use_fd_hessian: compute the Hessian diagonal via finite differences of ∇f instead of the ideal-solution approximation 1/nᵢ (default false). Enable this for problems with pure solid or gas species, where the true ∂²f/∂nᵢ² = 0 and the 1/nᵢ approximation causes extremely slow convergence.
source
OptimaSolver.OptimaStateType
OptimaState{T}

Mutable solver state — primal variables n, dual variables y (Lagrange multipliers for A n = b), and the barrier parameter μ.

Warm-starting: pass the converged state from a previous solve as u0 to solve; the solver will initialise (n, y) from it.

source
OptimaSolver.OptimaResultType
OptimaResult{T}

Immutable solver output.

Fields

  • n: equilibrium mole amounts
  • y: Lagrange multipliers at convergence
  • iterations: total Newton iterations
  • converged: convergence flag
  • error_opt: final KKT optimality residual
  • error_feas: final feasibility residual
source

Canonicalizer

OptimaSolver.CanonicalizerType
Canonicalizer{T}

Decomposes the conservation matrix A (m × ns) into canonical form:

A Q = [B  N]    with B = basic block (m × m, full rank)

where Q is a column permutation such that the first m columns are linearly independent. The LU factorisation of B is cached and reused for every Newton step without re-factorisation.

Fields

  • A: original conservation matrix (m × ns)
  • Q: column permutation (pivot indices into 1:ns)
  • Qinv: inverse permutation
  • jb: basic variable indices (length m)
  • jn: non-basic variable indices (length ns-m)
  • B: basic block A[:, jb] (m × m)
  • BLU: LU factorisation of B (used for Newton and sensitivity)
  • R: R = B⁻¹ N (m × (ns-m)), the reduced-cost matrix
  • ns: number of species
  • m: number of constraints
  • rank_A: effective rank of A
source
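A minimal sketch of the decomposition, using the standard-library pivoted QR to choose m independent columns and LU on the basic block (toy A; the real type caches more state, but the invariants are the ones checked here):

```julia
using LinearAlgebra

A = [1.0 0.0 1.0 2.0;
     0.0 1.0 2.0 1.0]           # m = 2, ns = 4
m, ns = size(A)

# Column-pivoted QR orders the columns so the leading ones are independent.
F  = qr(A, ColumnNorm())
Q  = F.p                        # column permutation (pivot indices into 1:ns)
jb = Q[1:m]                     # basic column indices
jn = Q[m+1:end]                 # non-basic column indices

B   = A[:, jb]                  # basic block, full rank
BLU = lu(B)                     # cached factorisation, reused every Newton step
R   = BLU \ A[:, jn]            # R = B⁻¹ N
```

By construction B * R reproduces the non-basic columns A[:, jn] exactly, which is the invariant the solver relies on.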

Solver

OptimaSolver exports solve (re-exporting SciMLBase.solve), so using OptimaSolver is sufficient to call it. The in-place variant solve! is not exported; use the qualified name OptimaSolver.solve!(...) or import OptimaSolver: solve!.

CommonSolve.solveFunction
solve(prob, opts; u0, y0) -> OptimaResult

Solve the Gibbs minimisation problem prob and return an OptimaResult.

Arguments

  • prob: OptimaProblem
  • opts: OptimaOptions (keyword; defaults to OptimaOptions())
  • u0: initial guess for n (keyword; defaults to b/m spread)
  • y0: initial guess for y (keyword; defaults to zeros)

Warm-start

Pass a previous OptimaResult as u0 = prev_result and the solver will initialise from prev_result.n and prev_result.y.

source
solve(prob, can, opts; u0, y0) -> OptimaResult

Variant that accepts a pre-built Canonicalizer (avoids recomputing QR when prob.A is fixed across many solves, e.g. during a temperature scan).

source
SciMLBase.solve(opt_prob, alg::OptimaOptimizer; kwargs...) -> OptimizationSolution

Convert a SciML OptimizationProblem to an internal OptimaProblem and solve with the OptimaSolver primal-dual method.

The OptimizationProblem is expected to carry:

  • f.f: objective (u, p) -> scalar
  • f.grad: in-place gradient (g, u, p) -> nothing (or nothing)
  • prob.cons: equality constraints (res, u, p) -> nothing (A u = b encoded as residual)
  • prob.lcons, prob.ucons: lower/upper constraint bounds (should be equal for equality)
  • prob.lb: lower bounds on u
  • prob.u0: initial guess
  • prob.p: parameter tuple

Gradient fallback

If f.grad is nothing, a ForwardDiff gradient is constructed automatically.

source
OptimaSolver.solve!Function
solve!(state, prob, can, opts) -> OptimaState

Run the interior-point loop, mutating state in-place.

Arguments

  • state: OptimaState — initial iterate on entry, solution on exit
  • prob: OptimaProblem
  • can: pre-built Canonicalizer for prob.A
  • opts: OptimaOptions
source

Sensitivity

OptimaSolver.SensitivityResultType
SensitivityResult{T}

Sensitivity of the equilibrium composition n* with respect to:

  • ∂n_∂b: ∂n*/∂b (ns × m) — response to changes in mass-balance RHS
  • ∂n_∂μ0: ∂n*/∂(μ⁰/RT) (ns × ns) — response to standard potentials

These Jacobians can be used directly as the Jacobian of the RHS in an ODE that couples kinetics (reaction rates drive b) and thermodynamics (temperature-dependent μ⁰ drives the potentials).

source
OptimaSolver.sensitivityFunction
sensitivity(prob, n, y, h; μ) -> SensitivityResult

Compute the sensitivity matrices ∂n/∂b and ∂n/∂(μ⁰/RT) at the converged point (n, y) with Hessian diagonal h.

The KKT Jacobian is:

J = [ H   Aᵀ ]
    [ A    0  ]

We solve J [dn/dc; dy/dc] = -dF/dc for each perturbation direction.

Using the Schur complement (same decomposition as the Newton step):

S dy = rhs_dual         S = A H⁻¹ Aᵀ
dn   = -H⁻¹ (ex_rhs + Aᵀ dy)

Arguments

  • prob: OptimaProblem
  • n: converged primal iterate (ns,)
  • y: converged dual iterate (m,)
  • h: Hessian diagonal at (n, y) (ns,)
  • μ: barrier parameter at convergence
source
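The Schur-complement expression for ∂n*/∂b can be checked on a small equality-constrained quadratic with diagonal H, where n*(b) is available from a dense KKT solve. This is toy data, not the package API:

```julia
using LinearAlgebra

H = Diagonal([2.0, 1.0, 4.0])        # positive Hessian diagonal
A = [1.0 1.0 1.0; 0.0 1.0 2.0]       # m = 2 constraints on ns = 3 variables
c = [-1.0, 0.5, -0.3]

# Minimiser of ½ nᵀHn + cᵀn subject to A n = b, via a dense KKT solve.
function nstar(b)
    K = [Matrix(H) A'; A zeros(2, 2)]
    return (K \ vcat(-c, b))[1:3]
end

# Schur-complement sensitivity: ∂n*/∂b = H⁻¹ Aᵀ S⁻¹ with S = A H⁻¹ Aᵀ
S     = A * (H \ A')
dn_db = (H \ A') / S                 # ns × m
```

Because the toy problem is linear in b, a finite difference of nstar recovers dn_db to machine precision; for the barrier problem the same algebra holds at fixed μ.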

SciML interface

OptimaSolver.OptimaOptimizerType
OptimaOptimizer

Drop-in SciML optimizer implementing the OptimaSolver primal-dual interior-point algorithm for Gibbs-energy minimisation.

Constructors

OptimaOptimizer(; tol=1e-10, max_iter=300, warm_start=true, verbose=false)
OptimaOptimizer(opts::OptimaOptions)

Fields

  • options: OptimaOptions with all algorithm hyperparameters
  • _cache: Ref{Union{Nothing, OptimaResult}} — previous solution for warm-start
source
OptimaSolver.reset_cache!Function
reset_cache!(alg::OptimaOptimizer)

Clear the warm-start cache. Call this when the chemical system changes (new set of species, different A matrix).

source

Internal components

The following symbols are exported for testing and extension purposes. They are not needed for typical usage.

KKT residual and Hessian

OptimaSolver.kkt_residualFunction
kkt_residual(prob, n, y, grad_f, μ) -> KKTResidual

Compute the KKT residual at (n, y) with barrier weight μ.

  • grad_f: gradient ∇f(n) evaluated outside (allows caching)
  • μ: log-barrier weight (scalar, T-compatible for AD)
source
OptimaSolver.hessian_diagonalFunction
hessian_diagonal(prob, n, μ, hess_f_diag) -> Vector

Return the diagonal of the barrier-augmented Hessian:

H_diag[i] = hess_f_diag[i] + μ / (n[i] - lb[i])²

where hess_f_diag is the diagonal of ∇²f(n) (caller-provided). For a convex Gibbs function with positive curvature, H_diag > 0 always.

source
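A toy illustration of the formula: the barrier term keeps the augmented diagonal strictly positive even where hess_f_diag vanishes, as it does for pure phases:

```julia
n           = [1e-12, 0.3, 0.7]          # first variable sits near its bound
lb          = fill(1e-14, 3)
hess_f_diag = [0.0, 1 / 0.3, 1 / 0.7]    # zero curvature for a pure phase
μ           = 1e-6

# H_diag[i] = hess_f_diag[i] + μ / (n[i] - lb[i])²
H_diag = hess_f_diag .+ μ ./ (n .- lb) .^ 2
```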
OptimaSolver.gibbs_hessian_diagFunction
gibbs_hessian_diag(n, p) -> Vector

Diagonal of ∇²G for the ideal/dilute Gibbs function G(n) = nᵀ μ(n,p).

For an ideal solution where μᵢ(n) = μᵢ⁰(T,P)/RT + ln(aᵢ(n)):

  • Aqueous solvent (mole fraction): ∂²G/∂nᵢ² = 1/nᵢ - 1/naq (approximately 1/nᵢ when solutes are dilute)
  • Aqueous solutes (molality): ∂²G/∂nᵢ² ≈ 1/nᵢ
  • Pure solids/gases: ∂²G/∂nᵢ² = 0 (or small positive for regularisation)

For the general case we use finite-difference or AD. Here we provide the ideal approximation H_diag[i] = 1/nᵢ as a sensible default that is always positive definite.

When using AD via ForwardDiff: do not call this function; instead pass hess_f_diag computed analytically or via forward-mode on the gradient.

source
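For the ideal mixture the exact diagonal is 1/nᵢ - 1/Σn, so the 1/nᵢ approximation is accurate whenever nᵢ ≪ Σn. A quick numerical check with made-up μ⁰ values:

```julia
# Ideal-mixture Gibbs function with made-up standard potentials μ⁰/RT.
μ0 = [-1.0, -2.0, -3.0]
G(n) = sum(n[i] * (μ0[i] + log(n[i] / sum(n))) for i in eachindex(n))

n = [0.01, 0.02, 5.0]             # dilute solutes plus an abundant solvent
ε = 1e-5

# Central-difference diagonal of ∇²G
fd_diag = [(G(n .+ ε .* (1:3 .== i)) - 2G(n) + G(n .- ε .* (1:3 .== i))) / ε^2
           for i in 1:3]

approx = 1 ./ n                   # the ideal approximation described above
exact  = 1 ./ n .- 1 / sum(n)     # analytic diagonal for this G
```

For the dilute components the approximation and the exact diagonal agree to well under a percent; for the solvent the 1/Σn correction matters.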

Newton step

OptimaSolver.NewtonStepType
NewtonStep{T}

Workspace for the Schur-complement Newton solver. All matrices and vectors are pre-allocated once and reused across Newton iterations.

Fields:

  • S: Schur complement A * diag(1/h) * A' (m × m)
  • rhs: RHS ew - A * diag(1/h) * ex for the Schur system (m,)
  • dn: primal Newton step δn (ns,)
  • dy: dual Newton step δy (m,)
  • d: equilibration diagonal sqrt.(diag(S)) (m,)
  • AoverH: buffer for A ./ h' (m × ns), shared between the Schur build (S = AoverH * A') and the RHS computation (rhs = ew - AoverH * ex)
source
OptimaSolver.compute_step!Function
compute_step!(ws, can, h, ex, ew) -> (dn, dy)

Compute the Newton step (dn, dy) by Schur complement elimination.

Arguments

  • ws: NewtonStep workspace (mutated in-place)
  • can: Canonicalizer (provides A)
  • h: Hessian diagonal (ns,), all positive
  • ex: optimality residual (ns,)
  • ew: feasibility residual (m,)

Returns

(dn, dy) — views into the workspace vectors.

source
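A dense-array sketch of the elimination with toy data (the workspace version computes the same quantities without allocating):

```julia
using LinearAlgebra

A  = [1.0 1.0 1.0; 0.0 1.0 2.0]   # m × ns
h  = [2.0, 1.0, 4.0]              # positive Hessian diagonal
ex = [0.1, -0.2, 0.3]             # optimality residual
ew = [0.05, -0.01]                # feasibility residual

AoverH = A ./ h'                  # shared buffer A * diag(1/h)
S      = AoverH * A'              # Schur complement
dy     = S \ (ew .- AoverH * ex)  # dual step from the m × m system
dn     = -(ex .+ A' * dy) ./ h    # primal step by back-substitution
```

The pair (dn, dy) satisfies the full Newton system H dn + Aᵀ dy = -ex, A dn = -ew, which is what eliminating dn through the Schur complement guarantees.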
OptimaSolver.clamp_stepFunction
clamp_step(n, lb, dn; τ=0.995)

Scale the primal step so that n + α dn stays strictly above lb, using the fraction-to-boundary rule with safety factor τ ∈ (0,1).

Returns α ∈ (0, 1].

source
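A standalone sketch of the rule (clamp_step_sketch is a made-up name, not the package function):

```julia
# Fraction-to-boundary: shrink α so n + α dn stays strictly above lb.
function clamp_step_sketch(n, lb, dn; τ = 0.995)
    α = 1.0
    for i in eachindex(dn)
        if dn[i] < 0                       # only components moving toward lb matter
            α = min(α, τ * (n[i] - lb[i]) / -dn[i])
        end
    end
    return α                               # α ∈ (0, 1]
end
```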
OptimaSolver.line_searchFunction
line_search(prob, n, y, dn, dy, f_val, grad_f, μ, opts; filter) -> (α, n_new, y_new, f_new)

Backtracking line search with filter acceptance.

Starting from α = α_max (from fraction-to-boundary), tries α, βα, β²α, … until the new point (n + α dn, y + α dy) is accepted by the filter or the Armijo condition on feasibility is satisfied.

Returns the accepted step size α and the new iterates.

source
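The filter machinery is internal to the solver; the backtracking skeleton alone, applied to a toy quadratic with the default ls_alpha and ls_beta, looks like this sketch:

```julia
using LinearAlgebra

# Toy objective and gradient for the sketch (not the Gibbs function).
f(n)  = sum(abs2, n .- 1)
∇f(n) = 2 .* (n .- 1)

# Armijo backtracking: try α, βα, β²α, … until sufficient decrease holds.
function backtrack(n, dn; α = 1.0, ls_alpha = 1e-4, ls_beta = 0.5, maxit = 40)
    f0, slope = f(n), dot(∇f(n), dn)
    for _ in 1:maxit
        f(n .+ α .* dn) ≤ f0 + ls_alpha * α * slope && return α
        α *= ls_beta                       # contract the step
    end
    return α
end
```

In the solver, α starts from the fraction-to-boundary bound rather than 1.0, and a filter test replaces the pure Armijo condition when feasibility dominates.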

Variable stability

OptimaSolver.classify_variablesFunction
classify_variables(n, lb, ex; tol_stable=1e-8) -> (js, ju)

Return indices of stable (js) and unstable (ju) variables.

A variable is stable if (nᵢ - lbᵢ) > tol_stable * max_slack, where max_slack = maxⱼ (nⱼ - lbⱼ).

source
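A sketch of the split; note that this illustration applies only the slack rule quoted above, while the real function also receives ex:

```julia
# Stable variables have healthy slack; unstable ones are pinned near a bound.
function classify_sketch(n, lb; tol_stable = 1e-8)
    slack     = n .- lb
    max_slack = maximum(slack)
    js = findall(>(tol_stable * max_slack), slack)    # stable
    ju = findall(<=(tol_stable * max_slack), slack)   # unstable
    return js, ju
end
```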
OptimaSolver.reduced_step_for_unstable!Function
reduced_step_for_unstable!(dn, ju, n, lb; τ=0.5)

For unstable variables (near lower bound), cap the step to move at most τ * slack toward the bound to prevent crossing.

source
OptimaSolver.stability_measureFunction
stability_measure(n, lb, ex) -> Vector

Compute the stability measure sᵢ = (nᵢ - lbᵢ) * |exᵢ|.

A small sᵢ together with a large |exᵢ| means variable i is close to its bound despite a strong push from the gradient: it is likely at an active bound at the solution.

source
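The measure itself is a one-liner; a sketch with toy values:

```julia
# sᵢ = (nᵢ - lbᵢ) · |exᵢ|, the complementarity-style product from the docstring.
stability_sketch(n, lb, ex) = (n .- lb) .* abs.(ex)

n  = [1.0,  1e-10, 0.5 ]           # species 2 is pinned at its bound
lb = zeros(3)
ex = [1e-9, 2.0,   1e-9]           # ...yet carries a large residual
s  = stability_sketch(n, lb, ex)
```

Here species 2 produces the smallest sᵢ, flagging it as the likely active bound.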
