API Reference
Problem definition
OptimaSolver.OptimaProblem — Type
OptimaProblem{T, F, G}

Gibbs-energy minimisation problem in the form:

minimize    f(n, p)        (e.g. G(n) = nᵀ μ(n,p))
subject to  A n = b        (mass conservation, m × ns)
            n ≥ ε          (positivity)

Fields

- A: conservation matrix (m × ns), typically integer-valued
- b: RHS vector (m,)
- f: objective function (n, p) -> scalar
- g!: in-place gradient (grad, n, p) -> nothing (∂f/∂n)
- ns: number of species
- m: number of conservation equations
- lb: lower bounds on n (default: fill(ε, ns))
- ub: upper bounds on n (default: fill(Inf, ns))
- p: parameter tuple passed through to f and g!
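As a concrete illustration of the A n = b constraint, consider a hypothetical three-species system {H2, O2, H2O} with two conserved elements {H, O} (this example system is not part of the package):

```julia
using LinearAlgebra

# Element-by-species conservation matrix: rows = {H, O}, columns = {H2, O2, H2O}
A = [2.0 0.0 2.0;   # hydrogen atoms per mole of each species
     0.0 2.0 1.0]   # oxygen atoms per mole of each species

# Elemental abundances from an initial mixture of 2 mol H2 + 1 mol O2
n0 = [2.0, 1.0, 0.0]
b  = A * n0                  # b = [4.0, 2.0]

# Any feasible composition satisfies A n = b, e.g. full conversion to water:
n_eq = [0.0, 0.0, 2.0]
@assert A * n_eq ≈ b
```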
OptimaSolver.OptimaOptions — Type
OptimaOptions

Solver hyperparameters.

Fields

- tol: KKT residual tolerance (default 1e-10)
- max_iter: maximum Newton iterations (default 300)
- warm_start: reuse previous (n, y) as initial guess (default true)
- barrier_init: initial log-barrier weight μ₀ (default 1e-4)
- barrier_min: minimum barrier weight (default 1e-14)
- barrier_decay: barrier reduction factor per outer iteration (default 0.1)
- ls_alpha: Armijo sufficient-decrease parameter (default 1e-4)
- ls_beta: backtracking contraction factor (default 0.5)
- ls_max_iter: maximum backtracking steps (default 40)
- verbose: print iteration log (default false)
- use_fd_hessian: compute the Hessian diagonal via finite differences of ∇f instead of the ideal-solution approximation 1/nᵢ (default false). Enable for problems with pure solid or gas species, where the true ∂²f/∂nᵢ² = 0 and the 1/nᵢ approximation causes extremely slow convergence.
OptimaSolver.OptimaState — Type
OptimaState{T}

Mutable solver state — primal variables n, dual variables y (Lagrange multipliers for A n = b), and the barrier parameter μ.
Warm-starting: pass the converged state from a previous solve as u0 to solve; the solver will initialise (n, y) from it.
OptimaSolver.OptimaResult — Type
OptimaResult{T}

Immutable solver output.

Fields

- n: equilibrium mole amounts
- y: Lagrange multipliers at convergence
- iterations: total Newton iterations
- converged: convergence flag
- error_opt: final KKT optimality residual
- error_feas: final feasibility residual
Canonicalizer
OptimaSolver.Canonicalizer — Type
Canonicalizer{T}

Decomposes the conservation matrix A (m × ns) into canonical form:

A Q = [B N],  B = basic block (m × m, full rank)

where Q is a column permutation such that the first m columns are linearly independent. The LU factorisation of B is cached and reused for every Newton step without re-factorisation.

Fields

- A: original conservation matrix (m × ns)
- Q: column permutation (pivot indices into 1:ns)
- Qinv: inverse permutation
- jb: basic variable indices (length m)
- jn: non-basic variable indices (length ns-m)
- B: basic block A[:, jb] (m × m)
- BLU: LU factorisation of B (used for Newton and sensitivity)
- R: R = B⁻¹ N (m × (ns-m)), the reduced-cost matrix
- ns: number of species
- m: number of constraints
- rank_A: effective rank of A
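A minimal sketch of the decomposition, assuming a column-pivoted QR is used to choose the basic columns (the package's exact pivoting strategy is an assumption here):

```julia
using LinearAlgebra

A = [2.0 0.0 2.0;
     0.0 2.0 1.0]
m, ns = size(A)

# Column-pivoted QR moves m linearly independent columns to the front
F = qr(A, ColumnNorm())
Q = F.p                      # column permutation (pivot indices into 1:ns)
jb, jn = Q[1:m], Q[m+1:end]  # basic / non-basic variable indices

B   = A[:, jb]               # basic block, full rank
N   = A[:, jn]
BLU = lu(B)                  # cached factorisation, reused every Newton step
R   = BLU \ N                # reduced matrix R = B⁻¹ N

@assert rank(B) == m
@assert B * R ≈ N
```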
Solver
OptimaSolver re-exports SciMLBase.solve, so using OptimaSolver is sufficient to call solve. The in-place variant solve! is not exported; use the qualified name OptimaSolver.solve!(...) or import OptimaSolver: solve!.
CommonSolve.solve — Function
solve(prob, opts; u0, y0) -> OptimaResult

Solve the Gibbs minimisation problem prob and return an OptimaResult.

Arguments

- prob: OptimaProblem
- opts: OptimaOptions (keyword; defaults to OptimaOptions())
- u0: initial guess for n (keyword; defaults to b/m spread)
- y0: initial guess for y (keyword; defaults to zeros)
Warm-start
Pass a previous OptimaResult as u0 = prev_result and the solver will initialise from prev_result.n and prev_result.y.
solve(prob, can, opts; u0, y0) -> OptimaResult

Variant that accepts a pre-built Canonicalizer (avoids recomputing the QR when prob.A is fixed across many solves, e.g. during a temperature scan).
SciMLBase.solve(opt_prob, alg::OptimaOptimizer; kwargs...) -> OptimizationSolution

Convert a SciML OptimizationProblem to an internal OptimaProblem and solve with the OptimaSolver primal-dual method.

The OptimizationProblem is expected to carry:

- f.f: objective (u, p) -> scalar
- f.grad: in-place gradient (g, u, p) -> nothing (or nothing)
- prob.cons: equality constraints (res, u, p) -> nothing (A u = b encoded as residual)
- prob.lcons, prob.ucons: lower/upper constraint bounds (should be equal for equality)
- prob.lb: lower bounds on u
- prob.u0: initial guess
- prob.p: parameter tuple
Gradient fallback
If f.grad is nothing, a ForwardDiff gradient is constructed automatically.
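The fallback idea can be sketched with a central-difference stand-in (ForwardDiff itself is deliberately swapped out here so the snippet stays dependency-free; the real fallback uses forward-mode AD):

```julia
# Build an in-place gradient (g, u, p) -> nothing from a scalar objective,
# using central differences as a stand-in for the ForwardDiff-based fallback
function make_grad(f; h = 1e-6)
    function g!(g, u, p)
        for i in eachindex(u)
            uplus, uminus = copy(u), copy(u)
            uplus[i] += h; uminus[i] -= h
            g[i] = (f(uplus, p) - f(uminus, p)) / (2h)
        end
        return nothing
    end
    return g!
end

f(u, p) = sum(u .* (log.(u) .+ p))   # toy dilute-solution objective
g! = make_grad(f)
u, p = [0.5, 2.0], [1.0, -1.0]
g = similar(u)
g!(g, u, p)

# Analytic gradient of the toy objective is log(uᵢ) + pᵢ + 1
@assert isapprox(g, log.(u) .+ p .+ 1; atol = 1e-5)
```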
OptimaSolver.solve! — Function
solve!(state, prob, can, opts) -> OptimaState

Run the interior-point loop, mutating state in-place.

Arguments

- state: OptimaState — initial iterate on entry, solution on exit
- prob: OptimaProblem
- can: pre-built Canonicalizer for prob.A
- opts: OptimaOptions
Sensitivity
OptimaSolver.SensitivityResult — Type
SensitivityResult{T}

Sensitivity of the equilibrium composition n* with respect to:

- ∂n_∂b: ∂n*/∂b (ns × m) — response to changes in the mass-balance RHS
- ∂n_∂μ0: ∂n*/∂(μ⁰/RT) (ns × ns) — response to the standard potentials

These Jacobians can be used directly as the Jacobian of the RHS in an ODE coupling kinetics (rates → b) and thermodynamics (T-dependent μ⁰ → potentials).
OptimaSolver.sensitivity — Function
sensitivity(prob, n, y, h; μ) -> SensitivityResult

Compute the sensitivity matrices ∂n/∂b and ∂n/∂(μ⁰/RT) at the converged point (n, y) with Hessian diagonal h.

The KKT Jacobian is:

J = [ H  Aᵀ ]
    [ A  0  ]

We solve J [dn/dc; dy/dc] = -dF/dc for each perturbation direction c.

Using the Schur complement (same decomposition as the Newton step):

S = A H⁻¹ Aᵀ
S dy = rhs_dual
dn = -H⁻¹ (ex_rhs + Aᵀ dy)

Arguments

- prob: OptimaProblem
- n: converged primal iterate (ns,)
- y: converged dual iterate (m,)
- h: Hessian diagonal at (n, y) (ns,)
- μ: barrier parameter at convergence
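The linear algebra behind ∂n/∂b can be sketched as follows (illustrative only; a small random-ish system with diagonal H is assumed, and the dual block is cross-checked against a dense KKT solve):

```julia
using LinearAlgebra

ns, m = 5, 2
h = [0.5, 1.0, 2.0, 4.0, 8.0]        # Hessian diagonal, all positive
A = [1.0 1.0 0.0 2.0 0.0;
     0.0 1.0 1.0 0.0 1.0]

H    = Diagonal(h)
Hinv = Diagonal(1 ./ h)
S    = A * Hinv * A'                 # Schur complement (m × m)

# ∂n/∂b: a unit perturbation of b[k] only enters the feasibility row
dn_db = zeros(ns, m)
for k in 1:m
    ek = zeros(m); ek[k] = 1.0
    dy = S \ ek
    dn_db[:, k] = Hinv * (A' * dy)   # dn = H⁻¹ Aᵀ S⁻¹ eₖ
end

# Cross-check the primal block against the full KKT system J = [H Aᵀ; A 0]
J = [Matrix(H) A'; A zeros(m, m)]
for k in 1:m
    sol = J \ [zeros(ns); (1:m) .== k]
    @assert sol[1:ns] ≈ dn_db[:, k]
end
```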
SciML interface
OptimaSolver.OptimaOptimizer — Type
OptimaOptimizer

Drop-in SciML optimizer implementing the OptimaSolver primal-dual interior-point algorithm for Gibbs-energy minimisation.

Constructors

OptimaOptimizer(; tol=1e-10, max_iter=300, warm_start=true, verbose=false)
OptimaOptimizer(opts::OptimaOptions)

Fields

- options: OptimaOptions with all algorithm hyperparameters
- _cache: Ref{Union{Nothing, OptimaResult}} — previous solution for warm-start
OptimaSolver.reset_cache! — Function
reset_cache!(alg::OptimaOptimizer)

Clear the warm-start cache. Call this when the chemical system changes (new set of species, different A matrix).
Internal components
The following symbols are exported for testing and extension purposes. They are not needed for typical usage.
KKT residual and Hessian
OptimaSolver.KKTResidual — Type
KKTResidual{T}

Holds the KKT residual vectors and associated norms for one evaluation.
OptimaSolver.kkt_residual — Function
kkt_residual(prob, n, y, grad_f, μ) -> KKTResidual

Compute the KKT residual at (n, y) with barrier weight μ.

- grad_f: gradient ∇f(n) evaluated outside (allows caching)
- μ: log-barrier weight (scalar, T-compatible for AD)
OptimaSolver.hessian_diagonal — Function
hessian_diagonal(prob, n, μ, hess_f_diag) -> Vector

Return the diagonal of the barrier-augmented Hessian:

H_diag[i] = hess_f_diag[i] + μ / (n[i] - lb[i])²

where hess_f_diag is the diagonal of ∇²f(n) (caller-provided). For a convex Gibbs function with positive curvature, H_diag > 0 always.
OptimaSolver.gibbs_hessian_diag — Function
gibbs_hessian_diag(n, p) -> Vector

Diagonal of ∇²G for the ideal/dilute Gibbs function G(n) = nᵀ μ(n,p).

For an ideal solution where μᵢ(n) = μᵢ⁰(T,P)/RT + ln(aᵢ(n)):

- Aqueous solvent (mole fraction): ∂²G/∂nᵢ² = 1/nᵢ - 1/naq (approximately 1/nᵢ)
- Aqueous solutes (molality): ∂²G/∂nᵢ² ≈ 1/nᵢ
- Pure solids/gases: ∂²G/∂nᵢ² = 0 (or small positive for regularisation)

For the general case, use finite differences or AD. Here we provide the ideal approximation H_diag[i] = 1/nᵢ as a sensible default that is always positive.
When using AD via ForwardDiff: do not call this function; instead pass hess_f_diag computed analytically or via forward-mode on the gradient.
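The two diagonals compose as follows (a standalone sketch with hypothetical helper names, not the package API):

```julia
# Ideal-solution approximation: diagonal of ∇²G is 1/nᵢ
ideal_hess_diag(n) = 1 ./ n

# Barrier augmentation: H_diag[i] = hess_f_diag[i] + μ / (n[i] - lb[i])²
barrier_hess_diag(hfd, n, lb, μ) = hfd .+ μ ./ (n .- lb) .^ 2

n  = [1.0, 0.25, 1e-8]
lb = zeros(3)
μ  = 1e-4
hd = barrier_hess_diag(ideal_hess_diag(n), n, lb, μ)

@assert all(hd .> 0)     # positive by construction when n > lb and μ > 0
@assert hd[3] > hd[1]    # a species near its bound gets very large curvature
```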
Newton step
OptimaSolver.NewtonStep — Type
NewtonStep{T}

Workspace for the Schur-complement Newton solver. All matrices and vectors are pre-allocated once and reused across Newton iterations.

Fields

- S: Schur complement A * diag(1/h) * A' (m × m)
- rhs: RHS ew - A * diag(1/h) * ex for the Schur system (m,)
- dn: primal Newton step δn (ns,)
- dy: dual Newton step δy (m,)
- d: equilibration diagonal sqrt.(diag(S)) (m,)
- AoverH: buffer for A ./ h' (m × ns), shared between the Schur build (S = AoverH * A') and the RHS computation (rhs = ew - AoverH * ex)
OptimaSolver.compute_step! — Function
compute_step!(ws, can, h, ex, ew) -> (dn, dy)

Compute the Newton step (dn, dy) by Schur-complement elimination.

Arguments

- ws: NewtonStep workspace (mutated in-place)
- can: Canonicalizer (provides A)
- h: Hessian diagonal (ns,), all positive
- ex: optimality residual (ns,)
- ew: feasibility residual (m,)
Returns
(dn, dy) — views into the workspace vectors.
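The elimination can be sketched for the diagonal-H case (an allocating toy version; the sign conventions assume the Newton system H dn + Aᵀ dy = -ex, A dn = -ew, which matches the rhs formula above):

```julia
using LinearAlgebra

# Schur-complement elimination for the KKT system
#   [H Aᵀ][dn]   [-ex]
#   [A 0 ][dy] = [-ew]
function newton_step(A, h, ex, ew)
    AoverH = A ./ h'             # A * diag(1/h), built by broadcasting
    S   = AoverH * A'            # Schur complement
    rhs = ew - AoverH * ex
    dy  = S \ rhs
    dn  = -(ex .+ A' * dy) ./ h  # back-substitute for the primal step
    return dn, dy
end

A  = [1.0 1.0 0.0; 0.0 1.0 1.0]
h  = [1.0, 2.0, 4.0]
ex = [0.1, -0.2, 0.3]
ew = [0.05, -0.05]

dn, dy = newton_step(A, h, ex, ew)

# Verify against the dense KKT solve
J   = [Diagonal(h) A'; A zeros(2, 2)]
sol = J \ [-ex; -ew]
@assert [dn; dy] ≈ sol
```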
OptimaSolver.clamp_step — Function
clamp_step(n, lb, dn; τ=0.995)

Scale the primal step so that n + α dn stays strictly above lb, using the fraction-to-boundary rule with safety factor τ ∈ (0,1).
Returns α ∈ (0, 1].
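A sketch of the fraction-to-boundary rule (standalone helper, not the package function):

```julia
# Largest α ∈ (0, 1] such that only a τ-fraction of each slack n[i] - lb[i]
# can be consumed by the step — a standard interior-point safeguard
function fraction_to_boundary(n, lb, dn; τ = 0.995)
    α = 1.0
    for i in eachindex(n)
        if dn[i] < 0
            α = min(α, -τ * (n[i] - lb[i]) / dn[i])
        end
    end
    return α
end

n, lb, dn = [1.0, 0.5], [0.0, 0.0], [-2.0, 0.1]
α = fraction_to_boundary(n, lb, dn)

@assert α ≈ 0.4975                 # limited by the first component
@assert all(n .+ α .* dn .> lb)    # strictly interior after the step
```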
Line search
OptimaSolver.LineSearchFilter — Type
LineSearchFilter{T}

Mutable filter for the line search.
OptimaSolver.line_search — Function
line_search(prob, n, y, dn, dy, f_val, grad_f, μ, opts; filter) -> (α, n_new, y_new, f_new)

Backtracking line search with filter acceptance.
Starting from α = α_max (from fraction-to-boundary), tries α, βα, β²α, … until the new point (n + α dn, y + α dy) is accepted by the filter or the Armijo condition on feasibility is satisfied.
Returns the accepted step size α and the new iterates.
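The backtracking skeleton can be sketched with a plain Armijo test on a merit function (the filter acceptance logic of the real solver is omitted here, so this is a simplified stand-in):

```julia
using LinearAlgebra

# Backtracking line search: try α, βα, β²α, … until the Armijo
# sufficient-decrease condition on the merit function φ holds
function backtrack(φ, ∇φ, x, d; α0 = 1.0, β = 0.5, c = 1e-4, maxit = 40)
    α, φ0, slope = α0, φ(x), dot(∇φ(x), d)
    for _ in 1:maxit
        φ(x .+ α .* d) ≤ φ0 + c * α * slope && return α
        α *= β
    end
    return α
end

φ(x)  = sum(abs2, x)       # toy quadratic merit function
∇φ(x) = 2 .* x
x = [2.0, -1.0]
d = -∇φ(x)                 # descent direction

α = backtrack(φ, ∇φ, x, d)
@assert φ(x .+ α .* d) < φ(x)
```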
Variable stability
OptimaSolver.classify_variables — Function
classify_variables(n, lb, ex; tol_stable=1e-8) -> (js, ju)

Return indices of stable (js) and unstable (ju) variables.

A variable is stable if (nᵢ - lbᵢ) > tol_stable * max_slack, where max_slack is the largest slack over all variables.
OptimaSolver.reduced_step_for_unstable! — Function
reduced_step_for_unstable!(dn, ju, n, lb; τ=0.5)

For unstable variables (near the lower bound), cap the step to move at most τ * slack toward the bound, preventing it from being crossed.
OptimaSolver.stability_measure — Function
stability_measure(n, lb, ex) -> Vector

Compute the stability measure sᵢ = (nᵢ - lbᵢ) * |exᵢ|.

When sᵢ is small despite a large stationarity residual |exᵢ|, variable i is pinned close to its bound — likely to sit at an active bound at the solution.
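The classification and measure can be sketched together (standalone helpers with illustrative thresholds, not the package internals):

```julia
# sᵢ = (slack) × |stationarity residual|; small with large |ex| ⇒ active bound
stability_measure(n, lb, ex) = (n .- lb) .* abs.(ex)

function classify(n, lb; tol_stable = 1e-8)
    slack  = n .- lb
    thresh = tol_stable * maximum(slack)
    js = findall(slack .>  thresh)   # stable: comfortably off the bound
    ju = findall(slack .<= thresh)   # unstable: pinned near the bound
    return js, ju
end

n  = [1.0, 1e-12, 0.3]
lb = zeros(3)
ex = [1e-9, 5.0, 1e-9]

js, ju = classify(n, lb)
@assert ju == [2]                    # species 2 sits at its bound

s = stability_measure(n, lb, ex)
@assert argmin(s) == 2               # smallest measure flags the active bound
```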