loupe.optimize
loupe.optimize(fun, params, ftol=1e-09, gtol=1e-05, maxiter=1000, analytic_grad=True)
Minimize a scalar function of one or more variables using the L-BFGS-B algorithm.
This function constructs a call to scipy.optimize.minimize() with method='L-BFGS-B'.
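For reference, the first example under Examples below is roughly equivalent to the following direct SciPy call. This is a plain NumPy sketch of the forwarded call, not loupe's actual internals:

>>> import numpy as np
>>> from scipy.optimize import minimize
>>> res = minimize(lambda x: 4.0*(x[0] - 5.0)**2,   # objective: 4*(x-5)**2
...                x0=np.random.rand(1),            # random start, like loupe.rand()
...                method='L-BFGS-B',
...                jac=lambda x: np.array([8.0*(x[0] - 5.0)]),  # analytic gradient
...                options={'ftol': 1e-9, 'gtol': 1e-5, 'maxiter': 1000})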
- Parameters:
  - fun (Function) – The objective function to be minimized.
  - params (list of array or array) – Parameters in fun to optimize. Passing a list optimizes several parameters jointly (see the last example below).
  - ftol (float, optional) – Termination tolerance on the objective function value. Default is 1e-9.
  - gtol (float, optional) – Termination tolerance on the gradient value. Default is 1e-5.
  - maxiter (int, optional) – Maximum number of iterations. Default is 1000.
  - analytic_grad (bool, optional) – If True (default), the gradient vector is computed analytically using algorithmic differentiation. If False, the gradient is estimated numerically by a finite difference algorithm.
- Returns:
  res – The optimization result. Important attributes are: x, the solution array; success, a Boolean flag indicating whether the optimizer exited successfully; and message, which describes the cause of the termination. See OptimizeResult for a full description of other attributes.
- Return type:
  OptimizeResult
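Because the result is a standard SciPy OptimizeResult, these attributes can be read directly. A short sketch using the quadratic from the examples below:

>>> x = loupe.rand()
>>> y = 4*(x-5)**2
>>> res = loupe.optimize(y, params=x)
>>> res.x
array([5.])
>>> res.success
True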
Examples
Construct a simple quadratic objective function and find the value of x that minimizes it:
>>> x = loupe.rand()
>>> y = 4*(x-5)**2
>>> loupe.optimize(y, params=x)
      fun: array(1.26217745e-29)
 hess_inv: <1x1 LbfgsInvHessProduct with dtype=float64>
      jac: array([-1.42108547e-14])
  message: 'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL'
     nfev: 3
      nit: 2
     njev: 3
   status: 0
  success: True
        x: array([5.])
Optimize the same function, this time using the finite difference gradient:
>>> x = loupe.rand()
>>> y = 4*(x-5)**2
>>> loupe.optimize(y, params=x, analytic_grad=False)
      fun: array(8.4112054e-13)
 hess_inv: <1x1 LbfgsInvHessProduct with dtype=float64>
      jac: array([3.70850496e-06])
  message: 'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL'
     nfev: 6
      nit: 2
     njev: 3
   status: 0
  success: True
        x: array([5.00000046])
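Since params also accepts a list of arrays, several parameters can be optimized jointly. A minimal sketch, assuming loupe expressions combine with ordinary arithmetic as in the examples above (the specific quadratic here is illustrative):

>>> a = loupe.rand()
>>> b = loupe.rand()
>>> y = (a - 3)**2 + (b + 1)**2
>>> res = loupe.optimize(y, params=[a, b])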