pyblp.Optimization

class pyblp.Optimization(method, method_options=None, compute_gradient=True, universal_display=True)

Configuration for solving optimization problems.
Parameters

method (str or callable) – The optimization routine that will be used. The following routines support parameter bounds and use analytic gradients:
'knitro' - Uses an installed version of Artelys Knitro. Python 3 is supported by Knitro version 10.3 and newer. A number of environment variables most likely need to be configured properly, such as KNITRODIR, ARTELYS_LICENSE, LD_LIBRARY_PATH (on Linux), and DYLD_LIBRARY_PATH (on Mac OS X). For more information, refer to the Knitro installation guide.

'slsqp' - Uses the scipy.optimize.minimize() SLSQP routine.

'trust-constr' - Uses the scipy.optimize.minimize() trust-region routine.

'l-bfgs-b' - Uses the scipy.optimize.minimize() L-BFGS-B routine.

'tnc' - Uses the scipy.optimize.minimize() TNC routine.
The following routines also use analytic gradients but will ignore parameter bounds (not bounding the problem may create issues if the optimizer tries out large parameter values that create overflow errors):

'cg' - Uses the scipy.optimize.minimize() CG routine.

'bfgs' - Uses the scipy.optimize.minimize() BFGS routine.

'newton-cg' - Uses the scipy.optimize.minimize() Newton-CG routine.
The following routines do not use analytic gradients and will also ignore parameter bounds (without analytic gradients, optimization will likely be much slower):

'nelder-mead' - Uses the scipy.optimize.minimize() Nelder-Mead routine.

'powell' - Uses the scipy.optimize.minimize() Powell routine.
The following trivial routine can be used to evaluate an objective at specific parameter values:

'return' - Assume that the initial parameter values are the optimal ones.
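As a brief illustration, any of the named routines above can be selected when constructing the configuration. This is a minimal sketch; the routine choice is arbitrary:

    import pyblp

    # Configure a bounded, gradient-based SciPy routine by name.
    optimization = pyblp.Optimization('l-bfgs-b')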
Also accepted is a custom callable method with the following form:

    method(initial, bounds, objective_function, iteration_callback, **options) -> (final, converged)

where initial is an array of initial parameter values, bounds is a list of (min, max) pairs for each element in initial, objective_function is a callable objective function of the form specified below, iteration_callback is a function that should be called without any arguments after each major iteration (it is used to record the number of major iterations), options are specified below, final is an array of optimized parameter values, and converged is a flag for whether the routine converged.

The objective_function has the following form:

    objective_function(theta) -> (objective, gradient)

where gradient is None if compute_gradient is False.
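For concreteness, the sketch below implements a custom method with the documented signature by delegating to scipy.optimize.minimize(). It assumes compute_gradient is True, so objective_function returns a usable gradient; the choice of L-BFGS-B is illustrative:

    import pyblp
    from scipy import optimize

    def custom_method(initial, bounds, objective_function, iteration_callback, **options):
        # objective_function(theta) returns (objective, gradient), which matches
        # SciPy's jac=True convention when compute_gradient is True.
        results = optimize.minimize(
            objective_function,
            initial,
            method='L-BFGS-B',  # illustrative; any bounded SciPy method would do
            jac=True,
            bounds=bounds,
            callback=lambda *_: iteration_callback(),  # record each major iteration
            options=options,
        )
        return results.x, results.success

    optimization = pyblp.Optimization(custom_method)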
method_options (dict, optional) – Options for the optimization routine.

For any non-custom method other than 'knitro' and 'return', these options will be passed to options in scipy.optimize.minimize(), with the exception of 'keep_feasible', which is by default True and is passed to any scipy.optimize.Bounds. Refer to the SciPy documentation for information about which options are available for each optimization routine.
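For example, SciPy options can be passed through method_options as follows. This is a minimal sketch; the tolerance value is illustrative, not a recommendation:

    import pyblp

    # 'gtol' is forwarded to scipy.optimize.minimize(); 'keep_feasible' is
    # instead passed to scipy.optimize.Bounds, as described above.
    optimization = pyblp.Optimization(
        'trust-constr',
        method_options={'gtol': 1e-10, 'keep_feasible': True},
    )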
If method is 'knitro', these options should be Knitro user options. The non-standard knitro_dir option can also be specified. The following options have non-standard default values:

knitro_dir : (str) - By default, the KNITRODIR environment variable is used. Otherwise, this option should point to the installation directory of Knitro, which contains direct subdirectories such as 'examples' and 'lib'. For example, on Windows this option could be '/Program Files/Artelys/Knitro 10.3.0'.

algorithm : (int) - The optimization algorithm to be used. The default value is 1, which corresponds to the Interior/Direct algorithm.

gradopt : (int) - How the objective’s gradient is computed. The default value is 1 if compute_gradient is True and is 2 otherwise, which corresponds to estimating the gradient with finite differences.

hessopt : (int) - How the objective’s Hessian is computed. The default value is 2, which corresponds to computing a quasi-Newton BFGS Hessian.

honorbnds : (int) - Whether to enforce satisfaction of simple variable bounds. The default value is 1, which corresponds to enforcing that the initial point and all subsequent solution estimates satisfy the bounds.
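As a sketch, the Knitro defaults above could be overridden like so; the installation path is hypothetical:

    import pyblp

    optimization = pyblp.Optimization('knitro', method_options={
        'knitro_dir': '/path/to/knitro',  # hypothetical path; defaults to KNITRODIR
        'algorithm': 1,                   # Interior/Direct algorithm
        'hessopt': 2,                     # quasi-Newton BFGS Hessian
        'honorbnds': 1,                   # keep all iterates within the bounds
    })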
compute_gradient (bool, optional) – Whether to compute an analytic objective gradient during optimization, which must be False if method does not use analytic gradients, and must be True if method is 'newton-cg', which requires an analytic gradient.

By default, analytic gradients are computed. Not using an analytic gradient will likely slow down estimation a good deal. If False, an analytic gradient may still be computed once at the end of optimization to compute optimization results. To always use finite differences, finite_differences in Problem.solve() can be set to True.
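For instance, a gradient-free routine has to be configured with compute_gradient=False. A minimal sketch:

    import pyblp

    # Nelder-Mead ignores bounds and does not use analytic gradients.
    optimization = pyblp.Optimization('nelder-mead', compute_gradient=False)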
universal_display (bool, optional) – Whether to format optimization progress such that the display looks the same for all routines. By default, the universal display is used and some method_options are used to prevent default displays from showing up.
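To fall back to a routine's own display instead, a sketch:

    import pyblp

    # Show the routine's default progress output rather than the universal display.
    optimization = pyblp.Optimization('bfgs', universal_display=False)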
Examples