pyblp.Optimization

class pyblp.Optimization(method, method_options=None, compute_gradient=True, universal_display=True)

Configuration for solving optimization problems.

Parameters
  • method (str or callable) –

    The optimization routine that will be used. The following routines support parameter bounds and use analytic gradients:

    • 'knitro' - Uses an installed version of Artelys Knitro.

    • 'l-bfgs-b' - Uses the scipy.optimize.minimize() L-BFGS-B routine.

    • 'slsqp' - Uses the scipy.optimize.minimize() SLSQP routine.

    • 'tnc' - Uses the scipy.optimize.minimize() TNC routine.

    • 'trust-constr' - Uses the scipy.optimize.minimize() trust-region routine.

    The following routines also use analytic gradients but will ignore parameter bounds (not bounding the problem may create issues if the optimizer tries out large parameter values that create overflow errors):

    • 'cg' - Uses the scipy.optimize.minimize() CG routine.

    • 'bfgs' - Uses the scipy.optimize.minimize() BFGS routine.

    • 'newton-cg' - Uses the scipy.optimize.minimize() Newton-CG routine.

    The following routines do not use analytic gradients and will also ignore parameter bounds (without analytic gradients, optimization will likely be much slower):

    • 'nelder-mead' - Uses the scipy.optimize.minimize() Nelder-Mead routine.

    • 'powell' - Uses the scipy.optimize.minimize() Powell routine.

    The following trivial routine can be used to evaluate an objective at specific parameter values:

    • 'return' - Assume that the initial parameter values are the optimal ones.

    Also accepted is a custom callable method with the following form:

    method(initial, bounds, objective_function, iteration_callback, **options) -> (final, converged)
    

    where:

    • initial is an array of initial parameter values;

    • bounds is a list of (min, max) pairs for each element in initial;

    • objective_function is a callable objective function of the form specified below;

    • iteration_callback is a function that should be called without any arguments after each major iteration (it is used to record the number of major iterations);

    • options are specified below;

    • final is an array of optimized parameter values; and

    • converged is a flag for whether the routine converged.

    The objective_function has the following form:

    objective_function(theta) -> (objective, gradient, progress)

    where gradient is None if compute_gradient is False, and progress is an OptimizationProgress object that contains additional information about optimization progress so far, which may be helpful for debugging or for informing non-standard optimization routines.
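    As a concrete illustration of the custom signature above, the sketch below implements a naive projected gradient descent routine. This is a hypothetical example, not a recommended optimizer, and the step and max_iterations options are invented for this illustration:

```python
import numpy as np

def custom_method(initial, bounds, objective_function, iteration_callback, **options):
    """Hypothetical custom routine: naive projected gradient descent."""
    theta = np.asarray(initial, dtype=float)
    lower = np.array([b[0] for b in bounds], dtype=float)
    upper = np.array([b[1] for b in bounds], dtype=float)
    step = options.get('step', 0.1)  # invented option for this sketch
    for _ in range(options.get('max_iterations', 100)):  # invented option
        objective, gradient, progress = objective_function(theta)
        theta = np.clip(theta - step * gradient, lower, upper)  # enforce bounds
        iteration_callback()  # record one major iteration
    return theta, True
```

    A callable like this could then be passed directly as method, for example pyblp.Optimization(custom_method, {'step': 0.05}).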

  • method_options (dict, optional) –

    Options for the optimization routine.

    For any non-custom method other than 'knitro' and 'return', these options will be passed to options in scipy.optimize.minimize(), with the exception of 'keep_feasible', which is by default True and is passed to any scipy.optimize.Bounds. Refer to the SciPy documentation for information about which options are available for each optimization routine.
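    For example, the split described above can be sketched as follows. This is a simplified illustration of the pass-through, not pyblp's internal code, and the particular SciPy option names are just examples:

```python
# A method_options dict mixing SciPy options with the non-standard key.
method_options = {'maxiter': 500, 'gtol': 1e-8, 'keep_feasible': True}

# 'keep_feasible' configures scipy.optimize.Bounds, while everything
# else is forwarded as options= to scipy.optimize.minimize().
scipy_options = dict(method_options)
keep_feasible = scipy_options.pop('keep_feasible', True)
```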

    If method is 'knitro', these options should be Knitro user options. The non-standard knitro_dir option can also be specified. The following options have non-standard default values:

    • knitro_dir : (str) - By default, the KNITRODIR environment variable is used. Otherwise, this option should point to the installation directory of Knitro, which contains direct subdirectories such as 'examples' and 'lib'. For example, on Windows this option could be '/Program Files/Artleys3/Knitro 10.3.0'.

    • algorithm : (int) - The optimization algorithm to be used. The default value is 1, which corresponds to the Interior/Direct algorithm.

    • gradopt : (int) - How the objective’s gradient is computed. The default value is 1 (use the analytic gradient) if compute_gradient is True and 2 (estimate the gradient with finite differences) otherwise.

    • hessopt : (int) - How the objective’s Hessian is computed. The default value is 2, which corresponds to computing a quasi-Newton BFGS Hessian.

    • honorbnds : (int) - Whether to enforce satisfaction of simple variable bounds. The default value is 1, which corresponds to enforcing that the initial point and all subsequent solution estimates satisfy the bounds.
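    The non-standard defaults above can be written out as an explicit method_options dict. The knitro_dir path here is a placeholder; when the option is omitted, the KNITRODIR environment variable is used instead:

```python
knitro_options = {
    'knitro_dir': '/path/to/knitro',  # placeholder installation directory
    'algorithm': 1,   # Interior/Direct algorithm
    'gradopt': 1,     # analytic gradient (2 would mean finite differences)
    'hessopt': 2,     # quasi-Newton BFGS Hessian
    'honorbnds': 1,   # enforce bounds at the initial point and all iterates
}
```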

  • compute_gradient (bool, optional) –

    Whether to compute an analytic objective gradient during optimization. This must be False if method does not use analytic gradients, and it must be True if method is 'newton-cg', which requires an analytic gradient.

    By default, analytic gradients are computed. Not using an analytic gradient will likely slow down estimation a good deal. If False, an analytic gradient may still be computed once at the end of optimization to compute optimization results. To always use finite differences, finite_differences in Problem.solve() can be set to True.
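    To see why finite differences are slower, note that each gradient approximation requires one extra objective evaluation per parameter. The following is a textbook forward-difference sketch, not pyblp's or SciPy's implementation:

```python
import numpy as np

def forward_difference_gradient(f, theta, epsilon=1e-7):
    """Approximate the gradient of a scalar function f at theta."""
    theta = np.asarray(theta, dtype=float)
    base = f(theta)
    gradient = np.empty_like(theta)
    for i in range(theta.size):  # one extra objective evaluation per parameter
        shifted = theta.copy()
        shifted[i] += epsilon
        gradient[i] = (f(shifted) - base) / epsilon
    return gradient
```

    With many parameters, each of these extra evaluations can be as expensive as a full objective computation, which is what makes estimation without analytic gradients slow.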

  • universal_display (bool, optional) – Whether to format optimization progress such that the display looks the same for all routines. By default, the universal display is used and some method_options are used to prevent default displays from showing up.

Examples