The following shows how to get started with pyOpt by solving Schittkowski’s TP37 constrained problem.
Start by importing the pyOpt package:
>>> import pyOpt
The Optimization class in pyOpt requires an objective function that takes in the design variable list or array and returns the objective function value, a list/array of constraint values, and a flag indicating whether the objective function evaluation was successful. For TP37, the objective function is a simple analytic function:
def objfunc(x):
    # objective function value (TP37 minimizes -x1*x2*x3)
    f = -x[0]*x[1]*x[2]
    # constraint values, formulated as g(x) <= 0
    g = [0.0]*2
    g[0] = x[0] + 2.*x[1] + 2.*x[2] - 72.0
    g[1] = -x[0] - 2.*x[1] - 2.*x[2]
    # flag indicating a successful function evaluation
    fail = 0
    return f, g, fail
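As a quick check (not part of the original example), the objective function can be evaluated at the starting point used below, x = [10, 10, 10]:
>>> objfunc([10.0, 10.0, 10.0])
(-1000.0, [-22.0, -50.0], 0)
Both constraint values are negative, so the starting point is feasible.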
Now the optimization problem can be initialized:
>>> opt_prob = pyOpt.Optimization('TP37 Constrained Problem',objfunc)
This creates an instance of the optimization class with a name and a link to the objective function. To complete the setup of the optimization problem, the design variables, constraints and objective need to be defined:
>>> opt_prob.addObj('f')
Design variables and constraints can be added either one-by-one or as a group:
>>> opt_prob.addVar('x1','c',lower=0.0,upper=42.0,value=10.0)
>>> opt_prob.addVar('x2','c',lower=0.0,upper=42.0,value=10.0)
>>> opt_prob.addVar('x3','c',lower=0.0,upper=42.0,value=10.0)
>>> opt_prob.addConGroup('g',2,'i')
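Since all three design variables share the same type, bounds, and starting value, they could also be added in a single call; a minimal sketch, assuming the addVarGroup method (which mirrors addConGroup) and its automatic numbering of the group members:
>>> opt_prob.addVarGroup('x',3,'c',lower=0.0,upper=42.0,value=10.0)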
Note that all equality constraints must be added before any inequality constraints.
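For a problem that also had an equality constraint, that constraint would therefore be added first; a hypothetical sketch ('h' is an illustrative constraint name, not part of TP37):
>>> opt_prob.addCon('h','e')
>>> opt_prob.addConGroup('g',2,'i')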
The optimization problem can be printed to verify that it is set up correctly:
>>> print opt_prob
Optimization Problem -- TP37 Constrained Problem
================================================================================

    Objective Function: objfunc

    Objectives:
        Name        Value        Optimum
        f           0            0

    Variables (c - continuous, i - integer, d - discrete):
        Name    Type    Value           Lower Bound    Upper Bound
        x1      c       10.000000       0.00e+00       4.20e+01
        x2      c       10.000000       0.00e+00       4.20e+01
        x3      c       10.000000       0.00e+00       4.20e+01

    Constraints (i - inequality, e - equality):
        Name    Type    Bounds
        g1      i       -1.00e+21 <= 0.000000 <= 0.00e+00
        g2      i       -1.00e+21 <= 0.000000 <= 0.00e+00
To solve an optimization problem with pyOpt, an optimizer must be initialized. The initialization of one or more optimizers is independent of the initialization of any number of optimization problems. To initialize SLSQP, an open-source sequential least squares programming algorithm distributed as part of the pyOpt package, use:
>>> slsqp = pyOpt.SLSQP()
This initializes an instance of SLSQP with its default options. The setOption method can be used to change any optimizer-specific option, for example the internal output flag of SLSQP:
>>> slsqp.setOption('IPRINT', -1)
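A set option can be read back with the optimizer's getOption method; a small sketch, assuming getOption simply returns the stored value:
>>> slsqp.getOption('IPRINT')
-1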
The different optimizers and their specific options are described in Optimizers.
Now TP37 can be solved using SLSQP with, for example, pyOpt's automatic finite-difference approximation of the gradients:
>>> [fstr, xstr, inform] = slsqp(opt_prob,sens_type='FD')
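Other gradient options can be requested in the same call; a sketch, assuming complex-step differentiation is selected with sens_type='CS':
>>> [fstr, xstr, inform] = slsqp(opt_prob,sens_type='CS')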
By default, an optimizer's solution information is also stored in the optimization problem it solved. To output the solution to the screen, one can use:
>>> print opt_prob.solution(0)
SLSQP Solution to TP37 Constrained Problem
================================================================================

    Objective Function: objfunc

    Solution:
--------------------------------------------------------------------------------
    Total Time:                 0.0000
    Total Function Evaluations:
    Sensitivities:              FD

    Objectives:
        Name        Value        Optimum
        f           -3456        0

    Variables (c - continuous, i - integer, d - discrete):
        Name    Type    Value           Lower Bound    Upper Bound
        x1      c       24.000000       0.00e+00       4.20e+01
        x2      c       12.000000       0.00e+00       4.20e+01
        x3      c       12.000000       0.00e+00       4.20e+01

    Constraints (i - inequality, e - equality):
        Name    Type    Bounds
        g1      i       -1.00e+21 <= 0.000000 <= 0.00e+00
        g2      i       -1.00e+21 <= -72.000000 <= 0.00e+00

--------------------------------------------------------------------------------
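The values returned by the call to slsqp can also be used directly in a script; based on the names used in that call, fstr holds the optimized objective value, xstr the optimized design variables, and inform the SLSQP exit information:
>>> print fstr, xstr, inform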
For more information on how to use the features of pyOpt, see the Quickguide and the Examples.