Optimization Options Reference

Optimization Options

The following table describes optimization options. Create options using the optimoptions function, or optimset for fminbnd, fminsearch, fzero, or lsqnonneg.

See the individual function reference pages for information about available option values and defaults.

The default values for the options vary depending on which optimization function you call with options as an input argument. You can determine the default option values for any of the optimization functions by entering optimoptions(@solvername) or the equivalent optimoptions('solvername'). For example,

optimoptions(@fmincon)

returns a list of the options and their default values for the default fmincon algorithm. To find the default values for another fmincon algorithm, set the Algorithm option. For example,

opts = optimoptions(@fmincon,'Algorithm','sqp')
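
Both optimoptions and optimset return options that you pass to the solver as an input argument. The following sketch shows one possible workflow; the objective function, starting point, and bounds are arbitrary placeholders for illustration, not part of the reference material.

% Set options for fmincon: iterative display and tighter tolerances.
opts = optimoptions(@fmincon, ...
    'Algorithm','sqp', ...
    'Display','iter', ...
    'TolFun',1e-8, ...
    'TolX',1e-8);

% Placeholder problem: minimize a simple quadratic subject to bounds.
fun = @(x) (x(1)-1)^2 + (x(2)+2)^2;
x0  = [0 0];
lb  = [-5 -5];
ub  = [ 5  5];
x = fmincon(fun,x0,[],[],[],[],lb,ub,[],opts);

% fminsearch takes options created with optimset rather than optimoptions.
sopts = optimset('Display','final','TolX',1e-6);
xs = fminsearch(fun,x0,sopts);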

Optimization Options

Each entry lists the option name, its description, and the functions that use it.
Algorithm

Chooses the algorithm used by the solver.

fmincon, fminunc, fsolve, linprog, lsqcurvefit, lsqlin, lsqnonlin, quadprog
AlwaysHonorConstraints

The default 'bounds' ensures that bound constraints are satisfied at every iteration. Turn off by setting to 'none'.

fmincon
BranchingRule

Rule for choosing the component for branching:
  • 'maxpscost' — The fractional component with maximum pseudocost. See Branch and Bound.

  • 'mostfractional' — The component whose fractional part is closest to 1/2.

  • 'maxfun' — The fractional component with maximal corresponding component in the absolute value of objective vector f.

intlinprog
BranchStrategy

Strategy that bintprog uses to select the branch variable.

bintprog

CutGeneration

Level of cut generation (see Cut Generation):
  • 'none' — No cuts. Makes CutGenMaxIter irrelevant.

  • 'basic' — Normal cut generation.

  • 'intermediate' — Use more cut types.

  • 'advanced' — Use most cut types.

intlinprog
CutGenMaxIter

Number of passes through all cut generation methods before entering the branch-and-bound phase, an integer from 1 through 50. Disable cut generation by setting the CutGeneration option to 'none'.

intlinprog
DerivativeCheck

Compare user-supplied analytic derivatives (gradients or Jacobian, depending on the selected solver) to finite differencing derivatives.

fgoalattain, fmincon, fminimax, fminunc, fseminf, fsolve, lsqcurvefit, lsqnonlin

Diagnostics

Display diagnostic information about the function to be minimized or solved.

All but fminbnd, fminsearch, fzero, and lsqnonneg

DiffMaxChange

Maximum change in variables for finite differencing.

fgoalattain, fmincon, fminimax, fminunc, fseminf, fsolve, lsqcurvefit, lsqnonlin

DiffMinChange

Minimum change in variables for finite differencing.

fgoalattain, fmincon, fminimax, fminunc, fseminf, fsolve, lsqcurvefit, lsqnonlin

Display

Level of display.

  • 'off' displays no output.

  • 'iter' displays output at each iteration, and gives the default exit message.

  • 'iter-detailed' displays output at each iteration, and gives the technical exit message.

  • 'notify' displays output only if the function does not converge, and gives the default exit message.

  • 'notify-detailed' displays output only if the function does not converge, and gives the technical exit message.

  • 'final' displays just the final output, and gives the default exit message.

  • 'final-detailed' displays just the final output, and gives the technical exit message.

All. See the individual function reference pages for the values that apply.

FinDiffRelStep

Scalar or vector step size factor. When you set FinDiffRelStep to a vector v, forward finite differences delta are

delta = v.*sign(x).*max(abs(x),TypicalX);

and central finite differences are

delta = v.*max(abs(x),TypicalX);

Scalar FinDiffRelStep expands to a vector. The default is sqrt(eps) for forward finite differences, and eps^(1/3) for central finite differences.

fgoalattain, fmincon, fminimax, fminunc, fseminf, fsolve, lsqcurvefit, lsqnonlin

FinDiffType

Finite differences, used to estimate gradients, are either 'forward' (the default) or 'central' (centered), which takes twice as many function evaluations but should be more accurate. 'central' differences might violate bounds during their evaluation in fmincon interior-point iterations if the AlwaysHonorConstraints option is set to 'none'.

fgoalattain, fmincon, fminimax, fminunc, fseminf, fsolve, lsqcurvefit, lsqnonlin

FunValCheck

Check whether objective function and constraints values are valid. 'on' displays an error when the objective function or constraints return a value that is complex, NaN, or Inf.

    Note:   FunValCheck does not return an error for Inf when used with fminbnd, fminsearch, or fzero, which handle Inf appropriately.

'off' displays no error.

fgoalattain, fminbnd, fmincon, fminimax, fminsearch, fminunc, fseminf, fsolve, fzero, lsqcurvefit, lsqnonlin

GoalsExactAchieve

Specify the number of objectives for which the objective function fun is required to equal the goal goal. Objectives should be partitioned into the first few elements of F.

fgoalattain

GradConstr

User-defined gradients for the nonlinear constraints.

fgoalattain, fmincon, fminimax

GradObj

User-defined gradients for the objective functions.

fgoalattain, fmincon, fminimax, fminunc, fseminf

HessFcn

Function handle to a user-supplied Hessian (see Hessian).

fmincon
Hessian

If 'user-supplied', function uses user-defined Hessian or Hessian information (when using HessMult), for the objective function. If 'off', function approximates the Hessian using finite differences.

fmincon, fminunc

HessMult

Handle to a user-supplied Hessian multiply function. For fmincon, ignored unless Hessian is 'user-supplied' or 'on'.

fmincon, fminunc, quadprog

HessPattern

Sparsity pattern of the Hessian for finite differencing. The size of the matrix is n-by-n, where n is the number of elements in x0, the starting point.

fmincon, fminunc

HessUpdate

Quasi-Newton updating scheme.

fminunc

Heuristics

Algorithm for searching for feasible points (see Heuristics for Finding Feasible Solutions):
  • 'none'

  • 'rss'

  • 'round'

  • 'rins'

intlinprog
HeuristicsMaxNodes

Strictly positive integer that bounds the number of nodes intlinprog can explore in its branch-and-bound search for feasible points. See Heuristics for Finding Feasible Solutions.

intlinprog
InitBarrierParam

Initial barrier value.

fmincon
InitialHessMatrix

Initial quasi-Newton matrix.

fminunc

InitialHessType

Initial quasi-Newton matrix type.

fminunc

InitTrustRegionRadius

Initial radius of the trust region.

fmincon
IPPreprocess

Types of integer preprocessing (see Mixed-Integer Program Preprocessing):
  • 'none' — Use very few integer preprocessing steps.

  • 'basic' — Use a moderate number of integer preprocessing steps.

  • 'advanced' — Use all available integer preprocessing steps.

intlinprog
Jacobian

If 'on', function uses user-defined Jacobian or Jacobian information (when using JacobMult), for the objective function. If 'off', function approximates the Jacobian using finite differences.

fsolve, lsqcurvefit, lsqnonlin

JacobMult

User-defined Jacobian multiply function. Ignored unless Jacobian is 'on' for fsolve, lsqcurvefit, and lsqnonlin.

fsolve, lsqcurvefit, lsqlin, lsqnonlin

JacobPattern

Sparsity pattern of the Jacobian for finite differencing. The size of the matrix is m-by-n, where m is the number of values in the first argument returned by the user-specified function fun, and n is the number of elements in x0, the starting point.

fsolve, lsqcurvefit, lsqnonlin

LargeScale

Use Algorithm instead

Use large-scale algorithm if possible.

fminunc, fsolve, linprog, lsqcurvefit, lsqlin, lsqnonlin

LPMaxIter

Strictly positive integer, the maximum number of simplex algorithm iterations per node during the branch-and-bound process.

intlinprog
LPPreprocess

Type of preprocessing for the solution to the relaxed linear program (see Linear Program Preprocessing):
  • 'none' — No preprocessing.

  • 'basic' — Use preprocessing.

intlinprog
MaxFunEvals

Maximum number of function evaluations allowed.

fgoalattain, fminbnd, fmincon, fminimax, fminsearch, fminunc, fseminf, fsolve, lsqcurvefit, lsqnonlin

MaxIter

Maximum number of iterations allowed.

All but fzero and lsqnonneg

MaxNodes

Strictly positive integer that is the maximum number of nodes the solver explores in its branch-and-bound process.

bintprog, intlinprog

MaxNumFeasPoints

Strictly positive integer. intlinprog stops if it finds MaxNumFeasPoints integer feasible points.

intlinprog
MaxPCGIter

Maximum number of iterations of preconditioned conjugate gradients method allowed.

fmincon, fminunc, fsolve, lsqcurvefit, lsqlin, lsqnonlin, quadprog

MaxProjCGIter

A tolerance for the number of projected conjugate gradient iterations; this is an inner iteration, not the number of iterations of the algorithm.

fmincon
MaxRLPIter

Maximum number of iterations of linear programming relaxation method allowed.

bintprog

MaxSQPIter

Maximum number of iterations of sequential quadratic programming method allowed.

fgoalattain, fmincon, fminimax

MaxTime

Maximum amount of time in seconds allowed for the algorithm.

bintprog, intlinprog

MeritFunction

Use goal attainment/minimax merit function (multiobjective) vs. fmincon (single objective).

fgoalattain, fminimax

MinAbsMax

Number of elements of F(x) for which to minimize the worst-case absolute values.

fminimax

NodeDisplayInterval

Node display interval for bintprog.

bintprog

NodeSearchStrategy

Search strategy that bintprog uses.

bintprog

NodeSelection

Choose the node to explore next:
  • 'simplebestproj' — Best projection. See Branch and Bound.

  • 'minobj' — Explore the node with the minimum objective function.

  • 'mininfeas' — Explore the node with the minimal sum of integer infeasibilities. See Branch and Bound.

intlinprog
ObjectiveCutoff

Real greater than -Inf. The default is Inf.

intlinprog
ObjectiveLimit

If the objective function value goes below ObjectiveLimit and the iterate is feasible, then the iterations halt.

fmincon, fminunc, quadprog
OutputFcn

Specify one or more user-defined functions that the optimization function calls at each iteration. See Output Function.

fgoalattain, fminbnd, fmincon, fminimax, fminsearch, fminunc, fseminf, fsolve, fzero, lsqcurvefit, lsqnonlin

PlotFcns

Plots various measures of progress while the algorithm executes; select from predefined plots or write your own.

  • @optimplotx plots the current point

  • @optimplotfunccount plots the function count

  • @optimplotfval plots the function value

  • @optimplotconstrviolation plots the maximum constraint violation

  • @optimplotresnorm plots the norm of the residuals

  • @optimplotfirstorderopt plots the first-order optimality measure

  • @optimplotstepsize plots the step size

See Plot Functions.

fgoalattain, fminbnd, fmincon, fminimax, fminsearch, fminunc, fseminf, fsolve, fzero, lsqcurvefit, lsqnonlin. See the individual function reference pages for the values that apply.

PrecondBandWidth

Upper bandwidth of preconditioner for PCG. Setting to Inf uses a direct factorization instead of CG.

fmincon, fminunc, fsolve, lsqcurvefit, lsqlin, lsqnonlin, quadprog

RelLineSrchBnd

Relative bound on line search step length.

fgoalattain, fmincon, fminimax, fseminf

RelLineSrchBndDuration

Number of iterations for which the bound specified in RelLineSrchBnd should be active.

fgoalattain, fmincon, fminimax, fseminf

RelObjThreshold

Nonnegative real. intlinprog changes the current feasible solution only when it locates another with an objective function value that is at least RelObjThreshold lower: (f_old – f_new)/(1 + f_old) > RelObjThreshold.

intlinprog
RootLPAlgorithm

Algorithm for solving linear programs:
  • 'dual-simplex' — Dual simplex algorithm

  • 'primal-simplex' — Primal simplex algorithm

intlinprog
RootLPMaxIter

Nonnegative integer that is the maximum number of simplex algorithm iterations to solve the initial linear programming problem.

intlinprog
ScaleProblem

For fmincon interior-point and sqp algorithms, 'obj-and-constr' causes the algorithm to normalize all constraints and the objective function by their initial values. Disable by setting to the default 'none'.

For the other solvers, when using the Algorithm option 'levenberg-marquardt', setting the ScaleProblem option to 'jacobian' sometimes helps the solver on badly-scaled problems.

fmincon, fsolve, lsqcurvefit, lsqnonlin, quadprog

Simplex

Use Algorithm instead

If 'on', function uses the simplex algorithm.

linprog

SubproblemAlgorithm

Determines how the iteration step is calculated.

fmincon
TolCon

Tolerance on the constraint violation.

bintprog, fgoalattain, fmincon, fminimax, fseminf, intlinprog, quadprog

TolConSQP

Constraint violation tolerance for the inner SQP iteration.

fgoalattain, fmincon, fminimax, fseminf
TolFun

Termination tolerance on the function value.

bintprog, fgoalattain, fmincon, fminimax, fminsearch, fminunc, fseminf, fsolve, linprog (interior-point only), lsqcurvefit, lsqlin (trust-region-reflective only), lsqnonlin, quadprog 

TolFunLP

Nonnegative real; reduced costs must exceed TolFunLP for a variable to be taken into the basis.

intlinprog
TolGapAbs

Nonnegative real. intlinprog stops if the difference between the internally calculated upper (U) and lower (L) bounds on the objective function is less than or equal to TolGapAbs:

U – L <= TolGapAbs.

intlinprog
TolGapRel

Real from 0 through 1. intlinprog stops if the relative difference between the internally calculated upper (U) and lower (L) bounds on the objective function is less than or equal to TolGapRel:

(U – L) / (abs(U) + 1) <= TolGapRel.

intlinprog
TolInteger

Real from 1e-6 through 1e-3, the maximum deviation from an integer that a component of the solution x can have and still be considered an integer. TolInteger is not a stopping criterion.

intlinprog
TolPCG

Termination tolerance on the PCG iteration.

fmincon, fminunc, fsolve, lsqcurvefit, lsqlin, lsqnonlin, quadprog

TolProjCG

A relative tolerance for projected conjugate gradient algorithm; this is for an inner iteration, not the algorithm iteration.

fmincon
TolProjCGAbs

Absolute tolerance for projected conjugate gradient algorithm; this is for an inner iteration, not the algorithm iteration.

fmincon
TolRLPFun

Termination tolerance on the function value of a linear programming relaxation problem.

bintprog

TolX

Termination tolerance on x.

All functions except the medium-scale algorithms for linprog, lsqlin, and quadprog

TolXInteger

Tolerance within which bintprog considers the value of a variable to be an integer.

bintprog

TypicalX

Array that specifies the typical magnitude of the parameters x. The size of the array is equal to the size of x0, the starting point. Used primarily for scaling finite differences for gradient estimation.

fgoalattain, fmincon, fminimax, fminunc, fsolve, lsqcurvefit, lsqlin, lsqnonlin, quadprog

UseParallel

When true, applicable solvers estimate gradients in parallel. Disable by setting to false.

fgoalattain, fmincon, fminimax

Output Function

The OutputFcn field of options specifies one or more functions that an optimization function calls at each iteration. Typically, you might use an output function to plot points at each iteration or to display optimization quantities from the algorithm. Using an output function, you can view, but not set, optimization quantities. To set up an output function, do the following:

  1. Write the output function as a function file or local function.

  2. Use optimoptions to set the value of OutputFcn to be a function handle, that is, the name of the function preceded by the @ sign. For example, if the output function is outfun.m, the command

     options = optimoptions(@solvername,'OutputFcn', @outfun);

    specifies OutputFcn to be the handle to outfun. To specify more than one output function, use the syntax

     options = optimoptions(@solvername,'OutputFcn',{@outfun, @outfun2});
  3. Call the optimization function with options as an input argument.

See Output Functions for an example of an output function.

Passing Extra Parameters explains how to parameterize the output function OutputFcn, if necessary.

Structure of the Output Function

The function definition line of the output function has the following form:

stop = outfun(x, optimValues, state)

where

  • x is the point computed by the algorithm at the current iteration.

  • optimValues is a structure containing data from the current iteration. Fields in optimValues describes the structure in detail.

  • state is the current state of the algorithm. States of the Algorithm lists the possible values.

  • stop is a flag that is true or false depending on whether the optimization routine should quit or continue. See Stop Flag for more information.

The optimization function passes the values of the input arguments to outfun at each iteration.
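
For example, the following end-to-end sketch records the point history in an output function and then runs fminunc with that output function. The driver name runWithOutputFcn, the objective function, and the starting point are placeholders chosen for illustration; only the output-function signature and the OutputFcn option come from this reference.

function runWithOutputFcn
% Illustrative driver: set OutputFcn and call a solver.
history = [];                          % shared with the nested output function

fun = @(x) x(1)^2 + 3*x(2)^2;          % placeholder objective
x0  = [2 1];

options = optimoptions(@fminunc,'OutputFcn',@outfun,'Display','iter');
xsol = fminunc(fun,x0,options);
disp(history)                          % one row of x per iteration

    function stop = outfun(x,optimValues,state)
        stop = false;                  % do not request early termination
        if strcmp(state,'iter')
            history = [history; x];    % record the current point
        end
    end
end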

Fields in optimValues

The following table lists the fields of the optimValues structure. A particular optimization function returns values for only some of these fields. For each field, the Returned by Functions column of the table lists the functions that return the field.

Corresponding Output Arguments.  Some of the fields of optimValues correspond to output arguments of the optimization function. After the final iteration of the optimization algorithm, the value of such a field equals the corresponding output argument. For example, optimValues.fval corresponds to the output argument fval. So, if you call fmincon with an output function and return fval, the final value of optimValues.fval equals fval. The Description column of the following table indicates the fields that have a corresponding output argument.

Command-Line Display.  The values of some fields of optimValues are displayed at the command line when you call the optimization function with the Display field of options set to 'iter', as described in Iterative Display. For example, optimValues.fval is displayed in the f(x) column. The Command-Line Display column of the following table indicates the fields that you can display at the command line.

Some optimValues fields apply only to specific algorithms:

  • AS — active-set

  • D — trust-region-dogleg

  • IP — interior-point

  • LM — levenberg-marquardt

  • Q — quasi-newton

  • SQP — sqp

  • TR — trust-region

  • TRR — trust-region-reflective

optimValues Fields

Each entry lists the optimValues field (optimValues.field), its description, the functions that return it, and its command-line display heading.

attainfactor

Attainment factor for multiobjective problem. For details, see Goal Attainment Method.

fgoalattain

None

cgiterations

Number of conjugate gradient iterations at current optimization iteration.

fmincon (IP, TRR), fsolve (TRR), lsqcurvefit (TRR), lsqnonlin (TRR)

CG-iterations

See Iterative Display.

constrviolation

Maximum constraint violation.

fgoalattain, fmincon, fminimax, fseminf

Max constraint or Feasibility

See Iterative Display.

degenerate

Measure of degeneracy. A point is degenerate if:

  • The partial derivative with respect to one of the variables is 0 at the point.

  • A bound constraint is active for that variable at the point.

See Degeneracy.

fmincon (TRR), lsqcurvefit (TRR), lsqnonlin (TRR)

None

directionalderivative

Directional derivative in the search direction.

fgoalattain, fmincon (AS), fminimax, fminunc (Q), fseminf, fsolve (LM), lsqcurvefit (LM), lsqnonlin (LM)

Directional derivative

See Iterative Display.

firstorderopt

First-order optimality (depends on algorithm). Final value equals optimization function output output.firstorderopt.

fgoalattain, fmincon, fminimax, fminunc, fseminf, fsolve, lsqcurvefit, lsqnonlin

First-order optimality

See Iterative Display.

funccount

Cumulative number of function evaluations. Final value equals optimization function output output.funcCount.

fgoalattain, fminbnd, fmincon, fminimax, fminsearch, fminunc, fsolve, fzero, fseminf, lsqcurvefit, lsqnonlin

F-count or Func-count

See Iterative Display.

fval

Function value at current point. Final value equals optimization function output fval.

fgoalattain, fminbnd, fmincon, fminimax, fminsearch, fminunc, fseminf, fsolve, fzero

f(x)

See Iterative Display.

gradient

Current gradient of objective function — either analytic gradient if you provide it or finite-differencing approximation. Final value equals optimization function output grad.

fgoalattain, fmincon, fminimax, fminunc, fseminf, fsolve, lsqcurvefit, lsqnonlin

None

iteration

Iteration number — starts at 0. Final value equals optimization function output output.iterations.

fgoalattain, fminbnd, fmincon, fminimax, fminsearch, fminunc, fsolve, fseminf, fzero, lsqcurvefit, lsqnonlin

Iteration

See Iterative Display.

lambda

The Levenberg-Marquardt parameter, lambda, at the current iteration. See Levenberg-Marquardt Method.

fsolve (LM), lsqcurvefit (LM), lsqnonlin (LM)

Lambda

maxfval

Maximum function value

fminimax

None

positivedefinite

0 if algorithm detects negative curvature while computing Newton step.

1 otherwise.

fmincon (TRR), fminunc (TRR), fsolve (TRR), lsqcurvefit (TRR), lsqnonlin (TRR)

None

procedure

Procedure messages.

fgoalattain, fminbnd, fmincon (AS), fminimax, fminsearch, fseminf, fzero

Procedure

See Iterative Display.

ratio

Ratio of change in the objective function to change in the quadratic approximation.

fmincon (TRR), fsolve (TRR), lsqcurvefit (TRR), lsqnonlin (TRR)

None

residual

The residual vector. For fsolve, residual means the 2-norm of the residual squared.

lsqcurvefit, lsqnonlin, fsolve

Residual

See Iterative Display.

resnorm

2-norm of the residual squared.

lsqcurvefit, lsqnonlin

Resnorm

See Iterative Display.

searchdirection

Search direction.

fgoalattain, fmincon (AS, SQP), fminimax, fminunc (Q), fseminf, fsolve (LM), lsqcurvefit (LM), lsqnonlin (LM)

None

stepaccept

Status of the current trust-region step. Returns true if the current trust-region step was successful, and false if the trust-region step was unsuccessful.

fsolve (D)

None

stepsize

Current step size (displacement in x). Final value equals optimization function output output.stepsize.

fgoalattain, fmincon, fminimax, fminunc, fseminf, fsolve, lsqcurvefit, lsqnonlin

Step-size or Norm of Step

See Iterative Display.

trustregionradius

Radius of trust region.

fmincon (IP, TRR), fminunc (TR), fsolve (D, TRR), lsqcurvefit (TRR), lsqnonlin (TRR)

Trust-region radius

See Iterative Display.

Degeneracy.  The value of the field degenerate, which measures the degeneracy of the current optimization point x, is defined as follows. First, define a vector r, of the same size as x, for which r(i) is the minimum distance from x(i) to the ith entries of the lower and upper bounds, lb and ub. That is,

r = min(abs(ub-x), abs(x-lb))

Then the value of degenerate is the minimum entry of the vector r + abs(grad), where grad is the gradient of the objective function. The value of degenerate is 0 if there is an index i for which both of the following are true:

  • grad(i) = 0

  • x(i) equals the ith entry of either the lower or upper bound.
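
As a concrete illustration, the following sketch computes degenerate from the definition above. The values of x, lb, ub, and grad are made up; in practice they come from the current iterate. Because x(3) sits on its lower bound and grad(3) is 0, the result is 0.

% Placeholder data for the current point, bounds, and gradient.
x    = [0.5; 2.0; 1.0];
lb   = [0;   0;   1.0];
ub   = [1;   5;   3.0];
grad = [0.3; -0.2; 0];

% Minimum distance from each x(i) to its lower and upper bounds.
r = min(abs(ub-x), abs(x-lb));

% degenerate is the smallest entry of r + abs(grad).
degenerate = min(r + abs(grad))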

States of the Algorithm

The following list gives the possible values of state and their descriptions:

'init'

The algorithm is in the initial state before the first iteration.

'interrupt'

The algorithm is in some computationally expensive part of the iteration. In this state, the output function can interrupt the current iteration of the optimization. At this time, the values of x and optimValues are the same as at the last call to the output function in which state=='iter'.

'iter'

The algorithm is at the end of an iteration.

'done'

The algorithm is in the final state after the last iteration.

The following code illustrates how the output function might use the value of state to decide which tasks to perform at the current iteration:

switch state
    case 'iter'
          % Make updates to plot or guis as needed
    case 'interrupt'
          % Probably no action here. Check conditions to see  
          % whether optimization should quit.
    case 'init'
          % Setup for plots or guis
    case 'done'
          % Cleanup of plots, guis, or final plot
    otherwise
end

Stop Flag

The output argument stop is a flag that is true or false. The flag tells the optimization function whether the optimization should quit or continue. The following examples show typical ways to use the stop flag.

Stopping an Optimization Based on Data in optimValues.  The output function can stop an optimization at any iteration based on the current data in optimValues. For example, the following code sets stop to true if the directional derivative is less than .01:

function stop = outfun(x,optimValues,state)
stop = false;
% Check if directional derivative is less than .01.
if optimValues.directionalderivative < .01
    stop = true;
end 
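
To use this output function, pass its handle in the OutputFcn option and call a solver whose optimValues structure includes directionalderivative, such as fminunc with the quasi-newton algorithm (see the table above). The following sketch assumes outfun is saved as outfun.m on the path; the objective function and starting point are placeholders.

% Placeholder problem; any smooth objective works here.
fun = @(x) (x(1)-3)^2 + (x(2)+1)^2;
x0  = [0 0];

options = optimoptions(@fminunc, ...
    'Algorithm','quasi-newton', ...
    'OutputFcn',@outfun, ...
    'Display','iter');
[x,fval,exitflag,output] = fminunc(fun,x0,options);
% An exitflag of -1 indicates that an output function halted the solver.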

Stopping an Optimization Based on GUI Input.  If you design a GUI to perform optimizations, you can make the output function stop an optimization when a user clicks a Stop button on the GUI. The following code shows how to do this, assuming that the Stop button callback stores the value true in the optimstop field of a handles structure called hObject:

function stop = outfun(x,optimValues,state)
stop = false;
% Check if user has requested to stop the optimization.
stop = getappdata(hObject,'optimstop');

Plot Functions

The PlotFcns field of the options structure specifies one or more functions that an optimization function calls at each iteration to plot various measures of progress while the algorithm executes. The structure of a plot function is the same as that for an output function. For more information on writing and calling a plot function, see Output Function. For an example of using built-in plot functions, see Using a Plot Function.

To view a predefined plot function listed for PlotFcns, you can open it in the MATLAB® Editor. For example, to view the file corresponding to the norm of residuals, enter:

edit optimplotresnorm.m

You can use any predefined plot function as a template for writing a custom plot function.
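
As a brief illustration, the following sketch selects two of the predefined plot functions listed earlier; the objective function and starting point are placeholders.

% Plot the objective value and first-order optimality at each iteration.
fun = @(x) (x(1)-1)^2 + 10*(x(2)-x(1)^2)^2;   % placeholder objective
x0  = [-1 2];

options = optimoptions(@fminunc, ...
    'PlotFcns',{@optimplotfval,@optimplotfirstorderopt});
x = fminunc(fun,x0,options);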
