HyperMath

FMinUncon


Minimizes a smooth multivariate function f(x). The second form allows for specifying the gradient of the objective.

Syntax

x, f, dh, oh = FMinUncon(objFunc, init, maxIter, tol, userdata)

x, f, dh, oh = FMinUncon(objFunc, gradFunc, init, maxIter, tol, userdata)

Arguments

Name

Description

 

objFunc

A string that contains the name of the user-defined objective function. It must return a real number or a one-element vector. See the Comments section.

 

gradFunc

A string that contains the name of the user-defined gradient function. It must return a vector whose length equals the number of design variables. If the vector has one element, a real number may be returned instead. See the Comments section.

 

init

Initial estimates for the variables at which the minimum occurs.

 

maxIter

(optional)

A one-element vector containing the maximum number of iterations allowed. If it is empty or omitted, it defaults to 200.

 

tol

(optional)

A one-element vector containing the convergence tolerance for the algorithm. Defaults to 1.0e-6.

 

userdata

(optional)

A data matrix that is passed to the user-defined function at each iteration. It can be used to supply data that are defined outside the function.

Output

Name

Description

 

x

The location of the function minimum. It may be only a local minimum.

 

f

The minimum value of the function.

 

dh

A matrix containing the design value history. Each column of the matrix contains the iteration step values for a design variable.

 

oh

A row vector containing the objective function history at each iteration step.

Example 1

Find the minimum of the Rosenbrock function.

f(x,y) = (1 - x)^2 + 100 * (y - x^2)^2

 

Syntax

// define the function to be minimized

function Rosenbrock(p, d)

{

   // p is the vector (x,y).

   // d is the userdata. May be omitted when not used.

   out = (1 - p(1))^2 + 100 * (p(2) - p(1)^2)^2;

   return out

}

 

// find the minimum

initial = [-1.2, 1.0]        // estimate of minimum

x, f = FMinUncon ("Rosenbrock", initial, [100], [1.0e-6])

 

Result

x = 1     1

f = 3.523881796301e-015

Example 2

Modify the previous example to use the analytical gradient vector.

 

Syntax

 

// define the gradient function

function GradFunc(p)

{

   out = []

   out(1) = -2 * (1 - p(1)) - 400 * (p(2) - p(1)^2) * p(1)

   out(2) = 200 * (p(2) - p(1)^2)

   return out

}

 

// find the minimum

initial = [-1.2, 1.0]        // estimate of minimum

x, f = FMinUncon ("Rosenbrock", "GradFunc", initial, [100], [1.0e-6])

 

Result

x = 1     1

f = 3.523881796301e-015

Comments

FMinUncon is designed to work with objective functions that have continuous gradients. If that is not the case, the chances of success may be greater using GA.

The solver calls the user-defined function repeatedly until the objective is no longer decreasing within the given tolerance. The function accepts a vector of the same size as init containing the variable values as its first argument. Optionally, it can accept a data matrix as its second argument.
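To illustrate the optional userdata argument, the Rosenbrock example above can be generalized so that the coefficient 100 is supplied from outside the function. This is only a sketch: the function name ScaledRosenbrock and the variable c are illustrative, not part of the library.

// parameterized objective: the coefficient comes in through userdata
function ScaledRosenbrock(p, d)
{
   // p is the vector (x,y); d is the userdata matrix
   out = (1 - p(1))^2 + d(1) * (p(2) - p(1)^2)^2;
   return out
}

initial = [-1.2, 1.0]
c = [100]                    // data defined outside the function
x, f = FMinUncon ("ScaledRosenbrock", initial, [100], [1.0e-6], c)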

If the function that computes the gradient vector is omitted, the vector will be computed internally with numerical derivatives.
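Conceptually, the internally computed gradient resembles a central-difference scheme such as the sketch below. The step size h and the function name NumGradSketch are illustrative assumptions; the solver's actual scheme and step size are not documented here.

// central-difference gradient for the two-variable Rosenbrock example
// (illustrative only; uses the Rosenbrock function from Example 1)
function NumGradSketch(p)
{
   h = 1.0e-7               // illustrative step size
   out = []
   out(1) = (Rosenbrock([p(1) + h, p(2)]) - Rosenbrock([p(1) - h, p(2)])) / (2 * h)
   out(2) = (Rosenbrock([p(1), p(2) + h]) - Rosenbrock([p(1), p(2) - h])) / (2 * h)
   return out
}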

The output vector x will have the same orientation as the input vector init. However, the vector for the design variables that is passed to the user functions will be a column vector.

See Also:

FMinBnd

FMinCon

GA

NLSolve

NLCurveFit