Optimization of Functions of Several Variables


One of the most useful applications for derivatives of a function of one variable is the determination of maximum and/or minimum values. This application is also important for functions of two or more variables, but as we have seen in earlier sections of this chapter, the introduction of more independent variables leads to more possible outcomes for the calculations. The main ideas of finding critical points and using derivative tests are still valid, but new wrinkles appear when assessing the results.

Critical Points

For functions of a single variable, we defined critical points as the values in the domain of the function at which the derivative equals zero or does not exist. For functions of two or more variables, the concept is essentially the same, except that we are now working with partial derivatives.

Definition: Critical Points

Let (z=f(x,y)) be a function of two variables that is differentiable on an open set containing the point ((x_0,y_0)). The point ((x_0,y_0)) is called a critical point of a function of two variables (f) if one of the two following conditions holds:

  1. (f_x(x_0,y_0)=f_y(x_0,y_0)=0)
  2. Either (f_x(x_0,y_0)) or (f_y(x_0,y_0)) does not exist.

Example (\PageIndex{1}): Finding Critical Points

Find the critical points of each of the following functions:

  1. (f(x,y)=\sqrt{4y^2−9x^2+24y+36x+36})
  2. (g(x,y)=x^2+2xy−4y^2+4x−6y+4)

Solution:

a. First, we calculate (f_x(x,y)) and (f_y(x,y)):

[\begin{align*} f_x(x,y) &= \dfrac{1}{2}(−18x+36)(4y^2−9x^2+24y+36x+36)^{−1/2} \\[4pt] &= \dfrac{−9x+18}{\sqrt{4y^2−9x^2+24y+36x+36}} \end{align*}]

[\begin{align*} f_y(x,y) &= \dfrac{1}{2}(8y+24)(4y^2−9x^2+24y+36x+36)^{−1/2} \\[4pt] &= \dfrac{4y+12}{\sqrt{4y^2−9x^2+24y+36x+36}}. \end{align*}]

Next, we set each of these expressions equal to zero:

[\begin{align*} \dfrac{−9x+18}{\sqrt{4y^2−9x^2+24y+36x+36}} &= 0 \\[4pt] \dfrac{4y+12}{\sqrt{4y^2−9x^2+24y+36x+36}} &= 0. \end{align*}]

Then, multiply both sides of each equation by its denominator (to clear the denominators):

[\begin{align*} −9x+18 &= 0 \\[4pt] 4y+12 &= 0. \end{align*}]

Therefore, (x=2) and (y=−3,) so ((2,−3)) is a critical point of (f).

We must also check for the possibility that the denominator of each partial derivative can equal zero, thus causing the partial derivative not to exist. Since the denominator is the same in each partial derivative, we need only do this once:

[4y^2−9x^2+24y+36x+36=0. \nonumber]

This equation represents a hyperbola. We should also note that the domain of (f) consists of points satisfying the inequality

[4y^2−9x^2+24y+36x+36≥0. \nonumber]

Therefore, any points on the hyperbola are not only critical points, they are also on the boundary of the domain. To put the hyperbola in standard form, we use the method of completing the square:

[\begin{align*} 4y^2−9x^2+24y+36x+36 &= 0 \\[4pt] 4y^2−9x^2+24y+36x &= −36 \\[4pt] 4y^2+24y−9x^2+36x &= −36 \\[4pt] 4(y^2+6y)−9(x^2−4x) &= −36 \\[4pt] 4(y^2+6y+9)−9(x^2−4x+4) &= −36−36+36 \\[4pt] 4(y+3)^2−9(x−2)^2 &= −36. \end{align*}]

Dividing both sides by (−36) puts the equation in standard form:

[\begin{align*} \dfrac{4(y+3)^2}{−36}−\dfrac{9(x−2)^2}{−36} &= 1 \\[4pt] \dfrac{(x−2)^2}{4}−\dfrac{(y+3)^2}{9} &= 1. \end{align*}]

Notice that point ((2,−3)) is the center of the hyperbola.

Thus, the critical points of the function (f) are ((2, -3)) and all points on the hyperbola (\dfrac{(x−2)^2}{4}−\dfrac{(y+3)^2}{9}=1).

b. First, we calculate (g_x(x,y)) and (g_y(x,y)):

[\begin{align*} g_x(x,y) &= 2x+2y+4 \\[4pt] g_y(x,y) &= 2x−8y−6. \end{align*}]

Next, we set each of these expressions equal to zero, which gives a system of equations in (x) and (y):

[\begin{align*} 2x+2y+4 &= 0 \\[4pt] 2x−8y−6 &= 0. \end{align*}]

Subtracting the second equation from the first gives (10y+10=0), so (y=−1). Substituting this into the first equation gives (2x+2(−1)+4=0), so (x=−1).

Therefore ((−1,−1)) is a critical point of (g). There are no points in (\mathbb{R}^2) that make either partial derivative fail to exist.

Figure (\PageIndex{1}) shows the behavior of the surface at the critical point.

Exercise (\PageIndex{1})

Find the critical point of the function (f(x,y)=x^3+2xy−2x−4y.)

Hint

Calculate (f_x(x,y)) and (f_y(x,y)), then set them equal to zero.

Answer

The only critical point of (f) is ((2,−5)).
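As a quick check of this answer (a computation not shown in the original): setting the partial derivatives equal to zero gives

[\begin{align*} f_x(x,y) &= 3x^2+2y−2 = 0 \\[4pt] f_y(x,y) &= 2x−4 = 0, \end{align*}]

so the second equation forces (x=2), and substituting (x=2) into the first gives (12+2y−2=0), or (y=−5).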

Determining Global and Local Extrema

The main purpose for determining critical points is to locate relative maxima and minima, as in single-variable calculus. When working with a function of one variable, the definition of a local extremum involves finding an interval around the critical point such that the function value is either greater than or less than all the other function values in that interval. When working with a function of two or more variables, we work with an open disk around the point.

Definition: Global and Local Extrema

Let (z=f(x,y)) be a function of two variables that is defined and continuous on an open set containing the point ((x_0,y_0).) Then (f) has a local maximum at ((x_0,y_0)) if

[f(x_0,y_0)≥f(x,y)]

for all points ((x,y)) within some disk centered at ((x_0,y_0)). The number (f(x_0,y_0)) is called a local maximum value. If the preceding inequality holds for every point ((x,y)) in the domain of (f), then (f) has a global maximum (also called an absolute maximum) at ((x_0,y_0).)

The function (f) has a local minimum at ((x_0,y_0)) if

[f(x_0,y_0)≤f(x,y)]

for all points ((x,y)) within some disk centered at ((x_0,y_0)). The number (f(x_0,y_0)) is called a local minimum value. If the preceding inequality holds for every point ((x,y)) in the domain of (f), then (f) has a global minimum (also called an absolute minimum) at ((x_0,y_0)).

If (f(x_0,y_0)) is either a local maximum or local minimum value, then it is called a local extremum (Figure (\PageIndex{2})).

In Calculus 1, we showed that extrema of functions of one variable occur at critical points. The same is true for functions of more than one variable, as stated in the following theorem.

Fermat’s Theorem for Functions of Two Variables

Let (z=f(x,y)) be a function of two variables that is defined and continuous on an open set containing the point ((x_0,y_0)). Suppose (f_x) and (f_y) each exist at ((x_0,y_0)). If (f) has a local extremum at ((x_0,y_0)), then ((x_0,y_0)) is a critical point of (f).

Consider the function (f(x)=x^3.) This function has a critical point at (x=0), since (f'(0)=3(0)^2=0). However, (f) does not have an extreme value at (x=0). Therefore, the existence of a critical point at (x=x_0) does not guarantee a local extremum at (x=x_0). The same is true for a function of two or more variables. One way this can happen is at a saddle point. An example of a saddle point appears in the following figure.

Figure (\PageIndex{3}): Graph of the function (z=x^2−y^2). This graph has a saddle point at the origin.

In this graph, the origin is a saddle point. This is because the first partial derivatives of (f(x,y)=x^2−y^2) are both equal to zero at this point, but it is neither a maximum nor a minimum for the function. Furthermore, the vertical trace corresponding to (y=0) is (z=x^2) (a parabola opening upward), while the vertical trace corresponding to (x=0) is (z=−y^2) (a parabola opening downward). Therefore, the origin is a global minimum for one trace and a global maximum for the other.

Definition: Saddle Point

Given the function (z=f(x,y),) the point (\big(x_0,y_0,f(x_0,y_0)\big)) is a saddle point if both (f_x(x_0,y_0)=0) and (f_y(x_0,y_0)=0), but (f) does not have a local extremum at ((x_0,y_0).)

Classifying Critical Points

In order to develop a general method for classifying the behavior of a function of two variables at its critical points, we need to begin by classifying the behavior of quadratic polynomial functions of two variables at their critical points.

To see why this will help us, consider that the quadratic approximation of a function of two variables (its 2nd-degree Taylor polynomial) shares the same first and second partials as the function it approximates at the chosen point of tangency (or center point). Since sharing the same second partials means the two surfaces will share the same concavity (or curvature) at the critical point, this causes these quadratic approximation surfaces to share the same behavior as the function (z = f(x, y)) that they approximate at the point of tangency. In other words, if the original function has a relative maximum at this point, so will the quadratic approximation. If the original function has a relative minimum at this point, so will the quadratic approximation, and if the original function has a saddle point at this point, so will the quadratic approximation.

There are really just three basic behaviors of a quadratic polynomial in two variables at a critical point. It will fit one of the following three forms, often as a transformation of one of these basic functions.

  1. A sum of two squared terms, like (z = x^2 + y^2), producing a paraboloid that opens up and has a relative (absolute) minimum at its vertex. See the plot on the left side of Figure (\PageIndex{4}).
  2. The negative of a sum of two squared terms, like (z = -\left(x^2 + y^2\right)), producing a paraboloid that opens down and has a relative (absolute) maximum at its vertex. See the plot on the right side of Figure (\PageIndex{4}).
  3. The difference of two squared terms, like (z = f(x, y) = x^2 - y^2) or (z = f(x, y) = y^2 - x^2), producing a saddle with a saddle point at its critical point. See Figure (\PageIndex{3}).

Figure (\PageIndex{4}): (z = x^2 + y^2) has an absolute minimum of (0) at ((0,0)), while (z = -(x^2 + y^2)) has an absolute maximum of (0) at ((0,0)).

Example (\PageIndex{2}): Classifying the Critical Points of a Function

Use completing the square to identify local extrema or saddle points of the following quadratic polynomial functions:

  1. (f(x,y) = x^2 - 6x + y^2 + 10y + 20)
  2. (f(x,y) = 12 - 3x^2 - 6x - y^2 + 12y)
  3. (f(x,y) = x^2 + 8x - 2y^2 + 16y)
  4. (f(x,y) = x^2 + 6xy + y^2)

Solution

a. To determine the critical points of this function, we start by setting the partials of (f) equal to (0). [\begin{align*} \text{Set}\quad f_x(x,y) &= 2x - 6 = 0 &&\implies x = 3 \\[4pt] \text{and}\quad f_y(x,y) &= 2y + 10 = 0 &&\implies y = -5 \end{align*}]We obtain a single critical point with coordinates ((3, -5)). Next we need to determine the behavior of the function (f) at this point.

Completing the square, we get: [\begin{align*} f(x,y) &= x^2 - 6x + y^2 + 10y + 20 \\[4pt] &= x^2 - 6x + 9 + y^2 + 10y + 25 + 20 - 9 - 25 \\[4pt] &= (x - 3)^2 + (y + 5)^2 - 14 \end{align*}]Notice that this function is really just a translated version of (z = x^2 + y^2), so it is a paraboloid that opens up with its vertex (minimum point) at the critical point ((3, -5)). We can argue that it has an absolute minimum value of (-14) at the point ((3, -5)), since we are adding squared terms to (-14) and thus cannot get a value less than (-14) for any values of (x) and (y), while we do obtain this minimum value of (-14) at the vertex point ((3, -5)).

b. Setting the partials of (f) equal to (0), we obtain: [\begin{align*} \text{Set}\quad f_x(x,y) &= -6x - 6 = 0 &&\implies x = -1 \\[4pt] \text{and}\quad f_y(x,y) &= -2y + 12 = 0 &&\implies y = 6 \end{align*}]We obtain a single critical point with coordinates ((-1, 6)). Next we need to determine the behavior of the function (f) at this point.

To complete the square here, we first need to factor out the factors of the squared terms. Doing this and reordering the terms gives us: [\begin{align*} f(x,y) &= 12 - 3x^2 - 6x - y^2 + 12y \\[4pt] &= -3\left(x^2 + 2x \quad\quad\right) - 1\left(y^2 - 12y \quad\quad\right) + 12 \\[4pt] &= -3\left(x^2 + 2x + 1\right) - 1\left(y^2 - 12y + 36\right) + 12 + 3 + 36 \\[4pt] &= 51 - 3(x + 1)^2 - (y - 6)^2 \end{align*}]Notice that this function is an elliptic paraboloid that opens down with its vertex (maximum point) at the critical point ((-1, 6)). We can argue that it has an absolute maximum value of (51) at the point ((-1, 6)), since we are subtracting squared terms from (51) and thus cannot get a value more than (51) for any values of (x) and (y), while we do obtain this maximum value of (51) at the vertex point ((-1, 6)).

c. Setting the partials of (f) equal to (0), we obtain: [\begin{align*} \text{Set}\quad f_x(x,y) &= 2x + 8 = 0 &&\implies x = -4 \\[4pt] \text{and}\quad f_y(x,y) &= -4y + 16 = 0 &&\implies y = 4 \end{align*}]This gives us a critical point with coordinates ((-4, 4)). To determine if (f) has a local extremum or saddle point at this point, we complete the square.

Factoring out (-2) from the (y)-squared term gives us: [\begin{align*} f(x,y) &= x^2 + 8x - 2y^2 + 16y \\[4pt] &= x^2 + 8x + 16 - 2\left(y^2 - 8y + 16\right) - 16 + 32 \\[4pt] &= (x + 4)^2 - 2(y - 4)^2 + 16 \end{align*}]Since one squared term is positive and one is negative, we see that this function has the form of (z = x^2 - y^2) and so it has a saddle point at its critical point. That is, (f) has a saddle point at ((-4, 4, 16)).

d. Setting the partials of (f) equal to (0), we get: [\begin{align*} \text{Set}\quad f_x(x,y) &= 2x + 6y = 0 \\[4pt] \text{and}\quad f_y(x,y) &= 6x + 2y = 0 &&\implies y = -3x \end{align*}]Substituting (-3x) into the first equation for (y) gives us [\begin{align*} 2x + 6(-3x) &= 0 \\[4pt] -16x &= 0 \\[4pt] x &= 0 \end{align*}]Since (y = -3x), we have (y = -3(0) = 0), so the critical point of (f) is ((0,0)). To determine the behavior of (f) at this critical point, we complete the square.

[\begin{align*} f(x,y) &= x^2 + 6xy + y^2 \\[4pt] &= (x^2 + 6xy + 9y^2) + y^2 - 9y^2 \\[4pt] &= (x + 3y)^2 - 8y^2 \end{align*}]As this produces a difference of squares with one positive squared term and the other a negative squared term, we see that (f) takes a form similar to (z = x^2 - y^2) and will have a saddle point at ((0, 0, 0)).

Now let's consider the quadratic approximation to a function (z = f(x, y)) centered at a critical point ( (x_0, y_0) ) of this function.

[Q(x, y) = f(x_0, y_0) + f_x(x_0, y_0)(x - x_0) + f_y(x_0, y_0)(y - y_0) + \frac{f_{xx}(x_0, y_0)}{2}(x-x_0)^2 + f_{xy}(x_0, y_0)(x-x_0)(y-y_0) + \frac{f_{yy}(x_0, y_0)}{2}(y-y_0)^2]

But, since the point ( (x_0, y_0) ), in this case, is a critical point of (f), we know that (f_x(x_0, y_0) = 0) and (f_y(x_0, y_0) = 0).

This allows us to simplify (Q(x, y)) to just:

[Q(x, y) = f(x_0, y_0) + \frac{f_{xx}(x_0, y_0)}{2}(x-x_0)^2 + f_{xy}(x_0, y_0)(x-x_0)(y-y_0) + \frac{f_{yy}(x_0, y_0)}{2}(y-y_0)^2]

Now we need to complete the square on this quadratic polynomial in two variables to learn how we can classify the behavior of this function at this critical point. Remember that the original function will share the same behavior (max, min, saddle point) as this 2nd-degree Taylor polynomial at this critical point.

To make this process easier, let's make some substitutions: let (u = x - x_0) and (v = y - y_0),

and let [a = \frac{f_{xx}(x_0, y_0)}{2}, \quad b = f_{xy}(x_0, y_0), \quad c = \frac{f_{yy}(x_0, y_0)}{2}, \quad\text{and}\quad d = f(x_0, y_0).]

Then we need to complete the square on the polynomial: [ Q(x,y) = au^2 +buv + cv^2 + d]

Completing the square:

First we factor out the coefficient of (u^2): [= a\left[u^2 + \frac{b}{a}uv + \frac{c}{a}v^2\right] + d]

Next, we complete the square using the first two terms: [= a\left[\left(u^2 + \frac{b}{a}uv + \left(\frac{b}{2a}v\right)^2\right) + \frac{c}{a}v^2 - \left(\frac{b}{2a}v\right)^2\right] + d]

Rewriting the perfect square trinomial as the square of a binomial and combining the (v^2) terms yields:

[\begin{align*} &= a\left[\left(u + \frac{b}{2a}v\right)^2 + \left(\frac{c}{a} - \frac{b^2}{4a^2}\right)v^2\right] + d \\[4pt]
&= a\left[\left(u + \frac{b}{2a}v\right)^2 + \left(\frac{4ac}{4a^2} - \frac{b^2}{4a^2}\right)v^2\right] + d \\[4pt]
&= a\left[\left(u + \frac{b}{2a}v\right)^2 + \left(\frac{4ac-b^2}{4a^2}\right)v^2\right] + d \end{align*}]

Note that the shape of this function's graph depends on the sign of the coefficient of (v^2). And the sign of this coefficient is determined only by its numerator, as the denominator is always positive (being a perfect square). This expression, (4ac-b^2), is called the discriminant, as it helps us discriminate (tell the difference between) which behavior the function has at this critical point.

If (D = 4ac-b^2 > 0), then the two squared terms inside the brackets are both positive, and

  • if (a = \frac{f_{xx}(x_0, y_0)}{2} > 0), the function (f) opens upwards with a local minimum at the critical point ((x_0, y_0)). Note it would be similar to the form (z = x^2 + y^2).
  • if (a = \frac{f_{xx}(x_0, y_0)}{2} < 0), the function (f) opens downwards with a local maximum at the critical point ((x_0, y_0)). Note it would be similar to the form (z = -\left(x^2 + y^2\right)).

If (D = 4ac-b^2 < 0), then either

  • the two squared terms inside the brackets have opposite signs (meaning (f) is concave up along a line parallel to the (x)-axis and concave down along a line parallel to the (y)-axis, or vice-versa) or
  • the (b^2) term, representing the square of the mixed partial (f_{xy}(x_0, y_0)), is larger than the positive product of the two 2nd-partials (f_{xx}(x_0, y_0)) and (f_{yy}(x_0, y_0)). This means that even if the surface is concave up in both (x)- and (y)-directions, or concave down in both (x)- and (y)-directions, a large mixed partial can offset these and cause the surface to have a saddle point at the point ((x_0, y_0)).

In either case, the quadratic polynomial will be in the form of (z = x^2 - y^2) or (z = y^2 - x^2) (i.e., it will be the difference of two squared terms), so we get a saddle point at the critical point ( (x_0, y_0) ).

But if (D = 4ac-b^2 = 0), the quadratic polynomial reduces to (Q(x,y) = a\left(u + \frac{b}{2a}v\right)^2 + d), whose graph is a parabolic cylinder, so the behavior of the function is not clear at the critical point ((x_0, y_0)).

Now remembering the values of the constants (a), (b), and (c) from above, we see that: [\begin{align*} D(x_0, y_0) &= 4\cdot\frac{f_{xx}(x_0, y_0)}{2}\cdot\frac{f_{yy}(x_0, y_0)}{2} - \big(f_{xy}(x_0, y_0)\big)^2 \\[4pt] &= f_{xx}(x_0, y_0)\,f_{yy}(x_0, y_0) - \big(f_{xy}(x_0, y_0)\big)^2 \end{align*}]

This formula gives the discriminant used in the Second Partials Test, which can be used to classify the behavior of any function at its critical points, as long as its second partials exist there and as long as the value of this discriminant is not zero.

The Second Partials Test

The second derivative test for a function of one variable provides a method for determining whether an extremum occurs at a critical point of a function. When extending this result to a function of two variables, an issue arises related to the fact that there are, in fact, four different second-order partial derivatives, although equality of mixed partials reduces this to three. The second partials test for a function of two variables, stated in the following theorem, uses a discriminant (D) that replaces (f''(x_0)) in the second derivative test for a function of one variable.

Second Partials Test

Let (z=f(x,y)) be a function of two variables for which the first- and second-order partial derivatives are continuous on some disk containing the point ((x_0,y_0)). Suppose (f_x(x_0,y_0)=0) and (f_y(x_0,y_0)=0.) Define the quantity

[D=f_{xx}(x_0,y_0)\,f_{yy}(x_0,y_0)−\big(f_{xy}(x_0,y_0)\big)^2.]

Then:

  1. If (D>0) and (f_{xx}(x_0,y_0)>0), then (f) is concave up at this critical point, so (f) has a local minimum at ((x_0,y_0)).
  2. If (D>0) and (f_{xx}(x_0,y_0)<0), then (f) is concave down at this critical point, so (f) has a local maximum at ((x_0,y_0)).
  3. If (D<0), then (f) has a saddle point at ((x_0,y_0)).
  4. If (D=0), then the test is inconclusive. (For example, (f(x,y)=x^4+y^4) and (f(x,y)=x^4−y^4) both have (D=0) at the origin, yet the first has a local minimum there and the second has a saddle point.)

See Figure (\PageIndex{4}).

To apply the second partials test, it is necessary that we first find the critical points of the function. There are several steps involved in the entire procedure, which are outlined in a problem-solving strategy.

Problem-Solving Strategy: Using the Second Partials Test for Functions of Two Variables

Let (z=f(x,y)) be a function of two variables for which the first- and second-order partial derivatives are continuous on some disk containing the point ((x_0,y_0).) To apply the second partials test to find local extrema, use the following steps:

  1. Determine the critical points ((x_0,y_0)) of the function (f) where (f_x(x_0,y_0)=f_y(x_0,y_0)=0.) If you find any critical points where at least one of the partial derivatives does not exist, you will need to find and justify extrema in another way, as you can't use the second partials test.
  2. Calculate the discriminant (D=f_{xx}(x_0,y_0)\,f_{yy}(x_0,y_0)−\big(f_{xy}(x_0,y_0)\big)^2) for each critical point of (f).
  3. Apply the four cases of the test to determine whether each critical point is a local maximum, local minimum, or saddle point, or whether the test is inconclusive. If the test is inconclusive, you will need to analyze and classify the behavior at the critical point another way.
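Before the next example, here is a minimal numerical sketch of this strategy in C++ (an illustration added here, not part of the original text). It approximates the second partials with central differences and applies the test at a critical point assumed to have been found already; the sample function and its critical point ((−1,2)) are taken from part a. of the next example.

```cpp
#include <cstdio>

// Sample function (from part a. of the next example): f(x,y) = 4x^2 + 9y^2 + 8x - 36y + 24.
double f(double x, double y) {
    return 4*x*x + 9*y*y + 8*x - 36*y + 24;
}

// Central-difference approximations of the second-order partials.
double fxx(double x, double y, double h) { return (f(x+h, y) - 2*f(x, y) + f(x-h, y)) / (h*h); }
double fyy(double x, double y, double h) { return (f(x, y+h) - 2*f(x, y) + f(x, y-h)) / (h*h); }
double fxy(double x, double y, double h) {
    return (f(x+h, y+h) - f(x+h, y-h) - f(x-h, y+h) + f(x-h, y-h)) / (4*h*h);
}

int main() {
    const double x0 = -1.0, y0 = 2.0, h = 1e-3;    // critical point found analytically beforehand
    double a = fxx(x0, y0, h), b = fxy(x0, y0, h), c = fyy(x0, y0, h);
    double D = a*c - b*b;                           // discriminant of the second partials test
    if (D > 0 && a > 0)      std::printf("local minimum (D = %g)\n", D);
    else if (D > 0 && a < 0) std::printf("local maximum (D = %g)\n", D);
    else if (D < 0)          std::printf("saddle point (D = %g)\n", D);
    else                     std::printf("test inconclusive (D = %g)\n", D);
    return 0;
}
```

For this function the program reports a local minimum with (D \approx 144), matching the hand computation in part a. of the next example.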

Example (\PageIndex{3}): Using the Second Partials Test

Find the critical points for each of the following functions, and use the second partials test to find any local extrema or saddle points.

  1. (f(x,y)=4x^2+9y^2+8x−36y+24)
  2. (g(x,y)=\dfrac{1}{3}x^3+y^2+2xy−6x−3y+4)

Solution:

a. Step 1 of the problem-solving strategy requires us to find the critical points of (f). To do this, we first calculate (f_x(x,y)) and (f_y(x,y)) and then set each of them equal to zero:

[\begin{align*} f_x(x,y) &= 8x+8 \\[4pt] f_y(x,y) &= 18y−36. \end{align*}]

Setting them equal to zero yields the system of equations

[\begin{align*} 8x+8 &= 0 \\[4pt] 18y−36 &= 0. \end{align*}]

The solution to this system is (x=−1) and (y=2). Therefore ((−1,2)) is the only critical point of (f).

Step 2 of the problem-solving strategy involves calculating (D.) To do this, we first calculate the second partial derivatives of (f:)

[\begin{align*} f_{xx}(x,y) &= 8 \\[4pt] f_{xy}(x,y) &= 0 \\[4pt] f_{yy}(x,y) &= 18. \end{align*}]

Therefore, (D(-1,2)=f_{xx}(−1,2)\,f_{yy}(−1,2)−\big(f_{xy}(−1,2)\big)^2=(8)(18)−(0)^2=144>0.)

Step 3 tells us to apply the four cases of the test to classify the function's behavior at this critical point.

Since (D>0) and (f_{xx}(−1,2)=8>0), (f) is concave up, so (f) has a local minimum of (f(-1,2) = -16) at ((−1,2)), as shown in the following figure. (Note that this corresponds to case 1 of the second partials test.)

Figure (\PageIndex{5}): The function (f(x,y)) has a local minimum at ((−1,2,−16).) Note the scale on the (y)-axis in this plot is in thousands.
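As a quick arithmetic check of this minimum value (not shown in the original):

[f(−1,2) = 4(−1)^2+9(2)^2+8(−1)−36(2)+24 = 4+36−8−72+24 = −16. \nonumber]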

b. For step 1, we first calculate (g_x(x,y)) and (g_y(x,y)), then set each of them equal to zero:

[\begin{align*} g_x(x,y) &= x^2+2y−6 \\[4pt] g_y(x,y) &= 2y+2x−3. \end{align*}]

Setting them equal to zero yields the system of equations

[\begin{align*} x^2+2y−6 &= 0 \\[4pt] 2y+2x−3 &= 0. \end{align*}]

To solve this system, first solve the second equation for (y). This gives (y=\dfrac{3−2x}{2}). Substituting this into the first equation gives

[\begin{align*} x^2+3−2x−6 &= 0 \\[4pt] x^2−2x−3 &= 0 \\[4pt] (x−3)(x+1) &= 0. \end{align*}]

Therefore, (x=−1) or (x=3). Substituting these values into the equation (y=\dfrac{3−2x}{2}) yields the critical points (\left(−1,\frac{5}{2}\right)) and (\left(3,−\frac{3}{2}\right)).

Step 2 involves calculating the second partial derivatives of (g):

[\begin{align*} g_{xx}(x,y) &= 2x \\[4pt] g_{xy}(x,y) &= 2 \\[4pt] g_{yy}(x,y) &= 2. \end{align*}]

Next, we substitute each critical point into the discriminant formula:

[\begin{align*} D\left(−1, \frac{5}{2}\right) &= (2(−1))(2)−(2)^2 = −4−4 = −8 \\[4pt] D\left(3, −\frac{3}{2}\right) &= (2(3))(2)−(2)^2 = 12−4 = 8. \end{align*}]

In step 3, we use the second partials test to classify the behavior of the function at each of its critical points.

At the point (\left(−1,\frac{5}{2}\right)), we see that (D\left(−1, \frac{5}{2}\right)=-8<0) (case 3 of the test), which means that (g) has a saddle point at (\left(−1,\frac{5}{2}\right)). The coordinates of this saddle point are (\left(−1,\frac{5}{2}, \frac{41}{12}\right)).

Applying the theorem to the point (\left(3,−\frac{3}{2}\right)) leads to case 1. That is, since (D\left(3,-\frac{3}{2}\right)=8>0) and (g_{xx}\left(3,-\frac{3}{2}\right)=2(3)=6>0), we know that (g) is concave up at this critical point, so (g) has a local minimum of (-\frac{29}{4}) at the point (\left(3,−\frac{3}{2}\right)), as shown in the following figure.
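For reference, the function values quoted at these two critical points come from direct substitution (a computation the original leaves out):

[\begin{align*} g\left(−1, \frac{5}{2}\right) &= \frac{1}{3}(−1)^3+\left(\frac{5}{2}\right)^2+2(−1)\left(\frac{5}{2}\right)−6(−1)−3\left(\frac{5}{2}\right)+4 = \frac{41}{12} \\[4pt] g\left(3, −\frac{3}{2}\right) &= \frac{1}{3}(3)^3+\left(−\frac{3}{2}\right)^2+2(3)\left(−\frac{3}{2}\right)−6(3)−3\left(−\frac{3}{2}\right)+4 = −\frac{29}{4}. \end{align*}]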

Note: Sometimes it can be helpful to find a general formula for (D). For example, here we could have used the following formula:

[\begin{align*} D(x_0, y_0) &= g_{xx}(x_0,y_0)\,g_{yy}(x_0,y_0)−\big(g_{xy}(x_0,y_0)\big)^2 \\[4pt] &= (2x_0)(2)−2^2 \\[4pt] &= 4x_0−4. \end{align*}]

Then we would have:

[\begin{align*} D\left(−1, \frac{5}{2}\right) &= 4(-1)-4 = −4−4 = −8 \\[4pt] D\left(3, −\frac{3}{2}\right) &= 4(3)-4 = 12−4 = 8. \end{align*}]

Note that the values of the discriminant at each critical point agree with those computed above.

Exercise (\PageIndex{2})

Use the second partials test to find the local extrema of the function

[f(x,y)=x^3+2xy−6x−4y^2. \nonumber]

Hint

Follow the problem-solving strategy for applying the second partials test.

Answer

(f) has a saddle point at (\left(\frac{4}{3},\frac{1}{3}\right)) and a local maximum at (\left(−\frac{3}{2},−\frac{3}{8}\right)).
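For reference, a sketch of the computation behind this answer (not shown in the original): the partials are (f_x(x,y)=3x^2+2y−6) and (f_y(x,y)=2x−8y). Setting (f_y=0) gives (x=4y); substituting into (f_x=0) gives (48y^2+2y−6=0), i.e., (2(8y+3)(3y−1)=0), yielding the two critical points above. The discriminant is (D=(6x)(−8)−2^2=−48x−4), which is (−68<0) at (\left(\frac{4}{3},\frac{1}{3}\right)) (a saddle point) and (68>0) with (f_{xx}=−9<0) at (\left(−\frac{3}{2},−\frac{3}{8}\right)) (a local maximum).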

Key Concepts

  • A critical point of the function (f(x,y)) is any point ((x_0,y_0)) where either (f_x(x_0,y_0)=f_y(x_0,y_0)=0), or at least one of (f_x(x_0,y_0)) and (f_y(x_0,y_0)) does not exist.
  • A saddle point is a point ((x_0,y_0)) where (f_x(x_0,y_0)=f_y(x_0,y_0)=0), but (f) has neither a maximum nor a minimum at ((x_0,y_0)).
  • To find extrema of functions of two variables, first find the critical points, then calculate the discriminant and apply the second partials test.

Key Equations

  • Discriminant

(D=f_{xx}(x_0,y_0)f_{yy}(x_0,y_0)−(f_{xy}(x_0,y_0))^2)

Glossary

critical point of a function of two variables

the point ((x_0,y_0)) is called a critical point of (f(x,y)) if one of the two following conditions holds:

1. (f_x(x_0,y_0)=f_y(x_0,y_0)=0)

2. At least one of (f_x(x_0,y_0)) and (f_y(x_0,y_0)) does not exist

discriminant
the discriminant of the function (f(x,y)) is given by the formula (D=f_{xx}(x_0,y_0)f_{yy}(x_0,y_0)−(f_{xy}(x_0,y_0))^2)
saddle point
given the function (z=f(x,y),) the point ((x_0,y_0,f(x_0,y_0))) is a saddle point if both (f_x(x_0,y_0)=0) and (f_y(x_0,y_0)=0), but (f) does not have a local extremum at ((x_0,y_0))

Contributors

  • Gilbert Strang (MIT) and Edwin “Jed” Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC-BY-NC-SA 4.0 license. Download for free at http://cnx.org.

  • Paul Seeburger (Monroe Community College) edited and adapted this section extensively.
    Paul also wrote the entire subsection titled Classifying Critical Points.

How to optimize a function with several variables

I need to develop code to optimize a set of variables based on the following conditions.

  1. I don't have the source of the function.
  2. The function takes a point (x,y) and generates a mapped point (x',y') using a set of parameters, around 10 of them (Mapping_Function).
  3. I have an array of points with their desired mapped ones (input[N], mapped[N]).
  4. I have a function that can calculate the distance between two points (for example, an input point and its mapped one). This function is in fact the Euclidean distance (GetAbsError).
  5. I need to write code to optimize the parameters of the function so that the distance between the input points and the mapped ones is minimized (Optimize_Parameters).

The sample code is as follows:

I need to write the Optimize_Parameters function to do the optimization.

What library can I use to write this function?

Where can I find more information on this?

I can write it in C++ or C#, but I prefer C++ as it is faster.
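Not part of the original question, but for concreteness, here is a minimal derivative-free sketch of what Optimize_Parameters could look like in C++. The Point type and the signatures of Mapping_Function and GetAbsError are assumptions based on the description above; the real ones may differ. For production use, established C++ libraries such as Ceres Solver, dlib, NLopt, or ALGLIB provide far more robust nonlinear least-squares and derivative-free optimizers.

```cpp
#include <vector>
#include <cstddef>
#include <initializer_list>

// Assumed declarations matching the question's description; defined elsewhere in the asker's code.
struct Point { double x, y; };
Point Mapping_Function(const Point& p, const std::vector<double>& params);   // black box
double GetAbsError(const Point& a, const Point& b);                          // Euclidean distance

// Total error: sum of distances between the desired mapped points and the
// points actually produced by Mapping_Function with the given parameters.
double TotalError(const std::vector<Point>& input, const std::vector<Point>& mapped,
                  const std::vector<double>& params) {
    double err = 0.0;
    for (std::size_t i = 0; i < input.size(); ++i)
        err += GetAbsError(Mapping_Function(input[i], params), mapped[i]);
    return err;
}

// Simple coordinate search: perturb one parameter at a time, keep any change
// that lowers the error, and halve the step when no parameter improves.
void Optimize_Parameters(std::vector<double>& params,
                         const std::vector<Point>& input,
                         const std::vector<Point>& mapped) {
    double step = 1.0;                                   // initial step; tune to your parameter scale
    double best = TotalError(input, mapped, params);
    while (step > 1e-8) {
        bool improved = false;
        for (std::size_t k = 0; k < params.size(); ++k) {
            for (double dir : {1.0, -1.0}) {
                params[k] += dir * step;
                double e = TotalError(input, mapped, params);
                if (e < best) { best = e; improved = true; }
                else          { params[k] -= dir * step; }   // revert the probe
            }
        }
        if (!improved) step *= 0.5;                      // refine the search
    }
}
```

Since the objective is a sum of distances over sample points, this is really a nonlinear least-squares problem, so a Levenberg-Marquardt solver (for example, the one in Ceres Solver) would typically converge much faster; the sketch above is only meant to show the overall structure with no external dependencies. Note that coordinate search, like most local methods, can get stuck in a local minimum.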



A. V. Fiacco and G. P. McCormick, “The sequential unconstrained minimization technique for nonlinear programming a primal-dual method,” Management Sci.,10, No. 2, 360–366 (1964).

A. V. Fiacco and G. P. McCormick, “Computational algorithm for the sequential unconstrained minimization technique for nonlinear programming,” Management Sci.,10, No. 4, 601–617 (1964).

A. V. Fiacco and G. P. McCormick, “The slacked unconstrained minimization technique for convex programming,” SIAM J. Appl. Math.,15, No. 3, 505–515 (1967).

A. W. O. Firth, “Optimization problems: solution by an analogue computer,” Comput. J.,4, No. 1, 68–72 (1961).

R. Fletcher, “Function minimization without evaluating derivatives — a review,” Comput. J.,8, No. 1, 33–41 (1965).

R. Fletcher, “A new approach to variable metric algorithms,” Comput. J.,13, No. 3, 317–322 (1970).

R. Fletcher and M. J. D. Powell, “A rapidly convergent descent method for minimization,” Comput. J.,6, No. 2, 163–168 (1963).

R. Fletcher and C. M. Reeves, “Function minimization by conjugate gradients,” Comput. J.,7, No. 2, 149–154 (1964).

G. E. Forsythe, “On the asymptotic directions of the S-dimensional optimum gradient method,” Numer. Math., 11, No. 1, 57–76 (1968).

G. E. Forsythe and T. S. Motzkin, “Asymptotic properties of the optimum gradient method,” Bull. Amer. Math. Soc.,57, 183 (1951).

I. Fried, “N-step conjugate gradient minimization scheme for nonquadratic functions,” AIAA Journal,9, No. 11, 2286–2287 (1971).

S. N. Ghani, “An improved ‘complex’ method for function minimization,” Computer Aided Design,4, No. 2, 71–78 (1972).

P. E. Gill and W. Murray, Quasi-Newton Methods for Unconstrained Optimization, Nat. Phys. Lab. Rep. N. Maths 97 (1971), 29 pages.

P. E. Gill, W. Murray, and R. A. Pitfield, The Implementation of Two Revised QuasiNewton Algorithms for Unconstrained Optimization, Nat. Phys. Lab., Div. Numer. Anal. Comput., N NAC 11 (1972), 131 pages.

D. Goldfarb, “Sufficient conditions for the convergence of variable-metric algorithms,” in: Optimization (R. Fletcher, ed.), Academic Press, New York (1969).

A. A. Goldstein, “Cauchy's method of minimization,” Numer. Math.,4, No. 2, 146–150 (1962).

A. A. Goldstein and B. R. Kripke, “Mathematical programming by minimizing differentiable functions,” Numer. Math.,6, No. 1, 47–48 (1964).

A. A. Goldstein and J. F. Price, “An effective algorithm for minimization,” Numer. Math.,10, 184–189 (1967).

A. A. Goldstein and J. F. Price, “On descent from local minima,” Math. Comput.,25, No. 115, 569–574 (1971).

D. S. Gordon and D. W. G. Shen, “An accelerated Davidon algorithm for solving active network equations represented by nonquadratic cost functions,” in: Proc. Fourth Hawaiian Internat. Conf. Systems Science, Honolulu, 1971, North Hollywood, Calif., (1971), pp. 608–610.

J. Greenstadt, “On the relative efficiencies of gradient methods,” Math. Comput.,21, No. 99, 360–367 (1967).

J. Greenstadt, “A quasi-Newton method with no derivatives,” Math. Comput.,26, No. 117, 145–166 (1972).

B. Gruber, “Numerical determination of the relative minimum of a function of several variables by quadratic interpolation,” Apl. Mat.,12, No. 2, 87–100 (1967).

J. Guigou, “Expériences numériques comparatives concernant la méthode du gradient conjugué,” Bull. Direct, et Études Rech. C, No. 2, 79–92 (1968).

K. P. Hadeler, “Bemerkungen zum Gradientenverfahren,” Numer. Math.,18, No. 5, 413–422 (1972).

R. M. Hayes, “Iterative methods of solving linear problems on Hilbert space,” Nat. Bur. Stand., Appl. Math. Ser., No. 39, 71–103 (1954).

J. Heller, “A Monte Carlo approach to the approximate solution to sequencing problems,” Nat. Conf. Assoc. Computing Mach., 1962, Digest Tech. Papers, New York (1962), p. 41.

M. R. Hestenes, “The conjugate-gradient method for solving linear systems,” in: Proc. Sympos. Applied Mathematics, New York-Toronto-London (1956), pp. 83–102.

M. R. Hestenes, “Inversion of matrices by biorthogonalization and related results,” J. Soc. Ind. Appl. Math.,6, No. 1, 51–90 (1958).

M. R. Hestenes and E. Stiefel, “Methods of conjugate gradient for solving linear systems,” J. Res. Math. Mat. Bur. Stand.,49, No. 6, 409–436 (1952).

C. Hildreth, “Point estimates of ordinates of concave functions,” J. Amer. Statist. Assoc.,49, No. 267, 598–619 (1954).

R. Hooke and T. A. Jeeves, “‘Direct search’ solution of numerical and statistical problems,” J. Assoc. Computing Mach.,8, No. 2, 212–229 (1961).

J. Hrouda, “Reklový algorithmus pro určení minima funkce několika proměnných,” Apl. Mat.,11, No. 4, 271–277 (1966).

H. Y. Huang, “Unified approach to quadratically convergent algorithms for function minimization,” J. Optimization Theory Appl.,5, Nos. 1–6, 405–523 (1970).

A. V. Levy, “Numerical experiments on quadratically convergent algorithms for function minimization,” J. Optimization Theory Appl.,6, No. 3, 269–282 (1970).

S. Inomata and M. Kumada, “On the golf method,” Bull. Electrotech. Lab. (Japan),25, No. 7, 495–512 (1961).

D. H. Jacobson and W. Oksman, “An algorithm that minimizes homogeneous functions of N variables in N + 2 iteration and rapidly minimizes general functions,” J. Math. Anal. Appl.,38, No. 3, 535–552 (1972).

D. H. Jacobson and W. Oksman, “New algorithms for function minimization,” in: Proc. Ninth IEEE Sympos. Adaptive Processes, Decision, and Control, Austin, Texas, 1970, New York (1970), pp. xxi. 1/1–xxi. 1/4.

J. Janko, “Solution of nonlinear equation systems by Newton's method and the gradients method,” Apl. Mat.,10, No. 3, 230–234 (1965).

P. Jarratt, “An iterative method for locating turning points,” Comput. J.,10, No. 1, 82–84 (1967).

B. Kacprzyński, “Sekwencyjna methoda poszukowania ekstremum,” Arch. Automat, i Telemech.,11, No. 2, 147–164 (1966).

H. J. Kelley, G. E. Myers, and I. L. Johnson, Jr., “An improved conjugate direction minimization procedure,” AIAA Journal,8, No. 11, 2091–2093 (1970).

H. J. Kelley and J. L. Speyer, “Accelerated gradient projection,” in: Lecture Notes in Mathematics,” No. 132, Springer, Berlin (1970), pp. 151–158.

J. Kiefer, “Sequential minimax search for a maximum,” Proc. Amer. Math. Soc.,4, No. 3, 502–506 (1953).

J. Kiefer and J. Wolfowitz, “Stochastic estimation of the maximum of a regression function,” Ann. Math. Statist.,23, 463–466 (1952).

R. F. King, “A fifth-order family of modified Newton methods,” BIT (Sweden),11, No. 4, 409–412 (1971).

W. Kolar, Zur Methode des steilsten Abstieges, Ber. Kernforschungsanlage Jülich, No. 445 (1966).

H. P. Künzi, H. G. Tzschach, and C. A. Zehnder, Numerical Methods of Mathematical Optimization with ALGOL and FORTRAN Programs, Academic Press, New York (1971), viii + 219 pages.

L. Lapidus, E. Shapiro, S. Shapiro, and R. Stillman, “Optimization of process performance,” AIChE Journal,7, 288–294 (1961).

F. Lehmann, “Allgemeiner Bericht über Monte-Carlo-Methoden,” Bl. Deut. Ges. Versicherungsmath.,8, No. 3, 431–456 (1967).

G. Leitmann (ed.), Optimization Techniques with Applications to Aerospace Systems, Academic Press, New York-London (1962), 453 pages.

A. Leon, “A comparison among eight known optimizing procedures,” in: Recent Advances in Optimization Techniques (1966).

D. G. Leunberger, Optimization by Vector Space Methods, Wiley, New York (1969), xvii +326 pages.

D. Mans, Minimierung konvexer Funktionen (Dissertation) (1971).

D. Mans, “Über die Konvergenz von Einzelschrittvervahren zur Minimierung konvexer Funktionen,” Manuscr. Math.,6, No. 1, 33–51 (1972).

D. W. Marquardt, “An algorithm for least-squares estimation of nonlinear parameters,” J. Soc. Ind. Appl. Math.,11, No. 2, 431–441 (1963).

A. Matthews and D. Davies, “A comparison of modified Newton methods for unconstrained optimization,” Comput. J.,14, No. 3, 293–294 (1971).

G. P. McCormick and J. D. Pearson, “Variable metric method and unconstrained optimization,” in: Optimization (R. Fletcher, ed.), Academic Press, New York-London (1969), pp. 307–325.

D. G. McDowell, Function Minimizing Algorithms (Doctoral Dissertation), Michigan State Univ. (1970), 85 pages Dissert. Abstr. Internat.,B31, No. 11, 6755 (1971).

A. Miele and J. W. Cantrell, “Study on a memory gradient method for the minimization of functions,” J. Optimiz. Theory Appl.,3, No. 6, 459–470 (1969).

A. Miele and J. W. Cantrell, Gradient Methods in Mathematical Programming, Part 2: Memory Gradient Method, Rice Univ. Aero-Astronaut. Rep. No. 56 (1969).

A. Miele and J. W. Cantrell, “Memory gradient method for the minimization of functions,” in: Lecture Notes in Mathematics, No. 132, Springer, Berlin (1970), pp. 252–263.

J. J. More, “Global convergence of Newton-Gauss-Seidel methods,” SIAM J. Numer. Anal.,8, No. 2, 325–336 (1971).

D. D. Morrison, “Optimization by least squares,” SIAM J. Numer. Anal.,5, No. 1, 83–88 (1968).

K. F. Muller and V. W. Eveleigh, “A learning algorithm for metric computation using first derivatives and arbitrary steps to achieve function minimization,” in: Proc. Fourth Hawaiian Internat. Conf. Systems Science, Honolulu, 1971, North Hollywood, Calif. (1971), pp. 73–75.

W. Murray, Second Derivative Methods, Nat. Phys. Lab. Rep., presented at the IMA/NPL Sympos. Numerical Methods for Unconstrained Optimization.

B. A. Murtagh and R. W. H. Sargent, “Computational experience with quadratically convergent minimization methods,” Computing J.,13, No. 2, 185–194 (1970).

G. E. Myers, “Properties of the conjugate-gradient and Davidon methods,” J. Optimization Theory Appl.,2, No. 4, 209–219 (1968).

J. A. Nelder and R. Mead, “A simplex method for function minimization,” Comput. J.,7, No. 4, 308–313 (1965).

D. J. Newman, “Location of the maximum on unimodel surfaces,” J. Assoc. Comput. Mach.,12, No. 3, 395–398 (1965).

A. M. Ostrowski, “Contributions to the theory of the method of steepest descent,” Arch. Rational Mech. Anal.,26, No. 4, 257–280 (1967).

K. J. Overholt, “Extended Aitkin acceleration,” Nord. Tidskr. Informationsbehandl.,5, No. 2, 122–132 (1965).

J. R. Palmer, “An improved procedure for orthogonalizing the search vectors in Rosen-brock's and Swann's direct search optimization methods,” Comput. J.,12, No. 1, 69–71 (1969).

J. D. Pearson, On Variable Metric Methods of Minimization, Res., Anal. Corp. Tech. Paper No. RAC-Tp 302 (1968).

T. Pietrzykowski, “Application of the steepest ascent method to concave programming,” in: Information Processing 1962, North-Holland, Amsterdam (1963), pp. 185–189.

T. Pietrzykowski, “The potential method for conditional maxima in the locally compact metric spaces,” Numer. Math.,14, No. 4, 325–329 (1970).

M. Pincus, “A Monte Carlo method for the approximate solution of certain types of constrained optimization problems,” Operat. Res.,18, No. 6, 1225–1228 (1970).

E. Polak and G. Ribiere, “Note sur le convergence de méthodes de directions conjugées,” Rev. Franc. Inform. et Rech. Opérat.,3, No. 16, 35–43 (1969).

T. A. Porsching, “On rates of convergence of Jacobi and Gauss-Seidel methods for M-functions,” SIAM, J. Numer. Anal.,8, No. 3, 575–582 (1971).

M. J. D. Powell, “An iterative method for finding stationary values of a function of several variables,” Comput. J.,5, No. 2, 147–151 (1962).

M. J. D. Powell, “An effective method for finding the minimum of a function of several variables without calculating derivatives,” Comput. J.,7, No. 2, 155–162 (1964).

M. J. D. Powell, “A survey of numerical methods for unconstrained optimization,” SIAM Rev.,12, No. 1, 79–97 (1970).

P. S. Pütter, “Ein allgemeines Maximalisierungsverfahren,” Z. Angew. Math. Mech.,39, No. 12, 466–472 (1959).

J. O. Ramsay, “A family of gradient methods for optimization,” Comput. J.,13, No. 4, 413–417 (1970).

J. Reid, “On the method of conjugate gradients for the solution of large sparse systems of linear equations,” in: Large Sparse Sets of Linear Equations, London-New York (1971), pp. 231–252.

G. Ribiere, “Sur la méthode de Davidon-Fletcher-Powell pour la minimization des fonctions,” Management Sci.,16, No. 9, 572–592 (1970).

F. S. G. Richards, “On finding local maxima of functions of a real variable,” Biometrika,54, Nos. 1–2, 310–311 (1967).

P. D. Roberts and R. H. Davis, “Conjugate gradients,” Control.13, No. 129, 206–210 (1969).

R. A. Rohrer, “Explicit step-size determination in gradient exploitation minimization techniques,” IEEE Trans. Circuit Theory,CT-17, No. 1, 154–155 (1970).

I. B. Rosen, “A review of quasi-Newton methods in nonlinear equation solving and unconstrained optimization,” in: Proc. Twenty-First Nat. Conf. Assoc. Computing Mach., Washington, D. C. (1966).

H. H. Rosenbrock, “An automatic method for finding the greatest or the least value of a function,” Comput. J.,3, No. 3, 175–183 (1960).

J. Sacks, “Asymptotic distribution of stochastic approximation procedures,” Ann. Math. Statist.,29, No. 2, 373–405 (1958).

G. N. Saridis, “Learning applied to successive approximation algorithms,” IEEE Trans. Systems Science and Cybernetics,SSC-6, No. 2 (1970).

S. Schechter, “Iteration methods for nonlinear problems,” Trans. Amer. Math. Soc.,104, No. 1, 179–189 (1962).

J. W. Schmidt and H. F. Trinkaus, “Extremwertermittelung mit Funktionsverten bei Functionen von mehreren Veränderlichen,” Computing,1, No. 3, 224–232 (1966).

B. V. Shah, R. J. Buehler, and O. Kempthorne, “Some algorithms for minimizing a function of several variables,” J. Soc. Ind. Appl. Math.,12, No. 1, 74–92 (1964).

D. F. Shanno, “Conditioning of quasi-Newton methods for function minimization,” Math. Comput.,24, No. 111, 647–656 (1970).

D. F. Shanno, “Parameter selection for modified Newton methods for function minimization,” SIAM J. Numer. Anal.,7, No. 3, 366–372 (1970).

D. F. Shanno and P. C. Kettler, “Optimal conditioning for quasi-Newton methods,” Math. Comput.,24, No. 111, 657–664 (1970).

C. S. Smith, The Automatic Computation of Maximum Likelihood Estimates,” NCB Scientific Dept. Rep. SC 846/MR/40 (1962).

H. W. Sorenson, “Comparison of some conjugate direction procedures for function minimization,” J. Franklin Inst.,288, No. 6, 421–441 (1969).

H. A. Spang, “A review of minimization techniques for nonlinear functions,” SIAM Rev.,4, No. 4, 343–365 (1962).

W. Spendley, G. R. Hext, and F. R. Himsworth, “Sequential application of simplex designs in optimization and evolutionary operation,” Technometrics,4, 441 (1962).

G. W. Stewart III, “A modification of Davidon's minimization method to accept difference approximations of derivatives,” J. Assoc. Computing Mach.,14, No. 1, 72–83 (1967).

E. Stiefel, “Über einige Methoden der Relaxationsrechnung,” Z. Angew. Math. Phys.,3, No. 1, 1–33 (1952).

T. A. Straeter, A Comparison of Gradient Dependent Techniques for the Minimization of an Unconstrained Function of Several Variables, AIAA Paper No. 951 (1969), 5 pages.

L. K. Schubert, “Modification of a quasi-Newton method for nonlinear equations with a sparse Jacobian,” Math. Comput.,24, No. 109, 27–30 (1970).

W. H. Swann, Report on the Development of a New Direct Search Method of Optimization, Central Inst. Lab. Res. Note 64/3 (1964).

R. P. Tewarson, “On the use of generalized inverses in function minimization,” Computing,6, Nos. 3–4, 241–248 (1970).

T. Tsuda and T. Kiyono, “Application of the Monte Carlo method to systems of nonlinear algebraic equations,” Numer. Math.,6, No. 2, 59–67 (1964).

A. M. Vercoustre, “Étude comparative des méthodes de minimization de Fletcher et Powell et de Davidon,” Bull. Direct. et Études Rech. C. No. 1, 57–75 (1970).

J. Warga, “Minimizing certain convex functions,” J. Soc. Ind. Appl. Math.,11, No. 3, 588–593 (1963).

T. M. Whitney and R. K. Meany, “Two algorithms related to the method of steepest descent,” SIAM J. Numer. Anal.,4, No. 1, 109–118 (1967).

J. H. Westcott, “Numerical computational methods of optimization in control,” Automatica,5, No. 6, 831–843 (1969).

P. Wolfe, “Convergence conditions for ascent methods,” SIAM Rev.,11, No. 2, 226–235 (1969).

W. I. Zangwill, “Minimizing a function without calculating derivatives,” Comput. J.,10, No. 3, 293–296 (1967).

W. I. Zangwill, Nonlinear Programming by Sequential Unconstrained Maximization, Working Paper 131, Cent. Res. Management Sci., Univ. California, Berkeley (1965).

R. Zieliński, “On the Monte Carlo evaluation of the extremal value of a function,” Algorytmy,2, No. 4, 7–13 (1965).

R. Zieliński, “On Monte Carlo estimation of the maximum of a function,” Algorytmy,7, No. 13, 5–7 (1970).

R. Zieliński, Stochastyczne algorytmy w zagadnieniach optymizacji, Komentowany przeglad bibliograficzny,” Algorytmy,3, No. 6, 127–138 (1966).


2 Answers 2

My other answer missed almost all of the actual question, so let's start afresh.

Let's set $h(x, y) = f(x, y) + g(x, y)$. The first thing to acknowledge is that $h$ may have no local extrema at all: for a simple example, let $f(x, y) = -x$ and $g(x, y) = 2x$, so that $h(x, y) = x$ has no critical points (although you can say that on the boundary of the region you defined, it does have a supremum).

If the function does have a maximum or minimum point, it will occur where both partial derivatives are zero (or on the boundary, but since you've defined an open region, that's not a valid possibility in this case). So you differentiate with respect to $x$ and find where that derivative is zero (possibly as a condition relating $x$ and $y$), do the same for $y$, solve the two conditions together to find the candidate point or points, and then apply one of several tests (such as the second-derivative test in the sketch below) to decide whether each point is a maximum, a minimum, or something else.
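
For concreteness, here is a minimal sketch of that procedure in Python using sympy; the function `h` below is a made-up placeholder, not the one from the question:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
h = x**2 + x*y + y**2 - 3*x  # hypothetical placeholder for f + g

# Solve h_x = 0 and h_y = 0 simultaneously for the critical points.
critical_points = sp.solve([sp.diff(h, x), sp.diff(h, y)], [x, y], dict=True)

# Classify each one with the second-derivative test: D = h_xx*h_yy - h_xy^2.
for pt in critical_points:
    h_xx = sp.diff(h, x, 2).subs(pt)
    h_yy = sp.diff(h, y, 2).subs(pt)
    h_xy = sp.diff(h, x, y).subs(pt)
    D = h_xx * h_yy - h_xy**2
    if D > 0:
        kind = 'local minimum' if h_xx > 0 else 'local maximum'
    elif D < 0:
        kind = 'saddle point'
    else:
        kind = 'test inconclusive'
    print(pt, kind)
```

For this placeholder, the only critical point is $(2, -1)$, where $D = 3 > 0$ and $h_{xx} = 2 > 0$, so it is a local minimum.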

Your method kind of works, although it will only find an approximate point (since you can only check finitely many $x$ and $y$ values), and how well it does will depend on how well-behaved the function is between and outside of the grid of $x$ and $y$ values you are checking (see the sketch below).
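
A sketch of that grid-based search, again with a made-up placeholder objective and arbitrary grid bounds:

```python
import numpy as np

def h(x, y):
    # hypothetical placeholder for f(x, y) + g(x, y)
    return x**2 + x*y + y**2 - 3*x

# Evaluate h on a finite grid and keep the smallest value found.
xs = np.linspace(-5.0, 5.0, 201)
ys = np.linspace(-5.0, 5.0, 201)
X, Y = np.meshgrid(xs, ys)
Z = h(X, Y)

i, j = np.unravel_index(np.argmin(Z), Z.shape)
print('approximate minimizer:', (X[i, j], Y[i, j]), 'value:', Z[i, j])
```

Refining the grid around the best point found (or shrinking the bounds and repeating) improves the estimate, but the caveat above still applies: the grid can miss features that sit between or outside its points.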

An alternative approximation method is gradient descent, in which you pick a starting point, work out the gradient of the function at that point, and "walk" a small step against the gradient (to decrease the function; walk along it to increase the function), repeating until the iterates settle around the same point. That point is at least an approximate local critical point, if not the actual maximum/minimum. A sketch follows.
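
A minimal numeric sketch of that idea, seeking a minimum; the placeholder gradient, fixed step size, starting point, and stopping rule are all illustrative choices:

```python
def grad_h(x, y):
    # gradient of the placeholder h(x, y) = x**2 + x*y + y**2 - 3*x
    return (2*x + y - 3, x + 2*y)

x, y = 0.0, 0.0   # arbitrary starting point
step = 0.1        # fixed step size
for _ in range(10_000):
    gx, gy = grad_h(x, y)
    if gx*gx + gy*gy < 1e-18:     # gradient essentially zero: stop
        break
    # step against the gradient to decrease h (step along it to increase h)
    x, y = x - step*gx, y - step*gy

print('approximate critical point:', (x, y))
```

For this convex placeholder the iterates converge to $(2, -1)$; in general, gradient descent finds only a local critical point, and the step size has to be small enough that the iterates do not overshoot and diverge.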



MATH 0399. Support for Precalculus. 2 Credit Hours.

Practicum for Learning Support students enrolled in MATH 1113 (Precalculus).

MATH 0999. Support for College Algebra. 2 Credit Hours.

This Learning Support course provides corequisite support in mathematics for students enrolled in MATH 1111. Topics will parallel those being studied in MATH 1111, along with the essential quantitative skills needed to be successful.

MATH 1111. College Algebra. 4 Credit Hours.

This course is a symbolically intensive, functional approach to algebra that incorporates the use of appropriate technology. Emphasis will be placed on the study of functions and their graphs, inequalities, and linear, quadratic, piecewise-defined, rational, polynomial, exponential, and logarithmic functions. Appropriate applications will be included.

MATH 1113. Precalculus. 4 Credit Hours.

Analytic geometry, the function concept, polynomials, exponential and logarithmic functions, trigonometric functions, mathematical induction, and the theory of equations. May only be used for degree credit with departmental approval.

MATH 11X3. Transfer Precalculus. 3 Credit Hours.

MATH 1501. Calculus I. 4 Credit Hours.

Differential calculus and basic integral calculus including the fundamental theorem of calculus. Credit not allowed for both MATH 1501 and 1712.

MATH 1503. Calculus I for the Life Sciences. 4 Credit Hours.

Differential and basic integral calculus: sequences, difference equations, limits, continuity, differentiation, integration, applications. The topics parallel those of MATH 1501 with applications from life sciences.

MATH 1504. Calculus I for the Life Sciences. 4 Credit Hours.

Taylor approximations, introduction to differential equations, linear algebra, and introduction to multivariable calculus. Motivating examples drawn from life sciences.

MATH 1512. Honors Calculus II. 4 Credit Hours.

The topics covered parallel those of 1502 with a somewhat more intensive and rigorous treatment. Credit not allowed for both honors calculus and the corresponding regular calculus course. Credit not allowed for both MATH 1512 and MATH 1522. Credit not allowed for both MATH 1512 and MATH 15X2.

MATH 1550. Introduction to Differential Calculus. 3 Credit Hours.

An introduction to differential calculus including applications and the underlying theory of limits for functions and sequences. Credit not awarded for both MATH 1550 and MATH 1501, MATH 1551, or MATH 1503.

MATH 1551. Differential Calculus. 2 Credit Hours.

Differential calculus including applications and the underlying theory of limits for functions and sequences. Credit not awarded for both MATH 1551 and MATH 1501, MATH 1503, or MATH 1550.

MATH 1552. Integral Calculus. 4 Credit Hours.

Integral calculus: Definite and indefinite integrals, techniques of integration, improper integrals, infinite series, applications. Credit not awarded for both MATH 1552 and MATH 1502, MATH 1504, MATH 1512 or MATH 1555.

MATH 1553. Introduction to Linear Algebra. 2 Credit Hours.

An introduction to linear algebra including eigenvalues and eigenvectors, applications to linear systems, least squares. Credit not awarded for both MATH 1553 and MATH 1522, MATH 1502, MATH 1504, MATH 1512, MATH 1554 or MATH 1564.

MATH 1554. Linear Algebra. 4 Credit Hours.

Linear algebra: eigenvalues, eigenvectors, applications to linear systems, least squares, diagonalization, quadratic forms.

MATH 1555. Calculus for Life Sciences. 4 Credit Hours.

Overview of integral calculus, multivariable calculus, and differential equations for biological sciences. Credit not awarded for both MATH 1555 and MATH 1552, MATH 1502, MATH 1504, MATH 1512 or MATH 2550.

MATH 1564. Linear Algebra with Abstract Vector Spaces. 4 Credit Hours.

This is an intensive first course in linear algebra including the theories of linear transformations and abstract vector spaces. Credit not awarded for both MATH 1564 and MATH 1553, MATH 1554, MATH 1522, MATH 1502, MATH 1504 or MATH 1512.

MATH 15X1. Transfer Calculus I. 3 Credit Hours.

MATH 15X2. Transfer Calculus II. 3,4 Credit Hours.

This course includes the treatment of single variable calculus in MATH 1502. This course is not equivalent to MATH 1502. Credit not allowed for both MATH 15X2 and MATH 1502. Credit not allowed for both MATH 15X2 and MATH 1512.

MATH 1601. Introduction to Higher Mathematics. 3 Credit Hours.

This course is designed to teach problem solving and proof writing. Mathematical subject matter is drawn from elementary number theory and geometry.

MATH 1711. Finite Mathematics. 4 Credit Hours.

Linear equations, matrices, linear programming, sets and counting, probability and statistics.

MATH 1712. Survey of Calculus. 4 Credit Hours.

Techniques of differentiation, integration, application of integration to probability and statistics, multidimensional calculus. Credit not allowed for both MATH 1712 and 1501.

MATH 17X1. Transfer Finite Math. 3 Credit Hours.

MATH 17X2. Transfer Survey-Calc. 3 Credit Hours.

MATH 1803. Special Topics. 3 Credit Hours.

Courses on special topics of current interest in Mathematics.

MATH 1X51. Transfer Differential Calc. 2,3 Credit Hours.

MATH 1X52. Transfer Integral Calculus. 3,4 Credit Hours.

MATH 1X53. Transfer Intro Linear Algebra. 2,3 Credit Hours.

MATH 1X54. Transfer Linear Algebra. 2,3 Credit Hours.

MATH 1X55. Transfer Calculus for Life Sci. 2,3 Credit Hours.

MATH 1XXX. Mathematics Elective. 1-21 Credit Hours.

MATH 2106. Foundations of Mathematical Proof. 3 Credit Hours.

An introduction to proofs in advanced mathematics, intended as a transition to upper division courses including Abstract Algebra I and Analysis I.

MATH 2406. Abstract Vector Spaces. 3 Credit Hours.

A proof-based development of linear algebra and vector spaces, with additional topics such as multilinear algebra and group theory.

MATH 24X1. Transfer Calculus III. 3 Credit Hours.

MATH 24X3. Transfer Diff Equations. 3 Credit Hours.

MATH 2550. Introduction to Multivariable Calculus. 2 Credit Hours.

Vectors in three dimensions, curves in space, functions of several variables, partial derivatives, optimization, integration of functions of several variables. Vector Calculus not covered. Credit will not be awarded for both MATH 2550 and MATH 2605 or MATH 2401 or MATH 2551 or MATH 1555.

MATH 2551. Multivariable Calculus. 4 Credit Hours.

Multivariable calculus: linear approximation and Taylor's theorem, Lagrange multipliers and constrained optimization, multiple integration and vector analysis including the theorems of Green, Gauss, and Stokes. Credit will not be awarded for both MATH 2551 and MATH 2401 or MATH 2411 or MATH 2561.

MATH 2552. Differential Equations. 4 Credit Hours.

Methods for obtaining numerical and analytic solutions of elementary differential equations. Applications are also discussed with an emphasis on modeling. Credit not awarded for both MATH 2552 and MATH 2403 or MATH 2413 or MATH 2562.

MATH 2561. Honors Multivariable Calculus. 4 Credit Hours.

The topics covered parallel those of MATH 2551 with a somewhat more intensive and rigorous treatment. Credit not awarded for both MATH 2561 and MATH 2401 or MATH 2411 or MATH 2551.

MATH 2562. Honors Differential Equations. 4 Credit Hours.

The topics covered parallel those of MATH 2552 with a somewhat more intensive and rigorous treatment.

MATH 2603. Introduction to Discrete Mathematics. 4 Credit Hours.

Mathematical logic and proof, mathematical induction, counting methods, recurrence relations, algorithms and complexity, graph theory and graph algorithms. Credit not awarded for both MATH 2603 and MATH 2602.

MATH 2605. Calculus III for Computer Science. 4 Credit Hours.

Topics in linear algebra and multivariate calculus and their applications in optimization and numerical methods, including curve fitting, interpolation, and numerical differentiation and integration.

MATH 2698. Undergraduate Research Assistantship. 1-12 Credit Hours.

Independent research conducted under the guidance of a faculty member.

MATH 2699. Undergraduate Research. 1-12 Credit Hours.

Independent research conducted under the guidance of a faculty member.

MATH 26X2. Transfer Linear & Disc Math. 3 Credit Hours.

MATH 26X3. Transfer Discrete Math. 3 Credit Hours.

MATH 2801. Special Topics. 1 Credit Hour.

Courses on special topics of current interest in mathematics.

MATH 2802. Special Topics. 2 Credit Hours.

Courses on special topics of current interest in mathematics.

MATH 2803. Special Topics. 3 Credit Hours.

Courses on special topics of current interest in mathematics.

MATH 2804. Special Topics. 4 Credit Hours.

Courses on special topics of current interest in mathematics.

MATH 2805. Special Topics. 5 Credit Hours.

Courses on special topics of current interest in mathematics.

MATH 2X51. Transfer Multivariable Calc. 3,4 Credit Hours.

MATH 2X52. Transfer Differential Equation. 3,4 Credit Hours.

MATH 2XXX. Mathematics Elective. 1-21 Credit Hours.

MATH 3012. Applied Combinatorics. 3 Credit Hours.

Elementary combinatorial techniques used in discrete problem solving: counting methods, solving linear recurrences, graph and network models, related algorithms, and combinatorial designs.

MATH 3022. Honors Applied Combinatorics. 3 Credit Hours.

Topics are parallel to those of MATH 3012 with a more rigorous and intensive treatment. Credit is not allowed for both MATH 3012 and 3022.

MATH 3215. Introduction to Probability and Statistics. 3 Credit Hours.

This course is a problem-oriented introduction to the basic concepts of probability and statistics, providing a foundation for applications and further study.

MATH 3225. Honors Probability and Statistics. 3 Credit Hours.

The topics covered parallel those of MATH 3215, with a more rigorous and intensive treatment. Credit is not allowed for both MATH 3215 and 3225.

MATH 3235. Probability Theory. 3 Credit Hours.

This course is a mathematical introduction to probability theory, covering random variables, moments, multivariable distributions, law of large numbers, central limit theorem, and large deviations. Credit not awarded for both MATH 3235 and MATH 3215 or 3225 or 3670.

MATH 3236. Statistical Theory. 3 Credit Hours.

An introduction to theoretical statistics for students with a background in probability. A mathematical formalism for inference on experimental data will be developed. Credit not awarded for both MATH 3236 and MATH 3215 or 3225 or 3670.

MATH 3406. A Second Course in Linear Algebra. 3 Credit Hours.

This course will cover important topics in linear algebra not usually discussed in a first-semester course, featuring a mixture of theory and applications.

MATH 3670. Probability and Statistics with Applications. 3 Credit Hours.

Introduction to probability, probability distributions, point estimation, confidence intervals, hypothesis testing, linear regression and analysis of variance. Students cannot receive credit for both MATH 3670 and MATH 3770 or ISYE 3770 or CEE 3770.

MATH 3801. Special Topics. 1 Credit Hour.

Courses on special topics of current interest in mathematics.

MATH 3802. Special Topics. 2 Credit Hours.

Courses on special topics of current interest in mathematics.

MATH 3803. Special Topics. 3 Credit Hours.

Courses on special topics of current interest in mathematics.

MATH 3804. Special Topics. 4 Credit Hours.

Courses on special topics of current interest in mathematics.

MATH 3805. Special Topics. 5 Credit Hours.

Courses on special topics of current interest in mathematics.

MATH 3XXX. Mathematics Elective. 1-21 Credit Hours.

MATH 4012. Algebraic Structures in Coding Theory. 3 Credit Hours.

Introduction to linear error-correcting codes with an emphasis on the algebraic tools required, including matrices, vector spaces, groups, polynomial rings, and finite fields.

MATH 4022. Introduction to Graph Theory. 3 Credit Hours.

The fundamentals of graph theory: trees, connectivity, Euler tours, Hamilton cycles, matchings, colorings, and Ramsey theory.

MATH 4032. Combinatorial Analysis. 3 Credit Hours.

Combinatorial problem-solving techniques including the use of generating functions, recurrence relations, Polya theory, combinatorial designs, Ramsey theory, matroids, and asymptotic analysis.

MATH 4080. Senior Project I. 2 Credit Hours.

The first of a two-course sequence of faculty-directed independent research culminating in the writing of a senior thesis and its presentation.

MATH 4090. Senior Project II. 2 Credit Hours.

The second course of a two-course sequence of faculty-directed independent research culminating in the writing of a senior thesis and its presentation.

MATH 4107. Introduction to Abstract Algebra I. 3 Credit Hours.

This course develops the theme of "arithmetic congruence and abstract algebraic structures". Strong emphasis on theory and proofs.

MATH 4108. Introduction to Abstract Algebra II. 3 Credit Hours.

Continuation of Abstract Algebra I, with emphasis on Galois theory, modules, polynomial fields, and the theory of linear associative algebra.

MATH 4150. Introduction to Number Theory. 3 Credit Hours.

Primes and unique factorization, congruences, Chinese remainder theorem, Diophantine equations, Diophantine approximations, quadratic reciprocity. Applications such as fast multiplication, factorization, and encryption.

MATH 4221. Probability with Applications I. 3 Credit Hours.

Simple random walk and the theory of discrete time Markov chains.

MATH 4222. Probability with Applications II. 3 Credit Hours.

Renewal theory, Poisson processes and continuous time Markov processes, including an introduction to Brownian motion and martingales.

MATH 4255. Monte Carlo Methods. 3 Credit Hours.

Probability distributions, limit laws, and applications explored through the computer.

MATH 4261. Mathematical Statistics I. 3 Credit Hours.

Sampling distributions: normal, t, chi-square, and F distributions. Moment-generating function methods, Bayesian estimation, and introduction to hypothesis testing.

MATH 4262. Mathematical Statistics II. 3 Credit Hours.

Hypothesis testing, likelihood ratio tests, nonparametric tests, bivariate and multivariate normal distributions.

MATH 4280. Introduction to Information Theory. 3 Credit Hours.

The measurement and quantification of information. These ideas are applied to the probabilistic analysis of the transmission of information over a channel along which random distortion of the message occurs.

MATH 4305. Topics in Linear Algebra. 3 Credit Hours.

Finite dimensional vector spaces, inner product spaces, least squares, linear transformations, the spectral theorem for normal transformations. Applications to convex sets, positive matrices, difference equations.

MATH 4317. Analysis I. 3 Credit Hours.

Real numbers, topology of Euclidean spaces, Cauchy sequences, completeness, continuity and compactness, uniform continuity, series of functions, Fourier series.

MATH 4318. Analysis II. 3 Credit Hours.

Differentiation of functions of one real variable, Riemann-Stieltjes integral, the derivative in Rn, and integration in Rn.

MATH 4320. Complex Analysis. 3 Credit Hours.

Topics from complex function theory, including contour integration and conformal mapping.

MATH 4347. Partial Differential Equations I. 3 Credit Hours.

Method of characteristics for first- and second-order partial differential equations, conservation laws and shocks, classification of second-order systems and applications.

MATH 4348. Partial Differential Equations II. 3 Credit Hours.

Green's functions and fundamental solutions. Potential, diffusion, and wave equations.

MATH 4431. Introductory Topology. 3 Credit Hours.

Point set topology, topological spaces and metric spaces, continuity and compactness, homotopy, and covering spaces.

MATH 4432. Introduction to Algebraic Topology. 3 Credit Hours.

Introduction to algebraic methods in topology. Includes homotopy, the fundamental group, covering spaces, simplicial complexes. Applications to fixed point theory and group theory.

MATH 4441. Differential Geometry. 3 Credit Hours.

The theory of curves, surfaces, and more generally, manifolds. Curvature, parallel transport, covariant differentiation, Gauss-Bonnet theorem.

MATH 4541. Dynamics and Bifurcations I. 3 Credit Hours.

A broad introduction to the local and global behavior of nonlinear dynamical systems arising from maps and ordinary differential equations.

MATH 4542. Dynamics and Bifurcations II. 3 Credit Hours.

A continuation of Dynamics and Bifurcations I.

MATH 4580. Linear Programming. 3 Credit Hours.

A study of linear programming problems, including the simplex method, duality, and sensitivity analysis with applications to matrix games, integer programming, and networks.

MATH 4581. Classical Mathematical Methods in Engineering. 3 Credit Hours.

The Laplace transform and applications, Fourier series, boundary value problems for partial differential equations.

MATH 4640. Numerical Analysis I. 3 Credit Hours.

Introduction to numerical algorithms for some basic problems in computational mathematics. Discussion of both implementation issues and error analysis.

MATH 4641. Numerical Analysis II. 3 Credit Hours.

Introduction to the numerical solution of initial and boundary value problems in differential equations.

MATH 4695. Undergraduate Internship. 1-21 Credit Hours.

Undergraduate internship for academic credit.

MATH 4698. Undergraduate Research Assistantship. 1-12 Credit Hours.

Independent research conducted under the guidance of a faculty member.

MATH 4699. Undergraduate Research. 1-12 Credit Hours.

Independent research conducted under the guidance of a faculty member.

MATH 4755. Mathematical Biology. 3 Credit Hours.

Problems from the life sciences and the mathematical methods for solving them are presented. The underlying biological and mathematical principles and the interrelationships are emphasized. Crosslisted with BIOL 4755.

MATH 4777. Vector and Parallel Scientific Computation. 3 Credit Hours.

Scientific computational algorithms on vector and parallel computers. Speed-up and algorithm complexity, interprocess communication, synchronization, modern algorithms for linear systems, programming techniques, code optimization. Crosslisted with CS 4777.

MATH 4782. Quantum Information and Quantum Computing. 3 Credit Hours.

Introduction to quantum computing and quantum information theory, formalism of quantum mechanics, quantum gates, algorithms, measurements, coding, and information. Physical realizations and experiments. Crosslisted with PHYS 4782.

MATH 4801. Special Topics. 1 Credit Hour.

Courses on special topics of current interest in mathematics.

MATH 4802. Special Topics. 2 Credit Hours.

Courses on special topics of current interest in mathematics.

MATH 4803. Special Topics. 3 Credit Hours.

Courses on special topics of current interest in mathematics.

MATH 4804. Special Topics. 4 Credit Hours.

Courses on special topics of current interest in mathematics.

MATH 4805. Special Topics. 5 Credit Hours.

Courses on special topics of current interest in mathematics.

MATH 4873. Special Topics. 3 Credit Hours.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 4999. Reading or Research. 1-21 Credit Hours.

Reading or research in topics of current interest.

MATH 4XXX. Mathematics Elective. 1-21 Credit Hours.

MATH 6001. Introduction to Graduate Studies in Mathematics. 2 Credit Hours.

This course covers practical information helping students start their careers as professional mathematicians. It also satisfies the Georgia Tech RCR requirements for "in-person" training.

MATH 6014. Graph Theory and Combinatorial Structures. 3 Credit Hours.

Fundamentals, connectivity, matchings, colorings, extremal problems, Ramsey theory, planar graphs, perfect graphs. Applications to operations research and the design of efficient algorithms.

MATH 6021. Topology of Euclidean Spaces. 3 Credit Hours.

Metric spaces, normed linear spaces, convexity, and separation; polyhedra and simplicial complexes; surfaces; Brouwer fixed point theorem.

MATH 6112. Advanced Linear Algebra. 3 Credit Hours.

An advanced course in Linear Algebra and applications.

MATH 6121. Modern Abstract Algebra I. 3 Credit Hours.

Graduate-level linear and abstract algebra including groups, finite fields, classical matrix groups and bilinear forms, multilinear algebra, and matroids. First of two courses.

MATH 6122. Modern Abstract Algebra II. 3 Credit Hours.

Graduate-level linear and abstract algebra including rings, fields, modules, some algebraic number theory and Galois theory. Second of two courses.

MATH 6221. Advanced Classical Probability Theory. 3 Credit Hours.

Classical introduction to probability theory including expectation, notions of convergence, laws of large numbers, independence, large deviations, conditional expectation, martingales, and Markov chains.

MATH 6235. Stochastic Processes in Finance II. 3 Credit Hours.

Advanced mathematical modeling of financial markets, derivative securities pricing, and portfolio optimization. Concepts from advanced probability and mathematics are introduced as needed.

MATH 6241. Probability I. 3 Credit Hours.

Develops the probability basis requisite in modern statistical theories and stochastic processes. Topics of this course include measure and integration foundations of probability, distribution functions, convergence concepts, laws of large numbers, and central limit theory. First of two courses.

MATH 6242. Probability II. 3 Credit Hours.

Develops the probability basis requisite in modern statistical theories and stochastic processes. Topics of this course include results for sums of independent random variables, Markov processes, martingales, Poisson processes, Brownian motion, conditional probability and conditional expectation, and topics from ergodic theory. Second of two courses.

MATH 6262. Advanced Statistical Inference I. 3 Credit Hours.

Basic theories of statistical estimation, including optimal estimation in finite samples and asymptotically optimal estimation. A careful mathematical treatment of the primary techniques of estimation utilized by statisticians.

MATH 6263. Testing Statistical Hypotheses. 3 Credit Hours.

Basic theories of testing statistical hypotheses, including a thorough treatment of testing in exponential class families. A careful mathematical treatment of the primary techniques of hypothesis testing utilized by statisticians.

MATH 6266. Linear Statistical Models. 3 Credit Hours.

Basic unifying theory underlying techniques of regression, analysis of variance and covariance, from a geometric point of view. Modern computational capabilities are exploited fully. Students apply the theory to real data through canned and coded programs.

MATH 6267. Multivariate Statistical Analysis. 3 Credit Hours.

Multivariate normal distribution theory, correlation and dependence analysis, regression and prediction, dimension-reduction methods, sampling distributions and related inference problems, selected applications in classification theory, multivariate process control, and pattern recognition.

MATH 6307. Ordinary Differential Equations I. 3 Credit Hours.

This sequence develops the qualitative theory for systems of ordinary differential equations. Topics include stability, Lyapunov functions, Floquet theory, attractors, invariant manifolds, bifurcation theory, normal forms. First of two courses.

MATH 6308. Ordinary Differential Equations II. 3 Credit Hours.

This sequence develops the qualitative theory for systems of differential equations. Topics include stability, Lyapunov functions, Floquet theory, attractors, invariant manifolds, bifurcation theory, and normal forms. Second of two courses.

MATH 6321. Functions of a Complex Variable I. 3 Credit Hours.

Complex integration, including Goursat's theorem; classification of singularities; the argument principle; the maximum principle; the Riemann mapping theorem; analytic continuation and Riemann surfaces; the range of an analytic function, including Picard's theorem.

MATH 6337. Real Analysis I. 3 Credit Hours.

Measure and integration theory. Topics include measures, measurable functions, integration and differentiation of measures.

MATH 6338. Real Analysis II. 3 Credit Hours.

Topics include Lp spaces, Banach and Hilbert spaces, basic functional analysis.

MATH 6341. Partial Differential Equations I. 3 Credit Hours.

Introduction to the mathematical theory of partial differential equations covering the basic linear models of science and exact solution techniques.

MATH 6342. Partial Differential Equations II. 3 Credit Hours.

This course covers the general mathematical theory of linear stationary and evolution problems plus selected topics chosen from the instructor's interests.

MATH 6421. Algebraic Geometry I. 3 Credit Hours.

The study of zero sets of polynomials: algebraic varieties, regular and rational mappings, the Zariski topology.

MATH 6422. Algebraic Geometry II. 3 Credit Hours.

A continuation of Algebraic Geometry I.

MATH 6441. Algebraic Topology I. 3 Credit Hours.

Simplicial homology. Chain complexes and acyclic carriers. Simplicial approximation. The exact homology sequence. Maps of spheres. Mayer-Vietoris sequence.

MATH 6442. Algebraic Topology II. 3 Credit Hours.

Continuation of MATH 6441. Singular homology. Local homology and manifolds. CW complexes. Cohomology. Duality in manifolds.

MATH 6451. General Topology. 3 Credit Hours.

Introduction to topological and metric spaces. Continuity, compactness, convergence, completion. Product and quotient spaces. Elementary homotopy.

MATH 6452. Differential Topology. 3 Credit Hours.

Manifolds. Differentiable structures. Tangent bundles. Embeddings and immersions. Maps on manifolds. Transversality. Morse-Sard Theorem. Vector bundles.

MATH 6453. Geometric Topology. 3 Credit Hours.

Characteristic classes, Morse theory, three-manifolds, four-manifolds, symplectic and contact manifolds, knot theory.

MATH 6455. Differential Geometry I. 3 Credit Hours.

Core topics in differential geometry, including Lie groups, curvature, and relations with topology.

MATH 6456. Differential Geometry II. 3 Credit Hours.

Introduces students to topics of current interest in geometry.

MATH 6514. Industrial Mathematics I. 3 Credit Hours.

Applied mathematics techniques to solve real-world problems. Topics include mathematical modeling, asymptotic analysis, differential equations and scientific computation. Prepares the student for MATH 6515.

MATH 6580. Introduction to Hilbert Spaces. 3 Credit Hours.

Geometry, convergence, and structure of linear operators in infinite dimensional spaces. Applications to science and engineering, including integral equations and ordinary and partial differential equations.

MATH 6583. Integral Equations and Transforms. 3 Credit Hours.

Volterra and Fredholm linear integral equations; relation to differential equations; solution methods; Fourier, Laplace, and Mellin transforms; applications to boundary value problems and integral equations.

MATH 6584. Special Functions of Higher Mathematics. 3 Credit Hours.

Gamma function; exponential function; orthogonal polynomials; Bessel, Legendre, and hypergeometric functions; application to singular ordinary differential equations and separation of variables for partial differential equations.

MATH 6635. Numerical Methods in Finance. 3 Credit Hours.

Basic numerical and simulation techniques used in the pricing of derivative securities and in related problems in finance. Some programming experience required.

MATH 6640. Introduction to Numerical Methods for Partial Differential Equations. 3 Credit Hours.

Introduction to the implementation and analysis of numerical algorithms for the numerical solution of the classic partial differential equations of science and engineering. Must have knowledge of a computer programming language, familiarity with partial differential equations and elements of scientific computing.

MATH 6641. Advanced Numerical Methods for Partial Differential Equations. 3 Credit Hours.

Analysis and implementation of numerical methods for nonlinear partial differential equations including elliptic, hyperbolic, and/or parabolic problems. Must have knowledge of classic linear partial differential equations and exposure to numerical methods for partial differential equations at the level of MATH 6640 or numerical linear algebra at the level of MATH 6643.

MATH 6643. Numerical Linear Algebra. 3 Credit Hours.

Introduction to the numerical solution of the classic problems of linear algebra including linear systems, least squares, singular value decomposition, and eigenvalue problems. Crosslisted with CSE 6643.

MATH 6644. Iterative Methods for Systems of Equations. 3 Credit Hours.

Iterative methods for linear and nonlinear systems of equations including Jacobi, G-S, SOR, CG, multigrid, Newton, quasi-Newton, updating, and gradient-based methods. Crosslisted with CSE 6644.

MATH 6645. Numerical Approximation Theory. 3 Credit Hours.

Theoretical and computational aspects of polynomial, rational, trigonometric, spline, and wavelet approximation.

MATH 6646. Numerical Methods for Ordinary Differential Equations. 3 Credit Hours.

Analysis and implementation of numerical methods for initial and two-point boundary value problems for ordinary differential equations.

MATH 6647. Numerical Methods for Dynamical Systems. 3 Credit Hours.

Approximation of the dynamical structure of a differential equation and preservation of dynamical structure under discretization. Must be familiar with dynamical systems and numerical methods for initial and boundary value problems in ordinary differential equations.

MATH 6701. Math Methods of Applied Sciences I. 3 Credit Hours.

Review of linear algebra and ordinary differential equations, brief introduction to functions of a complex variable.

MATH 6702. Math Methods of Applied Sciences II. 3 Credit Hours.

Review of vector calculus and its applications to partial differential equations.

MATH 6705. Modeling and Dynamics. 3 Credit Hours.

Mathematical methods for solving problems in the life sciences. A models-based course on basic facts from the theory of ordinary differential equations and numerical methods for their solution. Introduction to control theory, diffusion theory, maximization, minimization, and curve fitting. Math majors may not use this course toward any degree in the School of Mathematics.

MATH 6710. Numerical Methods in Computational Science and Engineering I. 3 Credit Hours.

Introduction to numerical algorithms widely used in computational science and engineering. Numerical linear algebra, linear programming, and applications. Crosslisted with CSE 6710.

MATH 6711. Numerical Methods in Computational Science and Engineering II. 3 Credit Hours.

Efficient numerical techniques for solving partial differential equations and large-scale systems of equations arising from discretization of partial differential equations or variational problems in applications in science and engineering. Crosslisted with CSE 6711.

MATH 6759. Stochastic Processes in Finance I. 3 Credit Hours.

Mathematical modeling of financial markets, derivative securities pricing, and portfolio optimization. Concepts from probability and mathematics are introduced as needed. Crosslisted with ISYE 6759.

MATH 6761. Stochastic Processes I. 3 Credit Hours.

Discrete time Markov chains, Poisson processes, and renewal processes. Transient and limiting behavior. Average cost and utility measures of systems. Algorithms for computing performance measures. Modeling of inventories, and flows in manufacturing and computer networks. Crosslisted with ISYE 6761.

MATH 6762. Stochastic Processes II. 3 Credit Hours.

Continuous time Markov chains. Uniformization, transient and limiting behavior. Brownian motion and martingales. Optional sampling and convergence. Modeling of inventories, finance, flows in manufacturing and computer networks. Crosslisted with ISYE 6762.

MATH 6767. Design and Implementation of Systems to Support Computational Finance. 3 Credit Hours.

Introduction to large-scale system design to support computational finance for options, stocks, or other financial instruments. Some programming experience, and previous exposure to stocks, bonds, and options required. Crosslisted with ISYE 6767.

MATH 6769. Fixed Income Securities. 3 Credit Hours.

Description, institutional features, and mathematical modeling of fixed income securities. Use of both deterministic and stochastic models. Crosslisted with ISYE 6769.

MATH 6783. Statistical Techniques of Financial Data Analysis. 3 Credit Hours.

Fundamentals of statistical inference for models used in the modern analysis of financial data. Crosslisted with ISYE 6783.

MATH 6785. The Practice of Quantitative and Computational Finance. 3 Credit Hours.

Case studies, visiting lecturers from financial institutions, student group projects of an advanced nature, and student reports, all centered around quantitative and computational finance. Crosslisted with ISYE and MGT 6785.

MATH 6793. Advanced Topics in Quantitative and Computational Finance. 3 Credit Hours.

Advanced foundational material and analysis techniques in quantitative and computational finance. Crosslisted with ISYE 6793.

MATH 6XXX. Mathematics Elective. 1-21 Credit Hours.

MATH 7000. Master's Thesis. 1-21 Credit Hours.

MATH 7012. Enumerative Combinatorics. 3 Credit Hours.

Fundamental methods of enumeration and asymptotic analysis, including the use of inclusion/exclusion, generating functions, and recurrence relations. Applications to strings over a finite alphabet and graphs.

MATH 7014. Advanced Graph Theory. 3 Credit Hours.

Advanced topics in graph theory. Selection of topics varies every year.

MATH 7016. Combinatorics. 3 Credit Hours.

Fundamental combinatorial structures including hypergraphs, transversal sets, colorings, Sperner families, intersecting families, packings and coverings, perfect graphs, and Ramsey theory. Algebraic and topological methods, applications.

MATH 7018. Probabilistic Methods in Combinatorics. 3 Credit Hours.

Applications of probabilistic techniques in discrete mathematics, including classical ideas using expectation and variance as well as modern tools, such as martingale and correlation inequalities.

MATH 7244. Stochastic Processes and Stochastic Calculus I. 3 Credit Hours.

An introduction to the Ito stochastic calculus and stochastic differential equations through a development of continuous-time martingales and Markov processes. First of two courses.

MATH 7245. Stochastic Processes and Stochastic Calculus II. 3 Credit Hours.

An introduction to the Ito stochastic calculus and stochastic differential equations through a development of continuous-time martingales and Markov processes. Continuation of MATH 7244.

MATH 7251. High-Dimensional Probability. 3 Credit Hours.

The goal of this PhD level graduate course is to provide a rigorous introduction to the methods of high-dimensional probability.

MATH 7252. High-Dimensional Statistics. 3 Credit Hours.

The goal of this PhD-level graduate course is to provide a rigorous introduction to the methods of high-dimensional statistics.

MATH 7334. Operator Theory. 3 Credit Hours.

Theory of linear operators on Hilbert space. Spectral theory of bounded and unbounded operators. Applications.

MATH 7337. Harmonic Analysis. 3 Credit Hours.

Fourier analysis in Euclidean space. Basic topics including L1 and L2 theory; advanced topics such as distribution theory, uncertainty principles, and Littlewood-Paley theory.

MATH 7338. Functional Analysis. 3 Credit Hours.

Topics include the Hahn-Banach theorems, the Baire Category theorem and its consequences, duality in Banach spaces, locally convex spaces, and additional topics.

MATH 7510. Graph Algorithms. 3 Credit Hours.

Algorithms for graph problems such as maximum flow, covering, matching, coloring, planarity, minimum cuts, shortest paths, and connectivity. Crosslisted with ISYE 7510 and CS 7510.

MATH 7581. Calculus of Variations. 3 Credit Hours.

Minimization of functionals, Euler-Lagrange equations, sufficient conditions for a minimum; geodesic, isoperimetric, and time-of-transit problems; variational principles of mechanics; applications to control theory.

MATH 7586. Tensor Analysis. 3 Credit Hours.

Review of linear algebra, multilinear algebra, algebra of tensors, co- and contravariant tensors, tensors in Riemann spaces, geometrical interpretation of skew tensors.

MATH 7999. Preparation for Doctoral Comprehensive Examination. 1-21 Credit Hours.

MATH 8305. Aural-Oral English Skills for Math ESL International Teaching Assistants. 2 Credit Hours.

Enhancement of English listening/speaking skills for SOM international graduate students, post-docs, and new faculty who speak English as their second language (ESL) and who will be teaching undergraduate students.

MATH 8306. Academic Communication for Intermediate ESL Math International Teaching Assistants. 2 Credit Hours.

Continued enhancement of English listening/speaking skills for current and future SOM graduate international teaching assistants and international lead instructors who speak English as their second language (ESL).

MATH 8307. Academic Communication for Advanced ESL Math International Teaching Assistants. 1 Credit Hour.

Continued enhancement of English listening/speaking skills for current and future SOM graduate international teaching assistants and international lead instructors who speak English as their second language (ESL).

MATH 8801. Special Topics. 1 Credit Hour.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8802. Special Topics. 2 Credit Hours.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8803. Special Topics. 3 Credit Hours.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8804. Special Topics. 4 Credit Hours.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8805. Special Topics. 5 Credit Hours.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8811. Special Topics. 1 Credit Hour.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8812. Special Topics. 2 Credit Hours.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8813. Special Topics. 3 Credit Hours.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8814. Special Topics. 4 Credit Hours.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8815. Special Topics. 5 Credit Hours.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8821. Special Topics. 1 Credit Hour.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8822. Special Topics. 2 Credit Hours.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8823. Special Topics. 3 Credit Hours.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8824. Special Topics. 4 Credit Hours.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8825. Special Topics. 5 Credit Hours.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8831. Special Topics. 1 Credit Hour.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8832. Special Topics. 2 Credit Hours.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8833. Special Topics. 3 Credit Hours.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8834. Special Topics. 4 Credit Hours.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8835. Special Topics. 5 Credit Hours.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8841. Special Topics. 1 Credit Hour.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8842. Special Topics. 2 Credit Hours.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8843. Special Topics. 3 Credit Hours.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8844. Special Topics. 4 Credit Hours.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8845. Special Topics. 5 Credit Hours.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8851. Special Topics. 1 Credit Hour.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8852. Special Topics. 2 Credit Hours.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8853. Special Topics. 3 Credit Hours.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8854. Special Topics. 4 Credit Hours.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8855. Special Topics. 5 Credit Hours.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8863. Advanced Topics in Graph Theory. 3 Credit Hours.

Selection of topics varies with each offering.

MATH 8873. Special Topics. 3 Credit Hours.

This course enables the School of Mathematics to comply with requests for courses in selected topics.

MATH 8900. Special Problems. 1-21 Credit Hours.

MATH 8901. Special Problems. 1-21 Credit Hours.

MATH 8902. Special Problems. 1-21 Credit Hours.

MATH 8903. Special Problems. 1-21 Credit Hours.

MATH 8997. Teaching Assistantship. 1-9 Credit Hours.

For students holding graduate teaching assistantships.

MATH 8998. Research Assistantship. 1-9 Credit Hours.

For students holding graduate research assistantships.

MATH 9000. Doctoral Thesis. 1-21 Credit Hours.


Curve Fitting via Optimization

This example fits an exponential decay model, y(t) = A*exp(-λ*t), to measured data by choosing A and λ to minimize the sum of squared errors Σ_i (y(t_i) - y_i)^2, where the times are t_i and the responses are y_i, i = 1, …, n. The sum of squared errors is the objective function.

Create Sample Data

Usually, you have data from measurements. For this example, create artificial data based on a model with A = 40 and λ = 0.5, with normally distributed pseudorandom errors.
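
For instance, a minimal sketch of such data (the time grid, unit noise scale, and rng seed are illustrative assumptions, not prescribed by the example):

    rng default                                       % reproducible pseudorandom errors
    tdata = 0:0.1:10;                                 % measurement times
    ydata = 40*exp(-0.5*tdata) + randn(size(tdata));  % model response plus noise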

Write Objective Function

Write a function that accepts parameters A and lambda and data tdata and ydata, and returns the sum of squared errors for the model y(t). Put all the variables to optimize (A and lambda) in a single vector variable (x). For more information, see Minimizing Functions of Several Variables.
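
A sketch of such a function, assuming the exponential model y(t) = A*exp(-lambda*t) described above:

    function sse = sseval(x,tdata,ydata)
    % Sum of squared errors between the data and the exponential model
    A = x(1);
    lambda = x(2);
    sse = sum((ydata - A*exp(-lambda*tdata)).^2);
    end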

Save this objective function as a file named sseval.m on your MATLAB® path.

The fminsearch solver applies to functions of one variable, x. However, the sseval function has three variables. The extra variables tdata and ydata are not variables to optimize, but are data for the optimization. Define the objective function for fminsearch as a function of x alone:
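
One way to do this is with an anonymous function that captures the data (a sketch):

    fun = @(x) sseval(x,tdata,ydata);   % objective as a function of x alone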

For information about including extra parameters such as tdata and ydata, see Parameterizing Functions.

Find the Best Fitting Parameters

Start from a random positive set of parameters x0, and have fminsearch find the parameters that minimize the objective function.
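
A sketch of this step (the particular random start is an assumption):

    x0 = rand(2,1);                 % random positive initial guess for [A; lambda]
    bestx = fminsearch(fun,x0)      % minimize the sum of squared errors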

The result bestx is reasonably near the parameters that generated the data, A = 40 and lambda = 0.5.

Check the Fit Quality

To check the quality of the fit, plot the data and the resulting fitted response curve. Create the response curve from the returned parameters of your model.
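
A sketch of such a check, using the fitted parameters in bestx:

    A = bestx(1);
    lambda = bestx(2);
    yfit = A*exp(-lambda*tdata);    % fitted response curve
    plot(tdata,ydata,'*')           % measured data
    hold on
    plot(tdata,yfit,'r')            % fitted curve
    xlabel('t')
    ylabel('response')
    legend('Data','Fitted curve')
    hold off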


3 Answers

This has nothing to do with convexity, nor with the method used to determine minima; it is purely a matter of logic and order. It is with minimax problems that the real difficulties arise.

Note that your function $g$ depends only on the variable $y$. I'd argue as follows, neglecting questions of existence:

Given $f:\ X\times Y\to{\mathbb R},\qquad (x,y)\mapsto f(x,y),$ put $g(y):=\min_{x\in X} f(x,y)\quad(y\in Y),\qquad S(y):=\{x\in X\mid f(x,y)=g(y)\}.$ The set $S(y)$ replaces your ${\rm argmin}_x\, f(x,y)$, and hopefully consists of only one point $x^*(y)\in X$.

Now we know on each "horizontal" $y={\rm const.}$ the minimal value taken by $f$ and the set of points where this minimal value is taken. We proceed to the second step: Put $\mu:=\min_{y\in Y} g(y),\qquad T:=\{y\in Y\mid g(y)=\mu\}.$ Then $\mu$ is the overall minimal value of $f$, and the set of $(x,y)$ where this value is taken is given by (check it!) $f^{-1}(\mu)=\bigl\{(x,y)\bigm| y\in T,\ x\in S(y)\bigr\}.$ When $S(y)=\{x^*(y)\}$ we have $g(y)=f\bigl(x^*(y),y\bigr)$, and when in the second step we find $T=\{y^*\}$, then $\mu=f\bigl(x^*(y^*),y^*\bigr)$, and we are allowed to write ${\rm argmin}\, f=\bigl(x^*(y^*),y^*\bigr).$

Edit: Side note concerning convexity with respect to several variables

In the situation considered by the OP the function $f$ is only separately convex in each of the two variables $x$ and $y$. In this case neither the function $f$ nor $g$ can be expected to be convex. Consider the example $f(x,y):=x^2-4xy+y^2.$ Then $f_{xx}=f_{yy}\equiv2$, so $f$ is convex in each of the two variables separately, but $f(x,y)=(x-2y)^2-3y^2\tag{1}$ shows that $f$ is not convex as a function of two variables. From $(1)$ we deduce $g(y)=-3y^2$, which certainly is not convex. (In this case $\inf_{y\in{\mathbb R}} g(y)=-\infty$, but I conjecture that one could cook up an example of this kind with a finite minimum $\mu$.)
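
To make the two-step procedure concrete for this example, here is a minimal numeric sketch in MATLAB (the grid of $y$ values and the solver call are illustrative assumptions): it computes the inner minimum $g(y)=\min_x f(x,y)$ pointwise and confirms $g(y)=-3y^2$.

    f = @(x,y) x.^2 - 4*x.*y + y.^2;    % separately convex, not jointly convex
    ys = -2:0.5:2;                      % small illustrative grid of y values
    g = zeros(size(ys));
    for k = 1:numel(ys)
        xstar = fminsearch(@(x) f(x,ys(k)), 0);   % inner step: x*(y)
        g(k) = f(xstar,ys(k));                    % g(y) = min_x f(x,y)
    end
    disp([g; -3*ys.^2])                 % the two rows agree: g(y) = -3y^2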


Properties

IndexNames — Index names
'' (default) | cell array of strings | cell array of character vectors

Index names, specified as a cell array of strings or character vectors. For information on using index names, see Named Index for Optimization Variables.

Data Types: cell
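
As an illustration (the variable name and index labels here are hypothetical; optimvar accepting a cell array of names to create a named index is assumed from the Named Index documentation):

    x = optimvar('x',{'red','green','blue'});   % 3-element variable with named index
    x.IndexNames                                % inspect the stored index names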

Variables — Optimization variables in object
structure of OptimizationVariable objects

This property is read-only.

Optimization variables in the object, specified as a structure of OptimizationVariable objects.
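
A minimal sketch of how this read-only structure is populated (the problem and variable names are illustrative):

    y = optimvar('y',2);                          % an optimization variable
    prob = optimproblem('Objective',sum(y.^2));   % variable enters the objective
    prob.Variables                                % structure with field y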