## 4: The Derivative - Mathematics

In the first section of the Limits chapter we saw that the computation of the slope of a tangent line, the instantaneous rate of change of a function, and the instantaneous velocity of an object at $x = a$ all required us to compute the following limit,

$$\lim_{x \to a} \frac{f(x) - f(a)}{x - a}$$

We also saw that with a small change of notation this limit could also be written as,

$$\lim_{h \to 0} \frac{f(a + h) - f(a)}{h}$$

This is such an important limit and it arises in so many places that we give it a name. We call it a **derivative**. Here is the official definition of the derivative.

#### Definition of the Derivative

The **derivative** of $f(x)$ with respect to $x$ is the function $f'(x)$ defined as

$$f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}$$

Note that we replaced all the $a$'s in the limit above with $x$'s to acknowledge the fact that the derivative is really a function as well. We often "read" $f'(x)$ as "$f$ prime of $x$".

Let’s compute a couple of derivatives using the definition.

So, all we really need to do is to plug this function into the definition of the derivative.

First plug the function into the definition of the derivative.

Be careful and make sure that you properly deal with parenthesis when doing the subtracting.

Now, we know from the previous chapter that we can’t just plug in (h = 0) since this will give us a division by zero error. So, we are going to have to do some work. In this case that means multiplying everything out and distributing the minus sign through on the second term. Doing this gives,

Notice that every term in the numerator that didn’t have an *h* in it canceled out and we can now factor an *h* out of the numerator which will cancel against the *h* in the denominator. After that we can compute the limit.
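The specific function for this first example is not preserved in this extract. As an illustration of exactly the process just described, with the hypothetical choice $f(x) = x^2 + 3x$, the computation runs:

$$\begin{aligned}
f'(x) &= \lim_{h \to 0} \frac{(x+h)^2 + 3(x+h) - \left(x^2 + 3x\right)}{h} \\
      &= \lim_{h \to 0} \frac{2xh + h^2 + 3h}{h} \\
      &= \lim_{h \to 0} \left(2x + h + 3\right) = 2x + 3
\end{aligned}$$

Every term without an $h$ cancels in the numerator, an $h$ factors out and cancels against the denominator, and only then can the limit be evaluated.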

This one is going to be a little messier as far as the algebra goes. However, outside of that it will work in exactly the same manner as the previous examples. First, we plug the function into the definition of the derivative,

Note that we changed all the letters in the definition to match up with the given function. Also note that we wrote the fraction in a much more compact manner to help us with the work.

As with the first problem we can’t just plug in (h = 0). So, we will need to simplify things a little. In this case we will need to combine the two terms in the numerator into a single rational expression as follows.

Before finishing this let’s note a couple of things. First, we didn’t multiply out the denominator. Multiplying out the denominator will just overly complicate things so let’s keep it simple. Next, as with the first example, after the simplification we only have terms with *h*’s in them left in the numerator and so we can now cancel an *h* out.

So, upon canceling the *h* we can evaluate the limit and get the derivative.

First plug into the definition of the derivative as we’ve done with the previous two examples.

In this problem we're going to have to rationalize the numerator. You do remember rationalization from an Algebra class, right? In an Algebra class you probably only rationalized the denominator, but you can also rationalize numerators. Remember that in rationalizing the numerator (in this case) we multiply both the numerator and denominator by the numerator except we change the sign between the two terms. Here's the rationalizing work for this problem,
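The function for this example is not preserved in this extract. To illustrate the rationalizing technique, here is the same maneuver for the hypothetical choice $f(x) = \sqrt{x}$:

$$\begin{aligned}
f'(x) &= \lim_{h \to 0} \frac{\sqrt{x+h} - \sqrt{x}}{h} \cdot \frac{\sqrt{x+h} + \sqrt{x}}{\sqrt{x+h} + \sqrt{x}} \\
      &= \lim_{h \to 0} \frac{(x+h) - x}{h\left(\sqrt{x+h} + \sqrt{x}\right)}
       = \lim_{h \to 0} \frac{1}{\sqrt{x+h} + \sqrt{x}} = \frac{1}{2\sqrt{x}}
\end{aligned}$$

Multiplying by the "conjugate" of the numerator clears the square roots from the numerator, leaving an $h$ that cancels.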

Again, after the simplification we have only *h*’s left in the numerator. So, cancel the *h* and evaluate the limit.

And so we get a derivative of,

Let’s work one more example. This one will be a little different, but it’s got a point that needs to be made.

Since this problem is asking for the derivative at a specific point we’ll go ahead and use that in our work. It will make our life easier and that’s always a good thing.

So, plug into the definition and simplify.

We saw a situation like this back when we were looking at limits at infinity. As in that section we can't just cancel the $h$'s. We will have to look at the two one-sided limits and recall that

$$\frac{|h|}{h} = \begin{cases} 1 & \text{if } h > 0 \\ -1 & \text{if } h < 0 \end{cases}$$

The two one-sided limits are different and so

$$\lim_{h \to 0} \frac{|h|}{h}$$

doesn't exist. However, this is the limit that gives us the derivative that we're after.

If the limit doesn’t exist then the derivative doesn’t exist either.

In this example we have finally seen a function for which the derivative doesn’t exist at a point. This is a fact of life that we’ve got to be aware of. Derivatives will not always exist. Note as well that this doesn’t say anything about whether or not the derivative exists anywhere else. In fact, the derivative of the absolute value function exists at every point except the one we just looked at, (x = 0).

The preceding discussion leads to the following definition.

#### Definition

A function $f(x)$ is called **differentiable** at $x = a$ if $f'(a)$ exists and $f(x)$ is called differentiable on an interval if the derivative exists for each point in that interval.

The next theorem shows us a very nice relationship between functions that are continuous and those that are differentiable.

#### Theorem

If $f(x)$ is differentiable at $x = a$ then $f(x)$ is continuous at $x = a$.

Note that this theorem does not work in reverse. Consider $f(x) = |x|$ and take a look at,

$$\lim_{x \to 0} f(x) = \lim_{x \to 0} |x| = 0 = f(0)$$

So, $f(x) = |x|$ is continuous at $x = 0$ but we've just shown above in Example 4 that $f(x) = |x|$ is not differentiable at $x = 0$.

#### Alternate Notation

Next, we need to discuss some alternate notation for the derivative. The typical derivative notation is the “prime” notation. However, there is another notation that is used on occasion so let’s cover that.

Given a function $y = f(x)$ all of the following are equivalent and represent the derivative of $f(x)$ with respect to $x$.

$$f'(x) = y' = \frac{df}{dx} = \frac{dy}{dx} = \frac{d}{dx}\big(f(x)\big) = \frac{d}{dx}(y)$$

Because we also need to evaluate derivatives on occasion we also need a notation for evaluating derivatives when using the fractional notation. So, if we want to evaluate the derivative at $x = a$ all of the following are equivalent.

$$f'(a) = y'\big|_{x=a} = \frac{df}{dx}\bigg|_{x=a} = \frac{dy}{dx}\bigg|_{x=a}$$

Note as well that on occasion we will drop the $(x)$ part on the function to simplify the notation somewhat. In these cases writing $f'$ is equivalent to writing $f'(x)$.

As a final note in this section we’ll acknowledge that computing most derivatives directly from the definition is a fairly complex (and sometimes painful) process filled with opportunities to make mistakes. In a couple of sections we’ll start developing formulas and/or properties that will help us to take the derivative of many of the common functions so we won’t need to resort to the definition of the derivative too often.

This does not mean however that it isn’t important to know the definition of the derivative! It is an important definition that we should always know and keep in the back of our minds. It is just something that we’re not going to be working with all that much.

## First Derivative

Consider the function $ f(x) = 3x^4-4x^3-12x^2+3 $ on the interval $[-2,3]$. We cannot find regions of which $f$ is increasing or decreasing, relative maxima or minima, or the absolute maximum or minimum value of $f$ on $[-2,3]$ by inspection. Graphing by hand is tedious and imprecise. Even the use of a graphing program will only give us an approximation for the locations and values of maxima and minima. We can use the first derivative of $f$, however, to find all these things quickly and easily.

Let $f$ be defined on an interval $I$. Let $x_1 \in I$ and $x_2 \in I$.

Then $f$ is **increasing** on $I$ if $x_1 < x_2$ implies $f(x_1) < f(x_2)$.

The function $f$ is **decreasing** on $I$ if $x_1 < x_2$ implies $f(x_1) > f(x_2)$.

Let $f$ be continuous on an interval $I$ and differentiable on the interior of $I$.

- If $f'(x) > 0$ for all $x \in I$, then $f$ is *increasing* on $I$.
- If $f'(x) < 0$ for all $x \in I$, then $f$ is *decreasing* on $I$.
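This test is easy to apply numerically. A minimal Python sketch for the chapter's running example $f(x) = 3x^4 - 4x^3 - 12x^2 + 3$ on $[-2,3]$ samples the sign of $f'$ between consecutive critical points:

```python
# Check where f is increasing/decreasing on [-2, 3] by sampling the
# sign of f'(x) between the critical points.
# f(x) = 3x^4 - 4x^3 - 12x^2 + 3, so f'(x) = 12x^3 - 12x^2 - 24x.

def f_prime(x):
    return 12 * x**3 - 12 * x**2 - 24 * x

critical = [-1, 0, 2]      # where f'(x) = 0 on [-2, 3]
endpoints = [-2, 3]

# Test one sample point inside each interval between consecutive
# break points and record the sign of f' there.
breaks = sorted(endpoints + critical)          # [-2, -1, 0, 2, 3]
for left, right in zip(breaks, breaks[1:]):
    mid = (left + right) / 2
    trend = "increasing" if f_prime(mid) > 0 else "decreasing"
    print(f"f is {trend} on ({left}, {right})")
```

Because $f'$ is continuous and only vanishes at the critical points, one sample per interval determines the sign on the whole interval.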

###### Example

The function $f(x) = 3x^4-4x^3-12x^2+3$ has first derivative $f'(x) = 12x^3-12x^2-24x = 12x(x+1)(x-2)$. Checking the sign of $f'$ at a test point in each interval shows that $f$ is decreasing on $(-2,-1)$, increasing on $(-1,0)$, decreasing on $(0,2)$, and increasing on $(2,3)$.

A function $f$ has a **relative (or local) maximum** at $x_0$ if $f(x_0) \geq f(x)$ for all $x$ in some open interval containing $x_0$. The function has a **relative, or local minimum** at $x_0$ if $f(x_0) \leq f(x)$ for all $x$ in some open interval containing $x_0$.

Relative maxima and minima are called **relative extrema**.

Relative extrema of $f$ occur at **critical points** of $f$, values $x_0$ for which either $f'(x_0)= 0$ or $f'(x_0)$ is undefined.

**First Derivative Test**

Suppose $f$ is continuous at a critical point $x_0$.

- If $f'(x) > 0$ on an open interval extending left from $x_0$ and $f'(x) < 0$ on an open interval extending right from $x_0$, then $f$ has a relative maximum at $x_0$.
- If $f'(x) < 0$ on an open interval extending left from $x_0$ and $f'(x) > 0$ on an open interval extending right from $x_0$, then $f$ has a relative minimum at $x_0$.
- If $f'(x)$ has the same sign on both an open interval extending left from $x_0$ and an open interval extending right from $x_0$, then $f$ does not have a relative extremum at $x_0$.

In summary, relative extrema occur where $f'(x)$ changes sign.

###### Example

Our function $f(x) = 3x^4-4x^3-12x^2+3$ is differentiable everywhere on $[-2,3]$, with $f'(x) = 0$ for $x=-1,0,2$. These are the three critical points of $f$ on $[-2,3]$. By the First Derivative Test, $f$ has a relative maximum at $x=0$ and relative minima at $x=-1$ and $x=2$.

If $f(x_0) \geq f(x)$ for all $x$ in an interval $I$, then $f$ achieves its **absolute maximum** over $I$ at $x_0$.

If $f(x_0) \leq f(x)$ for all $x$ in an interval $I$, then $f$ achieves its **absolute minimum** over $I$ at $x_0$.

Absolute maxima and absolute minima are often referred to simply as maxima and minima and are collectively called **extreme values** of $f$.

- If $f$ has an extreme value on an *open* interval, then the extreme value occurs at a critical point of $f$.
- If $f$ has an extreme value on a *closed* interval, then the extreme value occurs either at a critical point or at an endpoint.

**Extreme Value Theorem**

If a function is continuous on a closed interval, then it achieves both an absolute maximum and an absolute minimum on the interval.

###### Example

Since $f(x) = 3x^4-4x^3-12x^2+3$ is continuous on $[-2,3]$, $f$ must have an absolute maximum and an absolute minimum on $[-2,3]$. We simply need to check the value of $f$ at the critical points $x=-1,0,2$ and at the endpoints $x=-2$ and $x=3$:

$$f(-2) = 35, \qquad f(-1) = -2, \qquad f(0) = 3, \qquad f(2) = -29, \qquad f(3) = 30$$

So the absolute maximum of $f$ on $[-2,3]$ is $35$, achieved at $x=-2$, and the absolute minimum is $-29$, achieved at $x=2$.

We have discovered a lot about the shape of $f(x) = 3x^4-4x^3-12x^2+3$ without ever graphing it! Now take a look at the graph and verify each of our conclusions.
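The candidate-point bookkeeping is also easy to verify numerically. A short Python sketch evaluating $f$ at the endpoints and critical points:

```python
# Evaluate f(x) = 3x^4 - 4x^3 - 12x^2 + 3 at the critical points
# and endpoints of [-2, 3] to locate the absolute extrema.

def f(x):
    return 3 * x**4 - 4 * x**3 - 12 * x**2 + 3

candidates = [-2, -1, 0, 2, 3]   # endpoints plus critical points
values = {x: f(x) for x in candidates}
print(values)                    # {-2: 35, -1: -2, 0: 3, 2: -29, 3: 30}

x_max = max(values, key=values.get)
x_min = min(values, key=values.get)
print(f"absolute max f({x_max}) = {values[x_max]}")   # f(-2) = 35
print(f"absolute min f({x_min}) = {values[x_min]}")   # f(2) = -29
```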

#### Key Concepts

Let $f$ be continuous on an interval $I$ and differentiable on the interior of $I$. If $f'(x) > 0$ for all $x \in I$, then $f$ is *increasing* on $I$. If $f'(x) < 0$ for all $x \in I$, then $f$ is *decreasing* on $I$.

By the First Derivative Test, relative extrema occur where $f'(x)$ changes sign.

If $f$ has an extreme value on a closed interval, then the extreme value occurs either at a critical point or at an endpoint.

## Derivative Test

Many applications of calculus require us to deduce facts about a function $f$ from information concerning its derivatives. Since $f'(x)$ represents the slope of the curve $y = f(x)$ at the point $(x, f(x))$, it tells us the direction in which the curve proceeds at each point.

### Increasing / Decreasing Test

Find where the function $f(x) = x^3 - 12x + 1$ is increasing and where it is decreasing.

**Step 1:** Find the derivative of $f$.

**Step 2:** Set $f'(x) = 0$ to get the critical numbers.

**Step 3:** Set up intervals whose endpoints are the critical numbers and determine the sign of $f'(x)$ for each of the intervals. Use the increasing/decreasing test to determine whether $f(x)$ is increasing or decreasing on each interval.
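The worked algebra for these steps is not preserved in this extract; reconstructed for $f(x) = x^3 - 12x + 1$, it runs:

$$\begin{aligned}
f'(x) &= 3x^2 - 12 = 3(x-2)(x+2) \\
f'(x) &= 0 \quad\Longrightarrow\quad x = -2,\ x = 2
\end{aligned}$$

Testing a point in each interval gives $f'(-3) = 15 > 0$, $f'(0) = -12 < 0$, and $f'(3) = 15 > 0$, so $f$ is increasing on $(-\infty,-2)$, decreasing on $(-2,2)$, and increasing on $(2,\infty)$.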

### The First Derivative Test

Suppose that $c$ is a critical number of a continuous function $f$.

1. If $f'$ changes from positive to negative at $c$, then $f$ has a local maximum at $c$.
2. If $f'$ changes from negative to positive at $c$, then $f$ has a local minimum at $c$.
3. If $f'$ does not change sign at $c$ ($f'$ is positive on both sides of $c$, or $f'$ is negative on both sides), then $f$ has no local maximum or minimum at $c$.

Find the local maximum and minimum values of the function $f(x) = x^4 - 2x^2 + 3$.

**Step 1:** Find the derivative of $f$.

**Step 2:** Set $f'(x) = 0$ to get the critical numbers.

**Step 3:** Set up intervals whose endpoints are the critical numbers and determine the sign of $f'(x)$ for each of the intervals.

**Step 4:** Use the first derivative test to find the local maximum and minimum values.

Since $f'(x)$ goes from negative to positive at $x = -1$, the First Derivative Test tells us that there is a local minimum at $x = -1$. $f(-1) = 2$ is the local minimum value.

Since $f'(x)$ goes from positive to negative at $x = 0$, the First Derivative Test tells us that there is a local maximum at $x = 0$. $f(0) = 3$ is the local maximum value.

Since $f'(x)$ goes from negative to positive at $x = 1$, the First Derivative Test tells us that there is a local minimum at $x = 1$. $f(1) = 2$ is the local minimum value.
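These sign changes can be checked mechanically. A small Python sketch for $f(x) = x^4 - 2x^2 + 3$, with $f'(x) = 4x^3 - 4x = 4x(x-1)(x+1)$:

```python
# Classify the critical numbers of f(x) = x^4 - 2x^2 + 3 by checking
# the sign of f'(x) = 4x^3 - 4x on each side of them.

def f(x):
    return x**4 - 2 * x**2 + 3

def f_prime(x):
    return 4 * x**3 - 4 * x

eps = 0.5  # test points half a unit to each side of every critical number
for c in (-1, 0, 1):
    left, right = f_prime(c - eps), f_prime(c + eps)
    if left < 0 < right:
        kind = "local minimum"
    elif left > 0 > right:
        kind = "local maximum"
    else:
        kind = "no extremum"
    print(f"x = {c}: {kind}, f({c}) = {f(c)}")
```

The half-unit offset works here because consecutive critical numbers are a full unit apart; in general the test point must stay inside the interval between critical numbers.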

### The Second Derivative Test

We can also use the Second Derivative Test to determine maximum or minimum values.

**The Second Derivative Test**

Suppose $f''$ is continuous near $c$.

1. If $f'(c) = 0$ and $f''(c) > 0$, then $f$ has a local minimum at $c$.
2. If $f'(c) = 0$ and $f''(c) < 0$, then $f$ has a local maximum at $c$.

Use the Second Derivative Test to find the local maximum and minimum values of the function $f(x) = x^4 - 2x^2 + 3$.

**Step 1:** Find the derivative of $f$.

**Step 2:** Set $f'(x) = 0$ to get the critical numbers.

**Step 3:** Find the second derivative.

**Step 4:** Evaluate $f''$ at the critical numbers.

$f''(-1) = 8 > 0$, so $f(-1) = 2$ is a local minimum value. $f''(0) = -4 < 0$, so $f(0) = 3$ is the local maximum value. $f''(1) = 8 > 0$, so $f(1) = 2$ is a local minimum value.
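The algebra behind these numbers, reconstructed from the function $f(x) = x^4 - 2x^2 + 3$:

$$\begin{aligned}
f'(x)  &= 4x^3 - 4x = 4x(x-1)(x+1) = 0 \quad\Longrightarrow\quad x = -1,\ 0,\ 1 \\
f''(x) &= 12x^2 - 4 \\
f''(-1) &= 8, \qquad f''(0) = -4, \qquad f''(1) = 8
\end{aligned}$$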

### Videos - Relative Maxima and Minima

Relative Maxima and Minima (Local Maxima and Minima)

Finding relative maxima and minima of a function can be done by looking at a graph of the function. A relative maximum is a point that is higher than the points directly beside it on both sides, and a relative minimum is a point that is lower than the points directly beside it on both sides. Relative maxima and minima are important points in curve sketching, and they can be found by either the first or the second derivative test.


### Second Derivative Test for Relative Maximum and Minimum

The second derivative test is useful for classifying a critical point at which the first derivative is zero. It relies on the sign of the second derivative at that point: if it is positive, the point is a relative minimum, and if it is negative, the point is a relative maximum. If the second derivative is zero there, the test is inconclusive.


## Exercises 2.4

**Ex 2.4.1** Find the derivative of $y=f(x)=\sqrt{169-x^2}$. (answer)

**Ex 2.4.2** Find the derivative of $y=f(t)=80-4.9t^2$. (answer)

**Ex 2.4.3** Find the derivative of $y=f(x)=x^2-(1/x)$. (answer)

**Ex 2.4.4** Find the derivative of $y=f(x)=ax^2+bx+c$ (where $a$, $b$, and $c$ are constants). (answer)

**Ex 2.4.5** Find the derivative of $y=f(x)=x^3$. (answer)

**Ex 2.4.6** Shown is the graph of a function $f(x)$. Sketch the graph of $f'(x)$ by estimating the derivative at a number of points in the interval: estimate the derivative at regular intervals from one end of the interval to the other, and also at "special" points, as when the derivative is zero. Make sure you indicate any places where the derivative does not exist.

**Ex 2.4.7** Shown is the graph of a function $f(x)$. Sketch the graph of $f'(x)$ by estimating the derivative at a number of points in the interval: estimate the derivative at regular intervals from one end of the interval to the other, and also at "special" points, as when the derivative is zero. Make sure you indicate any places where the derivative does not exist.

**Ex 2.4.8** Find the derivative of $y=f(x)=2/\sqrt{2x+1}$. (answer)

**Ex 2.4.9** Find the derivative of $y=g(t)=(2t-1)/(t+2)$. (answer)

**Ex 2.4.10** Find an equation for the tangent line to the graph of $f(x)=5-x-3x^2$ at the point $x=2$. (answer)

**Ex 2.4.11** Find a value for $a$ so that the graph of $f(x)=x^2+ax-3$ has a horizontal tangent line at $x=4$. (answer)


## 4: The Derivative - Mathematics

In this chapter we will start looking at the next major topic in a calculus class, derivatives. This chapter is devoted almost exclusively to finding derivatives. We will be looking at one application of them in this chapter. We will be leaving most of the applications of derivatives to the next chapter.

Here is a listing of the topics covered in this chapter.

The Definition of the Derivative – In this section we define the derivative, give various notations for the derivative and work a few problems illustrating how to use the definition of the derivative to actually compute the derivative of a function.

Interpretation of the Derivative – In this section we give several of the more important interpretations of the derivative. We discuss the rate of change of a function, the velocity of a moving object and the slope of the tangent line to a graph of a function.

Differentiation Formulas – In this section we give most of the general derivative formulas and properties used when taking the derivative of a function. Examples in this section concentrate mostly on polynomials, roots and more generally variables raised to powers.

Product and Quotient Rule – In this section we will give two of the more important formulas for differentiating functions. We will discuss the Product Rule and the Quotient Rule allowing us to differentiate functions that, up to this point, we were unable to differentiate.

Derivatives of Trig Functions – In this section we will discuss differentiating trig functions. Derivatives of all six trig functions are given and we show the derivation of the derivative of $\sin(x)$ and $\tan(x)$.

Derivatives of Exponential and Logarithm Functions – In this section we derive the formulas for the derivatives of the exponential and logarithm functions.

Derivatives of Inverse Trig Functions – In this section we give the derivatives of all six inverse trig functions. We show the derivation of the formulas for inverse sine, inverse cosine and inverse tangent.

Derivatives of Hyperbolic Functions – In this section we define the hyperbolic functions, give the relationships between them and some of the basic facts involving hyperbolic functions. We also give the derivatives of each of the six hyperbolic functions and show the derivation of the formula for hyperbolic sine.

Chain Rule – In this section we discuss one of the more useful and important differentiation formulas, The Chain Rule. With the chain rule in hand we will be able to differentiate a much wider variety of functions. As you will see throughout the rest of your Calculus courses a great many of derivatives you take will involve the chain rule!

Implicit Differentiation – In this section we will discuss implicit differentiation. Not every function can be explicitly written in terms of the independent variable, e.g. $y = f(x)$, and yet we will still need to know what $f'(x)$ is. Implicit differentiation will allow us to find the derivative in these cases. Knowing implicit differentiation will allow us to do one of the more important applications of derivatives, Related Rates (the next section).

Related Rates – In this section we will discuss the only application of derivatives in this chapter, Related Rates. In related rates problems we are given the rate of change of one quantity in a problem and asked to determine the rate of change of one (or more) other quantities in the problem. This is often one of the more difficult sections for students. We work quite a few problems in this section so hopefully by the end of it you will have a decent understanding of how these problems work.

Higher Order Derivatives – In this section we define the concept of higher order derivatives and give a quick application of the second order derivative and show how implicit differentiation works for higher order derivatives.


The derivative of a function is then simply the slope of this tangent line. [Note 2] Even though the tangent line touches the curve at only a single point, the point of tangency, it can be approximated by a line that goes through two points. This is known as a secant line. If the two points that the secant line goes through are close together, then the secant line closely resembles the tangent line, and, as a result, its slope is also very similar:

$$\frac{f(a+h) - f(a)}{h}$$

The derivative of $f$ at $a$ is the limit of this secant slope as $h$ approaches zero,

$$f'(a) = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h},$$

provided such a limit exists. [4] [Note 4] We have thus succeeded in properly defining the derivative of a function, meaning that the 'slope of the tangent line' now has a precise mathematical meaning. Differentiating a function using the above definition is known as differentiation from first principles. Here is a proof, using differentiation from first principles, that the derivative of $y = x^2$ is $2x$.
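Reconstructed, the first-principles computation for $y = x^2$ is:

$$\begin{aligned}
\frac{dy}{dx} &= \lim_{h \to 0} \frac{(x+h)^2 - x^2}{h}
               = \lim_{h \to 0} \frac{2xh + h^2}{h} \\
              &= \lim_{h \to 0} \left(2x + h\right) = 2x
\end{aligned}$$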

A closely related concept to the derivative of a function is its differential. When *x* and *y* are real variables, the derivative of *f* at *x* is the slope of the tangent line to the graph of *f* at *x* . Because the source and target of *f* are one-dimensional, the derivative of *f* is a real number. If *x* and *y* are vectors, then the best linear approximation to the graph of *f* depends on how *f* changes in several directions at once. Taking the best linear approximation in a single direction determines a partial derivative, which is usually denoted ∂*y* / ∂*x* . The linearization of *f* in all directions at once is called the total derivative.

The concept of a derivative in the sense of a tangent line is a very old one, familiar to Greek geometers such as Euclid (c. 300 BC), Archimedes (c. 287–212 BC) and Apollonius of Perga (c. 262–190 BC). [5] Archimedes also made use of indivisibles, although these were primarily used to study areas and volumes rather than derivatives and tangents (see *The Method of Mechanical Theorems*).

The use of infinitesimals to study rates of change can be found in Indian mathematics, perhaps as early as 500 AD, when the astronomer and mathematician Aryabhata (476–550) used infinitesimals to study the orbit of the Moon. [6] The use of infinitesimals to compute rates of change was developed significantly by Bhāskara II (1114–1185) indeed, it has been argued [7] that many of the key notions of differential calculus can be found in his work, such as "Rolle's theorem". [8]

The Islamic mathematician Sharaf al-Dīn al-Tūsī (1135–1213), in his *Treatise on Equations*, established conditions for some cubic equations to have solutions, by finding the maxima of appropriate cubic polynomials. He proved, for example, that the maximum of the cubic $ax^2 - x^3$ occurs when $x = 2a/3$, and concluded therefrom that the equation $ax^2 - x^3 = c$ has exactly one positive solution when $c = 4a^3/27$, and two positive solutions whenever $0 < c < 4a^3/27$. [9] The historian of science Roshdi Rashed [10] has argued that al-Tūsī must have used the derivative of the cubic to obtain this result. Rashed's conclusion has been contested by other scholars, however, who argue that he could have obtained the result by other methods which do not require the derivative of the function to be known. [11]

The modern development of calculus is usually credited to Isaac Newton (1643–1727) and Gottfried Wilhelm Leibniz (1646–1716), who provided independent [12] and unified approaches to differentiation and derivatives. The key insight, however, that earned them this credit, was the fundamental theorem of calculus relating differentiation and integration: this rendered obsolete most previous methods for computing areas and volumes, [13] which had not been significantly extended since the time of Ibn al-Haytham (Alhazen). [14] For their ideas on derivatives, both Newton and Leibniz built on significant earlier work by mathematicians such as Pierre de Fermat (1607-1665), Isaac Barrow (1630–1677), René Descartes (1596–1650), Christiaan Huygens (1629–1695), Blaise Pascal (1623–1662) and John Wallis (1616–1703). Regarding Fermat's influence, Newton once wrote in a letter that "*I had the hint of this method [of fluxions] from Fermat's way of drawing tangents, and by applying it to abstract equations, directly and invertedly, I made it general.*" [15] Isaac Barrow is generally given credit for the early development of the derivative. [16] Nevertheless, Newton and Leibniz remain key figures in the history of differentiation, not least because Newton was the first to apply differentiation to theoretical physics, while Leibniz systematically developed much of the notation still used today.

Since the 17th century many mathematicians have contributed to the theory of differentiation. In the 19th century, calculus was put on a much more rigorous footing by mathematicians such as Augustin Louis Cauchy (1789–1857), Bernhard Riemann (1826–1866), and Karl Weierstrass (1815–1897). It was also during this period that differentiation was generalized to Euclidean space and the complex plane.

### Optimization

If *f* is a differentiable function on ℝ (or an open interval) and *x* is a local maximum or a local minimum of *f* , then the derivative of *f* at *x* is zero. Points where *f'*(*x*) = 0 are called *critical points* or *stationary points* (and the value of *f* at *x* is called a *critical value*). If *f* is not assumed to be everywhere differentiable, then points at which it fails to be differentiable are also designated critical points.

If *f* is twice differentiable, then conversely, a critical point *x* of *f* can be analysed by considering the second derivative of *f* at *x* :

- if it is positive, $x$ is a local minimum;
- if it is negative, $x$ is a local maximum;
- if it is zero, then $x$ could be a local minimum, a local maximum, or neither. (For example, $f(x) = x^3$ has a critical point at $x = 0$, but it has neither a maximum nor a minimum there, whereas $f(x) = \pm x^4$ has a critical point at $x = 0$ and a minimum and a maximum, respectively, there.)

This is called the second derivative test. An alternative approach, called the first derivative test, involves considering the sign of $f'$ on each side of the critical point.

Taking derivatives and solving for critical points is therefore often a simple way to find local minima or maxima, which can be useful in optimization. By the extreme value theorem, a continuous function on a closed interval must attain its minimum and maximum values at least once. If the function is differentiable, the minima and maxima can only occur at critical points or endpoints.
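As a small sketch of this recipe (using a hypothetical example function, $f(x) = x^3 - 3x$): solve $f'(x) = 0$ for the critical points, then classify each one with the sign of the second derivative.

```python
# Locate and classify the critical points of f(x) = x^3 - 3x
# using its first and second derivatives.

def f(x):
    return x**3 - 3 * x

def f1(x):
    return 3 * x**2 - 3      # first derivative

def f2(x):
    return 6 * x             # second derivative

# f'(x) = 3x^2 - 3 = 0  =>  x = -1, 1
for c in (-1, 1):
    assert f1(c) == 0        # confirm it really is a critical point
    kind = "local minimum" if f2(c) > 0 else "local maximum"
    print(f"x = {c}: {kind}, f({c}) = {f(c)}")
```

Here $f''(-1) = -6 < 0$ gives a local maximum $f(-1) = 2$, and $f''(1) = 6 > 0$ gives a local minimum $f(1) = -2$.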

This also has applications in graph sketching: once the local minima and maxima of a differentiable function have been found, a rough plot of the graph can be obtained from the observation that it will be either increasing or decreasing between critical points.

In higher dimensions, a critical point of a scalar valued function is a point at which the gradient is zero. The second derivative test can still be used to analyse critical points by considering the eigenvalues of the Hessian matrix of second partial derivatives of the function at the critical point. If all of the eigenvalues are positive, then the point is a local minimum; if all are negative, it is a local maximum. If there are some positive and some negative eigenvalues, then the critical point is called a "saddle point", and if none of these cases hold (i.e., some of the eigenvalues are zero) then the test is considered to be inconclusive.

#### Calculus of variations

One example of an optimization problem is: Find the shortest curve between two points on a surface, assuming that the curve must also lie on the surface. If the surface is a plane, then the shortest curve is a line. But if the surface is, for example, egg-shaped, then the shortest path is not immediately clear. These paths are called geodesics, and one of the most fundamental problems in the calculus of variations is finding geodesics. Another example is: Find the smallest area surface filling in a closed curve in space. This surface is called a minimal surface and it, too, can be found using the calculus of variations.

### Physics

Calculus is of vital importance in physics: many physical processes are described by equations involving derivatives, called differential equations. Physics is particularly concerned with the way quantities change and develop over time, and the concept of the "**time derivative**" — the rate of change over time — is essential for the precise definition of several important concepts. In particular, the time derivatives of an object's position are significant in Newtonian physics:

- **Velocity** is the derivative (with respect to time) of an object's displacement (distance from the original position).
- **Acceleration** is the derivative (with respect to time) of an object's velocity, that is, the second derivative (with respect to time) of an object's position.

For example, if an object's position on a line is given by $x(t)$, then the object's velocity is

$$v(t) = \frac{dx}{dt}$$

and the object's acceleration is

$$a(t) = \frac{dv}{dt} = \frac{d^2x}{dt^2}$$

### Differential equations

A differential equation is a relation between a collection of functions and their derivatives. An ordinary differential equation is a differential equation that relates functions of one variable to their derivatives with respect to that variable. A partial differential equation is a differential equation that relates functions of more than one variable to their partial derivatives. Differential equations arise naturally in the physical sciences, in mathematical modelling, and within mathematics itself. For example, Newton's second law, which describes the relationship between acceleration and force, can be stated as the ordinary differential equation

The heat equation in one space variable, which describes how heat diffuses through a straight rod, is the partial differential equation
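In standard notation, consistent with the description of *u* and *α* below, the equation is:

$$\frac{\partial u}{\partial t} = \alpha\,\frac{\partial^2 u}{\partial x^2}.$$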

Here *u*(*x*,*t*) is the temperature of the rod at position *x* and time *t* and *α* is a constant that depends on how fast heat diffuses through the rod.

### Mean value theorem

The mean value theorem gives a relationship between values of the derivative and values of the original function. If *f*(*x*) is a real-valued function and *a* and *b* are numbers with *a* < *b*, then the mean value theorem says that under mild hypotheses, the slope between the two points (*a*, *f*(*a*)) and (*b*, *f*(*b*)) is equal to the slope of the tangent line to *f* at some point *c* between *a* and *b*.
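In symbols: there exists some *c* in (*a*, *b*) with

$$\frac{f(b) - f(a)}{b - a} = f'(c).$$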

In practice, what the mean value theorem does is control a function in terms of its derivative. For instance, suppose that *f* has derivative equal to zero at each point. This means that its tangent line is horizontal at every point, so the function should also be horizontal. The mean value theorem proves that this must be true: The slope between any two points on the graph of *f* must equal the slope of one of the tangent lines of *f* . All of those slopes are zero, so any line from one point on the graph to another point will also have slope zero. But that says that the function does not move up or down, so it must be a horizontal line. More complicated conditions on the derivative lead to less precise but still highly useful information about the original function.
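The point *c* promised by the theorem can be located numerically. The sketch below (an illustrative function, not one from the text) does this for *f*(*x*) = *x*³ on [0, 2] by bisection, using the fact that *f*′ is increasing on that interval:

```python
# Locate the point c from the mean value theorem for f(x) = x**3 on [0, 2].
# Illustrative choice of f; any differentiable function would do.
def f(x):
    return x ** 3

def df(x):
    return 3 * x ** 2  # derivative of f

a, b = 0.0, 2.0
secant_slope = (f(b) - f(a)) / (b - a)  # slope between the endpoints: 4.0

# df is increasing on [0, 2] with df(a) < 4 < df(b), so bisection applies.
lo, hi = a, b
for _ in range(60):
    mid = (lo + hi) / 2
    if df(mid) < secant_slope:
        lo = mid
    else:
        hi = mid
c = (lo + hi) / 2  # c = 2/sqrt(3) ≈ 1.1547, where f'(c) equals the secant slope
```

The exact solution here is *c* = 2/√3, since 3*c*² = 4.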

### Taylor polynomials and Taylor series

The derivative gives the best possible linear approximation of a function at a given point, but this can be very different from the original function. One way of improving the approximation is to take a quadratic approximation. That is to say, the linearization of a real-valued function *f*(*x*) at the point *x*_{0} is a linear polynomial *a* + *b*(*x* − *x*_{0}), and it may be possible to get a better approximation by considering a quadratic polynomial *a* + *b*(*x* − *x*_{0}) + *c*(*x* − *x*_{0})^{2}. Still better might be a cubic polynomial *a* + *b*(*x* − *x*_{0}) + *c*(*x* − *x*_{0})^{2} + *d*(*x* − *x*_{0})^{3}, and this idea can be extended to arbitrarily high degree polynomials. For each one of these polynomials, there should be a best possible choice of coefficients *a*, *b*, *c*, and *d* that makes the approximation as good as possible.
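Taylor's theorem makes this choice explicit: the best degree-*n* approximation, the **Taylor polynomial**, uses the scaled derivatives of *f* at *x*_{0} as coefficients:

$$P_n(x) = \sum_{k=0}^{n} \frac{f^{(k)}(x_0)}{k!}\,(x - x_0)^k.$$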

The limit of the Taylor polynomials is an infinite series called the **Taylor series**. The Taylor series is frequently a very good approximation to the original function. Functions which are equal to their Taylor series are called analytic functions. It is impossible for functions with discontinuities or sharp corners to be analytic; moreover, there exist smooth functions which are not analytic.
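A quick numerical check shows the approximation improving with degree (exp at *x*_{0} = 0 is an illustrative choice, not a function from the text):

```python
import math

# Degree-n Taylor polynomial of exp about x0 = 0, evaluated at x:
# P_n(x) = sum_{k=0}^{n} x**k / k!
def taylor_exp(x, degree):
    return sum(x ** k / math.factorial(k) for k in range(degree + 1))

x = 1.0
errors = [abs(taylor_exp(x, n) - math.exp(x)) for n in (1, 2, 4, 8)]
# each successive polynomial is a better approximation, so the errors shrink
```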

### Implicit function theorem

Some natural geometric shapes, such as circles, cannot be drawn as the graph of a function. For instance, if *f*(*x*, *y*) = *x*^{2} + *y*^{2} − 1, then the circle is the set of all pairs (*x*, *y*) such that *f*(*x*, *y*) = 0. This set is called the zero set of *f*, and is not the same as the graph of *f*, which is a paraboloid. The implicit function theorem converts relations such as *f*(*x*, *y*) = 0 into functions. It states that if *f* is continuously differentiable, then around most points, the zero set of *f* looks like graphs of functions pasted together. The points where this is not true are determined by a condition on the derivative of *f*. The circle, for instance, can be pasted together from the graphs of the two functions ±√(1 − *x*^{2}). In a neighborhood of every point on the circle except (−1, 0) and (1, 0), one of these two functions has a graph that looks like the circle. (These two functions also happen to meet at (−1, 0) and (1, 0), but this is not guaranteed by the implicit function theorem.)

The implicit function theorem is closely related to the inverse function theorem, which states when a function looks like graphs of invertible functions pasted together.

## Pricing

In the arbitrage-free CRR model with *p* strictly between 0 and 1 and *a* < *r* < *b*, according to Activity 2 in the previous lesson we obtain the following prices for forwards and European call and put options
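In standard risk-neutral form (the lesson's own displays are not reproduced here, so notation is assumed), with maturity *T*, strike/forward price *K*, and risk-neutral probability $\tilde p$:

$$F_t = S_t - K(1+r)^{-(T-t)}, \qquad C_t = (1+r)^{-(T-t)}\,\mathbb{E}_{\tilde p}\big[(S_T - K)^+ \,\big|\, \mathcal{F}_t\big], \qquad P_t = (1+r)^{-(T-t)}\,\mathbb{E}_{\tilde p}\big[(K - S_T)^+ \,\big|\, \mathcal{F}_t\big],$$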

for any time *t*. For instance, consider the market given in Example 3 but with *r* = 0. In this case *b* = 0.01 and *a* = −0.03. The risk-neutral measure is then given by
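Using the standard CRR formula for the risk-neutral up-probability (the lesson's notation may differ):

$$\tilde p = \frac{r - a}{b - a} = \frac{0 - (-0.03)}{0.01 - (-0.03)} = \frac{0.03}{0.04} = 0.75.$$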

Consider a European call option with a maturity of 2 days (*T* = 2) and strike price *K* = 10(0.97). The risk-neutral probabilities and the possible payoffs of this call option can be recorded in the binomial tree of the stock price as follows

We then find that the price of this European call option is
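The computation can be sketched as follows, assuming (as an illustration) an initial stock price *S*_{0} = 10 and strike *K* = 9.7; these values are placeholders, not necessarily the exact values of Example 3:

```python
from math import comb

# Two-period CRR pricing of a European call under the risk-neutral measure.
# S0 and K are assumed illustrative values.
a, b, r = -0.03, 0.01, 0.0
q = (r - a) / (b - a)                      # risk-neutral up-probability: 0.75

S0, K, T = 10.0, 9.7, 2

price = 0.0
for k in range(T + 1):                     # k = number of up moves
    ST = S0 * (1 + b) ** k * (1 + a) ** (T - k)
    payoff = max(ST - K, 0.0)              # call payoff at maturity
    price += comb(T, k) * q ** k * (1 - q) ** (T - k) * payoff
price /= (1 + r) ** T                      # discounting (trivial when r = 0)
```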

It is easy to see that the price of a forward contract with the same maturity and same forward price *K* is given by

By the put-call parity mentioned above, we deduce that the price of a European put option with the same maturity and strike is given by
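In its standard form (notation assumed here), parity states

$$C_0 - P_0 = S_0 - K(1+r)^{-T},$$

so with *r* = 0 the put price is simply *P*_{0} = *C*_{0} − *S*_{0} + *K*.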

That the call option is more expensive than the put option is due to the fact that, in this market, prices are more likely to go up than down under the risk-neutral probability measure. This need not be the case under the original probabilities *p* and 1 − *p*.