
7.2: Eigenvalues


Definition 7.2.1. Let \(T \in \mathcal{L}(V,V)\). Then \(\lambda \in \mathbb{F}\) is an eigenvalue of \(T\) if there exists a nonzero vector \(u \in V\) such that

\[ Tu = \lambda u. \]

The vector \(u\) is called an eigenvector of \(T\) corresponding to the eigenvalue \(\lambda\).

Finding the eigenvalues and eigenvectors of a linear operator is one of the most important problems in Linear Algebra. We will see later that this so-called "eigen-information" has many uses and applications. (As an example, quantum mechanics is based upon understanding the eigenvalues and eigenvectors of operators on specifically defined vector spaces. These vector spaces are often infinite-dimensional, though, and so we do not consider them further in these notes.)

Example 7.2.2.

  • Let \(T\) be the zero map defined by \(T(v) = 0\) for all \(v \in V\). Then every vector \(u \neq 0\) is an eigenvector of \(T\) with eigenvalue \(0\).
  • Let \(I\) be the identity map defined by \(I(v) = v\) for all \(v \in V\). Then every vector \(u \neq 0\) is an eigenvector of \(I\) with eigenvalue \(1\).
  • The projection map \(P : \mathbb{R}^3 \to \mathbb{R}^3\) defined by \(P(x,y,z) = (x,y,0)\) has eigenvalues \(0\) and \(1\). The vector \((0,0,1)\) is an eigenvector with eigenvalue \(0\), and both \((1,0,0)\) and \((0,1,0)\) are eigenvectors with eigenvalue \(1\).
  • Take the operator \(R : \mathbb{F}^2 \to \mathbb{F}^2\) defined by \(R(x,y) = (-y,x)\). When \(\mathbb{F} = \mathbb{R}\),

\(R\) can be interpreted as counterclockwise rotation by \(90^\circ\). From this interpretation, it is clear that no nonzero vector in \(\mathbb{R}^2\) is mapped to a scalar multiple of itself. Hence, for \(\mathbb{F} = \mathbb{R}\), the operator \(R\) has no eigenvalues.

For \(\mathbb{F} = \mathbb{C}\), though, the situation is significantly different! In this case, \(\lambda \in \mathbb{C}\) is an eigenvalue of \(R\) if

\[ R(x,y) = (-y,x) = \lambda (x,y), \]

so that \(y = -\lambda x\) and \(x = \lambda y\). This implies that \(y = -\lambda^2 y\), i.e., that \(\lambda^2 = -1\). The solutions are hence \(\lambda = \pm i\). One can check that \((1,-i)\) is an eigenvector with eigenvalue \(i\) and that \((1,i)\) is an eigenvector with eigenvalue \(-i\).
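
A quick numerical check of this example (a sketch using NumPy; the matrix of \(R\) in the standard basis is \(\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}\)):

    import numpy as np

    # Matrix of R(x, y) = (-y, x) in the standard basis
    R = np.array([[0.0, -1.0],
                  [1.0, 0.0]])

    # np.linalg.eig works over the complex numbers, so both eigenvalues appear
    eigenvalues, eigenvectors = np.linalg.eig(R)
    print(eigenvalues)  # approximately [0.+1.j, 0.-1.j], i.e., i and -i

    # Each column of eigenvectors satisfies R v = lambda v
    for lam, v in zip(eigenvalues, eigenvectors.T):
        assert np.allclose(R @ v, lam * v)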

Eigenspaces are important examples of invariant subspaces. Let \(T \in \mathcal{L}(V,V)\), and let \(\lambda \in \mathbb{F}\) be an eigenvalue of \(T\). Then

\[ V_\lambda = \{ v \in V \mid Tv = \lambda v \} \]

is called an eigenspace of \(T\). Equivalently,

\[ V_\lambda = \ker(T - \lambda I). \]

Note that \(V_\lambda \neq \{0\}\) since \(\lambda\) is an eigenvalue if and only if there exists a nonzero vector \(u \in V\) such that \(Tu = \lambda u\). We can reformulate this as follows:

\(\lambda \in \mathbb{F}\) is an eigenvalue of \(T\) if and only if the operator \(T - \lambda I\) is not injective.

Since the notions of injectivity, surjectivity, and invertibility are equivalent for operators on a finite-dimensional vector space, we can equivalently say either of the following:

  • \(\lambda \in \mathbb{F}\) is an eigenvalue of \(T\) if and only if the operator \(T - \lambda I\) is not surjective.
  • \(\lambda \in \mathbb{F}\) is an eigenvalue of \(T\) if and only if the operator \(T - \lambda I\) is not invertible.
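
For matrices, the last criterion is easy to test numerically, since \(T - \lambda I\) is not invertible exactly when its determinant vanishes. A minimal sketch with NumPy, using the projection map \(P\) from Example 7.2.2:

    import numpy as np

    # Matrix of the projection P(x, y, z) = (x, y, 0) from Example 7.2.2
    P = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0]])

    for lam in [0.0, 1.0, 2.0]:
        # lam is an eigenvalue exactly when P - lam*I is singular
        d = np.linalg.det(P - lam * np.eye(3))
        print(lam, "is an eigenvalue" if np.isclose(d, 0.0) else "is not an eigenvalue")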

We close this section with two fundamental facts about eigenvalues and eigenvectors.

Theorem 7.2.3. Let \(T \in \mathcal{L}(V,V)\), and let \(\lambda_1, \ldots, \lambda_m \in \mathbb{F}\) be \(m\) distinct eigenvalues of \(T\) with corresponding nonzero eigenvectors \(v_1, \ldots, v_m\). Then \((v_1, \ldots, v_m)\) is linearly independent.

Proof. Suppose that \((v_1, \ldots, v_m)\) is linearly dependent. Then, by the Linear Dependence Lemma, there exists an index \(k \in \{2, \ldots, m\}\) such that

\[ v_k \in \operatorname{Span}(v_1, \ldots, v_{k-1}) \]

and such that \((v_1, \ldots, v_{k-1})\) is linearly independent. This means that there exist scalars \(a_1, \ldots, a_{k-1} \in \mathbb{F}\) such that

\[ v_k = a_1 v_1 + \cdots + a_{k-1} v_{k-1}. \tag{7.2.1} \]

Applying \(T\) to both sides yields, using the fact that \(v_j\) is an eigenvector with eigenvalue \(\lambda_j\),

\[ \lambda_k v_k = a_1 \lambda_1 v_1 + \cdots + a_{k-1} \lambda_{k-1} v_{k-1}. \]

Subtracting \(\lambda_k\) times Equation (7.2.1) from this, we obtain

\[ 0 = (\lambda_k - \lambda_1) a_1 v_1 + \cdots + (\lambda_k - \lambda_{k-1}) a_{k-1} v_{k-1}. \]

Since \((v_1, \ldots, v_{k-1})\) is linearly independent, we must have \((\lambda_k - \lambda_j) a_j = 0\) for all \(j = 1, 2, \ldots, k-1\). By assumption, all eigenvalues are distinct, so \(\lambda_k - \lambda_j \neq 0\), which implies that \(a_j = 0\) for all \(j = 1, 2, \ldots, k-1\). But then, by Equation (7.2.1), \(v_k = 0\), which contradicts the assumption that all eigenvectors are nonzero. Hence \((v_1, \ldots, v_m)\) is linearly independent.

Corollary 7.2.4. Any operator \(T \in \mathcal{L}(V,V)\) has at most \(\dim(V)\) distinct eigenvalues.

Proof. Let \(\lambda_1, \ldots, \lambda_m\) be distinct eigenvalues of \(T\), and let \(v_1, \ldots, v_m\) be corresponding nonzero eigenvectors. By Theorem 7.2.3, the list \((v_1, \ldots, v_m)\) is linearly independent. Hence \(m \le \dim(V)\).
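
Theorem 7.2.3 is easy to illustrate numerically (a sketch; the sample matrix is our choice): a matrix with three distinct eigenvalues yields three linearly independent eigenvectors, i.e., an eigenvector matrix of full rank.

    import numpy as np

    # Upper-triangular sample matrix with distinct eigenvalues 1, 2, 3
    T = np.array([[1.0, 1.0, 0.0],
                  [0.0, 2.0, 1.0],
                  [0.0, 0.0, 3.0]])

    eigenvalues, V = np.linalg.eig(T)        # eigenvectors are the columns of V
    assert np.linalg.matrix_rank(V) == 3     # linearly independent, as the theorem predicts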





3 Answers

The explanation is pretty simple with a suitable change of basis.

Letting $B = \begin{pmatrix} 1 & 0 & 1 & 0 \\ i & 0 & -i & 0 \\ 0 & 1 & 0 & 1 \\ 0 & i & 0 & -i \end{pmatrix}$, we have $B^{-1} M_\mu B = \begin{pmatrix} 1+i\mu & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 1-i\mu & 1 \\ 0 & 0 & -1 & 0 \end{pmatrix}$. Letting $N_\mu = \begin{pmatrix} 1+i\mu & 1 \\ -1 & 0 \end{pmatrix}$, we thus have $B^{-1} A B = \begin{pmatrix} \prod N_{\mu_j} & 0 \\ 0 & \overline{\prod N_{\mu_j}} \end{pmatrix}$, where the bar denotes entry-wise complex conjugation. Thus the eigenvalues of $A$ are those of $\prod N_{\mu_j}$ plus those of $\overline{\prod N_{\mu_j}}$, which are their complex conjugates. Moreover, since $N_\mu$ has determinant $1$, so does $\prod N_{\mu_j}$, so its two eigenvalues are inverses of each other.

EDIT: just a partial answer that does not settle the question completely.

This is a variant of symplectic matrices. Your matrices $M_\mu$ are orthogonal with respect to the indefinite scalar product induced by $\Omega = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & -1 & 0 & 0 \\ -1 & 0 & 0 & 0 \end{pmatrix}$, i.e., they satisfy the property $M^{*} \Omega M = \Omega$.

One can see that if $(\lambda, v)$ is an eigenpair for $M$, then $(\frac{1}{\lambda}, \Omega v)$ is an eigenpair for $M^{*}$, and hence $\frac{1}{\overline{\lambda}}$ is in the spectrum of $M$: $\frac{1}{\lambda} \Omega v = \frac{1}{\lambda} M^{*} \Omega M v = \frac{1}{\lambda} M^{*} \Omega \lambda v = M^{*} \Omega v.$

This property, together with the fact that real matrices have eigenvalues that come in complex conjugate pairs, puts strong constraints on which eigenvalues you can get.

There is also the possibility that the eigenvalues come in the form $\lambda_1, \lambda_2, \frac{1}{\lambda_1}, \frac{1}{\lambda_2}$, for $\lambda_1, \lambda_2 \in \mathbb{R}$. This case actually happens: for instance, $M_2 M_{-2}$ has four real eigenvalues, although in this case I get $\lambda_1 = \lambda_2$.

Four purely imaginary eigenvalues can also appear: for instance $M_{-2} M_{1/2}$ has eigenvalues $$.

EDIT: on further thought, I still cannot exclude that the eigenvalues are in the form suggested by Sascha in a comment; this is not a counterexample, as the real eigenvalues have multiplicity 2. Sorry :(


Eigenvalues and Eigenvectors: An Introduction

The eigenvalue problem is a problem of considerable theoretical interest and wide-ranging application. For example, it is crucial in solving systems of differential equations, analyzing population growth models, and calculating powers of matrices (in order to define the exponential matrix). Other areas such as physics, sociology, biology, economics and statistics have focused considerable attention on "eigenvalues" and "eigenvectors": their applications and their computations. Before we give the formal definition, let us introduce these concepts through an example.

Example. Consider the matrix

Consider the three column matrices

Next consider the matrix P for which the columns are C 1 , C 2 , and C 3 , i.e.,

We have det ( P ) = 84. So this matrix is invertible. Easy calculations give

Next we evaluate the matrix P⁻¹AP. We leave the details to the reader to check that we have

Using matrix multiplication, we obtain

which implies that A is similar to a diagonal matrix. In particular, we have

for every positive integer n. Note that it is almost impossible to find A⁷⁵ directly from the original form of A.

This example is so rich in consequences that many questions impose themselves in a natural way. For example, given a square matrix A, how do we find column matrices which behave like the ones above? In other words, how do we find the column matrices which will help build an invertible matrix P such that P⁻¹AP is a diagonal matrix?
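
The matrices in this example did not survive extraction, but the computation it describes is easy to reproduce. Here is a minimal sketch with NumPy, using a stand-in diagonalizable matrix A (our assumption, not the article's original matrix):

    import numpy as np

    # Stand-in for the article's matrix A (the original did not survive extraction)
    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])

    # Columns of P are eigenvectors of A; D holds the eigenvalues
    eigenvalues, P = np.linalg.eig(A)
    D = np.diag(eigenvalues)

    # P^-1 * A * P is diagonal, so A = P D P^-1 and hence A^75 = P D^75 P^-1,
    # which only requires raising the eigenvalues to the 75th power
    A75 = P @ np.diag(eigenvalues ** 75) @ np.linalg.inv(P)
    assert np.allclose(A75, np.linalg.matrix_power(A, 75))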

From now on, we will call column matrices vectors . So the above column matrices C 1 , C 2 , and C 3 are now vectors. We have the following definition.

Definition. Let A be a square matrix. A non-zero vector C is called an eigenvector of A if and only if there exists a number λ (real or complex) such that AC = λC.

If such a number λ exists, it is called an eigenvalue of A. The vector C is then called an eigenvector associated with the eigenvalue λ.

Remark. The eigenvector C must be non-zero, since A0 = λ0 = 0 holds for every number λ, so C = 0 would satisfy the defining equation trivially.

Example. Consider the matrix

So C1 is an eigenvector of A associated with the eigenvalue 0, C2 is an eigenvector of A associated with the eigenvalue -4, while C3 is an eigenvector of A associated with the eigenvalue 3.

It may be interesting to know whether we have found all the eigenvalues of A in the above example. On the next page, we will discuss this question as well as how to find the eigenvalues of a square matrix.


Mathematics Behind PCA

PCA can be thought of as an unsupervised learning problem. The whole process of obtaining principal components from a raw dataset can be simplified into six steps:

  • Take the whole dataset consisting of d+1 dimensions and ignore the labels such that our new dataset becomes d dimensional.
  • Compute the mean for every dimension of the whole dataset.
  • Compute the covariance matrix of the whole dataset.
  • Compute eigenvectors and the corresponding eigenvalues.
  • Sort the eigenvectors by decreasing eigenvalues and choose k eigenvectors with the largest eigenvalues to form a d × k dimensional matrix W.
  • Use this d × k eigenvector matrix to transform the samples onto the new subspace.

So, let’s unfurl the maths behind each of these one by one.

  1. Take the whole dataset consisting of d+1 dimensions and ignore the labels such that our new dataset becomes d dimensional.

Let’s say we have a dataset which is d+1 dimensional, where d could be thought of as X_train and 1 as y_train (the labels) in the modern machine learning paradigm. So, X_train + y_train makes up our complete training dataset.

So, after we drop the labels we are left with a d-dimensional dataset, and this is the dataset we will use to find the principal components. Also, let’s assume we are left with a three-dimensional dataset after ignoring the labels, i.e., d = 3.

We will assume that the samples stem from two different classes, where one half of the samples in our dataset are labeled class 1 and the other half class 2.

Let our data matrix X hold the scores of students on three tests:

2. Compute the mean of every dimension of the whole dataset.

The data from the above table can be represented in matrix A, where each column in the matrix shows scores on a test and each row shows the score of a student.

So, the mean of matrix A would be

3. Compute the covariance matrix of the whole dataset (sometimes also called the variance-covariance matrix)

So, we can compute the covariance of two variables X and Y using the following formula: cov(X, Y) = (1/n) Σᵢ (Xᵢ − X̄)(Yᵢ − Ȳ), where X̄ and Ȳ are the means of X and Y and n is the number of samples.

Using the above formula, we can find the covariance matrix of A. Also, the result would be a square matrix of d × d dimensions.

Let’s rewrite our original matrix like this

Its covariance matrix would be

A few points that can be noted here:

  • Shown in blue along the diagonal, we see the variance of scores for each test. The art test has the biggest variance (720) and the English test the smallest (360). So we can say that art test scores have more variability than English test scores.
  • The covariance is displayed in black in the off-diagonal elements of the matrix.

a) The covariance between math and English is positive (360), and the covariance between math and art is positive (180). This means the scores tend to covary in a positive way. As scores on math go up, scores on art and English also tend to go up and vice versa.

b) The covariance between English and art, however, is zero. This means there tends to be no predictable relationship between the movement of English and art scores.
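
The score table itself did not survive extraction. As a sketch, the classic five-student math/English/art example below reproduces every variance and covariance quoted above (the concrete numbers are an assumption about the original table):

    import numpy as np

    # Assumed score table: rows are students, columns are math, english, art
    A = np.array([[90, 60, 90],
                  [90, 90, 30],
                  [60, 60, 60],
                  [60, 60, 90],
                  [30, 30, 30]], dtype=float)

    # Population covariance matrix (divide by n, matching the quoted numbers)
    C = np.cov(A, rowvar=False, bias=True)
    print(C)
    # [[504. 360. 180.]
    #  [360. 360.   0.]
    #  [180.   0. 720.]]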

4. Compute Eigenvectors and corresponding Eigenvalues

Intuitively, an eigenvector is a vector whose direction remains unchanged when a linear transformation is applied to it.

Now, we can easily compute the eigenvalues and eigenvectors from the covariance matrix that we have above.

Let A be a square matrix, ν a vector, and λ a scalar satisfying Aν = λν; then λ is called an eigenvalue associated with the eigenvector ν of A.

The eigenvalues of A are the roots of the characteristic equation det(A − λI) = 0.
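
Continuing the sketch above (same assumed score matrix and covariance matrix), steps 4 to 6 of the recipe look like this; the names W and k are ours:

    import numpy as np

    A = np.array([[90, 60, 90], [90, 90, 30], [60, 60, 60],
                  [60, 60, 90], [30, 30, 30]], dtype=float)
    C = np.cov(A, rowvar=False, bias=True)

    # Step 4: eigenvalues and eigenvectors of the symmetric covariance matrix
    eigenvalues, eigenvectors = np.linalg.eigh(C)

    # Step 5: sort by decreasing eigenvalue and keep the k largest
    k = 2
    order = np.argsort(eigenvalues)[::-1]
    W = eigenvectors[:, order[:k]]      # d x k projection matrix

    # Step 6: project the mean-centered samples onto the new subspace
    X_pca = (A - A.mean(axis=0)) @ W    # n x k transformed samples
    print(X_pca.shape)                  # (5, 2)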




STPM Further Mathematics T

Generally, Taylor series have a lot of uses. We can use them to do one of the following:

A. DERIVE A GIVEN FUNCTION

You were given a list of Maclaurin series in the last section. Now I show them to you again below:

These are not all, though. You can still find and derive the Taylor or Maclaurin series of other functions like sin⁻¹x, coth⁻¹x or lg x². The method is the same: list down the Taylor or Maclaurin series of the function. For example,
sin⁻¹x = a + bx + cx² + dx³ + ex⁴ + …

and you substitute x = 0 to get a. To get b, you differentiate once and substitute x = 0; for c, differentiate twice, and so on. The coefficients a, b, c and so on might not follow a neat pattern like the functions listed above, but at least you have a reasonable polynomial to estimate the function in the absence of a calculator.
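
A quick way to check such a hand derivation is SymPy's series expansion (a sketch; the function and expansion order are our choice):

    import sympy as sp

    x = sp.symbols('x')

    # Maclaurin series of sin^-1(x) up to x^7, as in the hand derivation above
    print(sp.series(sp.asin(x), x, 0, 8))
    # x + x**3/6 + 3*x**5/40 + 5*x**7/112 + O(x**8)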

Besides, you could also combine two or more functions to find a new Taylor series for them. For example, (1 + x)² cos x can be derived from

Adding and subtracting functions (like sin x + cos x) or even substituting variables (like e⁸ˣ or sin x²) can be handled easily too.

B. DIFFERENTIATE AND INTEGRATE THE SERIES TO GET OTHER RESULTS

Did you notice that power series also obey the laws of calculus? Taking cos x as an example, differentiating both sides gives

This is very useful information. You can speed up the calculations if you are asked to derive the series of a function which relates to one of the known functions above. By the way, once you are able to find the listing of the polynomials, you will want to learn how to find the summation notation of the derived series as well. Read through your Maths T Sequences & Series chapter, and try to make use of the knowledge you learn there.

C. FINDING LIMITS OF FUNCTIONS

When you are asked to find the limit of a complicated function as x → 0, you can actually make use of the Maclaurin series of the function. For example,

To help you, you might want to learn L’Hôpital’s rule as well. This rule comes in really handy in this situation: it states that if f(a) = 0, g(a) = 0, and g’(a) ≠ 0, then the limit of f(x)/g(x) as x → a equals f’(a)/g’(a).

Use this rule when you get a 0/0 result. Remember that this rule only holds if the condition f(a) = g(a) = 0 is true.
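
For instance, here is a 0/0 limit checked in SymPy (our example function, not the blog's):

    import sympy as sp

    x = sp.symbols('x')

    # (1 - cos x)/x**2 is 0/0 at x = 0; its Maclaurin series starts with 1/2
    print(sp.limit((1 - sp.cos(x)) / x**2, x, 0))   # 1/2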

D. SOLVING DIFFERENTIAL EQUATIONS NUMERICALLY

I believe you already know what differential equations are, just that you only know how to solve a few of them. So here, we are trying to represent the solution of a differential equation as a Taylor series, and thus estimate the function for values of x close to a, when the series is expanded at x = a. I’ll show you an example:

Find the Taylor series solution for y, up to and including the term in x⁴, for the differential equation

Hence, find y correct to 9 d.p. when x = 0.01.
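
The worked example itself did not survive extraction, but here is a sketch of the same idea in SymPy, with an assumed stand-in ODE y' = x + y, y(0) = 1:

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    # Assumed stand-in for the blog's differential equation
    ode = sp.Eq(y(x).diff(x), x + y(x))
    sol = sp.dsolve(ode, y(x), ics={y(0): 1})

    # Taylor series of the solution about x = 0, up to and including x**4;
    # its partial sum at x = 0.01 then approximates y(0.01)
    print(sp.series(sol.rhs, x, 0, 5))
    # 1 + x + x**2 + x**3/3 + x**4/12 + O(x**5)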

That’s all for this chapter. Still remember the derivation of the Poisson distribution? You could probably explain it to your friends now using your knowledge of power series. ☺


7. Eigenvalues and Determinants

Eigenvalues and determinants reveal quite a bit of information about a matrix. In this lab we will learn how to use MATLAB to compute the eigenvalues, eigenvectors, and the determinant of a matrix. The emphasis will be on eigenvalues rather than determinants, as the former concept is more useful than the latter; this should become clear through the exercises.

As you should be aware by now, there is a nice formula for calculating the determinant of a 2x2 matrix. Even the 3x3 case is not that difficult. But as matrix size increases, so does the complexity of calculating determinants. This is where MATLAB, or any other computer algebra program, comes in.

Let's start by entering the following matrices into MATLAB:

To compute the determinants of these matrices we are going to use the command det( ). That is, to compute the determinant of A we type det(A).

MATLAB gives us -472 as the answer. Similarly we get 48 as the determinant of B.

(a) Compute the determinant of B by hand. Make sure you leave enough space in your write-up for the calculations. Show all work.

(b) Use MATLAB to compute the determinants of the following matrices:

Note: In MATLAB the transpose of a matrix is denoted with an apostrophe, i.e., the transpose of A is given by the command A'.

(c) Which of the above matrices are NOT invertible? Explain your reasoning.

(d) Now we know the determinants of A and B, but suppose then that we lose our original matrices A and B. Which of the determinants in part (b) will we still be able to compute, even without having A or B at hand? Explain your reasoning.

Remark 7.1 The main usage of determinants in this class relates to the idea of invertibility. When you use MATLAB for that purpose, you have to understand that the program introduces rounding errors (as you saw in Lab 2). Therefore, there is a possibility that a matrix may appear to have zero determinant and yet be invertible. This only applies to matrices with non-integer entries. The above matrices don't fall into this category, as all their entries are integers.

Exercise 7.2 In this exercise we are going to work with the following matrix:

Use det( ) to compute the determinant of N^100. Do you think that N^100 is invertible? Also use the command to compute the determinant of N.

Now, using the determinant of N as a known quantity, calculate by hand the determinant of N^100. What do you notice?

Hint: look at Remark 7.1 and consider some of the properties of determinants.

For the rest of this lab we'll be looking at eigenvalues and eigenvectors of matrices. The command that we will be using here is eig( ). We can either use it to compute eigenvalues alone, or with a little alteration we can get eigenvectors as well. Let's use it on our two matrices A and B.

Compute the eigenvalues of the matrix B and assign the values to a vector b.

We do this by typing b = eig(B).

The eigenvalues are 1, 8, 3, 2. There are four of them because our matrix is 4x4. Notice also that it is very easy to compute the determinant of B: all we have to do is multiply all the eigenvalues together. Clearly 48 = 1*8*3*2. (Further information about this can be found in your linear algebra book, Linear Algebra and Its Applications by D. Lay, in chapter 5 section 2.)

(a) Compute the eigenvalues of the 5x5 Vandermonde matrix V and assign them to a vector v. To enter this matrix into MATLAB use the following command:

If you don't know what a Vandermonde matrix is, look here. We strongly encourage you to familiarize yourself with these matrices as they have very nice properties.

(b) Determine if V is invertible by looking at the eigenvalues. Explain your reasoning.

Let's now compute the eigenvalues and eigenvectors of B in one command. To do this we type [P, D] = eig(B), which returns

1.0000 -0.1980 0.2357 0.9074
0 0.6931 -0.2357 -0.1815
0 0.6931 0.9428 0.3630
0 0 0 0.1089

MATLAB defines the matrix P, which has the eigenvectors of B as its columns, and the matrix D as a diagonal matrix with the corresponding eigenvalues along the diagonal.

An important use of eigenvalues and eigenvectors is in diagonalizing matrices. You will see why we might want to do that in the next section. For now, all we want to do is find an invertible matrix Q such that Q*B*inv(Q) is a diagonal matrix. But before we do this, we need to make sure that B is diagonalizable. In general, the way to determine diagonalizability of a matrix is to make sure that its eigenvectors are linearly independent. There is another way too. We already know the eigenvalues of B, so let's put them to use: B is diagonalizable since all of its eigenvalues are distinct.

Remark 7.2 The last statement says that if a matrix has distinct eigenvalues, then it is diagonalizable. Note that the converse to the preceding statement is not necessarily true, i.e., if a matrix is diagonalizable, it is not necessarily true that all its eigenvalues are distinct. A matrix can be diagonalizable even if it has repeated eigenvalues: think about the identity matrix (already diagonal), whose eigenvalues are all 1.

Here is where our two matrices P and D come in. Type the command inv(P)*B*P, which gives

1.0000 -0.0000 -0.0000 0.0000
0 8.0000 0 -0.0000
0 0.0000 3.0000 0
0 0 0 2.0000

What you should realize is that we are using linearly independent eigenvectors (from the matrix P) to diagonalize our matrix B. The result is a diagonal matrix with the eigenvalues as its diagonal entries.

Remark 7.3 Recall that the inv() command calculates the inverse of a matrix. To see that it really works, try a command like inv(P)*P, which should return the identity matrix.

Exercise 7.4 Find matrices P and D for our Vandermonde matrix V. Calculate inv(P)*V*P to check your work.

The Fibonacci numbers comprise a well-known sequence in mathematics. The idea is very simple. You start with two numbers, 1 and 1, and get each consecutive term by summing the two previous terms. That is, the third term would be 1+1=2, the fourth term 1+2=3, the fifth term 2+3=5, and so on, resulting in

1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, ...

We are going to use the Fibonacci numbers as a model forour own sequence. Here it is:

Instead of adding two consecutive terms to create the next term, we are going to use the following formula:

a(n) = a(n-1) + 2*a(n-2)

That is, each term in the sequence is the sum of the previous term and two times the term before that. Let's try to come up with a formula for the n-th term in our sequence. Looking at it, it is not that obvious that there actually exists such a formula.

Our approach is to start by formulating the problem interms of a system of linear equations:

where a(n) is the n-th term. Now, each system of linear equations has its matrix representation. In this case, writing f(n) for the column vector with entries a(n) and a(n-1), the recurrence gives the matrix F = [1 2; 1 0].

We then get the following equation:

f(n) = F*f(n-1)

Similarly, f(n-1) = F*f(n-2), so that we can replace f(n-1) with F*f(n-2). Then we can replace f(n-2), and so on. This will result in the following formula:

f(n) = F^(n-1)*f(1)

We assume that a0 is the first term in the sequence and

All we have to do now is find the formula for F^n. The best way to approach this task is through diagonalization. In order to do that we first have to determine if F is diagonalizable. Enter F into MATLAB.

Exercise 7.5 Find matrices P and D for our matrix F as described above and explain why F is diagonalizable.

The above exercise tells us that F = P*D*inv(P). We use this to get the following equation:

F^n = [P*D*inv(P)]*[P*D*inv(P)]*...*[P*D*inv(P)] (n times) = P*D^n*inv(P),

since all the inner inv(P)*P factors cancel.

All we need at this point is D^n, and since D is diagonal the calculation is very simple. The last thing to do is to plug it all back into the original equation and read off the formula. But before we can do this we have to note that MATLAB likes to normalize eigenvectors. In this case it means making the entries irrational, and hence introducing a possibility for floating point errors. To counteract this we are going to use the following command:

(a) Include input and output from the above command

(b) Use the above matrix P (the one with fractions) to calculate the inverse of P by hand.

(c) Determine the formula for the n-th term in our sequence, i.e., the formula for a(n). (This is a tricky question. Make sure to count the terms correctly.) Use your formula to calculate the 5th, 15th and 20th terms in our sequence.

Even though the process of computing determinants and eigenvalues is quite easy to understand, the complexity of the calculations can be horrendous. MATLAB is a perfect tool for this. There's only one caveat: floating-point errors. However, this obstacle can be overcome if you know they might occur.



