# Linear Algebra: Eigenanalysis

In this section we will see how we can better understand a linear transformation by breaking it down into simpler linear transformations.

Let $T$ be a linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^n$. Suppose that $\mathcal{B}$ is a basis of $\mathbb{R}^n$, that $V$ is the span of some of the vectors in $\mathcal{B}$, and that $W$ is the span of the remaining vectors in $\mathcal{B}$. Then any vector $\mathbf{u}$ in $\mathbb{R}^n$ can be written as the sum of a vector $\mathbf{v}$ in $V$ and a vector $\mathbf{w}$ in $W$. Since $T(\mathbf{u}) = T(\mathbf{v}) + T(\mathbf{w})$, we can see how $T$ behaves on all of $\mathbb{R}^n$ if we know how it behaves on $V$ and on $W$. This decomposition is particularly helpful if $V$ and $W$ are chosen so that $T$ behaves in a simple way on each of them.

Given such a decomposition of $\mathbb{R}^n$ into the vector spaces $V$ and $W$, we can apply the same idea to split $V$ and $W$ into lower-dimensional vector spaces and repeat until no more splits are possible. The most optimistic outcome of this procedure would be that we get all the way down to one-dimensional subspaces and that $T$ acts on each of these subspaces by simply scaling each vector in that subspace by some factor. In other words, we would like to find vectors $\mathbf{v}$ for which $T(\mathbf{v})$ is a scalar multiple of $\mathbf{v}$. This leads us to the following definition.

**Definition**

An **eigenvector** of an $n \times n$ matrix $A$ is a *nonzero* vector $\mathbf{v}$ with the property that $A\mathbf{v} = \lambda \mathbf{v}$ for some $\lambda \in \mathbb{R}$ (in other words, $A$ maps $\mathbf{v}$ to a vector which is either zero or parallel to $\mathbf{v}$). We call $\lambda$ an **eigenvalue** of $A$, and we call the eigenvector together with its eigenvalue an **eigenpair**.

**Example**

Every nonzero vector $\mathbf{v}$ is an eigenvector (with eigenvalue $1$) of the identity matrix, since $I\mathbf{v} = 1 \cdot \mathbf{v}$.

**Exercise**

Find a matrix with eigenpairs $\left(\begin{bmatrix} 1 \\ 0 \end{bmatrix}, 2\right)$ and $\left(\begin{bmatrix} 1 \\ 1 \end{bmatrix}, 3\right)$. Sketch the images of some gridlines under multiplication by this matrix to show how it scales space along the lines through its eigenvectors. Verbally describe your results below.

*Solution.* Writing out the equations implied by the given eigenpair relations, we see that the first implies that the first column of the matrix is $\begin{bmatrix} 2 \\ 0 \end{bmatrix}$, and the second (together with the first) implies that the second column of the matrix is $\begin{bmatrix} 1 \\ 3 \end{bmatrix}$. Therefore, the matrix is $\begin{bmatrix} 2 & 1 \\ 0 & 3 \end{bmatrix}$.

The following gridline images show how the transformation distorts space. Equally spaced points which are separated in the east-west direction get spread out by a factor of 2, while the diagonal line gets stretched out by a factor of 3. Since $3 > 2$, this introduces a bottom-left-to-top-right tilt for the images of the vertical gridlines.
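As a sanity check, we can verify the eigenpair relations numerically. This sketch assumes the matrix $\begin{bmatrix} 2 & 1 \\ 0 & 3 \end{bmatrix}$, which scales by the factors 2 and 3 described above along the directions $[1, 0]$ and $[1, 1]$ respectively:

```python
import numpy as np

# matrix assumed from the solution: scales by 2 along [1,0] and by 3 along [1,1]
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

v1 = np.array([1.0, 0.0])
v2 = np.array([1.0, 1.0])

print(A @ v1)  # [2. 0.], which is 2 * v1
print(A @ v2)  # [3. 3.], which is 3 * v2
```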

## Eigenspaces

If $\mathbf{v}_1, \ldots, \mathbf{v}_k$ are eigenvectors of $A$ with the same eigenvalue $\lambda$, and $\mathbf{v} = c_1\mathbf{v}_1 + \cdots + c_k\mathbf{v}_k$ for some weights $c_1, \ldots, c_k$ such that $c_i \neq 0$ for at least one $i$, then $\mathbf{v}$ is also an eigenvector of $A$ with eigenvalue $\lambda$, because

$$A\mathbf{v} = c_1 A\mathbf{v}_1 + \cdots + c_k A\mathbf{v}_k = \lambda(c_1\mathbf{v}_1 + \cdots + c_k\mathbf{v}_k) = \lambda\mathbf{v}.$$

Therefore, the set of all eigenvectors corresponding to a particular eigenvalue, together with the zero vector, forms a vector space, called an **eigenspace**.
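We can illustrate this closure property numerically. The following sketch uses a hypothetical diagonal matrix whose eigenvalue $2$ has a two-dimensional eigenspace:

```python
import numpy as np

# hypothetical matrix: eigenvalue 2 with eigenspace span{e1, e2}, eigenvalue 5 with span{e3}
A = np.diag([2.0, 2.0, 5.0])

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])

# any nonzero combination of e1 and e2 is again an eigenvector with eigenvalue 2
v = 3 * e1 - 4 * e2
print(A @ v)  # equals 2 * v
```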

**Exercise**

Let be a matrix, with eigenvectors and

*Solution.* Since

**Exercise**

Let

*Solution.* Let

we see that

**Exercise**

Suppose

*Solution.* Let

When

Therefore,

Since

## Diagonalization

If an $n \times n$ matrix $A$ has $n$ linearly independent eigenvectors, then we can think of the one-dimensional subspaces spanned by each of these eigenvectors as (not necessarily orthogonal) axes along which $A$ acts by scaling.

In matrix terms, we can define $V$ to be the matrix whose columns are these $n$ eigenvectors and $D$ to be the diagonal matrix whose diagonal entries are the corresponding eigenvalues. The eigenpair equations combine into $AV = VD$, and since the columns of $V$ are linearly independent, $V$ is invertible, so

$$A = VDV^{-1},$$

where $D$ is diagonal. A matrix which admits such a factorization is said to be **diagonalizable**.
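If we have a diagonalization $A = VDV^{-1}$ in hand, we can check it numerically. The following sketch uses `numpy.linalg.eig`, which returns the eigenvalues together with a matrix whose columns are eigenvectors; the example matrix is hypothetical:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# eig returns the eigenvalues and a matrix V whose columns are eigenvectors
eigvals, V = np.linalg.eig(A)
D = np.diag(eigvals)

# reconstruct A from its diagonalization A = V D V^{-1}
A_reconstructed = V @ D @ np.linalg.inv(V)
print(np.allclose(A, A_reconstructed))  # True
```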

**Exercise**

Some matrices are not diagonalizable, because they correspond to geometric transformations that cannot be viewed as scaling along any set of axes. Use this geometric intuition to come up with a $2 \times 2$ matrix which is not diagonalizable.

*Solution.* Rotation matrices in $\mathbb{R}^2$ give examples, as long as the angle of rotation is not a multiple of $\pi$: such a rotation does not send any nonzero vector to a scalar multiple of itself, so it has no real eigenvectors and cannot be diagonalized over the real numbers.

**Exercise**

Suppose that we have diagonalized

Let

*Solution.* We have

because

By linearity
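One standard payoff of a diagonalization $A = VDV^{-1}$ is that powers of $A$ telescope: $A^n = VD^nV^{-1}$, since every interior product $V^{-1}V$ cancels, and $D^n$ just raises each diagonal entry to the $n$th power. A numerical sketch of this identity, with a hypothetical matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

eigvals, V = np.linalg.eig(A)

# A^5 computed two ways: directly, and via A^n = V D^n V^{-1}
direct = np.linalg.matrix_power(A, 5)
via_diag = V @ np.diag(eigvals ** 5) @ np.linalg.inv(V)

print(np.allclose(direct, via_diag))  # True
```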

## Positive definite matrices

A **positive definite** matrix is a symmetric matrix $A$ satisfying $\mathbf{x}^\top A \mathbf{x} > 0$ for every nonzero vector $\mathbf{x}$, while a **positive semidefinite** matrix is a symmetric matrix $A$ satisfying $\mathbf{x}^\top A \mathbf{x} \geq 0$ for every vector $\mathbf{x}$.

*Negative definite* and *negative semidefinite* matrices are defined analogously.
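For a symmetric matrix, positive definiteness is equivalent to all of its eigenvalues being positive, which gives a practical test. A sketch using `numpy.linalg.eigvalsh` (which is specialized for symmetric matrices); the example matrices are hypothetical:

```python
import numpy as np

def is_positive_definite(A, tol=1e-12):
    """Check positive definiteness of a symmetric matrix via its eigenvalues."""
    return bool(np.all(np.linalg.eigvalsh(A) > tol))

print(is_positive_definite(np.array([[2.0, 1.0], [1.0, 2.0]])))  # True: eigenvalues 1 and 3
print(is_positive_definite(np.array([[1.0, 2.0], [2.0, 1.0]])))  # False: eigenvalues -1 and 3
```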

**Exercise**

(i) Is the sum of two positive definite matrices necessarily positive definite?

*Solution.* (i) Yes. If $A$ and $B$ are positive definite, then

$$\mathbf{x}^\top (A + B)\,\mathbf{x} = \mathbf{x}^\top A\mathbf{x} + \mathbf{x}^\top B\mathbf{x} > 0$$

for any nonzero vector $\mathbf{x}$, so $A + B$ is positive definite as well.

## The Gram matrix

If $A$ is an $m \times n$ matrix, then $A^\top A$ is called its *Gram matrix*. The Gram matrix of $A$ is square and symmetric, since $(A^\top A)^\top = A^\top A$.

**Exercise**

Let $A$ be an $m \times n$ matrix and let $\mathbf{x}$ be a vector in $\mathbb{R}^n$. Express $\mathbf{x}^\top A^\top A\,\mathbf{x}$ in terms of the vector $A\mathbf{x}$.

Using your answer above, explain why a Gram matrix is always positive semidefinite, but not necessarily positive definite.

*Solution.* The correct answer is $|A\mathbf{x}|^2$, since $\mathbf{x}^\top A^\top A\,\mathbf{x} = (A\mathbf{x})^\top (A\mathbf{x}) = |A\mathbf{x}|^2$.

From this we see that the Gram matrix is positive semidefinite, because $\mathbf{x}^\top A^\top A\,\mathbf{x} = |A\mathbf{x}|^2 \geq 0$ for every $\mathbf{x}$. It is not necessarily positive definite, because if $A$ has a nontrivial null space, then $\mathbf{x}^\top A^\top A\,\mathbf{x} = 0$ for any nonzero vector $\mathbf{x}$ in the null space.
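We can see both phenomena numerically. In the sketch below (with hypothetical matrices), a generic tall matrix has a positive semidefinite Gram matrix, while a matrix with a nontrivial null space has a singular Gram matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

# eigenvalues of the Gram matrix A^T A are all nonnegative (up to roundoff)
gram_eigs = np.linalg.eigvalsh(A.T @ A)
print(np.all(gram_eigs >= -1e-12))  # True

# a matrix with a nontrivial null space gives a singular Gram matrix
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # second column = 2 * first column
print(np.linalg.eigvalsh(B.T @ B))  # one eigenvalue is (numerically) zero
```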

**Exercise**

Explain why the rank of $A$ is equal to the rank of $A^\top A$.

*Solution.* If $A\mathbf{x} = \boldsymbol{0}$, then $A^\top A\,\mathbf{x} = \boldsymbol{0}$, so the null space of $A$ is contained in the null space of $A^\top A$.

Conversely, if $A^\top A\,\mathbf{x} = \boldsymbol{0}$, then $\mathbf{x}^\top A^\top A\,\mathbf{x} = |A\mathbf{x}|^2 = 0$,

which in turn implies that $A\mathbf{x} = \boldsymbol{0}$. Therefore $A$ and $A^\top A$ have the same null space.

Since $A$ and $A^\top A$ also have the same number of columns, the rank-nullity theorem implies that they have the same rank.
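A quick numerical check of this fact, using `numpy.linalg.matrix_rank` on a hypothetical rank-deficient matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

# a 5x3 matrix of rank 2: the third column is a combination of the first two
A = rng.standard_normal((5, 2))
A = np.column_stack([A, A[:, 0] + A[:, 1]])

print(np.linalg.matrix_rank(A))        # 2
print(np.linalg.matrix_rank(A.T @ A))  # 2
```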

## The Spectral Theorem

The eigenspace decomposition of a diagonalizable matrix is even easier to understand if the eigenvectors happen to be orthogonal. It turns out that this happens exactly when the matrix is *symmetric*:

**Theorem** (Spectral Theorem)

If $A$ is an $n \times n$ symmetric matrix with real entries, then $A$ is *orthogonally* diagonalizable, meaning that $A$ has $n$ pairwise orthogonal eigenvectors.

Conversely, every orthogonally diagonalizable matrix is symmetric.

In other words, if $A$ is symmetric, then

$$A = VDV^\top$$

for some orthogonal matrix $V$ (whose columns are unit eigenvectors of $A$) and some diagonal matrix $D$ (whose diagonal entries are the corresponding eigenvalues).
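NumPy's `numpy.linalg.eigh` computes this orthogonal diagonalization for symmetric matrices. A sketch with a hypothetical symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])      # symmetric

eigvals, V = np.linalg.eigh(A)  # eigh is specialized for symmetric matrices

# the eigenvector matrix is orthogonal: V^T V = I
print(np.allclose(V.T @ V, np.eye(2)))             # True

# and A = V D V^T
print(np.allclose(A, V @ np.diag(eigvals) @ V.T))  # True
```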

Although it seems that the spectral theorem may be of limited use since so many matrices are not symmetric, we will see that we can associate any rectangular matrix with a symmetric square matrix that we can apply the spectral theorem to and use to extract insight about the original matrix. This beautiful idea is called the **singular value decomposition** and is the subject of the next section.

**Exercise**

Given an invertible matrix $A$, we can solve the equation $A\mathbf{x} = \mathbf{b}$ for $\mathbf{x}$ by computing $\mathbf{x} = A^{-1}\mathbf{b}$.

It is possible that a small change in $\mathbf{b}$ leads to a large change in $A^{-1}\mathbf{b}$.

- Find an invertible $2 \times 2$ matrix $A$ all of whose entries are between $-2$ and $2$, a vector $\mathbf{b}$ with entries between $-2$ and $2$, and another vector $\widehat{\mathbf{b}}$ whose components are nearly equal to those of $\mathbf{b}$, for which $A^{-1}\mathbf{b}$ and $A^{-1}\widehat{\mathbf{b}}$ are not very close.

To be concrete, let's say "nearly equal" means "having ratio between 0.99 and 1.01", and let's say that "not very close" means "having a difference whose norm is greater than the norm of either". Find the eigenvalues of your matrix $A$.

*Solution.* One simple way to do this is to make the two columns of $A$ nearly parallel. With NumPy (`from numpy import array` and `from numpy.linalg import solve`), the call `solve(array([[1, 1], [1, 1.01]]), [1, 1])` returns `[1, 0]`, while `solve(array([[1, 1], [1, 1.01]]), [1, 1.01])` returns `[0, 1]`. The right-hand sides `[1, 1]` and `[1, 1.01]` are nearly equal, but the corresponding solutions are far apart.


The eigenvalues of the matrix `[[1, 1], [1, 1.01]]`

are approximately 0.005 and 2.005. In particular, the ratio of the eigenvalues is very large. You will find that the ratio of eigenvalues for your matrix is also large, because a matrix whose eigenvalue ratio is modest is *backwards stable*, meaning that small changes in $\mathbf{b}$ lead to only small changes in $A^{-1}\mathbf{b}$.
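We can confirm the eigenvalue computation with a short sketch:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.01]])

eigvals = np.linalg.eigvals(A)
print(np.sort(eigvals))  # approximately [0.005, 2.005]

# the large eigenvalue ratio signals the sensitivity demonstrated above
ratio = max(abs(eigvals)) / min(abs(eigvals))
print(ratio)             # roughly 400
```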