
Eigenvalues, eigenvectors and eigendecomposition

Eigenwhat?

Eigen means own or self. In linear algebra, the terms eigenvalue, eigenvector and eigendecomposition are intrinsically related. Eigendecomposition is the decomposition of a square matrix into its eigenvalues and eigenvectors. For a matrix $A$, if $$\begin{equation}A\mathbf{v}=\lambda \mathbf{v}\label{eq:Avlv}\end{equation}$$ for some nonzero vector $\mathbf{v}$, then $\mathbf{v}$ is an eigenvector of matrix $A$ and $\lambda$ is the corresponding eigenvalue. That is, if multiplying matrix $A$ by a nonzero vector yields a scaled version of that same vector, then the vector is an eigenvector of $A$ and the scaling factor is its eigenvalue.
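As a quick sanity check of $\eqref{eq:Avlv}$, here is a minimal sketch in NumPy (using the $2 \times 2$ matrix from the example further down) showing that multiplying $A$ by a candidate eigenvector just rescales it:

```python
import numpy as np

# Check the defining relation A v = lambda v for a candidate pair.
A = np.array([[1.0, 0.0],
              [1.0, 3.0]])
v = np.array([0.0, 1.0])   # candidate eigenvector
lam = 3.0                  # candidate eigenvalue

print(A @ v)    # [0. 3.]
print(lam * v)  # [0. 3.]  -> equal, so v is an eigenvector with eigenvalue 3
```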

Eigendecomposition

So how do we find the eigenvectors of a matrix? From $\eqref{eq:Avlv}$: $$A\mathbf{v}-\lambda I \mathbf{v} = 0$$ $$\begin{equation}(A -\lambda I) \mathbf{v} = 0\label{eq:AlI}\end{equation},$$ where $I$ is the identity matrix. The values of $\lambda$ for which $\eqref{eq:AlI}$ has a nonzero solution $\mathbf{v}$ are the eigenvalues of $A$. It turns out that this condition is equivalent to: $$\begin{equation}det(A-\lambda I) = 0,\label{eq:detAlI}\end{equation}$$ where det() is the determinant of a matrix.

Proof that $det(A-\lambda I) = 0 \equiv (A-\lambda I) \mathbf{v}=0$ has nonzero solutions

First, you must know that a matrix is non-invertible if and only if its determinant is zero. Now suppose $\lambda$ is a value for which $\eqref{eq:detAlI}$ does not hold, so that $A-\lambda I$ is invertible. Then we can left-multiply both sides of $\eqref{eq:AlI}$ by $(A-\lambda I)^{-1}$ to get: $$\mathbf{v} = 0,$$ meaning the zero vector is the only solution, and it is not a valid eigenvector. Conversely, for the values of $\lambda$ where $\eqref{eq:detAlI}$ holds, $A-\lambda I$ is non-invertible (singular), so $\eqref{eq:AlI}$ admits solutions $\mathbf{v} \neq 0$, and those values of $\lambda$ are the eigenvalues of $A$.

An example

Let's see the eigendecomposition for the matrix: $$A=\left[\begin{array}{cc}1 & 0\\1 & 3\\\end{array}\right]$$ From $\eqref{eq:detAlI}$: $$det\left(\left[\begin{array}{cc}1-\lambda & 0\\1 & 3-\lambda\\\end{array}\right]\right) = 0$$ $$(1-\lambda)(3-\lambda) = 0$$ we get directly $\lambda_1 = 1$ and $\lambda_2 = 3$. The above expression is usually referred to as the characteristic polynomial or characteristic equation of the matrix.
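For a matrix this small the roots can be read off directly; as a cross-check, a short NumPy sketch can build the characteristic polynomial coefficients with np.poly and find its roots:

```python
import numpy as np

# Characteristic polynomial of A: det(A - lambda I) = lambda^2 - 4*lambda + 3
A = np.array([[1.0, 0.0],
              [1.0, 3.0]])
coeffs = np.poly(A)       # [ 1. -4.  3.]
print(np.roots(coeffs))   # [3. 1.]  -> the eigenvalues
```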
Plugging $\lambda_1$ into $\eqref{eq:Avlv}$, we get: $$\left[\begin{array}{cc}1 & 0\\1 & 3\\\end{array}\right]\left[\begin{array}{c}v_{11}\\v_{12}\\\end{array}\right]= 1 \left[\begin{array}{c}v_{11}\\v_{12}\\\end{array}\right]$$ The first row is trivially satisfied, and the second row gives $v_{11} + 3v_{12} = v_{12}$, i.e., $v_{11} = -2v_{12}$. That is, any vector $\mathbf{v_1} = [v_{11}, v_{12}]$ with $v_{11} = -2v_{12}$ is an eigenvector of $A$ with eigenvalue 1.
Plugging $\lambda_2$ into $\eqref{eq:Avlv}$, we get: $$\left[\begin{array}{cc}1 & 0\\1 & 3\\\end{array}\right]\left[\begin{array}{c}v_{21}\\v_{22}\\\end{array}\right]= 3 \left[\begin{array}{c}v_{21}\\v_{22}\\\end{array}\right]$$ The first row gives $v_{21} = 3v_{21}$, so $v_{21} = 0$, while $v_{22} \in \mathbb{R}$ remains free. That is, any vector $\mathbf{v_2} = [v_{21}, v_{22}]$ with $v_{21} = 0$ is an eigenvector of $A$ with eigenvalue 3.
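In practice, one rarely solves these systems by hand. A sketch of the same computation with NumPy's np.linalg.eig (the eigenvectors come back normalized, as columns, and their sign and order may differ):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 3.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)

print(eigenvalues)         # [1. 3.] (order not guaranteed)
print(eigenvectors[:, 0])  # a multiple of [-2, 1] -> v11 = -2*v12
print(eigenvectors[:, 1])  # a multiple of [0, 1]  -> v21 = 0
```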

Why is eigendecomposition useful?

Referring to our previous example, we can join both eigenvectors and eigenvalues in a single matrix equation: $$A\left[\mathbf{v_1 v_2}\right] = \left[\begin{array}{cc}1 & 0\\1 & 3\\\end{array}\right]\left[\begin{array}{cc}v_{11} & v_{21}\\v_{12} & v_{22}\\\end{array}\right] =\left[\begin{array}{cc}v_{11} & v_{21}\\v_{12} & v_{22}\\\end{array}\right]\left[\begin{array}{cc}\lambda_1 & 0\\0 & \lambda_2\\\end{array}\right] =\left[\mathbf{v_1 v_2}\right]\left[\begin{array}{cc}\lambda_1 & 0\\0 & \lambda_2\\\end{array}\right]$$ If we define: $$\Lambda = \left[\begin{array}{cc}\lambda_1 & 0\\0 & \lambda_2\\\end{array}\right]$$ $$V = \left[\mathbf{v_1 v_2}\right]$$ it is also true that: $$AV = V\Lambda$$ $$\begin{equation}A = V\Lambda V^{-1}\label{eq:AVLV}\end{equation}$$ Eigendecomposition thus decomposes a matrix $A$ into a product of a matrix of eigenvectors $V$, a diagonal matrix of eigenvalues $\Lambda$, and $V^{-1}$. This can only be done if the matrix is diagonalizable. In fact, a matrix $A \in \mathbb{R}^{n \times n}$ is diagonalizable precisely when it has $n$ linearly independent eigenvectors, so that $V$ is invertible and $V^{-1}AV = \Lambda$.
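A short NumPy sketch of $\eqref{eq:AVLV}$, rebuilding $A$ from its eigenvector matrix and eigenvalue diagonal:

```python
import numpy as np

# Reconstruct A as V @ Lambda @ V^{-1}.
A = np.array([[1.0, 0.0],
              [1.0, 3.0]])
eigvals, V = np.linalg.eig(A)
Lam = np.diag(eigvals)

A_rebuilt = V @ Lam @ np.linalg.inv(V)
print(np.allclose(A, A_rebuilt))  # True
```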

Matrix inverse with eigendecomposition

From $\eqref{eq:AVLV}$: $$A^{-1} = V \Lambda^{-1} V^{-1}$$ The inverse of $\Lambda$ is just the diagonal matrix of the reciprocals of its diagonal elements (the eigenvalues), so this works whenever no eigenvalue is zero.
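A minimal sketch of this identity in NumPy, checked against np.linalg.inv:

```python
import numpy as np

# A^{-1} = V @ Lambda^{-1} @ V^{-1}; requires all eigenvalues nonzero.
A = np.array([[1.0, 0.0],
              [1.0, 3.0]])
eigvals, V = np.linalg.eig(A)

A_inv = V @ np.diag(1.0 / eigvals) @ np.linalg.inv(V)
print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```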

Power of a matrix with eigendecomposition

From $\eqref{eq:AVLV}$: $$A^2 = V \Lambda V^{-1} V \Lambda V^{-1} = V \Lambda^{2} V^{-1}$$ $$A^n = V \Lambda^n V^{-1}$$ The power of $\Lambda$ is just the power of each diagonal element, which is much cheaper than repeatedly multiplying $A$ by itself.
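The same identity in a NumPy sketch, checked against np.linalg.matrix_power:

```python
import numpy as np

# A^n = V @ Lambda^n @ V^{-1}: only the diagonal gets exponentiated.
A = np.array([[1.0, 0.0],
              [1.0, 3.0]])
eigvals, V = np.linalg.eig(A)
n = 5

A_n = V @ np.diag(eigvals ** n) @ np.linalg.inv(V)
print(np.allclose(A_n, np.linalg.matrix_power(A, n)))  # True
```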

Properties of eigendecomposition

A few standard properties follow directly from the decomposition:

- The determinant of $A$ is the product of its eigenvalues: $det(A) = \prod_i \lambda_i$.
- The trace of $A$ is the sum of its eigenvalues: $tr(A) = \sum_i \lambda_i$.
- If $A$ is real and symmetric, its eigenvalues are real and its eigenvectors can be chosen orthonormal, in which case $V^{-1} = V^T$.

How to compute eigendecomposition?

Calculating the characteristic polynomial and then solving it for the eigenvalues becomes impractical as the size of the matrix increases. In practice, iterative algorithms are used to eigendecompose a matrix.

Power iteration

Power iteration is an iterative method to calculate the eigenvalue of largest absolute value (the dominant eigenvalue) and its associated eigenvector. Only this dominant pair is found, so the method has limited use on its own.

First, we start with some vector $b_0$, which can be an educated guess of the dominant eigenvector or a random vector. Then, iterate through the following equation: $$b_{k+1} = \frac{A b_k}{\left\Vert A b_k \right\Vert}.$$ At each iteration, the vector is left-multiplied by matrix $A$ and normalized, converging to the dominant eigenvector. This method only works if:

- $A$ has a dominant eigenvalue, i.e., an eigenvalue strictly larger in absolute value than all the others;
- the initial vector $b_0$ has a nonzero component in the direction of the dominant eigenvector.

Using our example matrix $A$, the 1-norm for normalization (which keeps the arithmetic simple), and initial vector: $$b_0 = \left[\begin{array}{c}1\\1\\\end{array}\right]$$ For the first step: $$b_1 = \frac{\left[\begin{array}{cc}1 & 0\\1 & 3\\\end{array}\right]\left[\begin{array}{c}1\\1\\\end{array}\right]}{\left\Vert\left[\begin{array}{cc}1 & 0\\1 & 3\\\end{array}\right]\left[\begin{array}{c}1\\1\\\end{array}\right]\right\Vert}= \frac{\left[\begin{array}{c}1\\4\\\end{array}\right]}{5} = \left[\begin{array}{c}0.2\\0.8\\\end{array}\right]$$ For the next steps, reuse the last $b$ and: $$b_2= \left[\begin{array}{c}0.07\\0.93\\\end{array}\right], b_3= \left[\begin{array}{c}0.02\\0.98\\\end{array}\right], b_4=\left[\begin{array}{c}0.01\\0.99\\\end{array}\right], b_5= \left[\begin{array}{c}0\\1\\\end{array}\right]$$ and $$ \left\Vert A b_5 \right\Vert = 2.99$$ If you recall, the largest eigenvalue of $A$ is 3 and its eigenvector is $\mathbf{v} = [ v_{21}, v_{22}]$, where $v_{21} = 0$ and $v_{22}$ can have any nonzero value.
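A compact sketch of the same iteration in NumPy (the helper name power_iteration is ours, and the 1-norm matches the hand computation above):

```python
import numpy as np

def power_iteration(A, b0, num_iters=50):
    # Repeatedly multiply by A and normalize (1-norm, as above).
    b = b0
    for _ in range(num_iters):
        Ab = A @ b
        b = Ab / np.linalg.norm(Ab, 1)
    # The norm of A b estimates the magnitude of the dominant eigenvalue.
    return np.linalg.norm(A @ b, 1), b

A = np.array([[1.0, 0.0],
              [1.0, 3.0]])
lam, v = power_iteration(A, np.array([1.0, 1.0]))
print(lam)  # ~3.0
print(v)    # ~[0. 1.]
```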

QR algorithm

The QR algorithm uses the QR decomposition iteratively to compute the eigendecomposition. Recall that the QR decomposition decomposes a matrix $A$ into an orthogonal matrix $Q$ and an upper triangular matrix $R$ as $A = QR$. Starting from $A_0 = A$, each step factors $A_k = Q_k R_k$ and forms $A_{k+1} = R_k Q_k$. Since $A_{k+1} = Q_k^{-1} A_k Q_k$, every iterate has the same eigenvalues as $A$, and under suitable conditions $A_k$ converges to an upper triangular matrix whose diagonal holds those eigenvalues.
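A bare-bones sketch of the unshifted iteration (production implementations add shifts and a Hessenberg reduction, which we skip here):

```python
import numpy as np

def qr_algorithm(A, num_iters=100):
    # Unshifted QR iteration; Ak stays similar to A at every step.
    Ak = A.astype(float)
    for _ in range(num_iters):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
    return np.diag(Ak)  # eigenvalue estimates (real case)

A = np.array([[1.0, 0.0],
              [1.0, 3.0]])
print(qr_algorithm(A))  # approximately [3. 1.]
```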
