Eigenvalues and Eigenvectors


                            direction also satisfies this equation because L(cv) = cL(v) = λcv. More
                            generally, any non-zero vector v that solves

                                                             L(v) = λv


is called an eigenvector of L, and λ (which now need not be zero) is an
eigenvalue. Since the direction is all we really care about here, any other
vector cv (so long as c ≠ 0) is an equally good choice of eigenvector. Notice
that the relation "u and v point in the same direction" is an equivalence
relation.
                               In our example of the linear transformation L with matrix


\begin{pmatrix} -4 & 3 \\ -10 & 7 \end{pmatrix},
we have seen that L enjoys the property of having two invariant directions,
represented by eigenvectors v_1 and v_2 with eigenvalues 1 and 2, respectively.
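As a quick numerical check (a sketch using NumPy; the particular scalings v_1 = (3, 5) and v_2 = (1, 2) are one convenient choice, since eigenvectors are only determined up to a non-zero multiple):

```python
import numpy as np

# The matrix of L from the example.
A = np.array([[-4.0, 3.0],
              [-10.0, 7.0]])

# One convenient choice of eigenvectors (any non-zero multiple works).
v1 = np.array([3.0, 5.0])   # A @ v1 = 1 * v1
v2 = np.array([1.0, 2.0])   # A @ v2 = 2 * v2

print(A @ v1)  # same direction (and length) as v1: eigenvalue 1
print(A @ v2)  # twice v2: eigenvalue 2

# np.linalg.eig recovers the eigenvalues directly.
eigenvalues, _ = np.linalg.eig(A)
print(sorted(eigenvalues.real))
```

Any rescaling of `v1` or `v2` would print a rescaled output but witness the same eigenvalues, reflecting the equivalence-class view of eigenvectors above.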
It would be very convenient if we could write any vector w as a linear
combination of v_1 and v_2. Suppose w = r v_1 + s v_2 for some constants r and s.
Then

L(w) = L(r v_1 + s v_2) = r L(v_1) + s L(v_2) = r v_1 + 2s v_2.

Now L just multiplies the number r by 1 and the number s by 2. If we could
write this as a matrix, it would look like

\begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix} \begin{pmatrix} r \\ s \end{pmatrix},

                            which is much slicker than the usual scenario


L \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} ax + by \\ cx + dy \end{pmatrix}.
Here, r and s give the coordinates of w in terms of the vectors v_1 and v_2. In
the previous example, we multiplied the vector by the matrix of L and came up
with a complicated expression. In these coordinates, we see that L has a very
simple diagonal matrix, whose diagonal entries are exactly the eigenvalues
of L.
                               This process is called diagonalization. It makes complicated linear sys-
                            tems much easier to analyze.
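The change of basis can be carried out explicitly. A sketch (again with NumPy, reusing the eigenvector choices v_1 = (3, 5) and v_2 = (1, 2); the sample vector w is an arbitrary illustration): collecting the eigenvectors as the columns of a matrix P, the product P⁻¹AP is the diagonal matrix above.

```python
import numpy as np

A = np.array([[-4.0, 3.0],
              [-10.0, 7.0]])

# Columns of P are the eigenvectors v1 = (3, 5) and v2 = (1, 2).
P = np.array([[3.0, 1.0],
              [5.0, 2.0]])

# In eigenvector coordinates, L is diagonal with the eigenvalues on the diagonal.
D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))  # diag(1, 2)

# The coordinates (r, s) of a vector w in the eigenbasis solve P @ (r, s) = w.
w = np.array([4.0, 7.0])  # an arbitrary sample vector
r, s = np.linalg.solve(P, w)

# L(w) = r v1 + 2s v2, exactly as derived in the text.
print(np.allclose(A @ w, r * P[:, 0] + 2 * s * P[:, 1]))
```

Applying L in the eigenbasis thus reduces to two independent scalar multiplications, which is what makes diagonalization so useful for analyzing complicated linear systems.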

