138 Matrices
This fact has an obvious yet important consequence:
Theorem 7.3.1. Let M be a matrix and x a column vector. If
Mx = 0
then the vector x is orthogonal to the rows of M.
Remark Remember that the set of all vectors that can be obtained by adding up
scalar multiples of the columns of a matrix is called its column space. Similarly,
the row space is the set of all row vectors obtained by adding up multiples of the
rows of a matrix. The above theorem says that if Mx = 0, then the vector x is
orthogonal to every vector in the row space of M.
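The theorem can be checked numerically. Below is a small sketch (the matrix and vector are illustrative choices, not from the text): a solution of Mx = 0 dots to zero with each row, and hence with any linear combination of the rows.

```python
# Sketch: a vector x with Mx = 0 is orthogonal to every row of M,
# and therefore to every vector in the row space of M.

def dot(u, v):
    """Standard dot product of two vectors."""
    return sum(ui * vi for ui, vi in zip(u, v))

M = [[1, 2, -1],
     [2, 4, -2]]          # illustrative 2 x 3 matrix

x = [1, 0, 1]             # chosen so that Mx = 0

# Mx = 0 means each row of M is orthogonal to x ...
assert all(dot(row, x) == 0 for row in M)

# ... and so is any linear combination of the rows, e.g. 3*(row 1) - 2*(row 2).
combo = [3 * a - 2 * b for a, b in zip(M[0], M[1])]
assert dot(combo, x) == 0
```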
We know that $r \times k$ matrices can be used to represent linear transformations
$\mathbb{R}^k \to \mathbb{R}^r$ via
$$ (Mv)^i = \sum_{j=1}^{k} m^i_j v^j\,, $$
which is the same rule used when we multiply an $r \times k$ matrix by a $k \times 1$
vector to produce an $r \times 1$ vector.
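The component rule $(Mv)^i = \sum_j m^i_j v^j$ can be transcribed directly into code. This is a minimal sketch; the function name and the sample matrix are illustrative choices:

```python
# Sketch of the rule (Mv)^i = sum_j m^i_j v^j for an r x k matrix
# (stored as a list of rows) applied to a k-vector.

def mat_vec(M, v):
    """Apply an r x k matrix to a k-vector, producing an r-vector."""
    return [sum(M[i][j] * v[j] for j in range(len(v)))
            for i in range(len(M))]

M = [[1, 2, 3],
     [4, 5, 6]]           # a 2 x 3 matrix: a map from R^3 to R^2
v = [1, 0, -1]

print(mat_vec(M, v))      # -> [-2, -2]
```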
Likewise, we can use a matrix $N = (n^i_j)$ to define a linear transformation of a
vector space of matrices. For example
$$ L \colon M^s_k \longrightarrow M^r_k\,, \qquad
L(M) = (l^i_k) \ \text{where} \ l^i_k = \sum_{j=1}^{s} n^i_j m^j_k\,. $$
This is the same as the rule we use to multiply matrices. In other words,
$L(M) = NM$ is a linear transformation.
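The sum rule above is exactly matrix multiplication, and the claim that $M \mapsto NM$ is linear can be spot-checked. A sketch with illustrative matrices and helper names:

```python
# Sketch: l^i_k = sum_j n^i_j m^j_k implemented directly, plus a check
# that L(M) = N M satisfies L(a*M1 + b*M2) = a*L(M1) + b*L(M2).

def mat_mul(N, M):
    """Multiply an r x s matrix N by an s x k matrix M using the sum rule."""
    r, s, k = len(N), len(M), len(M[0])
    return [[sum(N[i][j] * M[j][col] for j in range(s)) for col in range(k)]
            for i in range(r)]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(c, A):
    return [[c * a for a in row] for row in A]

N  = [[1, 2], [0, 1]]                 # 2 x 2
M1 = [[1, 0, 2], [3, 1, 0]]           # 2 x 3
M2 = [[0, 1, 1], [1, 0, 2]]           # 2 x 3

# Linearity of L(M) = N M, with a = 2 and b = -1:
lhs = mat_mul(N, add(scale(2, M1), scale(-1, M2)))
rhs = add(scale(2, mat_mul(N, M1)), scale(-1, mat_mul(N, M2)))
assert lhs == rhs
```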
Matrix Terminology Let $M = (m^i_j)$ be a matrix. The entries $m^i_i$ are called
diagonal, and the set $\{m^1_1, m^2_2, \ldots\}$ is called the diagonal of the matrix.
Any r × r matrix is called a square matrix. A square matrix that is
zero for all non-diagonal entries is called a diagonal matrix. An example
of a square diagonal matrix is
$$\begin{pmatrix} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$
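The definitions above translate into two short helpers. This is an illustrative sketch (the function names are not from the text); note that a diagonal matrix may also have zeros on its diagonal, as in the example.

```python
# Sketch: extract the diagonal of a square matrix, and test whether
# every non-diagonal entry is zero (the definition of a diagonal matrix).

def diagonal(M):
    """Return the diagonal entries m^1_1, m^2_2, ... of a square matrix."""
    return [M[i][i] for i in range(len(M))]

def is_diagonal(M):
    """True if every off-diagonal entry of the square matrix M is zero."""
    n = len(M)
    return all(M[i][j] == 0 for i in range(n) for j in range(n) if i != j)

D = [[2, 0, 0],
     [0, 3, 0],
     [0, 0, 0]]           # the example from the text

print(diagonal(D))        # -> [2, 3, 0]
print(is_diagonal(D))     # -> True
```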