
In linear algebra, the column space (also called the range or image) of a matrix A is the span (set of all possible linear combinations) of its column vectors. The column space of a matrix is the image or range of the corresponding matrix transformation.

Let F be a field. The column space of an m × n matrix with components from F is a linear subspace of the m-space F^m. The dimension of the column space is called the rank of the matrix and is at most min(m, n). A definition for matrices over a ring K is also possible.

The row space is defined similarly.

This article considers matrices of real numbers. The row and column spaces are subspaces of the real spaces R^n and R^m, respectively.





Overview

Let A be an m-by-n matrix. Then

  1. rank(A) = dim(rowsp(A)) = dim(colsp(A)),
  2. rank(A) = number of pivots in any echelon form of A,
  3. rank(A) = the maximum number of linearly independent rows or columns of A.

If one considers the matrix as a linear transformation from R^n to R^m, then the column space of the matrix equals the image of this linear transformation.

The column space of a matrix A is the set of all linear combinations of the columns in A. If A = [a1, ..., an], then colsp(A) = span{a1, ..., an}.
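
All three characterizations are easy to check numerically. The following is a minimal sketch (assuming Python with the SymPy library, which is not part of the original article) confirming that they agree on a sample matrix:

    # Check that rank(A) = dim(rowsp(A)) = dim(colsp(A)).
    from sympy import Matrix

    A = Matrix([[1, 3, 1, 4],
                [2, 7, 3, 9],
                [1, 5, 3, 1],
                [1, 2, 0, 8]])

    rank = A.rank()                 # number of pivots in an echelon form
    dim_row = len(A.rowspace())     # size of a basis for the row space
    dim_col = len(A.columnspace())  # size of a basis for the column space

    assert rank == dim_row == dim_col == 3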

The concept of row space generalizes to matrices over C, the field of complex numbers, or over any field.

Intuitively, given a matrix A, the action of the matrix A on a vector x will return a linear combination of the columns of A weighted by the coordinates of x as coefficients. Another way to look at this is that it will (1) first project x into the row space of A, (2) perform an invertible transformation, and (3) place the resulting vector y in the column space of A. Thus the result y = A x must reside in the column space of A. See singular value decomposition for more details on this second interpretation.
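
As a concrete illustration of the first interpretation, here is a small NumPy sketch (an added example, not from the source) showing that Ax equals the x-weighted sum of the columns of A:

    import numpy as np

    A = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [2.0, 0.0]])
    x = np.array([3.0, -1.0])

    # A @ x is the linear combination x[0]*(column 1) + x[1]*(column 2).
    direct = A @ x
    combination = sum(x[j] * A[:, j] for j in range(A.shape[1]))

    assert np.allclose(direct, combination)  # both equal (3, -1, 6)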

Example

Given a matrix J:

J = \begin{bmatrix} 2 & 4 & 1 & 3 & 2 \\ -1 & -2 & 1 & 0 & 5 \\ 1 & 6 & 2 & 2 & 2 \\ 3 & 6 & 2 & 5 & 1 \end{bmatrix}

the rows are r1 = (2, 4, 1, 3, 2), r2 = (-1, -2, 1, 0, 5), r3 = (1, 6, 2, 2, 2), r4 = (3, 6, 2, 5, 1). Consequently, the row space of J is the subspace of R^5 spanned by {r1, r2, r3, r4}. Since these four row vectors are linearly independent, the row space is 4-dimensional. Moreover, in this case it can be seen that they are all orthogonal to the vector n = (6, -1, 4, -4, 0), so it can be deduced that the row space consists of all vectors in R^5 that are orthogonal to n.
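
Both claims can be verified computationally. A short SymPy sketch (an added check, under the same assumptions as the earlier examples) confirms that rank(J) = 4 and that every row of J is orthogonal to n, i.e. Jn = 0:

    from sympy import Matrix

    J = Matrix([[ 2,  4, 1, 3, 2],
                [-1, -2, 1, 0, 5],
                [ 1,  6, 2, 2, 2],
                [ 3,  6, 2, 5, 1]])
    n = Matrix([6, -1, 4, -4, 0])

    assert J.rank() == 4                # the four rows are independent
    assert J * n == Matrix.zeros(4, 1)  # each row is orthogonal to n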





Column space

Definition

Let K be a field of scalars. Let A be an m × n matrix, with column vectors v1, v2, ..., vn. A linear combination of these vectors is any vector of the form

c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \cdots + c_n \mathbf{v}_n,

where c1, c2, ..., cn are scalars. The set of all possible linear combinations of v1, ..., vn is called the column space of A. That is, the column space of A is the span of the vectors v1, ..., vn.

Any linear combination of the column vectors of a matrix A can be written as the product of A with a column vector:

A \begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix} = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix} = \begin{bmatrix} c_1 a_{11} + \cdots + c_n a_{1n} \\ \vdots \\ c_1 a_{m1} + \cdots + c_n a_{mn} \end{bmatrix} = c_1 \begin{bmatrix} a_{11} \\ \vdots \\ a_{m1} \end{bmatrix} + \cdots + c_n \begin{bmatrix} a_{1n} \\ \vdots \\ a_{mn} \end{bmatrix} = c_1 \mathbf{v}_1 + \cdots + c_n \mathbf{v}_n

Therefore, the column space of A consists of all possible products Ax, for x ∈ K^n. This is the same as the image (or range) of the corresponding matrix transformation.

Example
If A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 2 & 0 \end{bmatrix}, then the column vectors are v1 = (1, 0, 2)^T and v2 = (0, 1, 0)^T.
A linear combination of v1 and v2 is any vector of the form
c_1 \begin{bmatrix} 1 \\ 0 \\ 2 \end{bmatrix} + c_2 \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} c_1 \\ c_2 \\ 2c_1 \end{bmatrix}
The set of all such vectors is the column space of A. In this case, the column space is precisely the set of vectors (x, y, z) ∈ R^3 satisfying the equation z = 2x (using Cartesian coordinates, this set is a plane through the origin in three-dimensional space).
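
The plane equation can be spot-checked numerically; in this NumPy sketch (an added illustration), every product Ac lands on the plane z = 2x:

    import numpy as np

    A = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [2.0, 0.0]])

    rng = np.random.default_rng(0)
    for _ in range(100):
        c = rng.standard_normal(2)   # arbitrary coefficients c1, c2
        x, y, z = A @ c              # a point of the column space
        assert np.isclose(z, 2 * x)  # it satisfies z = 2x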

Basis

The columns of A span the column space, but they may not form a basis if the column vectors are not linearly independent. Fortunately, elementary row operations do not affect the dependence relations between the column vectors. This makes it possible to use row reduction to find a basis for the column space.

For example, consider the matrix

A = \begin{bmatrix} 1 & 3 & 1 & 4 \\ 2 & 7 & 3 & 9 \\ 1 & 5 & 3 & 1 \\ 1 & 2 & 0 & 8 \end{bmatrix}.

The columns of this matrix span the column space, but they may not be linearly independent, in which case some subset of them will form a basis. To find this basis, we reduce A to reduced row echelon form:

\begin{bmatrix} 1 & 3 & 1 & 4 \\ 2 & 7 & 3 & 9 \\ 1 & 5 & 3 & 1 \\ 1 & 2 & 0 & 8 \end{bmatrix} \sim \begin{bmatrix} 1 & 3 & 1 & 4 \\ 0 & 1 & 1 & 1 \\ 0 & 2 & 2 & -3 \\ 0 & -1 & -1 & 4 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -2 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & -5 \\ 0 & 0 & 0 & 5 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -2 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}.

At this point, it is clear that the first, second, and fourth columns are linearly independent, while the third column is a linear combination of the first two. (Specifically, v3 = -2v1 + v2.) Therefore, the first, second, and fourth columns of the original matrix are a basis for the column space:

\begin{bmatrix} 1 \\ 2 \\ 1 \\ 1 \end{bmatrix}, \quad \begin{bmatrix} 3 \\ 7 \\ 5 \\ 2 \end{bmatrix}, \quad \begin{bmatrix} 4 \\ 9 \\ 1 \\ 8 \end{bmatrix}.

Note that the independent columns of the reduced row echelon form are precisely the columns with pivots. This makes it possible to determine which columns are linearly independent by reducing only to echelon form.

The above algorithm can be used in general to find the dependence relations between any set of vectors, and to pick out a basis from any spanning set. A different algorithm for finding a basis from a spanning set is given in the row space article; finding a basis for the column space of A is equivalent to finding a basis for the row space of the transpose matrix A^T.
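
SymPy packages this procedure directly: Matrix.rref() returns the reduced row echelon form together with the pivot column indices, and those indices select a basis from the columns of the original matrix. A sketch using the worked example above:

    from sympy import Matrix

    A = Matrix([[1, 3, 1, 4],
                [2, 7, 3, 9],
                [1, 5, 3, 1],
                [1, 2, 0, 8]])

    rref_form, pivot_cols = A.rref()
    # pivot_cols is (0, 1, 3): the first, second, and fourth columns.

    # The pivot columns of the *original* matrix are a basis of colsp(A).
    basis = [A.col(j) for j in pivot_cols]

    # The non-pivot column is the dependence noted above: v3 = -2v1 + v2.
    assert A.col(2) == -2 * A.col(0) + A.col(1)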

Dimension

The dimension of the column space is called the rank of the matrix. The rank is equal to the number of pivots in the reduced row echelon form, and is the maximum number of linearly independent columns that can be chosen from the matrix. For example, the 4 × 4 matrix in the example above has rank three.

Because the column space is the image of the corresponding matrix transformation, the rank of a matrix is the same as the dimension of the image. For example, the transformation R^4 → R^4 described by the matrix above maps all of R^4 to some three-dimensional subspace.

The nullity of a matrix is the dimension of the null space, and is equal to the number of columns in the reduced row echelon form that do not have pivots. The rank and nullity of a matrix A with n columns are related by the equation:

\operatorname{rank}(A) + \operatorname{nullity}(A) = n.

This is known as the rank-nullity theorem.
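
For any particular matrix the theorem is easy to confirm; in this SymPy sketch the nullity is read off as the number of basis vectors returned by nullspace():

    from sympy import Matrix

    A = Matrix([[1, 3, 1, 4],
                [2, 7, 3, 9],
                [1, 5, 3, 1],
                [1, 2, 0, 8]])

    rank = A.rank()                   # 3 (pivot columns)
    nullity = len(A.nullspace())      # 1 (free column)
    assert rank + nullity == A.cols   # 3 + 1 = 4 columns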

Relation to the left null space

The left null space of A is the set of all vectors x such that x^T A = 0^T. It is the same as the null space of the transpose of A. The product of the matrix A^T and the vector x can be written in terms of the dot product of vectors:

A^{\mathsf{T}} \mathbf{x} = \begin{bmatrix} \mathbf{v}_1 \cdot \mathbf{x} \\ \mathbf{v}_2 \cdot \mathbf{x} \\ \vdots \\ \mathbf{v}_n \cdot \mathbf{x} \end{bmatrix},

because the row vectors of A^T are the transposes of the column vectors v_k of A. Thus A^T x = 0 if and only if x is orthogonal (perpendicular) to each of the column vectors of A.

It follows that the left null space (the null space of A^T) is the orthogonal complement to the column space of A.

For a matrix A, the column space, row space, null space, and left null space are sometimes referred to as the four fundamental subspaces.
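
The orthogonality between the left null space and the column space can also be checked directly (another SymPy sketch): every basis vector of the null space of A^T is orthogonal to every column of A.

    from sympy import Matrix

    A = Matrix([[1, 3, 1, 4],
                [2, 7, 3, 9],
                [1, 5, 3, 1],
                [1, 2, 0, 8]])

    # The left null space is the null space of the transpose.
    for x in A.T.nullspace():
        # x^T A = 0^T, so x is orthogonal to every column of A.
        assert x.T * A == Matrix.zeros(1, A.cols)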

For matrices over a ring

Similarly, the column space (sometimes disambiguated as right column space) can be defined for matrices over a ring K as

\sum_{k=1}^{n} \mathbf{v}_k c_k

for arbitrary scalars c1, ..., cn, with the vector m-space replaced by a free right module. This reverses the order of scalar multiplication, so that each vector v_k is multiplied on the right by the scalar c_k.




Row space

Definition

Let K be a field of scalars. Let A be an m × n matrix, with row vectors r1, r2, ... , rm. A linear combination of these vectors is any vector of the form

c_1 \mathbf{r}_1 + c_2 \mathbf{r}_2 + \cdots + c_m \mathbf{r}_m,

where c1, c2, ... , cm are scalars. The set of all possible linear combinations of r1, ... , rm is called the row space of A. That is, the row space of A is the span of the vectors r1, ... , rm.

For example, if

A = \begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & 0 \end{bmatrix},

then the row vectors are r1 = (1, 0, 2) and r2 = (0, 1, 0). A linear combination of r1 and r2 is any vector of the form

c_1 (1, 0, 2) + c_2 (0, 1, 0) = (c_1, c_2, 2c_1).

The set of all such vectors is the row space of A. In this case, the row space is precisely the set of vectors (x, y, z) ∈ K^3 satisfying the equation z = 2x (using Cartesian coordinates, this set is a plane through the origin in three-dimensional space).

For a matrix that represents a homogeneous system of linear equations, the row space consists of all linear equations that follow from those in the system.

The column space of A is equal to the row space of A^T.
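
This identity can be checked mechanically (a SymPy sketch, using the example matrix above): the two spans have the same dimension, and stacking a basis of one next to a basis of the other does not increase the rank, so the subspaces coincide.

    from sympy import Matrix

    A = Matrix([[1, 0, 2],
                [0, 1, 0]])

    col_basis = Matrix.hstack(*A.columnspace())   # basis of colsp(A)
    row_basis = Matrix.vstack(*A.T.rowspace()).T  # basis of rowsp(A^T)

    # Equal dimensions plus no rank increase means equal subspaces.
    combined = Matrix.hstack(col_basis, row_basis)
    assert combined.rank() == col_basis.rank() == row_basis.rank()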

Basis

The row space is not affected by elementary row operations. This makes it possible to use row reduction to find a basis for the row space.

For example, consider the matrix

A = \begin{bmatrix} 1 & 3 & 2 \\ 2 & 7 & 4 \\ 1 & 5 & 2 \end{bmatrix}.

The rows of this matrix span the row space, but they may not be linearly independent, in which case the rows will not be a basis. To find a basis, we reduce A to row echelon form:

Here r1, r2, r3 denote the rows at each stage of the reduction.

\begin{bmatrix} 1 & 3 & 2 \\ 2 & 7 & 4 \\ 1 & 5 & 2 \end{bmatrix} \underbrace{\sim}_{r_2 - 2r_1} \begin{bmatrix} 1 & 3 & 2 \\ 0 & 1 & 0 \\ 1 & 5 & 2 \end{bmatrix} \underbrace{\sim}_{r_3 - r_1} \begin{bmatrix} 1 & 3 & 2 \\ 0 & 1 & 0 \\ 0 & 2 & 0 \end{bmatrix} \underbrace{\sim}_{r_3 - 2r_2} \begin{bmatrix} 1 & 3 & 2 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix} \underbrace{\sim}_{r_1 - 3r_2} \begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}.

Once the matrix is in echelon form, the nonzero rows are a basis for the row space. In this case, the basis is { (1, 3, 2), (0, 1, 0) }. Another possible basis { (1, 0, 2), (0, 1, 0) } comes from a further reduction.

This algorithm can be used in general to find a basis for the span of a set of vectors. If the matrix is further simplified to reduced row echelon form, then the resulting basis is uniquely determined by the row space.

It is sometimes convenient to find a basis for the row space from among the rows of the original matrix instead (for example, this result is useful in giving an elementary proof that the determinantal rank of a matrix is equal to its rank). Since row operations can affect linear dependence relations of the row vectors, such a basis is instead found indirectly using the fact that the column space of A^T is equal to the row space of A. Using the example matrix A above, find A^T and reduce it to row echelon form:

A^T = \begin{bmatrix} 1 & 2 & 1 \\ 3 & 7 & 5 \\ 2 & 4 & 2 \end{bmatrix} \sim \begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix}.

The pivots indicate that the first two columns of A^T form a basis of the column space of A^T. Therefore, the first two rows of A (before any row reductions) also form a basis of the row space of A.
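
Both procedures are mechanical in SymPy; this sketch derives one basis from the nonzero rows of the RREF and another from the original rows of A selected by the pivots of A^T:

    from sympy import Matrix

    A = Matrix([[1, 3, 2],
                [2, 7, 4],
                [1, 5, 2]])

    # Basis from row reduction: the nonzero rows of the RREF.
    rref_form, _ = A.rref()
    echelon_basis = [rref_form.row(i) for i in range(A.rank())]
    # -> (1, 0, 2) and (0, 1, 0)

    # Basis from original rows: pivots of A^T mark independent rows of A.
    _, pivot_rows = A.T.rref()
    original_basis = [A.row(i) for i in pivot_rows]
    # -> (1, 3, 2) and (2, 7, 4), the first two rows of A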

Dimension

The dimension of the row space is called the rank of the matrix. This is the same as the maximum number of linearly independent rows that can be chosen from the matrix, or equivalently the number of pivots. For example, the 3 × 3 matrix in the example above has rank two.

The rank of a matrix is also equal to the dimension of the column space. The dimension of the null space is called the nullity of the matrix, and is related to the rank by the following equation:

\operatorname{rank}(A) + \operatorname{nullity}(A) = n,

where n is the number of columns of the matrix A. The equation above is known as the rank-nullity theorem.

Relation to the null space

The null space of matrix A is the set of all vectors x for which Ax = 0. The product of the matrix A and the vector x can be written in terms of the dot product of vectors:

A \mathbf{x} = \begin{bmatrix} \mathbf{r}_1 \cdot \mathbf{x} \\ \mathbf{r}_2 \cdot \mathbf{x} \\ \vdots \\ \mathbf{r}_m \cdot \mathbf{x} \end{bmatrix},

where r1, ... , rm are the row vectors of A. Thus Ax = 0 if and only if x is orthogonal (perpendicular) to each of the row vectors of A.

It follows that the null space of A is the orthogonal complement to the row space. For example, if the row space is a plane through the origin in three dimensions, then the null space will be the perpendicular line through the origin. This provides a proof of the rank-nullity theorem (see dimension above).
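
For the 3 × 3 example above, the row space is a plane and the null space is the perpendicular line through the origin, which this SymPy sketch confirms:

    from sympy import Matrix

    A = Matrix([[1, 3, 2],
                [2, 7, 4],
                [1, 5, 2]])

    null_basis = A.nullspace()     # rank 2, so one basis vector
    assert len(null_basis) == 1    # the null space is a line

    x = null_basis[0]              # spans the line; here (-2, 0, 1)
    assert A * x == Matrix.zeros(3, 1)   # Ax = 0
    for i in range(A.rows):
        assert A.row(i).dot(x) == 0      # x is orthogonal to every row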

The row space and null space are two of the four fundamental subspaces associated with a matrix A (the other two being the column space and left null space).

Relation to coimage

If V and W are vector spaces, then the kernel of a linear transformation T : V → W is the set of vectors v ∈ V for which T(v) = 0. The kernel of a linear transformation is analogous to the null space of a matrix.

If V is an inner product space, then the orthogonal complement to the kernel can be thought of as a generalization of the row space. This is sometimes called the coimage of T. The transformation T is one-to-one on its coimage, and the coimage maps isomorphically onto the image of T.

When V is not an inner product space, the coimage of T can be defined as the quotient space V / ker(T).




See also

  • Euclidean subspace









External links

  • Weisstein, Eric W. "Row Space". MathWorld. 
  • Weisstein, Eric W. "Column Space". MathWorld. 
  • Gilbert Strang, MIT Linear Algebra Lecture on the Four Fundamental Subspaces at Google Video, from MIT OpenCourseWare
  • Khan Academy video tutorial
  • Lecture on column space and nullspace by Gilbert Strang of MIT
  • Row Space and Column Space

Source of the article: Wikipedia
