
Linear Algebra
Chapter 4 General Vector Spaces

4.1 General Vector Spaces and Subspaces
Our aim in this section will be to focus on the algebraic properties of Rn. We draw up a set of axioms based on the properties of Rn. Any set that satisfies these axioms will have similar algebraic properties to the vector space Rn.

Definition
A vector space is a set V of elements called vectors, having operations of addition and scalar multiplication defined on it that satisfy the following conditions. (u, v, and w are arbitrary elements of V; c and d are scalars.)

Closure Axioms
1. The sum u + v exists and is an element of V. (V is closed under addition.)
2. cu is an element of V. (V is closed under scalar multiplication.)

Examples
(1) V = { …, -3, -1, 1, 3, 5, 7, … }
V is not closed under addition because 1 + 3 = 4 ∉ V.

(2) Z = { …, -2, -1, 0, 1, 2, 3, 4, … }
Z is closed under addition because for any a, b ∈ Z, a + b ∈ Z. Z is not closed under scalar multiplication because 1/2 is a scalar and, for any odd a ∈ Z, (1/2)a ∉ Z.
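Both closure examples can be checked numerically; a minimal Python sketch (the helper `is_odd` is ours, not from the text):

```python
# Check the two closure examples: the odd integers are not closed under
# addition, while Z is closed under addition but not under scalar
# multiplication (e.g. by the scalar 1/2).

def is_odd(n):
    return n % 2 == 1

# 1 and 3 are odd, but their sum 4 is not: V fails closure under addition.
assert is_odd(1) and is_odd(3) and not is_odd(1 + 3)

# Integers are closed under addition ...
assert isinstance(2 + 3, int)
# ... but (1/2) * 5 = 2.5 is not an integer: Z fails scalar closure.
assert not float(0.5 * 5).is_integer()
```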

Definition of Vector Space (continued)
Addition Axioms
3. u + v = v + u (commutative property)
4. u + (v + w) = (u + v) + w (associative property)
5. There exists an element of V, called the zero vector, denoted 0, such that u + 0 = u.
6. For every element u of V there exists an element called the negative of u, denoted -u, such that u + (-u) = 0.

Scalar Multiplication Axioms
7. c(u + v) = cu + cv
8. (c + d)u = cu + du
9. c(du) = (cd)u
10. 1u = u

A Vector Space in R3


Vector Spaces of Matrices
Let M22 = { [p q; r s] | p, q, r, s ∈ R }. Prove that M22 is a vector space.

Proof

Axiom 1: Let u = [a b; c d] and v = [e f; g h] ∈ M22. Then

u + v = [a b; c d] + [e f; g h] = [a+e b+f; c+g d+h]

u + v is a 2 × 2 matrix. Thus M22 is closed under addition.
Axioms 3 and 4: From our previous discussions we know that 2 × 2 matrices are commutative and associative under addition (Theorem 2.2).

Axiom 5: The 2 × 2 zero matrix is 0 = [0 0; 0 0], since

u + 0 = [a b; c d] + [0 0; 0 0] = [a b; c d] = u

Axiom 6: If u = [a b; c d], then -u = [-a -b; -c -d], since

u + (-u) = [a b; c d] + [-a -b; -c -d] = [a-a b-b; c-c d-d] = [0 0; 0 0] = 0

The set of m × n matrices, Mmn, is a vector space.

W = { [p q; r s] | p, q, r, s ≥ 0 }

Vector Spaces of Functions
Prove that V = { f | f: R → R } is a vector space. Let f, g ∈ V and c ∈ R. (For example: f: R → R, f(x) = 2x; g: R → R, g(x) = x² + 1.)

Axiom 1: f + g is defined by (f + g)(x) = f(x) + g(x). Then f + g : R → R, so f + g ∈ V. Thus V is closed under addition.

Axiom 2: cf is defined by (cf)(x) = c·f(x). Then cf : R → R, so cf ∈ V. Thus V is closed under scalar multiplication.

Vector Spaces of Functions

(Skipped)

Axiom 5: Let 0 be the function such that 0(x) = 0 for every x ∈ R. 0 is called the zero function. We get (f + 0)(x) = f(x) + 0(x) = f(x) + 0 = f(x) for every x ∈ R. Thus f + 0 = f, and 0 is the zero vector.

Axiom 6: Let the function -f be defined by (-f)(x) = -f(x). Then

[f + (-f)](x) = f(x) + (-f)(x) = f(x) - f(x) = 0 = 0(x)

Thus f + (-f) = 0, and -f is the negative of f.

V = { f | f(x) = ax² + bx + c for some a, b, c ∈ R }

The Complex Vector Space Cn
(Skipped)

Let (u1, ..., un) be a sequence of n complex numbers. The set of all such sequences is denoted Cn. Let operations of addition and scalar multiplication (by a complex scalar c) be defined on Cn as follows:

(u1, ..., un) + (v1, ..., vn) = (u1 + v1, ..., un + vn)
c(u1, ..., un) = (cu1, ..., cun)

It can be shown that Cn with these two operations is a complex vector space.

Theorem 4.1
Let V be a vector space, v a vector in V, 0 the zero vector of V, c a scalar, and 0 the zero scalar. Then
(a) 0v = 0
(b) c0 = 0
(c) (-1)v = -v
(d) If cv = 0, then either c = 0 or v = 0.

Proof
(a) 0v + 0v = (0 + 0)v = 0v
(0v + 0v) + (-0v) = 0v + (-0v)
0v + [0v + (-0v)] = 0, so 0v + 0 = 0, and 0v = 0.
(c) (-1)v + v = (-1)v + 1v = [(-1) + 1]v = 0v = 0

Subspaces
In general, a subset of a vector space may or may not satisfy the closure axioms. However, any subset that is closed under both of these operations satisfies all the other vector space properties.

Definition
Let V be a vector space and U a nonempty subset of V. If U is a vector space under the operations of addition and scalar multiplication of V, it is called a subspace of V.

Example 1
Let W be the subset of R3 consisting of all vectors of the form (a, a, b). Show that W is a subspace of R3.
Solution
Let (a, a, b), (c, c, d) ∈ W, and let k ∈ R. We get
(a, a, b) + (c, c, d) = (a+c, a+c, b+d) ∈ W
k(a, a, b) = (ka, ka, kb) ∈ W
Thus W is a subspace of R3.

Example 1’
Let W be the set of vectors of the form (a, a², b). Show that W is not a subspace of R3.
Solution
Let (a, a², b), (c, c², d) ∈ W.
(a, a², b) + (c, c², d) = (a+c, a²+c², b+d) ≠ (a+c, (a+c)², b+d) in general
Thus (a, a², b) + (c, c², d) ∉ W. W is not closed under addition, so W is not a subspace.

Example 2
Prove that the set U of 2 × 2 diagonal matrices is a subspace of the vector space M22 of 2 × 2 matrices.
Solution
(+) Let u = [a 0; 0 b] and v = [p 0; 0 q] ∈ U. We get
u + v = [a 0; 0 b] + [p 0; 0 q] = [a+p 0; 0 b+q]
So u + v ∈ U, and U is closed under addition.
(·) Let c ∈ R. We get
cu = c[a 0; 0 b] = [ca 0; 0 cb]
So cu ∈ U, and U is closed under scalar multiplication. Thus U is a subspace of M22.

Example 3

(Skipped)

Let Pn denote the set of real polynomial functions of degree ≤ n. Prove that Pn is a vector space if addition and scalar multiplication are defined on polynomials in a pointwise manner.
Solution
Let f and g ∈ Pn, where
f(x) = an x^n + an-1 x^(n-1) + ... + a1 x + a0 and g(x) = bn x^n + bn-1 x^(n-1) + ... + b1 x + b0
(+) (f + g)(x) = f(x) + g(x)
= [an x^n + an-1 x^(n-1) + ... + a1 x + a0] + [bn x^n + bn-1 x^(n-1) + ... + b1 x + b0]
= (an + bn) x^n + (an-1 + bn-1) x^(n-1) + ... + (a1 + b1) x + (a0 + b0)
(f + g)(x) is a polynomial of degree ≤ n. Thus f + g ∈ Pn, and Pn is closed under addition.

(Skipped)
(·) Let c ∈ R.
(cf)(x) = c[f(x)] = c[an x^n + an-1 x^(n-1) + ... + a1 x + a0] = can x^n + can-1 x^(n-1) + ... + ca1 x + ca0
(cf)(x) is a polynomial of degree ≤ n. Thus cf ∈ Pn, and Pn is closed under scalar multiplication.
By (+) and (·), Pn is a subspace of the vector space V of functions. Therefore Pn is a vector space.

Theorem 4.2
Let U be a subspace of a vector space V. U contains the zero vector of V.

Proof
Let u be an arbitrary vector in U and 0 the zero vector of V. Let 0 be the zero scalar. By Theorem 4.1(a) we know that 0u = 0. Since U is closed under scalar multiplication, this means that 0 is in U.

Note. Let 0 be the zero vector of V and U a subset of V.
If 0 ∉ U, then U is not a subspace of V.
If 0 ∈ U, check closure under (+) and (·) to determine whether U is a subspace of V.

Example 4
Let W be the set of vectors of the form (a, a, a+2). Show that W is not a subspace of R3.

Solution
If (a, a, a+2) = (0, 0, 0), then a = 0 and a + 2 = 0, a contradiction.
⇒ (0, 0, 0) ∉ W. ⇒ W is not a subspace of R3.

4.2 Linear Combinations
W = {(a, a, b) | a, b ∈ R} ⊆ R3
(a, a, b) = a(1, 1, 0) + b(0, 0, 1)
⇒ Any vector in W can be represented in terms of (1, 1, 0) and (0, 0, 1), e.g.,
(2, 2, 3) = 2(1, 1, 0) + 3(0, 0, 1)
(-1, -1, 7) = -1(1, 1, 0) + 7(0, 0, 1)

Definition
Let v1, v2, …, vm be vectors in a vector space V. The vector v in V is a linear combination of v1, v2, …, vm if there exist scalars c1, c2, …, cm such that v can be written
v = c1v1 + c2v2 + … + cmvm
⇒ Any vector in W above is a linear combination of (1, 1, 0) and (0, 0, 1).

Example
The vector (7, 3, 2) is a linear combination of the vectors (1, 3, 0) and (2, -3, 1), since
(7, 3, 2) = 3(1, 3, 0) + 2(2, -3, 1)
The vector (3, 4, 2) is not a linear combination of (1, 1, 0) and (2, 3, 0), because there are no values of c1 and c2 for which
(3, 4, 2) = c1(1, 1, 0) + c2(2, 3, 0)
is true (the third component of any such combination is 0).

Example 1
Determine whether or not the vector (8, 0, 5) is a linear combination of the vectors (1, 2, 3), (0, 1, 4), and (2, -1, 1). Solution Suppose c1(1, 2, 3)+c2(0, 1, 4)+c3(2, -1, 1)=(8, 0, 5).
c1 + 2c3 = 8
2c1 + c2 - c3 = 0
3c1 + 4c2 + c3 = 5
⇒ c1 = 2, c2 = -1, c3 = 3

Thus (8, 0, 5) is a linear combination of (1, 2, 3), (0, 1, 4), and (2, -1, 1):
(8, 0, 5) = 2(1, 2, 3) - (0, 1, 4) + 3(2, -1, 1)
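The solution c1 = 2, c2 = -1, c3 = 3 can be verified numerically; a minimal Python sketch (the helper `lin_comb` is ours, not from the text):

```python
# Verify that (8, 0, 5) = 2*(1, 2, 3) - 1*(0, 1, 4) + 3*(2, -1, 1),
# i.e. the solution c1 = 2, c2 = -1, c3 = 3 found above.

def lin_comb(coeffs, vectors):
    """Return c1*v1 + ... + cm*vm for lists of scalars and vectors."""
    return tuple(sum(c * x for c, x in zip(coeffs, col))
                 for col in zip(*vectors))

v = lin_comb([2, -1, 3], [(1, 2, 3), (0, 1, 4), (2, -1, 1)])
assert v == (8, 0, 5)
```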

Example 2
Determine whether the vector (4, 5, 5) is a linear combination of the vectors (1, 2, 3), (-1, 1, 4), and (3, 3, 2).

Solution
Suppose c1(1, 2, 3) + c2(-1, 1, 4) + c3(3, 3, 2) = (4, 5, 5). Then
c1 - c2 + 3c3 = 4
2c1 + c2 + 3c3 = 5
3c1 + 4c2 + 2c3 = 5
⇒ c1 = -2r + 3, c2 = r - 1, c3 = r

Thus (4, 5, 5) can be expressed in many ways as a linear combination of (1, 2, 3), (-1, 1, 4), and (3, 3, 2):
(4, 5, 5) = (-2r + 3)(1, 2, 3) + (r - 1)(-1, 1, 4) + r(3, 3, 2)
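Every value of the parameter r gives a valid combination, which can be spot-checked numerically; a short Python sketch (the helper `lin_comb` is ours, not from the text):

```python
# The system has infinitely many solutions c1 = -2r + 3, c2 = r - 1, c3 = r.
# Verify that every choice of r reproduces (4, 5, 5).

def lin_comb(coeffs, vectors):
    return tuple(sum(c * x for c, x in zip(coeffs, col))
                 for col in zip(*vectors))

for r in range(-3, 4):
    v = lin_comb([-2 * r + 3, r - 1, r],
                 [(1, 2, 3), (-1, 1, 4), (3, 3, 2)])
    assert v == (4, 5, 5)
```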

Example 2’
Show that the vector (3, -4, -6) cannot be expressed as a linear combination of the vectors (1, 2, 3), (-1, -1, -2), and (1, 4, 5).

Solution
Suppose c1(1, 2, 3) + c2(-1, -1, -2) + c3(1, 4, 5) = (3, -4, -6). Then
c1 - c2 + c3 = 3
2c1 - c2 + 4c3 = -4
3c1 - 2c2 + 5c3 = -6
This system has no solution. Thus (3, -4, -6) is not a linear combination of (1, 2, 3), (-1, -1, -2), and (1, 4, 5).


Spanning a Vector Space
Definition
Let v1, v2, …, vm be vectors in a vector space V. These vectors span V if every vector in V can be expressed as a linear combination of them. {v1, v2, …, vm} is called a spanning set of V.


Example 3
Show that the vectors (1, 2, 0), (0, 1, -1), and (1, 1, 2) span R3.
Solution
Let (x, y, z) be an arbitrary element of R3. Suppose
(x, y, z) = c1(1, 2, 0) + c2(0, 1, -1) + c3(1, 1, 2)
⇒ (x, y, z) = (c1 + c3, 2c1 + c2 + c3, -c2 + 2c3)
c1 + c3 = x
2c1 + c2 + c3 = y
-c2 + 2c3 = z
⇒ c1 = 3x - y - z, c2 = -4x + 2y + z, c3 = -2x + y + z
⇒ (x, y, z) = (3x - y - z)(1, 2, 0) + (-4x + 2y + z)(0, 1, -1) + (-2x + y + z)(1, 1, 2)
⇒ The vectors (1, 2, 0), (0, 1, -1), and (1, 1, 2) span R3.
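The coefficient formulas can be spot-checked for particular (x, y, z); a minimal Python sketch (the helper `combine` is ours, not from the text):

```python
# For any (x, y, z), the coefficients c1 = 3x - y - z, c2 = -4x + 2y + z,
# c3 = -2x + y + z reconstruct it from (1, 2, 0), (0, 1, -1), (1, 1, 2).

def combine(x, y, z):
    c1, c2, c3 = 3*x - y - z, -4*x + 2*y + z, -2*x + y + z
    return tuple(c1*a + c2*b + c3*c
                 for a, b, c in zip((1, 2, 0), (0, 1, -1), (1, 1, 2)))

assert combine(1, 0, 0) == (1, 0, 0)
assert combine(4, -7, 9) == (4, -7, 9)
```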

Theorem 4.3
Let v1, …, vm be vectors in a vector space V. Let U be the set consisting of all linear combinations of v1, …, vm. Then U is a subspace of V spanned by the vectors v1, …, vm. U is said to be the vector space generated by v1, …, vm. It is denoted Span{v1, …, vm}.

Proof
(+) Let u1 = a1v1 + … + amvm and u2 = b1v1 + … + bmvm ∈ U. Then
u1 + u2 = (a1v1 + … + amvm) + (b1v1 + … + bmvm) = (a1 + b1)v1 + … + (am + bm)vm
⇒ u1 + u2 is a linear combination of v1, …, vm. ⇒ u1 + u2 ∈ U. ⇒ U is closed under vector addition.

(·) Let c ∈ R. Then
cu1 = c(a1v1 + … + amvm) = ca1v1 + … + camvm
⇒ cu1 is a linear combination of v1, …, vm. ⇒ cu1 ∈ U. ⇒ U is closed under scalar multiplication. Thus U is a subspace of V.
By the definition of U, every vector in U can be written as a linear combination of v1, …, vm. Thus v1, …, vm span U.

Example 4
Consider the vectors (-1, 5, 3) and (2, -3, 4) in R3. Let U = Span{(-1, 5, 3), (2, -3, 4)}. U is the subspace of R3 consisting of all vectors of the form c1(-1, 5, 3) + c2(2, -3, 4). The following are examples of some of the vectors in U, obtained by giving c1 and c2 various values:
c1 = 1, c2 = 1: vector (1, 2, 7)
c1 = 2, c2 = 3: vector (4, 1, 18)

We can visualize U. U is made up of all vectors in the plane defined by the vectors (-1, 5, 3) and (2, -3, 4).


Figure 4.1

We can generalize this result. Let v1 and v2 be vectors in the space R3. The subspace U generated by v1 and v2 is the set of all vectors of the form c1v1 + c2v2. If v1 and v2 are not collinear, then U is the plane defined by v1 and v2.

Figure 4.2


Example 5
Let v1 and v2 span a subspace U of a vector space V. Let k1 and k2 be nonzero scalars. Show that k1v1 and k2v2 also span U.

Solution
Choose any vector v ∈ U. Since v1 and v2 span U, there exist a, b ∈ R such that
v = av1 + bv2
We can write
v = (a/k1)(k1v1) + (b/k2)(k2v2)
⇒ k1v1 and k2v2 span U.

Example 6
Determine whether the matrix [-1 7; 8 -1] is a linear combination of the matrices [1 0; 2 1], [2 -3; 0 2], and [0 1; 2 0] in the vector space M22 of 2 × 2 matrices.
Solution
Suppose
c1[1 0; 2 1] + c2[2 -3; 0 2] + c3[0 1; 2 0] = [-1 7; 8 -1]
Then
[c1+2c2  -3c2+c3; 2c1+2c3  c1+2c2] = [-1 7; 8 -1]
⇒ c1 + 2c2 = -1
-3c2 + c3 = 7
2c1 + 2c3 = 8
c1 + 2c2 = -1
This system has the unique solution c1 = 3, c2 = -2, c3 = 1. Therefore
[-1 7; 8 -1] = 3[1 0; 2 1] - 2[2 -3; 0 2] + [0 1; 2 0]

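The matrix combination can be verified entrywise; a minimal Python sketch representing each 2 × 2 matrix as a nested list (the helper `mat_comb` is ours, not from the text):

```python
# Verify 3*[[1,0],[2,1]] - 2*[[2,-3],[0,2]] + 1*[[0,1],[2,0]] = [[-1,7],[8,-1]].

def mat_comb(coeffs, mats):
    """Entrywise linear combination of equally-sized matrices."""
    rows = len(mats[0])
    cols = len(mats[0][0])
    return [[sum(c * m[i][j] for c, m in zip(coeffs, mats))
             for j in range(cols)] for i in range(rows)]

result = mat_comb([3, -2, 1],
                  [[[1, 0], [2, 1]], [[2, -3], [0, 2]], [[0, 1], [2, 0]]])
assert result == [[-1, 7], [8, -1]]
```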

Example 7
(Skipped)

Show that the function h(x) = 4x² + 3x - 7 lies in the space Span{f, g} generated by f(x) = 2x² - 5 and g(x) = x + 1.

Solution
Suppose c1 f + c2 g = h. Then
c1(2x² - 5) + c2(x + 1) = 4x² + 3x - 7
2c1 x² + c2 x - 5c1 + c2 = 4x² + 3x - 7
⇒ 2c1 = 4, c2 = 3, -5c1 + c2 = -7
⇒ c1 = 2, c2 = 3
⇒ h = 2f + 3g
Therefore the function h(x) lies in Span{f, g}.
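The identity h = 2f + 3g can be spot-checked pointwise; a short Python sketch:

```python
# Check h = 2f + 3g pointwise at a handful of sample points.

f = lambda x: 2 * x**2 - 5
g = lambda x: x + 1
h = lambda x: 4 * x**2 + 3 * x - 7

for x in (-2, -1, 0, 1, 2, 10):
    assert 2 * f(x) + 3 * g(x) == h(x)
```

Two polynomials of degree ≤ 2 agreeing at three or more points are equal, so a few sample points suffice here.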

Homework
Exercise 4.2: 2, 4, 6, 18, 20


4.3 Linear Dependence and Independence
The concepts of dependence and independence of vectors are useful tools in constructing “efficient” spanning sets for vector spaces – sets in which there are no redundant vectors.

Definition
(a) The set of vectors {v1, …, vm} in a vector space V is said to be linearly dependent if there exist scalars c1, …, cm, not all zero, such that
c1v1 + … + cmvm = 0
(b) The set of vectors {v1, …, vm} is linearly independent if
c1v1 + … + cmvm = 0
can only be satisfied when c1 = 0, …, cm = 0.

Example 1
Determine whether the set {(1, 2, 0), (0, 1, -1), (1, 1, 2)} is linearly dependent in R3.
Solution
Suppose c1(1, 2, 0) + c2(0, 1, -1) + c3(1, 1, 2) = 0. Then
c1 + c3 = 0
2c1 + c2 + c3 = 0
-c2 + 2c3 = 0
c1 = 0, c2 = 0, c3 = 0 is the unique solution. Thus the set of vectors is linearly independent.

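One quick way to confirm that the homogeneous system has only the trivial solution is to check that the 3 × 3 coefficient matrix has nonzero determinant; a Python sketch (the helper `det3` is a hand-rolled cofactor expansion, not from the text):

```python
# The vectors are independent exactly when the homogeneous system has only
# the trivial solution, i.e. when the 3x3 coefficient matrix has a
# nonzero determinant.

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along row 0."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Columns are the vectors (1, 2, 0), (0, 1, -1), (1, 1, 2).
A = [[1, 0, 1],
     [2, 1, 1],
     [0, -1, 2]]
assert det3(A) != 0  # only the trivial solution: linearly independent
```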

Example 2

(Skipped)

(a) Show that the set {x² + 1, 3x - 1, -4x + 1} is linearly independent in P2.
(b) Show that the set {x + 1, x - 1, -x + 5} is linearly dependent in P1.
Solution

(a) Suppose c1(x² + 1) + c2(3x - 1) + c3(-4x + 1) = 0
⇒ c1x² + (3c2 - 4c3)x + (c1 - c2 + c3) = 0
⇒ c1 = 0, c2 = 0, c3 = 0 is the unique solution ⇒ linearly independent
(b) Suppose c1(x + 1) + c2(x - 1) + c3(-x + 5) = 0
⇒ (c1 + c2 - c3)x + (c1 - c2 + 5c3) = 0
⇒ many solutions ⇒ linearly dependent

Theorem 4.4
A set consisting of two or more vectors in a vector space is linearly dependent if and only if it is possible to express one of the vectors as a linear combination of the other vectors.
Proof
(⇒) Let the set {v1, v2, …, vm} be linearly dependent. Then there exist scalars c1, c2, …, cm, not all zero, such that
c1v1 + c2v2 + … + cmvm = 0
Assume that c1 ≠ 0. The above identity can be rewritten
v1 = (-c2/c1)v2 + … + (-cm/c1)vm
Thus v1 is a linear combination of v2, …, vm.

(⇐) Conversely, assume that v1 is a linear combination of v2, …, vm. Then there exist scalars d2, …, dm such that
v1 = d2v2 + … + dmvm
Rewrite this equation as
1v1 + (-d2)v2 + … + (-dm)vm = 0
Thus the set {v1, v2, …, vm} is linearly dependent, completing the proof.

Linear Dependence of {v1, v2}

{v1, v2} linearly dependent; vectors lie on a line

{v1, v2} linearly independent; vectors do not lie on a line

Figure 4.3 Linear dependence and independence of {v1, v2} in R3.

Linear Dependence of {v1, v2, v3}

{v1, v2, v3} linearly dependent; vectors lie in a plane

{v1, v2, v3} linearly independent; vectors do not lie in a plane

Figure 4.4 Linear dependence and independence of {v1, v2, v3} in R3.

Theorem 4.5
Let V be a vector space. Any set of vectors in V that contains the zero vector is linearly dependent.

Proof Consider the set {0, v2, …, vm}, which contains the zero vector. Let us examine the identity
c1·0 + c2v2 + … + cmvm = 0
We see that the identity is true for c1 = 1, c2 = 0, …, cm = 0 (not all zero). Thus the set of vectors is linearly dependent, proving the theorem.

Theorem 4.6
Let the set {v1, …, vm} be linearly dependent in a vector space V. Any set of vectors in V that contains these vectors will also be linearly dependent.
Proof
Since the set {v1, …, vm} is linearly dependent, there exist scalars c1, …, cm, not all zero, such that
c1v1 + … + cmvm = 0
Consider the set of vectors {v1, …, vm, vm+1, …, vn}, which contains the given vectors. There are scalars, not all zero, namely c1, …, cm, 0, …, 0, such that
c1v1 + … + cmvm + 0vm+1 + … + 0vn = 0
Thus the set {v1, …, vm, vm+1, …, vn} is linearly dependent.

Example 3
Let the set {v1, v2} be linearly independent. Prove that {v1 + v2, v1 – v2} is also linearly independent.

Solution
Suppose
a(v1 + v2) + b(v1 - v2) = 0    (1)
We get
av1 + av2 + bv1 - bv2 = 0
(a + b)v1 + (a - b)v2 = 0
Since {v1, v2} is linearly independent,
a + b = 0
a - b = 0
This system has the unique solution a = 0, b = 0. Returning to identity (1), we conclude that {v1 + v2, v1 - v2} is linearly independent.

4.4 Properties of Bases
Theorem 4.7 Let the vectors v1, …, vn span a vector space V. Each vector in V can be expressed uniquely as a linear combination of these vectors if and only if the vectors are linearly independent.

Proof
(a) (⇐) Assume that v1, …, vn are linearly independent. Let v ∈ V. Since v1, …, vn span V, we can express v as a linear combination of these vectors. Suppose we can write
v = a1v1 + … + anvn and v = b1v1 + … + bnvn
⇒ a1v1 + … + anvn = b1v1 + … + bnvn
⇒ (a1 - b1)v1 + … + (an - bn)vn = 0
Since v1, …, vn are linearly independent, a1 - b1 = 0, …, an - bn = 0, implying that a1 = b1, …, an = bn. ⇒ the expression is unique.

(b) (⇒) Let v ∈ V. Assume that v can be written in only one way as a linear combination of v1, …, vn. Note that 0v1 + … + 0vn = 0. So if c1v1 + … + cnvn = 0, uniqueness implies that c1 = 0, c2 = 0, …, cn = 0. ⇒ v1, …, vn are linearly independent.

Definition
A finite set of vectors {v1, …, vm} is called a basis for a vector space V if the set spans V and is linearly independent.

Theorem 4.8
Let {v1, …, vn} be a basis for a vector space V. If {w1, …, wm} is a set of more than n vectors in V, then this set is linearly dependent.
Proof
Suppose
c1w1 + … + cmwm = 0    (1)
We will show that this identity holds for some c1, …, cm not all zero.
The set {v1, …, vn} is a basis for V. Thus each of the vectors w1, …, wm can be expressed as a linear combination of v1, …, vn. Let
w1 = a11v1 + a12v2 + … + a1nvn
⋮
wm = am1v1 + am2v2 + … + amnvn

Substituting for w1, …, wm into Equation (1) we get
c1(a11v1 + a12v2 + … + a1nvn) + … + cm(am1v1 + am2v2 + … + amnvn) = 0
Rearranging, we get
(c1a11 + c2a21 + … + cmam1)v1 + … + (c1a1n + c2a2n + … + cmamn)vn = 0
Since v1, …, vn are linearly independent,
a11c1 + a21c2 + … + am1cm = 0
⋮
a1nc1 + a2nc2 + … + amncm = 0
Since m > n, this homogeneous system has more unknowns than equations, so it has many solutions, not all zero. Thus the set {w1, …, wm} is linearly dependent.

Theorem 4.9
Any two bases for a vector space V consist of the same number of vectors.

Proof
Let {v1, …, vn} and {w1, …, wm} be two bases for V. By Theorem 4.8, m ≤ n and n ≤ m. Thus n = m.

Definition
If a vector space V has a basis consisting of n vectors, then the dimension of V is said to be n. We write dim(V) for the dimension of V.
- V is finite dimensional if such a finite basis exists.
- V is infinite dimensional otherwise.

Example 1
Consider the set {(1, 2, 3), (-2, 4, 1)} of vectors in R3. These vectors generate a subspace V of R3 consisting of all vectors of the form
v = c1(1, 2, 3) + c2(-2, 4, 1)
The vectors (1, 2, 3) and (-2, 4, 1) span this subspace. Furthermore, since the second vector is not a scalar multiple of the first, the vectors are linearly independent. Therefore {(1, 2, 3), (-2, 4, 1)} is a basis for V, and dim(V) = 2. We know that V is, in fact, a plane through the origin.

Theorem 4.10
(a) The origin is a subspace of R3. The dimension of this subspace is defined to be zero. (b) The one-dimensional subspaces of R3 are lines through the origin. (c) The two-dimensional subspaces of R3 are planes through the origin.

Figure 4.5 One and two-dimensional subspaces of R3


Proof
(a) Let V be the set {(0, 0, 0)}, consisting of a single element, the zero vector of R3. Let c be an arbitrary scalar. Since
(0, 0, 0) + (0, 0, 0) = (0, 0, 0) and c(0, 0, 0) = (0, 0, 0)
V is closed under addition and scalar multiplication. It is thus a subspace of R3. The dimension of this subspace is defined to be zero.
(b) Let v be a basis for a one-dimensional subspace V of R3. Every vector in V is thus of the form cv, for some scalar c. We know that these vectors form a line through the origin.
(c) Let {v1, v2} be a basis for a two-dimensional subspace V of R3. Every vector in V is of the form c1v1 + c2v2. V is thus a plane through the origin.

Theorem 4.11
Let V be a vector space of dimension n.
(a) If S = {v1, …, vn} is a set of n linearly independent vectors in V, then S is a basis for V.
(b) If S = {v1, …, vn} is a set of n vectors in V that spans V, then S is a basis for V.

In other words, let V be a vector space and S = {v1, …, vn} a set of vectors in V. If any two of the following conditions hold,
(a) dim(V) = |S|
(b) S is a linearly independent set
(c) S spans V
then S is a basis of V.

Example 2
Prove that the set B = {(1, 3, -1), (2, 1, 0), (4, 2, 1)} is a basis for R3.
Solution
Since dim(R3) = |B| = 3, it suffices to show that this set is linearly independent or that it spans R3. Let us check for linear independence. Suppose
c1(1, 3, -1) + c2(2, 1, 0) + c3(4, 2, 1) = (0, 0, 0)
This identity leads to the system of equations
c1 + 2c2 + 4c3 = 0
3c1 + c2 + 2c3 = 0
-c1 + c3 = 0
This system has the unique solution c1 = 0, c2 = 0, c3 = 0. Thus the vectors are linearly independent, and the set {(1, 3, -1), (2, 1, 0), (4, 2, 1)} is therefore a basis for R3.
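Since dim(R3) = 3, independence of the three vectors can also be confirmed by a nonzero determinant; a Python sketch (the helper `det3` is a hand-rolled cofactor expansion, not from the text):

```python
# dim(R3) = 3, so three linearly independent vectors form a basis.
# Independence of three vectors in R3 can be checked with a determinant.

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along row 0."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Rows are the vectors of B = {(1, 3, -1), (2, 1, 0), (4, 2, 1)}.
B = [[1, 3, -1],
     [2, 1, 0],
     [4, 2, 1]]
assert det3(B) != 0  # B is linearly independent, hence a basis for R3
```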

Theorem 4.12
Let V be a vector space of dimension n. Let {v1, …, vm} be a set of m linearly independent vectors in V, where m < n. Then there exist vectors vm+1, …, vn such that {v1, …, vm, vm+1, …, vn } is a basis of V.


Example 3
State (with a brief explanation) whether the following statements are true or false.
(a) The vectors (1, 2), (-1, 3), (5, 2) are linearly dependent in R2.
(b) The vectors (1, 0, 0), (0, 2, 0), (1, 2, 0) span R3.
(c) {(1, 0, 2), (0, 1, -3)} is a basis for the subspace of R3 consisting of vectors of the form (a, b, 2a - 3b).
(d) Any set of two vectors can be used to generate a two-dimensional subspace of R3.
Solution
(a) True: The dimension of R2 is two. Thus any three vectors in R2 are linearly dependent.
(b) False: The three vectors are linearly dependent. Thus they cannot span a three-dimensional space.

(c) True: The vectors span the subspace since
(a, b, 2a - 3b) = a(1, 0, 2) + b(0, 1, -3)
The vectors are also linearly independent since neither is a scalar multiple of the other.
(d) False: The two vectors must be linearly independent.

Homework
Exercise 4.4: 5, 6, 7, 16, 20, 21, 23, 25


4.5 Rank
Rank enables one to relate matrices to vectors, and vice versa.

Definition
Let A be an m × n matrix. The rows of A may be viewed as row vectors r1, …, rm, and the columns as column vectors c1, …, cn. Each row vector has n components, and each column vector has m components. The row vectors span a subspace of Rn called the row space of A, and the column vectors span a subspace of Rm called the column space of A.

Example 1
Consider the matrix
A = [1 2 -1 2; 3 4 1 6; 5 4 1 0]
(1) The row vectors of A are
r1 = (1, 2, -1, 2), r2 = (3, 4, 1, 6), r3 = (5, 4, 1, 0)
These vectors span a subspace of R4 called the row space of A.
(2) The column vectors of A are
c1 = [1; 3; 5], c2 = [2; 4; 4], c3 = [-1; 1; 1], c4 = [2; 6; 0]
These vectors span a subspace of R3 called the column space of A.

Theorem 4.13
The row space and the column space of a matrix A have the same dimension.

Proof Let u1, …, um be the row vectors of A. The ith vector is
ui = (ai1, ai2, ..., ain)
Let the dimension of the row space be s. Let the vectors v1, …, vs form a basis for the row space. Let the jth vector of this set be
vj = (bj1, bj2, ..., bjn)
Each of the row vectors of A is a linear combination of v1, …, vs. Let
u1 = c11v1 + c12v2 + … + c1svs
⋮
um = cm1v1 + cm2v2 + … + cmsvs
Equating the ith components of the vectors on the left and right, we get
a1i = c11b1i + c12b2i + … + c1sbsi
⋮
ami = cm1b1i + cm2b2i + … + cmsbsi
This may be written
[a1i; …; ami] = b1i[c11; …; cm1] + b2i[c12; …; cm2] + … + bsi[c1s; …; cms]
This implies that each column vector of A lies in a space spanned by a single set of s vectors. Since s is the dimension of the row space of A, we get
dim(column space of A) ≤ dim(row space of A)

By similar reasoning, we can show that
dim(row space of A) ≤ dim(column space of A)
Combining these two results, we see that dim(row space of A) = dim(column space of A), proving the theorem.

Definition
The dimension of the row space and the column space of a matrix A is called the rank of A. The rank of A is denoted rank(A).

Example 2
Determine the rank of the matrix
A = [1 2 3; 0 1 2; 2 5 8]
Solution
The third row of A is a linear combination of the first two rows:
(2, 5, 8) = 2(1, 2, 3) + (0, 1, 2)
Hence the three rows of A are linearly dependent, and the rank of A must be less than 3. Since (1, 2, 3) is not a scalar multiple of (0, 1, 2), these two vectors are linearly independent and form a basis for the row space of A. Thus rank(A) = 2.

Theorem 4.14
The nonzero row vectors of a matrix A that is in reduced echelon form are a basis for the row space of A. The rank of A is the number of nonzero row vectors.
Proof
Let A be an m × n matrix with nonzero row vectors r1, …, rt. Consider the identity
k1r1 + k2r2 + … + ktrt = 0
where k1, …, kt are scalars. The first nonzero element of r1 is 1, and r1 is the only row vector to have a nonzero number in this component. Thus, on adding the vectors k1r1, k2r2, ..., ktrt, we get a vector whose first component is k1. On equating this vector to zero, we get k1 = 0. The identity then reduces to
k2r2 + … + ktrt = 0

The first nonzero element of r2 is 1, and r2 is the only one of the remaining row vectors with a nonzero number in this component. Thus k2 = 0. Similarly, k3, …, kt are all zero. The vectors r1, …, rt are therefore linearly independent. These vectors span the row space of A, and thus form a basis for it. The dimension of the row space is t, so the rank of A is t, the number of nonzero row vectors in A.

Example 3
Find the rank of the matrix
A = [1 2 0 0; 0 0 1 0; 0 0 0 1; 0 0 0 0]

This matrix is in reduced echelon form. There are three nonzero row vectors, namely (1, 2, 0, 0), (0, 0, 1, 0), and (0, 0, 0, 1). According to the previous theorem, these three vectors form a basis for the row space of A. Rank(A) = 3.


Theorem 4.15
Let A and B be row equivalent matrices. Then A and B have the same row space, and rank(A) = rank(B).

Theorem 4.16
Let E be a reduced echelon form of a matrix A. The nonzero row vectors of E form a basis for the row space of A. The rank of A is the number of nonzero row vectors in E.

Example 4
Find a basis for the row space of the following matrix A, and determine its rank.
A = [1 2 3; 2 5 4; 1 1 5]
Solution
Use elementary row operations to find a reduced echelon form of the matrix A. We get
[1 2 3; 2 5 4; 1 1 5] ≈ [1 2 3; 0 1 -2; 0 -1 2] ≈ [1 0 7; 0 1 -2; 0 0 0]
The two vectors (1, 0, 7) and (0, 1, -2) form a basis for the row space of A. rank(A) = 2.
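The rank computation can be automated with Gaussian elimination; a simplified Python sketch using exact rational arithmetic (the helper `rank` is ours, not the book's algorithm):

```python
# Compute rank by Gaussian elimination, using exact rational arithmetic
# so no pivot is lost to floating-point round-off.
from fractions import Fraction

def rank(rows):
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0  # index of the next pivot row
    for col in range(len(m[0])):
        # find a row at or below r with a nonzero entry in this column
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                factor = m[i][col] / m[r][col]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

assert rank([[1, 2, 3], [2, 5, 4], [1, 1, 5]]) == 2   # Example 4
assert rank([[1, 2, 3], [0, 1, 2], [2, 5, 8]]) == 2   # Example 2 earlier
```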

Example 5
Find a basis for the column space of the following matrix A.
A = [1 1 0; 2 3 -2; -1 -4 6]
Solution
The column space of A is the row space of At. Let us find a basis for the row space of
At = [1 2 -1; 1 3 -4; 0 -2 6]
[1 2 -1; 1 3 -4; 0 -2 6] ≈ [1 2 -1; 0 1 -3; 0 -2 6] ≈ [1 0 5; 0 1 -3; 0 0 0]
The vectors [1; 0; 5] and [0; 1; -3] form a basis for the column space of A.

Example 6
Find a basis for the subspace V of R4 spanned by the vectors (1, 2, 3, 4), (-1, -1, -4, -2), (3, 4, 11, 8) Solution

Let A = [1 2 3 4; -1 -1 -4 -2; 3 4 11 8].
[1 2 3 4; -1 -1 -4 -2; 3 4 11 8] ≈ [1 2 3 4; 0 1 -1 2; 0 -2 2 -4] ≈ [1 0 5 0; 0 1 -1 2; 0 0 0 0]
⇒ (1, 0, 5, 0) and (0, 1, -1, 2) form a basis for the subspace V.
⇒ dim(V) = 2.

Theorem 4.17
Consider a system AX = B of m equations in n variables.
(a) If the augmented matrix and the matrix of coefficients have the same rank r and r = n, the solution is unique.
(b) If the augmented matrix and the matrix of coefficients have the same rank r and r < n, there are many solutions.
(c) If the augmented matrix and the matrix of coefficients do not have the same rank, a solution does not exist.

Theorem 4.18
Let A be an n × n matrix. The following statements are equivalent.
(a) |A| ≠ 0 (A is nonsingular).
(b) A is invertible.
(c) A is row equivalent to In.
(d) The system of equations AX = B has a unique solution.
(e) rank(A) = n.
(f) The column vectors of A form a basis for Rn.

Homework
Exercise 4.5: 5, 7, 10, 12


4.6 Orthonormal Vectors and Projections
Definition
A set of vectors in a vector space V is said to be an orthogonal set if every pair of vectors in the set is orthogonal. The set is said to be an orthonormal set if it is orthogonal and each vector is a unit vector.

Example 1
Show that the set {(1, 0, 0), (0, 3/5, 4/5), (0, 4/5, -3/5)} is an orthonormal set.
Solution
(1) Orthogonal:
(1, 0, 0)·(0, 3/5, 4/5) = 0
(1, 0, 0)·(0, 4/5, -3/5) = 0
(0, 3/5, 4/5)·(0, 4/5, -3/5) = 12/25 - 12/25 = 0
(2) Unit vectors:
||(1, 0, 0)|| = √(1² + 0² + 0²) = 1
||(0, 3/5, 4/5)|| = √(0² + (3/5)² + (4/5)²) = 1
||(0, 4/5, -3/5)|| = √(0² + (4/5)² + (-3/5)²) = 1
Thus the set is an orthonormal set.
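The same checks can be run numerically; a minimal Python sketch (the helper `dot` is ours, not from the text):

```python
# Verify orthonormality: pairwise dot products are 0, each norm is 1.
from itertools import combinations
from math import isclose, sqrt

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

S = [(1, 0, 0), (0, 3/5, 4/5), (0, 4/5, -3/5)]

for u, v in combinations(S, 2):
    assert isclose(dot(u, v), 0.0, abs_tol=1e-12)   # orthogonal
for u in S:
    assert isclose(sqrt(dot(u, u)), 1.0)            # unit length
```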

Theorem 4.19
An orthogonal set of nonzero vectors in a vector space is linearly independent.

Proof Let {v1, …, vm} be an orthogonal set of nonzero vectors in a vector space V. Let us examine the identity
c1v1 + c2v2 + … + cmvm = 0
Let vi be the ith vector of the orthogonal set. Take the dot product of each side of the equation with vi and use the properties of the dot product. We get
(c1v1 + c2v2 + … + cmvm)·vi = 0·vi
c1v1·vi + c2v2·vi + … + cmvm·vi = 0
Since the vectors v1, …, vm are mutually orthogonal, vj·vi = 0 unless j = i. Thus
ci vi·vi = 0
Since vi is nonzero, vi·vi ≠ 0. Thus ci = 0. Letting i = 1, …, m, we get c1 = 0, …, cm = 0, proving that the vectors are linearly independent.

Definition
A basis that is an orthogonal set is said to be an orthogonal basis. A basis that is an orthonormal set is said to be an orthonormal basis.
Standard Bases
- R2: {(1, 0), (0, 1)}
- R3: {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
- Rn: {(1, …, 0), …, (0, …, 1)}
These standard bases are orthonormal bases.

Theorem 4.20
Let {u1, …, un} be an orthonormal basis for a vector space V. Let v be a vector in V. Then v can be written as a linear combination of these basis vectors as follows:
v = (v·u1)u1 + (v·u2)u2 + … + (v·un)un

Example 2
The following vectors u1, u2, and u3 form an orthonormal basis for R3. Express the vector v = (7, -5, 10) as a linear combination of these vectors.
u1 = (1, 0, 0),  u2 = (0, 3/5, 4/5),  u3 = (0, 4/5, -3/5)

Solution
v·u1 = (7, -5, 10)·(1, 0, 0) = 7
v·u2 = (7, -5, 10)·(0, 3/5, 4/5) = -3 + 8 = 5
v·u3 = (7, -5, 10)·(0, 4/5, -3/5) = -4 - 6 = -10

Thus
v = 7u1 + 5u2 - 10u3
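Theorem 4.20 reduces coordinate-finding to dot products, and Example 2 can be checked in a few lines. A minimal sketch, assuming NumPy:

```python
import numpy as np

u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 3/5, 4/5])
u3 = np.array([0.0, 4/5, -3/5])
v  = np.array([7.0, -5.0, 10.0])

# For an orthonormal basis, the coefficient of each u_i is simply v . u_i.
coeffs = [np.dot(v, u) for u in (u1, u2, u3)]
print(np.allclose(coeffs, [7, 5, -10]))  # True

# Rebuild v from the coefficients to confirm the expansion.
v_rebuilt = coeffs[0]*u1 + coeffs[1]*u2 + coeffs[2]*u3
print(np.allclose(v, v_rebuilt))  # True
```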

Orthogonal Matrices
An orthogonal matrix is an invertible matrix A with the property A-1 = At.

Theorem 4.21 (Orthogonal Matrix Theorem)
The following statements are equivalent.
(a) A is orthogonal.
(b) The column vectors of A form an orthonormal set.
(c) The row vectors of A form an orthonormal set.

Proof (a ⇔ b, c)
A is orthogonal ⇔ A-1 = At ⇔ AtA = I and AAt = I ⇔ the column vectors of A form an orthonormal set, and the row vectors of A form an orthonormal set.

Theorem 4.22
If A is an orthogonal matrix, then
(a) |A| = ±1.
(b) A-1 is an orthogonal matrix.

Proof
(a) AAt = I ⇒ |AAt| = |A||At| = |A||A| = |A|² = 1 ⇒ |A| = ±1.
(b) Since A-1 = At, we have (A-1)-1 = A = (At)t = (A-1)t. The inverse of A-1 equals its transpose, so A-1 is an orthogonal matrix.
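A rotation matrix is a standard concrete instance of Theorems 4.21 and 4.22. The sketch below (NumPy assumed) checks all three properties numerically:

```python
import numpy as np

theta = 0.3  # any angle works; every rotation matrix is orthogonal
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(np.linalg.inv(A), A.T))      # True: A^-1 = A^t
print(np.isclose(abs(np.linalg.det(A)), 1.0))  # True: |A| = +-1
print(np.allclose(A.T @ A, np.eye(2)))         # True: columns orthonormal
```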

Projection of One Vector onto Another Vector
Let v and u be vectors in Rn with angle α (0 ≤ α ≤ π) between them.

Figure 4.7

OA: the projection of v onto u

OA = OB cos α = ||v|| cos α = ||v|| · (v·u)/(||v|| ||u||) = (v·u)/||u||

As a vector,
OA = ((v·u)/||u||)(u/||u||) = ((v·u)/(u·u))u

Note: If α > π/2, then (v·u)/(u·u) < 0.

So we define proj_u v = ((v·u)/(u·u))u.

Definition
The projection of a vector v onto a nonzero vector u in Rn is denoted proj_u v and is defined by
proj_u v = ((v·u)/(u·u))u

Figure 4.8

Example 3
Determine the projection of the vector v = (6, 7) onto the vector u = (1, 4).

Solution
v·u = (6, 7)·(1, 4) = 6 + 28 = 34
u·u = (1, 4)·(1, 4) = 1 + 16 = 17

Thus
proj_u v = ((v·u)/(u·u))u = (34/17)(1, 4) = (2, 8)

The projection of v onto u is (2, 8).
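The projection formula is one line of code. A minimal sketch reproducing Example 3, assuming NumPy (`proj` is a helper name chosen here, not from the text):

```python
import numpy as np

def proj(v, u):
    """Projection of v onto a nonzero vector u: ((v.u)/(u.u)) u."""
    return (np.dot(v, u) / np.dot(u, u)) * u

v = np.array([6.0, 7.0])
u = np.array([1.0, 4.0])
print(proj(v, u))  # [2. 8.]
```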

Theorem 4.23
The Gram-Schmidt Orthogonalization Process
Let {v1, …, vn} be a basis for a vector space V. The set of vectors {u1, …, un} defined as follows is orthogonal. To obtain an orthonormal basis for V, normalize each of the vectors u1, …, un.
u1 = v1
u2 = v2 - proj_u1 v2
u3 = v3 - proj_u1 v3 - proj_u2 v3
⋮
un = vn - proj_u1 vn - … - proj_u(n-1) vn

Figure 4.9

Example 4
The set {(1, 2, 0, 3), (4, 0, 5, 8), (8, 1, 5, 6)} is linearly independent in R4. The vectors form a basis for a three-dimensional subspace V of R4. Construct an orthonormal basis for V.

Solution
Let v1 = (1, 2, 0, 3), v2 = (4, 0, 5, 8), v3 = (8, 1, 5, 6). Use the Gram-Schmidt process to construct an orthogonal set {u1, u2, u3} from these vectors.

u1 = v1 = (1, 2, 0, 3)

u2 = v2 - proj_u1 v2 = v2 - ((v2·u1)/(u1·u1))u1 = (4, 0, 5, 8) - 2(1, 2, 0, 3) = (2, -4, 5, 2)

u3 = v3 - proj_u1 v3 - proj_u2 v3 = v3 - ((v3·u1)/(u1·u1))u1 - ((v3·u2)/(u2·u2))u2 = (8, 1, 5, 6) - 2(1, 2, 0, 3) - (2, -4, 5, 2) = (4, 1, 0, -2)

The set {(1, 2, 0, 3), (2, -4, 5, 2), (4, 1, 0, -2)} is an orthogonal basis for V. Normalize these vectors to get an orthonormal basis:

||(1, 2, 0, 3)|| = √(1² + 2² + 0² + 3²) = √14
||(2, -4, 5, 2)|| = √(2² + (-4)² + 5² + 2²) = 7
||(4, 1, 0, -2)|| = √(4² + 1² + 0² + (-2)²) = √21

Orthonormal basis for V:
{ (1/√14, 2/√14, 0, 3/√14), (2/7, -4/7, 5/7, 2/7), (4/√21, 1/√21, 0, -2/√21) }
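The Gram-Schmidt steps translate directly into a short loop. A sketch that reproduces Example 4, assuming NumPy (`gram_schmidt` is a name chosen here):

```python
import numpy as np

def gram_schmidt(vectors):
    """Return an orthonormal basis for the span of the given independent vectors."""
    basis = []
    for v in vectors:
        u = np.asarray(v, dtype=float)
        for b in basis:
            u = u - np.dot(u, b) * b   # subtract projection onto each earlier direction
        basis.append(u / np.linalg.norm(u))
    return basis

vs = [(1, 2, 0, 3), (4, 0, 5, 8), (8, 1, 5, 6)]
b1, b2, b3 = gram_schmidt(vs)
# b2 should be (2, -4, 5, 2) normalized, i.e. divided by its length 7:
print(np.allclose(b2 * 7, [2, -4, 5, 2]))  # True
```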

Projection of a Vector onto a Subspace
Definition
Let W be a subspace of Rn, and let {u1, …, um} be an orthonormal basis for W. If v is a vector in Rn, the projection of v onto W is denoted proj_W v and is defined by
proj_W v = (v·u1)u1 + (v·u2)u2 + … + (v·um)um

Figure 4.11

Theorem 4.24
Let W be a subspace of Rn. Every vector v in Rn can be written uniquely in the form
v = w + w⊥
where w is in W and w⊥ is orthogonal to W. The vectors w and w⊥ are
w = proj_W v  and  w⊥ = v - proj_W v

Figure 4.12

Example 5
Consider the vector v = (3, 2, 6) in R3. Let W be the subspace of R3 consisting of all vectors of the form (a, b, b). Decompose v into the sum of a vector that lies in W and a vector orthogonal to W.

Solution
We need an orthonormal basis for W. An arbitrary vector of W can be written
(a, b, b) = a(1, 0, 0) + b(0, 1, 1)
The set {(1, 0, 0), (0, 1, 1)} spans W and is linearly independent, so it forms a basis for W. The two vectors are orthogonal. Normalize each vector to get an orthonormal basis {u1, u2} for W, where
u1 = (1, 0, 0),  u2 = (0, 1/√2, 1/√2)

We get
w = proj_W v = (v·u1)u1 + (v·u2)u2
= ((3, 2, 6)·(1, 0, 0))(1, 0, 0) + ((3, 2, 6)·(0, 1/√2, 1/√2))(0, 1/√2, 1/√2)
= (3, 0, 0) + (0, 4, 4) = (3, 4, 4)
and
w⊥ = v - proj_W v = (3, 2, 6) - (3, 4, 4) = (0, -2, 2)

Thus the desired decomposition of v is
(3, 2, 6) = (3, 4, 4) + (0, -2, 2)
In this decomposition the vector (3, 4, 4) lies in W and the vector (0, -2, 2) is orthogonal to W.
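The decomposition in Example 5 can be verified numerically. A sketch, assuming NumPy:

```python
import numpy as np

v = np.array([3.0, 2.0, 6.0])
# Orthonormal basis for W = {(a, b, b)} from the example.
u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)

w = np.dot(v, u1) * u1 + np.dot(v, u2) * u2   # proj_W v
w_perp = v - w

print(np.allclose(w, [3, 4, 4]))        # True
print(np.allclose(w_perp, [0, -2, 2]))  # True
print(np.allclose(v, w + w_perp))       # True
```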

Distance of a Point from a Subspace
The distance of a point from a subspace is the distance of the point from its projection in the subspace:
d(x, W) = ||x - proj_W x||

Figure 4.13

Example 6
Find the distance of the point x = (4, 1, -7) of R3 from the subspace W consisting of all vectors of the form (a, b, b).

Solution
The previous example tells us that the set {u1, u2}, where
u1 = (1, 0, 0),  u2 = (0, 1/√2, 1/√2)
is an orthonormal basis for W. We compute proj_W x:
proj_W x = (x·u1)u1 + (x·u2)u2
= ((4, 1, -7)·(1, 0, 0))(1, 0, 0) + ((4, 1, -7)·(0, 1/√2, 1/√2))(0, 1/√2, 1/√2)
= (4, 0, 0) + (0, -3, -3) = (4, -3, -3)
Thus the distance from x to W is
||x - proj_W x|| = ||(4, 1, -7) - (4, -3, -3)|| = ||(0, 4, -4)|| = √32
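The distance computation of Example 6 in code, assuming NumPy:

```python
import numpy as np

x = np.array([4.0, 1.0, -7.0])
# Orthonormal basis for W = {(a, b, b)}, as in Example 5.
u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)

proj_W_x = np.dot(x, u1) * u1 + np.dot(x, u2) * u2
dist = np.linalg.norm(x - proj_W_x)
print(np.isclose(dist, np.sqrt(32)))  # True
```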