## KoreanFoodie's Study

## Solutions to Linear Algebra, Stephen H. Friedberg, Fourth Edition (Chapter 2: Linear Transformations and Matrices)



hashnut · 2019. 6. 15. 13:09


### Sec. 2.1 Exercises: Linear Transformations, Null Spaces, and Ranges

1. Label the following statements as true or false. In each part, V and W are finite-dimensional vector spaces (over F), and T is a function from V to W.
(a) If T is linear, then T preserves sums and scalar products.
(b) If T(x + y) = T(x) + T(y), then T is linear.
(c) T is one-to-one if and only if the only vector x such that T(x) = 0 is x = 0.
(d) If T is linear, then T(0V) = 0W.
(e) If T is linear, then nullity(T) + rank(T) = dim(W).
(f) If T is linear, then T carries linearly independent subsets of V onto linearly independent subsets of W.
(g) If T, U : V → W are both linear and agree on a basis for V, then T = U.
(h) Given x1, x2 ∈ V and y1, y2 ∈ W, there exists a linear transformation T : V → W such that T(x1) = y1 and T(x2) = y2.
For Exercises 2 through 6, prove that T is a linear transformation, and find
bases for both N(T) and R(T). Then compute the nullity and rank of T, and
verify the dimension theorem. Finally, use the appropriate theorems in this
section to determine whether T is one-to-one or onto.
2. T : R3 → R2 defined by T(a1 , a2 , a3 ) = (a1 − a2 , 2a3 ).
3. T : R2 → R3 defined by T(a1 , a2 ) = (a1 + a2 , 0, 2a1 − a2 ).
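A quick numerical sanity check for Exercise 2 (my own addition, not from the book): writing T in its standard matrix form lets numpy confirm the dimension theorem, rank(T) + nullity(T) = dim(R3).

```python
import numpy as np

# Standard matrix of T(a1, a2, a3) = (a1 - a2, 2a3) from Exercise 2.
A = np.array([[1, -1, 0],
              [0,  0, 2]])

rank = np.linalg.matrix_rank(A)   # dim R(T)
nullity = A.shape[1] - rank       # dim N(T)
print(rank, nullity)              # rank 2, nullity 1
assert rank + nullity == 3        # dimension theorem: nullity(T) + rank(T) = dim(V)
```

Since rank(T) = 2 = dim(R2), T is onto; since nullity(T) = 1 ≠ 0, T is not one-to-one.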
4. T : M2×3(F) → M2×2(F) defined by
T([a11 a12 a13; a21 a22 a23]) = [2a11 − a12, a13 + 2a12; 0, 0]
(matrix rows separated by semicolons).
5. T : P2(R) → P3(R) defined by T(f(x)) = xf(x) + f′(x).
6. T : Mn×n(F) → F defined by T(A) = tr(A). Recall (Example 4, Section 1.3) that
tr(A) = A11 + A22 + · · · + Ann.
7. Prove properties 1, 2, 3, and 4 on page 65.
8. Prove that the transformations in Examples 2 and 3 are linear.
9. In this exercise, T : R2 → R2 is a function. For each of the following parts, state why T is not linear.
(a) T(a1, a2) = (1, a2)
(b) T(a1, a2) = (a1, a1²)
(c) T(a1, a2) = (sin a1, 0)
(d) T(a1, a2) = (|a1|, a2)
(e) T(a1, a2) = (a1 + 1, a2)
10. Suppose that T : R2 → R2 is linear, T(1, 0) = (1, 4), and T(1, 1) = (2, 5). What is T(2, 3)? Is T one-to-one?
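Exercise 10 can be checked with a short computation (my own sketch, not from the book): linearity gives T(0, 1) = T(1, 1) − T(1, 0) = (1, 1), which determines the standard matrix of T.

```python
import numpy as np

# Columns are T(1,0) = (1,4) and T(0,1) = T(1,1) - T(1,0) = (1,1).
A = np.array([[1, 1],
              [4, 1]])

print(A @ np.array([2, 3]))            # T(2,3) = (5, 11)
# T is one-to-one iff its matrix has full column rank (nullity 0):
print(np.linalg.matrix_rank(A) == 2)   # True
```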
11. Prove that there exists a linear transformation T : R2 → R3 such that T(1, 1) = (1, 0, 2) and T(2, 3) = (1, −1, 4). What is T(8, 11)?
12. Is there a linear transformation T : R3 → R2 such that T(1, 0, 3) = (1, 1) and T(−2, 0, −6) = (2, 1)?
13. Let V and W be vector spaces, let T : V → W be linear, and let {w1, w2, . . . , wk} be a linearly independent subset of R(T). Prove that if S = {v1, v2, . . . , vk} is chosen so that T(vi) = wi for i = 1, 2, . . . , k, then S is linearly independent.
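For Exercise 11, T(8, 11) follows by expressing (8, 11) in the basis {(1, 1), (2, 3)}; here is a short numpy verification (my addition, not from the text):

```python
import numpy as np

# Solve (8,11) = a(1,1) + b(2,3); the columns of the system are (1,1) and (2,3).
a, b = np.linalg.solve(np.array([[1, 2],
                                 [1, 3]]), np.array([8, 11]))
print(a, b)  # a = 2, b = 3

# By linearity, T(8,11) = a*T(1,1) + b*T(2,3):
t = a * np.array([1, 0, 2]) + b * np.array([1, -1, 4])
print(t)     # T(8,11) = (5, -3, 16)
```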
14. Let V and W be vector spaces and T : V → W be linear.
(a) Prove that T is one-to-one if and only if T carries linearly inde-
pendent subsets of V onto linearly independent subsets of W.
(b) Suppose that T is one-to-one and that S is a subset of V. Prove
that S is linearly independent if and only if T(S) is linearly inde-
pendent.
(c) Suppose β = {v1 , v2 , . . . , vn } is a basis for V and T is one-to-one
and onto. Prove that T(β) = {T(v1 ), T(v2 ), . . . , T(vn )} is a basis
for W.
15. Recall the definition of P(R) on page 10. Define T : P(R) → P(R) by
T(f(x)) = ∫₀ˣ f(t) dt.
Prove that T is linear and one-to-one, but not onto.
16. Let T : P(R) → P(R) be defined by T(f(x)) = f′(x). Recall that T is linear. Prove that T is onto, but not one-to-one.
17. Let V and W be finite-dimensional vector spaces and T : V → W be
linear.
(a) Prove that if dim(V) < dim(W), then T cannot be onto.
(b) Prove that if dim(V) > dim(W), then T cannot be one-to-one.
18. Give an example of a linear transformation T : R2 → R2 such that
N(T) = R(T).
19. Give an example of distinct linear transformations T and U such that N(T) = N(U) and R(T) = R(U).
20. Let V and W be vector spaces with subspaces V1 and W1, respectively. If T : V → W is linear, prove that T(V1) is a subspace of W and that {x ∈ V : T(x) ∈ W1} is a subspace of V.
21. Let V be the vector space of sequences described in Example 5 of Section 1.2. Define the functions T, U : V → V by
T(a1, a2, . . .) = (a2, a3, . . .) and U(a1, a2, . . .) = (0, a1, a2, . . .).
T and U are called the left shift and right shift operators on V,
respectively.
(a) Prove that T and U are linear.
(b) Prove that T is onto, but not one-to-one.
(c) Prove that U is one-to-one, but not onto.
22. Let T : R3 → R be linear. Show that there exist scalars a, b, and c such that T(x, y, z) = ax + by + cz for all (x, y, z) ∈ R3. Can you generalize this result for T : Fn → F? State and prove an analogous result for T : Fn → Fm.
23. Let T : R3 → R be linear. Describe geometrically the possibilities for the null space of T. Hint: Use Exercise 22.
The following definition is used in Exercises 24–27 and in Exercise 30.
Definition. Let V be a vector space and W1 and W2 be subspaces of
V such that V = W1 ⊕ W2 . (Recall the definition of direct sum given in the
exercises of Section 1.3.) A function T : V → V is called the projection on
W1 along W2 if, for x = x1 + x2 with x1 ∈ W1 and x2 ∈ W2 , we have
T(x) = x1 .
24. Let T : R2 → R2. Include figures for each of the following parts.
(a) Find a formula for T(a, b), where T represents the projection on the y-axis along the x-axis.
(b) Find a formula for T(a, b), where T represents the projection on the y-axis along the line L = {(s, s) : s ∈ R}.
25. Let T : R3 → R3.
(a) If T(a, b, c) = (a, b, 0), show that T is the projection on the xy-plane along the z-axis.
(b) Find a formula for T(a, b, c), where T represents the projection on the z-axis along the xy-plane.
(c) If T(a, b, c) = (a − c, b, 0), show that T is the projection on the xy-plane along the line L = {(a, 0, a) : a ∈ R}.
26. Using the notation in the definition above, assume that T : V → V is the projection on W1 along W2.
(a) Prove that T is linear and W1 = {x ∈ V : T(x) = x}.
(b) Prove that W1 = R(T) and W2 = N(T).
(c) Describe T if W1 = V.
(d) Describe T if W1 is the zero subspace.
27. Suppose that W is a subspace of a finite-dimensional vector space V.
(a) Prove that there exists a subspace W′ and a function T : V → V such that T is a projection on W along W′.
(b) Give an example of a subspace W of a vector space V such that there are two projections on W along two (distinct) subspaces.
The following definitions are used in Exercises 28–32.
Definitions. Let V be a vector space, and let T : V → V be linear. A
subspace W of V is said to be T-invariant if T(x) ∈ W for every x ∈ W, that
is, T(W) ⊆ W. If W is T-invariant, we define the restriction of T on W to
be the function TW : W → W defined by TW (x) = T(x) for all x ∈ W.
Exercises 28–32 assume that W is a subspace of a vector space V and that
T : V → V is linear. Warning: Do not assume that W is T-invariant or that
T is a projection unless explicitly stated.
28. Prove that the subspaces {0 }, V, R(T), and N(T) are all T-invariant.
29. If W is T-invariant, prove that TW is linear.
30. Suppose that T is the projection on W along some subspace W′. Prove that W is T-invariant and that TW = IW.
31. Suppose that V = R(T) ⊕ W and W is T-invariant. (Recall the definition of direct sum given in the exercises of Section 1.3.)
(a) Prove that W ⊆ N(T).
(b) Show that if V is finite-dimensional, then W = N(T).
(c) Show by example that the conclusion of (b) is not necessarily true if V is not finite-dimensional.
32. Suppose that W is T-invariant. Prove that N(TW) = N(T) ∩ W and R(TW) = T(W).
33. Prove Theorem 2.2 for the case that β is infinite, that is, R(T) =
span({T(v) : v ∈ β}).
34. Prove the following generalization of Theorem 2.6: Let V and W be
vector spaces over a common field, and let β be a basis for V. Then for
any function f : β → W there exists exactly one linear transformation
T : V → W such that T(x) = f (x) for all x ∈ β.
Exercises 35 and 36 assume the definition of direct sum given in the exercises
of Section 1.3.
35. Let V be a finite-dimensional vector space and T : V → V be linear.
(a) Suppose that V = R(T) + N(T). Prove that V = R(T) ⊕ N(T).
(b) Suppose that R(T) ∩ N(T) = {0}. Prove that V = R(T) ⊕ N(T).
Be careful to say in each part where finite-dimensionality is used.
36. Let V and T be as defined in Exercise 21.
(a) Prove that V = R(T) + N(T), but V is not a direct sum of these two spaces. Thus the result of Exercise 35(a) above cannot be proved without assuming that V is finite-dimensional.
(b) Find a linear operator T1 on V such that R(T1) ∩ N(T1) = {0} but V is not a direct sum of R(T1) and N(T1). Conclude that V being finite-dimensional is also essential in Exercise 35(b).
37. A function T : V → W between vector spaces V and W is called additive if T(x + y) = T(x) + T(y) for all x, y ∈ V. Prove that if V and W are vector spaces over the field of rational numbers, then any additive function from V into W is a linear transformation.
38. Let T : C → C be the function defined by T(z) = z̄, the complex conjugate of z. Prove that T is additive (as defined in Exercise 37) but not linear.
39. Prove that there is an additive function T : R → R (as defined in Exercise 37) that is not linear. Hint: Let V be the set of real numbers regarded as a vector space over the field of rational numbers. By the corollary to Theorem 1.13 (p. 60), V has a basis β. Let x and y be two distinct vectors in β, and define f : β → V by f(x) = y, f(y) = x, and f(z) = z otherwise. By Exercise 34, there exists a linear transformation T : V → V such that T(u) = f(u) for all u ∈ β. Then T is additive, but for c = y/x, T(cx) ≠ cT(x).
The following exercise requires familiarity with the definition of quotient space
given in Exercise 31 of Section 1.3.
40. Let V be a vector space and W be a subspace of V. Define the mapping
η : V → V/W by η(v) = v + W for v ∈ V.
(a) Prove that η is a linear transformation from V onto V/W and that
N(η) = W.
(b) Suppose that V is finite-dimensional. Use (a) and the dimen-
sion theorem to derive a formula relating dim(V), dim(W), and
dim(V/W).
(c) Read the proof of the dimension theorem. Compare the method of
solving (b) with the method of deriving the same result as outlined
in Exercise 35 of Section 1.6.
### Sec. 2.2 Exercises: The Matrix Representation of a Linear Transformation

1. Label the following statements as true or false. Assume that V and W are finite-dimensional vector spaces with ordered bases β and γ, respectively, and T, U : V → W are linear transformations.
(a) For any scalar a, aT + U is a linear transformation from V to W.
(b) [T]γβ = [U]γβ implies that T = U.
(c) If m = dim(V) and n = dim(W), then [T]γβ is an m × n matrix.
(d) [T + U]γβ = [T]γβ + [U]γβ.
(e) L(V, W) is a vector space.
(f) L(V, W) = L(W, V).
2. Let β and γ be the standard ordered bases for Rn and Rm, respectively. For each linear transformation T : Rn → Rm, compute [T]γβ.
(a) T : R2 → R3 defined by T(a1, a2) = (2a1 − a2, 3a1 + 4a2, a1).
(b) T : R3 → R2 defined by T(a1, a2, a3) = (2a1 + 3a2 − a3, a1 + a3).
(c) T : R3 → R defined by T(a1, a2, a3) = 2a1 + a2 − 3a3.
(d) T : R3 → R3 defined by T(a1, a2, a3) = (2a2 + a3, −a1 + 4a2 + 5a3, a1 + a3).
(e) T : Rn → Rn defined by T(a1, a2, . . . , an) = (a1, a1, . . . , a1).
(f) T : Rn → Rn defined by T(a1, a2, . . . , an) = (an, an−1, . . . , a1).
(g) T : Rn → R defined by T(a1, a2, . . . , an) = a1 + an.
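For matrices such as [T]γβ in Exercise 2, the jth column is T applied to the jth standard basis vector; a small numpy sketch for part (a) (my addition, not from the text):

```python
import numpy as np

# Exercise 2(a): T(a1, a2) = (2a1 - a2, 3a1 + 4a2, a1).
def T(a):
    a1, a2 = a
    return np.array([2*a1 - a2, 3*a1 + 4*a2, a1])

# Column j of [T] is the image of the jth standard basis vector.
M = np.column_stack([T(e) for e in np.eye(2, dtype=int)])
print(M)
# [[ 2 -1]
#  [ 3  4]
#  [ 1  0]]
```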
3. Let T : R2 → R3 be defined by T(a1, a2) = (a1 − a2, a1, 2a1 + a2). Let β
be the standard ordered basis for R2 and γ = {(1, 1, 0), (0, 1, 1), (2, 2, 3)}.
Compute [T]γβ . If α = {(1, 2), (2, 3)}, compute [T]γα .
4. Define T : M2×2(R) → P2(R) by
T([a b; c d]) = (a + b) + (2d)x + bx2.
Let β = {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]} and γ = {1, x, x2}. Compute [T]γβ.
5. Let
α = {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]}, β = {1, x, x2}, and γ = {1}.
(a) Define T : M2×2(F) → M2×2(F) by T(A) = At. Compute [T]α.
(b) Define T : P2(R) → M2×2(R) by T(f(x)) = [f′(0) 2f(1); 0 f″(3)], where ′ denotes differentiation. Compute [T]αβ.
(c) Define T : M2×2(F) → F by T(A) = tr(A). Compute [T]γα.
(d) Define T : P2(R) → R by T(f(x)) = f(2). Compute [T]γβ.
(e) If A = [1 −2; 0 4], compute [A]α.
(f) If f(x) = 3 − 6x + x2, compute [f(x)]β.
(g) For a ∈ F, compute [a]γ.
6. Complete the proof of part (b) of Theorem 2.7.
7. Prove part (b) of Theorem 2.8.
8. † Let V be an n-dimensional vector space with an ordered basis β. Define
T : V → Fn by T(x) = [x]β . Prove that T is linear.
9. Let V be the vector space of complex numbers over the field R. Define T : V → V by T(z) = z̄, where z̄ is the complex conjugate of z. Prove that T is linear, and compute [T]β, where β = {1, i}. (Recall by Exercise 38 of Section 2.1 that T is not linear if V is regarded as a vector space over the field C.)
10. Let V be a vector space with the ordered basis β = {v1, v2, . . . , vn}.
Define v0 = 0 . By Theorem 2.6 (p. 72), there exists a linear trans-
formation T : V → V such that T(vj ) = vj + vj−1 for j = 1, 2, . . . , n.
Compute [T]β .
11. Let V be an n-dimensional vector space, and let T : V → V be a linear transformation. Suppose that W is a T-invariant subspace of V (see the exercises of Section 2.1) having dimension k. Show that there is a basis β for V such that [T]β has the form
[A B; O C],
where A is a k × k matrix and O is the (n − k) × k zero matrix.
12. Let V be a finite-dimensional vector space and T be the projection on W along W′, where W and W′ are subspaces of V. (See the definition in the exercises of Section 2.1 on page 76.) Find an ordered basis β for V such that [T]β is a diagonal matrix.
13. Let V and W be vector spaces, and let T and U be nonzero linear transformations from V into W. If R(T) ∩ R(U) = {0}, prove that {T, U} is a linearly independent subset of L(V, W).
14. Let V = P(R), and for j ≥ 1 define Tj(f(x)) = f(j)(x), where f(j)(x) is the jth derivative of f(x). Prove that the set {T1, T2, . . . , Tn} is a linearly independent subset of L(V) for any positive integer n.
15. Let V and W be vector spaces, and let S be a subset of V. Define S⁰ = {T ∈ L(V, W) : T(x) = 0 for all x ∈ S}. Prove the following statements.
(a) S⁰ is a subspace of L(V, W).
(b) If S1 and S2 are subsets of V and S1 ⊆ S2, then S2⁰ ⊆ S1⁰.
(c) If V1 and V2 are subspaces of V, then (V1 + V2)⁰ = V1⁰ ∩ V2⁰.
16. Let V and W be vector spaces such that dim(V) = dim(W), and let
T : V → W be linear. Show that there exist ordered bases β and γ for
V and W, respectively, such that [T]γβ is a diagonal matrix.
### Sec. 2.3 Exercises: Composition of Linear Transformations and Matrix Multiplication

1. Label the following statements as true or false. In each part, V, W, and Z denote vector spaces with ordered (finite) bases α, β, and γ, respectively; T : V → W and U : W → Z denote linear transformations; and A and B denote matrices.
(a) [UT]γα = [T]βα[U]γβ.
(b) [T(v)]β = [T]βα[v]α for all v ∈ V.
(c) [U(w)]β = [U]βα[w]β for all w ∈ W.
(d) [IV]α = I.
(e) [T2]βα = ([T]βα)2.
(f) A2 = I implies that A = I or A = −I.
(g) T = LA for some matrix A.
(h) A2 = O implies that A = O, where O denotes the zero matrix.
(i) LA+B = LA + LB.
(j) If A is square and Aij = δij for all i and j, then A = I.
2. (a) Let
A = [1 3; 2 −1], B = [1 0 −3; 4 1 2], C = [1 1 4; −1 −2 0], and D = [2; −2; 3].
Compute A(2B + 3C), (AB)D, and A(BD).
(b) Let
A = [2 5; −3 1; 4 2], B = [3 −2 0; 1 −1 4; 5 5 3], and C = [4 0 3].
Compute At, AtB, BCt, CB, and CA.
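The products in part (a) can be checked mechanically (a numpy sketch, my addition, not from the text); note that (AB)D = A(BD) illustrates associativity of matrix multiplication:

```python
import numpy as np

A = np.array([[1, 3], [2, -1]])
B = np.array([[1, 0, -3], [4, 1, 2]])
C = np.array([[1, 1, 4], [-1, -2, 0]])
D = np.array([[2], [-2], [3]])

print(A @ (2*B + 3*C))      # A(2B + 3C)
print((A @ B) @ D)          # (AB)D
print(A @ (B @ D))          # A(BD), equal to the previous product
assert np.array_equal((A @ B) @ D, A @ (B @ D))
```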
3. Let g(x) = 3 + x. Let T : P2(R) → P2(R) and U : P2(R) → R3 be the linear transformations respectively defined by
T(f(x)) = f′(x)g(x) + 2f(x) and U(a + bx + cx2) = (a + b, c, a − b).
Let β and γ be the standard ordered bases of P2(R) and R3, respectively.
(a) Compute [U]γβ, [T]β, and [UT]γβ directly. Then use Theorem 2.11 to verify your result.
(b) Let h(x) = 3 − 2x + x2. Compute [h(x)]β and [U(h(x))]γ. Then use [U]γβ from (a) and Theorem 2.14 to verify your result.
4. For each of the following parts, let T be the linear transformation defined in the corresponding part of Exercise 5 of Section 2.2. Use Theorem 2.14 to compute the following vectors:
(a) [T(A)]α, where A = [1 4; −1 6].
(b) [T(f(x))]α, where f(x) = 4 − 6x + 3x2.
(c) [T(A)]γ, where A = [1 3; 2 4].
(d) [T(f(x))]γ, where f(x) = 6 − x + 2x2.
5. Complete the proof of Theorem 2.12 and its corollary.
6. Prove (b) of Theorem 2.13.
7. Prove (c) and (f) of Theorem 2.15.
8. Prove Theorem 2.10. Now state and prove a more general result involving linear transformations with domains unequal to their codomains.
9. Find linear transformations U, T : F2 → F2 such that UT = T0 (the zero transformation) but TU ≠ T0. Use your answer to find matrices A and B such that AB = O but BA ≠ O.
10. Let A be an n × n matrix. Prove that A is a diagonal matrix if and only if Aij = δij Aij for all i and j.
11. Let V be a vector space, and let T : V → V be linear. Prove that T2 = T0 if and only if R(T) ⊆ N(T).
12. Let V, W, and Z be vector spaces, and let T : V → W and U : W → Z be linear.
(a) Prove that if UT is one-to-one, then T is one-to-one. Must U also be one-to-one?
(b) Prove that if UT is onto, then U is onto. Must T also be onto?
(c) Prove that if U and T are one-to-one and onto, then UT is also.
13. Let A and B be n × n matrices. Recall that the trace of A is defined by
tr(A) = A11 + A22 + · · · + Ann.
Prove that tr(AB) = tr(BA) and tr(A) = tr(At).
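The two trace identities of Exercise 13 are easy to spot-check on random matrices (my own sketch, not from the book):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(4, 4))
B = rng.integers(-5, 6, size=(4, 4))

# tr(AB) = tr(BA), even though AB and BA generally differ:
assert np.trace(A @ B) == np.trace(B @ A)
# tr(A) = tr(A^t), since transposing fixes the diagonal:
assert np.trace(A) == np.trace(A.T)
print("both identities hold")
```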
14. Assume the notation in Theorem 2.13.
(a) Suppose that z is a (column) vector in Fp. Use Theorem 2.13(b) to prove that Bz is a linear combination of the columns of B. In particular, if z = (a1, a2, . . . , ap)t, then show that
Bz = a1v1 + a2v2 + · · · + apvp.
(b) Extend (a) to prove that column j of AB is a linear combination
of the columns of A with the coefficients in the linear combination
being the entries of column j of B.
(c) For any row vector w ∈ Fm , prove that wA is a linear combination
of the rows of A with the coefficients in the linear combination
being the coordinates of w. Hint: Use properties of the transpose
operation applied to (a).
(d) Prove the analogous result to (b) about rows: Row i of AB is a
linear combination of the rows of B with the coefficients in the
linear combination being the entries of row i of A.
†15. Let M and A be matrices for which the product matrix M A is defined.
If the jth column of A is a linear combination of a set of columns
of A, prove that the jth column of M A is a linear combination of the
corresponding columns of M A with the same corresponding coefficients.
16. Let V be a finite-dimensional vector space, and let T : V → V be linear.
(a) If rank(T) = rank(T2 ), prove that R(T) ∩ N(T) = {0 }. Deduce
that V = R(T) ⊕ N(T) (see the exercises of Section 1.3).
(b) Prove that V = R(Tk ) ⊕ N(Tk ) for some positive integer k.
17. Let V be a vector space. Determine all linear transformations T : V → V such that T = T2. Hint: Note that x = T(x) + (x − T(x)) for every x in V, and show that V = {y : T(y) = y} ⊕ N(T) (see the exercises of Section 1.3).
18. Using only the definition of matrix multiplication, prove that multiplication of matrices is associative.
19. For an incidence matrix A with related matrix B defined by Bij = 1 if i is related to j and j is related to i, and Bij = 0 otherwise, prove that i belongs to a clique if and only if (B3)ii > 0.
20. Use Exercise 19 to determine the cliques in the relations corresponding to the following incidence matrices.
(a)
0 1 0 1
1 0 0 0
0 1 0 1
1 0 1 0
(b)
0 0 1 1
1 0 0 1
1 0 0 1
1 0 1 0
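Exercises 19 and 20 can be automated: build B from the mutual relations and inspect the diagonal of B3. A numpy sketch (my addition, not from the text) applied to the two matrices of Exercise 20:

```python
import numpy as np

def clique_members(A):
    """1-based indices i with (B^3)_ii > 0, where B keeps mutual relations."""
    B = np.minimum(A, A.T)             # B_ij = 1 iff i ~ j and j ~ i
    B3 = np.linalg.matrix_power(B, 3)
    return [i + 1 for i in range(len(A)) if B3[i, i] > 0]

A_a = np.array([[0, 1, 0, 1],
                [1, 0, 0, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]])
A_b = np.array([[0, 0, 1, 1],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [1, 0, 1, 0]])

print(clique_members(A_a))   # no clique members in (a)
print(clique_members(A_b))   # vertices 1, 3, 4 belong to a clique in (b)
```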
21. Let A be an incidence matrix that is associated with a dominance relation. Prove that the matrix A + A2 has a row [column] in which each entry is positive except for the diagonal entry.
22. Prove that the matrix
A = [0 1 0; 0 0 1; 1 0 0]
corresponds to a dominance relation. Use Exercise 21 to determine which persons dominate [are dominated by] each of the others within two stages.
23. Let A be an n × n incidence matrix that corresponds to a dominance relation. Determine the number of nonzero entries of A.
### Sec. 2.4 Exercises: Invertibility and Isomorphisms

1. Label the following statements as true or false. In each part, V and W are vector spaces with ordered (finite) bases α and β, respectively, T : V → W is linear, and A and B are matrices.
(a) ([T]βα)−1 = [T−1]βα.
(b) T is invertible if and only if T is one-to-one and onto.
(c) T = LA, where A = [T]βα.
(d) M2×3(F) is isomorphic to F5.
(e) Pn(F) is isomorphic to Pm(F) if and only if n = m.
(f) AB = I implies that A and B are invertible.
(g) If A is invertible, then (A−1)−1 = A.
(h) A is invertible if and only if LA is invertible.
(i) A must be square in order to possess an inverse.
2. For each of the following linear transformations T, determine whether T is invertible and justify your answer.
(a) T : R2 → R3 defined by T(a1, a2) = (a1 − 2a2, a2, 3a1 + 4a2).
(b) T : R2 → R3 defined by T(a1, a2) = (3a1 − a2, a2, 4a1).
(c) T : R3 → R3 defined by T(a1, a2, a3) = (3a1 − 2a3, a2, 3a1 + 4a2).
(d) T : P3(R) → P2(R) defined by T(p(x)) = p′(x).
(e) T : M2×2(R) → P2(R) defined by T([a b; c d]) = a + 2bx + (c + d)x2.
(f) T : M2×2(R) → M2×2(R) defined by T([a b; c d]) = [a + b, a; c, c + d].
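Since each T above is determined by a matrix, invertibility reduces to a determinant (or rank) check; a numpy sketch for part (c) (my addition, not from the text):

```python
import numpy as np

# Exercise 2(c): T(a1, a2, a3) = (3a1 - 2a3, a2, 3a1 + 4a2), standard matrix:
M = np.array([[3, 0, -2],
              [0, 1,  0],
              [3, 4,  0]])

# T is invertible iff det(M) != 0 (equivalently, M has rank 3).
print(round(np.linalg.det(M)))          # 6, so T is invertible
assert np.linalg.matrix_rank(M) == 3
```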
3. Which of the following pairs of vector spaces are isomorphic? Justify your answers.
(a) F3 and P3(F).
(b) F4 and P3(F).
(c) M2×2(R) and P3(R).
(d) V = {A ∈ M2×2(R) : tr(A) = 0} and R4.
4. † Let A and B be n × n invertible matrices. Prove that AB is invertible
and (AB)−1 = B −1 A−1 .
5. † Let A be invertible. Prove that At is invertible and (At )−1 = (A−1 )t .
6. Prove that if A is invertible and AB = O, then B = O.
7. Let A be an n × n matrix.
(a) Suppose that A2 = O. Prove that A is not invertible.
(b) Suppose that AB = O for some nonzero n × n matrix B. Could A be invertible? Explain.
8. Prove Corollaries 1 and 2 of Theorem 2.18.
9. Let A and B be n × n matrices such that AB is invertible. Prove that A and B are invertible. Give an example to show that arbitrary matrices A and B need not be invertible if AB is invertible.
10. † Let A and B be n × n matrices such that AB = In .
(a) Use Exercise 9 to conclude that A and B are invertible.
(b) Prove A = B −1 (and hence B = A−1 ). (We are, in effect, saying
that for square matrices, a “one-sided” inverse is a “two-sided”
inverse.)
(c) State and prove analogous results for linear transformations de-
fined on finite-dimensional vector spaces.
11. Verify that the transformation in Example 5 is one-to-one.
12. Prove Theorem 2.21.
13. Let ∼ mean "is isomorphic to." Prove that ∼ is an equivalence relation on the class of vector spaces over F.
14. Let
V = {[a, a + b; 0, c] : a, b, c ∈ F}.
Construct an isomorphism from V to F3.
15. Let V and W be finite-dimensional vector spaces, and let T : V → W be a linear transformation. Suppose that β is a basis for V. Prove that T is an isomorphism if and only if T(β) is a basis for W.
16. Let B be an n × n invertible matrix. Define Φ : Mn×n(F) → Mn×n(F) by Φ(A) = B−1AB. Prove that Φ is an isomorphism.
17. † Let V and W be finite-dimensional vector spaces and T : V → W be an isomorphism. Let V0 be a subspace of V.
(a) Prove that T(V0) is a subspace of W.
(b) Prove that dim(V0) = dim(T(V0)).
18. Repeat Example 7 with the polynomial p(x) = 1 + x + 2x2 + x3 .
19. In Example 5 of Section 2.1, the mapping T : M2×2(R) → M2×2(R) defined by T(M) = Mt for each M ∈ M2×2(R) is a linear transformation. Let β = {E11, E12, E21, E22}, which is a basis for M2×2(R), as noted in Example 3 of Section 1.6.
(a) Compute [T]β.
(b) Verify that LAφβ(M) = φβT(M) for A = [T]β and M = [1 2; 3 4].
20. † Let T : V → W be a linear transformation from an n-dimensional vector
space V to an m-dimensional vector space W. Let β and γ be ordered
bases for V and W, respectively. Prove that rank(T) = rank(LA ) and
that nullity(T) = nullity(LA ), where A = [T]γβ . Hint: Apply Exercise 17
to Figure 2.2.
21. Let V and W be finite-dimensional vector spaces with ordered bases β = {v1, v2, . . . , vn} and γ = {w1, w2, . . . , wm}, respectively. By Theorem 2.6 (p. 72), there exist linear transformations Tij : V → W such that
Tij(vk) = wi if k = j, and Tij(vk) = 0 if k ≠ j.
First prove that {Tij : 1 ≤ i ≤ m, 1 ≤ j ≤ n} is a basis for L(V, W). Then let Mij be the m × n matrix with 1 in the ith row and jth column and 0 elsewhere, and prove that [Tij]γβ = Mij. Again by Theorem 2.6, there exists a linear transformation Φ : L(V, W) → Mm×n(F) such that Φ(Tij) = Mij. Prove that Φ is an isomorphism.
22. Let c0, c1, . . . , cn be distinct scalars from an infinite field F. Define T : Pn(F) → Fn+1 by T(f) = (f(c0), f(c1), . . . , f(cn)). Prove that T is an isomorphism. Hint: Use the Lagrange polynomials associated with c0, c1, . . . , cn.
23. Let V denote the vector space defined in Example 5 of Section 1.2, and let W = P(F). Define T : V → W by
T(σ) = σ(0) + σ(1)x + · · · + σ(n)xn,
where n is the largest integer such that σ(n) ≠ 0. Prove that T is an isomorphism.
The following exercise requires familiarity with the concept of quotient space
defined in Exercise 31 of Section 1.3 and with Exercise 40 of Section 2.1.
24. Let T : V → Z be a linear transformation of a vector space V onto a vector space Z. Define the mapping
T̄ : V/N(T) → Z by T̄(v + N(T)) = T(v)
for any coset v + N(T) in V/N(T).
(a) Prove that T̄ is well-defined; that is, prove that if v + N(T) = v′ + N(T), then T(v) = T(v′).
(b) Prove that T̄ is linear.
(c) Prove that T̄ is an isomorphism.
(d) Prove that the diagram shown in Figure 2.3 commutes; that is, prove that T = T̄η.
[Figure 2.3: the commutative triangle formed by T : V → Z, η : V → V/N(T), and T̄ : V/N(T) → Z.]
25. Let V be a nonzero vector space over a field F, and suppose that S is a basis for V. (By the corollary to Theorem 1.13 (p. 60) in Section 1.7, every vector space has a basis.) Let C(S, F) denote the vector space of all functions f ∈ F(S, F) such that f(s) = 0 for all but a finite number of vectors in S. (See Exercise 14 of Section 1.3.) Let Ψ : C(S, F) → V be the function defined by
Ψ(f) = ∑ f(s)s, where the sum is taken over those s ∈ S with f(s) ≠ 0.
Prove that Ψ is an isomorphism. Thus every nonzero vector space can be viewed as a space of functions.
### Sec. 2.5 Exercises: The Change of Coordinate Matrix

1. Label the following statements as true or false.
(a) Suppose that β = {x1, x2, . . . , xn} and β′ = {x′1, x′2, . . . , x′n} are ordered bases for a vector space and Q is the change of coordinate matrix that changes β′-coordinates into β-coordinates. Then the jth column of Q is [x′j]β.
(b) Every change of coordinate matrix is invertible.
(c) Let T be a linear operator on a finite-dimensional vector space V, let β and β′ be ordered bases for V, and let Q be the change of coordinate matrix that changes β′-coordinates into β-coordinates. Then [T]β = Q[T]β′Q−1.
(d) The matrices A, B ∈ Mn×n(F) are called similar if B = QtAQ for some Q ∈ Mn×n(F).
(e) Let T be a linear operator on a finite-dimensional vector space V. Then for any ordered bases β and γ for V, [T]β is similar to [T]γ.
2. For each of the following pairs of ordered bases β and β′ for R2, find the change of coordinate matrix that changes β′-coordinates into β-coordinates.
(a) β = {e1, e2} and β′ = {(a1, a2), (b1, b2)}
(b) β = {(−1, 3), (2, −1)} and β′ = {(0, 10), (5, 0)}
(c) β = {(2, 5), (−1, −3)} and β′ = {e1, e2}
(d) β = {(−4, 3), (2, −1)} and β′ = {(2, 1), (−4, 1)}
3. For each of the following pairs of ordered bases β and β′ for P2(R), find the change of coordinate matrix that changes β′-coordinates into β-coordinates.
(a) β = {x2, x, 1} and β′ = {a2x2 + a1x + a0, b2x2 + b1x + b0, c2x2 + c1x + c0}
(b) β = {1, x, x2} and β′ = {a2x2 + a1x + a0, b2x2 + b1x + b0, c2x2 + c1x + c0}
(c) β = {2x2 − x, 3x2 + 1, x2} and β′ = {1, x, x2}
(d) β = {x2 − x + 1, x + 1, x2 + 1} and β′ = {x2 + x + 4, 4x2 − 3x + 2, 2x2 + 3}
(e) β = {x2 − x, x2 + 1, x − 1} and β′ = {5x2 − 2x − 3, −2x2 + 5x + 5, 2x2 − x − 3}
(f) β = {2x2 − x + 1, x2 + 3x − 2, −x2 + 2x + 1} and β′ = {9x − 9, x2 + 21x − 2, 3x2 + 5x + 2}
4. Let T be the linear operator on R2 defined by
T(a, b) = (2a + b, a − 3b),
let β be the standard ordered basis for R2, and let β′ = {(1, 1), (1, 2)}. Use Theorem 2.23 and the fact that
[1 1; 1 2]−1 = [2 −1; −1 1]
to find [T]β′.
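Theorem 2.23's computation for Exercise 4 is a one-liner in numpy (my own check, not from the book):

```python
import numpy as np

T_std = np.array([[2, 1], [1, -3]])   # [T] in the standard basis
Q = np.array([[1, 1], [1, 2]])        # columns: the vectors of beta'
Q_inv = np.array([[2, -1], [-1, 1]])  # the inverse quoted in the exercise

T_new = Q_inv @ T_std @ Q             # [T]_{beta'} = Q^{-1} [T]_beta Q
print(T_new)
# [[ 8 13]
#  [-5 -9]]
```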
5. Let T be the linear operator on P1(R) defined by T(p(x)) = p′(x), the derivative of p(x). Let β = {1, x} and β′ = {1 + x, 1 − x}. Use Theorem 2.23 and the fact that
[1 1; 1 −1]−1 = [1/2 1/2; 1/2 −1/2]
to find [T]β′.
6. For each matrix A and ordered basis β, find [LA]β. Also, find an invertible matrix Q such that [LA]β = Q−1AQ.
(a) A = [1 3; 1 1] and β = {(1, 1), (1, 2)}
(b) A = [1 2; 2 1] and β = {(1, 1), (1, −1)}
(c) A = [1 1 −1; 2 0 1; 1 1 0] and β = {(1, 1, 1), (1, 0, 1), (1, 1, 2)}
(d) A = [13 1 4; 1 13 4; 4 4 10] and β = {(1, 1, −2), (1, −1, 0), (1, 1, 1)}
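For Exercise 6 the matrix Q is always the one whose columns are the vectors of β; a numpy check of part (a) (my addition, not from the text):

```python
import numpy as np

A = np.array([[1, 3], [1, 1]])
Q = np.array([[1, 1], [1, 2]])        # columns: (1,1) and (1,2) from beta

LA_beta = np.linalg.inv(Q) @ A @ Q    # [L_A]_beta = Q^{-1} A Q
print(np.rint(LA_beta).astype(int))
# [[ 6 11]
#  [-2 -4]]
```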
7. In R2, let L be the line y = mx, where m ≠ 0. Find an expression for T(x, y), where
(a) T is the reflection of R2 about L.
(b) T is the projection on L along the line perpendicular to L. (See the definition of projection in the exercises of Section 2.1.)
8. Prove the following generalization of Theorem 2.23. Let T : V → W be a linear transformation from a finite-dimensional vector space V to a finite-dimensional vector space W. Let β and β′ be ordered bases for V, and let γ and γ′ be ordered bases for W. Then [T]γ′β′ = P−1[T]γβ Q, where Q is the matrix that changes β′-coordinates into β-coordinates and P is the matrix that changes γ′-coordinates into γ-coordinates.
9. Prove that "is similar to" is an equivalence relation on Mn×n(F).
10. Prove that if A and B are similar n × n matrices, then tr(A) = tr(B). Hint: Use Exercise 13 of Section 2.3.
11. Let V be a finite-dimensional vector space with ordered bases α, β,
and γ.
(a) Prove that if Q and R are the change of coordinate matrices that
change α-coordinates into β-coordinates and β-coordinates into
γ-coordinates, respectively, then RQ is the change of coordinate
matrix that changes α-coordinates into γ-coordinates.
(b) Prove that if Q changes α-coordinates into β-coordinates, then
Q−1 changes β-coordinates into α-coordinates.
12. Prove the corollary to Theorem 2.23.
13. † Let V be a finite-dimensional vector space over a field F, and let β = {x1, x2, . . . , xn} be an ordered basis for V. Let Q be an n × n invertible matrix with entries from F. Define
x′j = Q1j x1 + Q2j x2 + · · · + Qnj xn for 1 ≤ j ≤ n,
and set β′ = {x′1, x′2, . . . , x′n}. Prove that β′ is a basis for V and hence that Q is the change of coordinate matrix changing β′-coordinates into β-coordinates.
14. Prove the converse of Exercise 8: If A and B are each m × n matrices with entries from a field F, and if there exist invertible m × m and n × n matrices P and Q, respectively, such that B = P−1AQ, then there exist an n-dimensional vector space V and an m-dimensional vector space W (both over F), ordered bases β and β′ for V and γ and γ′ for W, and a linear transformation T : V → W such that
A = [T]γβ and B = [T]γ′β′.
Hints: Let V = Fn, W = Fm, T = LA, and β and γ be the standard ordered bases for Fn and Fm, respectively. Now apply the results of Exercise 13 to obtain ordered bases β′ and γ′ from β and γ via Q and P, respectively.
1. Label the following statements as true or false. Assume that all vector
spaces are finite-dimensional.
(a) Every linear transformation is a linear functional.
(b) A linear functional defined on a field may be represented as a 1 × 1 matrix.
(c) Every vector space is isomorphic to its dual space.
(d) Every vector space is the dual of some other vector space.
(e) If T is an isomorphism from V onto V∗ and β is a finite ordered basis for V, then T(β) = β∗.
(f) If T is a linear transformation from V to W, then the domain of (Tt)t is V∗∗.
(g) If V is isomorphic to W, then V∗ is isomorphic to W∗.
(h) The derivative of a function may be considered as a linear func-
tional on the vector space of differentiable functions.
2. For the following functions f on a vector space V, determine which are
linear functionals.
(a) V = P(R); f(p(x)) = 2p'(0) + p''(1), where ' denotes differentiation
(b) V = R^2; f(x, y) = (2x, 4y)
(c) V = M2×2(F); f(A) = tr(A)
(d) V = R^3; f(x, y, z) = x^2 + y^2 + z^2
(e) V = P(R); f(p(x)) = ∫_0^1 p(t) dt
(f) V = M2×2(F); f(A) = A11
3. For each of the following vector spaces V and bases β, find explicit
formulas for vectors of the dual basis β∗ for V∗, as in Example 4.
(a) V = R^3; β = {(1, 0, 1), (1, 2, 1), (0, 0, 1)}
(b) V = P2(R); β = {1, x, x^2}
4. Let V = R^3, and define f1, f2, f3 ∈ V∗ as follows:
f1(x, y, z) = x − 2y, f2(x, y, z) = x + y + z, f3(x, y, z) = y − 3z.
Prove that {f1, f2, f3} is a basis for V∗, and then find a basis for V for
which it is the dual basis.
5. Let V = P1(R), and, for p(x) ∈ V, define f1, f2 ∈ V∗ by
f1(p(x)) = ∫_0^1 p(t) dt and f2(p(x)) = ∫_0^2 p(t) dt.
Prove that {f1, f2} is a basis for V∗, and find a basis for V for which it
is the dual basis.
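For Exercise 4 the computation comes down to inverting a 3 × 3 matrix: if the rows of A are the coefficient vectors of f1, f2, f3, then the basis of R^3 dual to {f1, f2, f3} consists of the columns of A^{-1}. A quick numpy check:

```python
import numpy as np

# Rows of A hold the coefficients of f1, f2, f3 from Exercise 4:
# f1(x,y,z) = x - 2y, f2(x,y,z) = x + y + z, f3(x,y,z) = y - 3z.
A = np.array([[1.0, -2.0,  0.0],
              [1.0,  1.0,  1.0],
              [0.0,  1.0, -3.0]])
assert abs(np.linalg.det(A)) > 1e-9        # the three functionals are independent

# A basis {v1, v2, v3} of R^3 with f_i(v_j) = delta_ij satisfies A @ V = I,
# where the v_j are the columns of V; hence V = A^{-1}.
V = np.linalg.inv(A)
assert np.allclose(A @ V, np.eye(3))
```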
6. Define f ∈ (R^2)∗ by f(x, y) = 2x + y and T : R^2 → R^2 by T(x, y) =
(3x + 2y, x).
(a) Compute Tt(f).
(b) Compute [Tt]β∗, where β is the standard ordered basis for R^2 and
β∗ = {f1, f2} is the dual basis, by finding scalars a, b, c, and d such
that Tt(f1) = af1 + cf2 and Tt(f2) = bf1 + df2.
(c) Compute [T]β and ([T]β)t, and compare your results with (b).
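Exercise 6 can be verified with matrices, using the fact from part (c) that [Tt]β∗ = ([T]β)t — in dual-basis coordinates, precomposing a functional with T multiplies its coordinate vector by [T]t:

```python
import numpy as np

M = np.array([[3.0, 2.0],
              [1.0, 0.0]])           # [T]_beta for T(x, y) = (3x + 2y, x)
f = np.array([2.0, 1.0])             # f(x, y) = 2x + y, coordinates in the dual basis

# T^t(f) = f o T; its dual-basis coordinates are [T]^t @ [f].
ft = M.T @ f
assert np.allclose(ft, [7.0, 4.0])   # so T^t(f)(x, y) = 7x + 4y

# Spot-check f(T(x, y)) == (T^t f)(x, y) at a sample point.
x = np.array([1.5, -2.0])
assert np.isclose(f @ (M @ x), ft @ x)
```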
7. Let V = P1(R) and W = R^2 with respective standard ordered bases β
and γ. Define T : V → W by
T(p(x)) = (p(0) − 2p(1), p(0) + p'(0)),
where p'(x) is the derivative of p(x).
Sec. 2.6
Dual Spaces
125
(a) For f ∈ W∗ defined by f(a, b) = a − 2b, compute Tt(f).
(b) Compute [Tt]β∗γ∗ without appealing to Theorem 2.25.
(c) Compute [T]γβ and its transpose, and compare your results with (b).
8. Show that every plane through the origin in R^3 may be identified with
the null space of a vector in (R^3)∗. State an analogous result for R^2.
9. Prove that a function T : F^n → F^m is linear if and only if there exist
f1, f2, . . . , fm ∈ (F^n)∗ such that T(x) = (f1(x), f2(x), . . . , fm(x)) for all
x ∈ F^n. Hint: If T is linear, define fi(x) = (giT)(x) for x ∈ F^n; that is,
fi = Tt(gi) for 1 ≤ i ≤ m, where {g1, g2, . . . , gm} is the dual basis of
the standard ordered basis for F^m.
10. Let V = Pn(F), and let c0, c1, . . . , cn be distinct scalars in F.
(a) For 0 ≤ i ≤ n, define fi ∈ V∗ by fi(p(x)) = p(ci). Prove that
{f0, f1, . . . , fn} is a basis for V∗. Hint: Apply any linear combi-
nation of this set that equals the zero transformation to p(x) =
(x − c1)(x − c2) · · · (x − cn), and deduce that the first coefficient is
zero.
(b) Use the corollary to Theorem 2.26 and (a) to show that there exist
unique polynomials p0(x), p1(x), . . . , pn(x) such that pi(cj) = δij
for 0 ≤ i ≤ n. These polynomials are the Lagrange polynomials
defined in Section 1.6.
(c) For any scalars a0, a1, . . . , an (not necessarily distinct), deduce that
there exists a unique polynomial q(x) of degree at most n such that
q(ci) = ai for 0 ≤ i ≤ n. In fact,
q(x) = Σ_{i=0}^{n} ai pi(x).
(d) Deduce the Lagrange interpolation formula:
p(x) = Σ_{i=0}^{n} p(ci) pi(x)
for any p(x) ∈ V.
(e) Prove that
∫_a^b p(t) dt = Σ_{i=0}^{n} p(ci) di,
where
di = ∫_a^b pi(t) dt.
Suppose now that
ci = a + i(b − a)/n for i = 0, 1, . . . , n.
For n = 1, the preceding result yields the trapezoidal rule for
evaluating the definite integral of a polynomial. For n = 2, this
result yields Simpson’s rule for evaluating the definite integral of
a polynomial.
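The weights di of part (e) can be computed directly; the short script below builds the Lagrange polynomials for equally spaced nodes and confirms that n = 1 and n = 2 reproduce the trapezoidal and Simpson weights on [0, 1]:

```python
import numpy as np

def lagrange_basis(c, i):
    """The i-th Lagrange polynomial for the nodes c, as an np.poly1d."""
    p = np.poly1d([1.0])
    for j, cj in enumerate(c):
        if j != i:
            p = p * np.poly1d([1.0, -cj]) * (1.0 / (c[i] - cj))
    return p

def weights(a, b, n):
    """The d_i = integral_a^b p_i(t) dt for equally spaced nodes c_i = a + i(b-a)/n."""
    c = [a + i * (b - a) / n for i in range(n + 1)]
    return [lagrange_basis(c, i).integ()(b) - lagrange_basis(c, i).integ()(a)
            for i in range(n + 1)]

# n = 1 recovers the trapezoidal rule and n = 2 Simpson's rule on [0, 1]:
assert np.allclose(weights(0.0, 1.0, 1), [0.5, 0.5])
assert np.allclose(weights(0.0, 1.0, 2), [1/6, 4/6, 1/6])
```

Since the pi sum to the constant polynomial 1, the weights always sum to b − a.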
11. Let V and W be finite-dimensional vector spaces over F, and let ψ1 and
ψ2 be the isomorphisms between V and V∗∗ and W and W∗∗ , respec-
tively, as defined in Theorem 2.26. Let T : V → W be linear, and define
Ttt = (Tt )t . Prove that the diagram depicted in Figure 2.6 commutes
(i.e., prove that ψ2 T = Ttt ψ1).
[Figure 2.6: the commutative square with T : V → W along the top, Ttt : V∗∗ → W∗∗ along the bottom, and vertical maps ψ1 : V → V∗∗ on the left and ψ2 : W → W∗∗ on the right.]
12. Let V be a finite-dimensional vector space with the ordered basis β.
Prove that ψ(β) = β∗∗, where ψ is defined in Theorem 2.26.
In Exercises 13 through 17, V denotes a finite-dimensional vector space over
F. For every subset S of V, define the annihilator S^0 of S as
S^0 = {f ∈ V∗ : f(x) = 0 for all x ∈ S}.
13. (a) Prove that S^0 is a subspace of V∗.
(b) If W is a subspace of V and x ∉ W, prove that there exists f ∈ W^0
such that f(x) ≠ 0.
(c) Prove that (S^0)^0 = span(ψ(S)), where ψ is defined as in Theo-
rem 2.26.
(d) For subspaces W1 and W2, prove that W1 = W2 if and only if
W1^0 = W2^0.
(e) For subspaces W1 and W2, show that (W1 + W2)^0 = W1^0 ∩ W2^0.
14. Prove that if W is a subspace of V, then dim(W) + dim(W^0) = dim(V).
Hint: Extend an ordered basis {x1, x2, . . . , xk} of W to an ordered ba-
sis β = {x1, x2, . . . , xn} of V. Let β∗ = {f1, f2, . . . , fn}. Prove that
{fk+1, fk+2, . . . , fn} is a basis for W^0.
Sec. 2.7
Homogeneous Linear Differential Equations with Constant Coefficients 127
15. Suppose that W is a finite-dimensional vector space and that T : V → W
is linear. Prove that N(Tt ) = (R(T))0 .
16. Use Exercises 14 and 15 to deduce that rank(LAt) = rank(LA) for any
A ∈ Mm×n(F).
17. Let T be a linear operator on V, and let W be a subspace of V. Prove
that W is T-invariant (as defined in the exercises of Section 2.1) if and
only if W^0 is Tt-invariant.
18. Let V be a nonzero vector space over a field F, and let S be a basis
for V. (By the corollary to Theorem 1.13 (p. 60) in Section 1.7, every
vector space has a basis.) Let Φ : V∗ → L(S, F) be the mapping defined
by Φ(f) = f|S, the restriction of f to S. Prove that Φ is an isomorphism.
Hint: Apply Exercise 34 of Section 2.1.
19. Let V be a nonzero vector space, and let W be a proper subspace of V
(i.e., W ≠ V). Prove that there exists a nonzero linear functional f ∈ V∗
such that f(x) = 0 for all x ∈ W. Hint: For the infinite-dimensional
case, use Exercise 34 of Section 2.1 as well as results about extending
linearly independent sets to bases in Section 1.7.
20. Let V and W be nonzero vector spaces over the same field, and let
T : V → W be a linear transformation.
(a) Prove that T is onto if and only if Tt is one-to-one.
(b) Prove that Tt is onto if and only if T is one-to-one.
Hint: Parts of the proof require the result of Exercise 19 for the infinite-
dimensional case.
1. Label the following statements as true or false.
(a) The set of solutions to an nth-order homogeneous linear differential
equation with constant coefficients is an n-dimensional subspace of C∞.
(b) The solution space of a homogeneous linear differential equation
with constant coefficients is the null space of a differential operator.
(c) The auxiliary polynomial of a homogeneous linear differential
equation with constant coefficients is a solution to the differential
equation.
(d) Any solution to a homogeneous linear differential equation with
constant coefficients is of the form ae^{ct} or at^k e^{ct}, where a and c
are complex numbers and k is a positive integer.
(e) Any linear combination of solutions to a given homogeneous linear
differential equation with constant coefficients is also a solution to
the given equation.
(f) For any homogeneous linear differential equation with constant
coefficients having auxiliary polynomial p(t), if c1, c2, . . . , ck are
the distinct zeros of p(t), then {e^{c1 t}, e^{c2 t}, . . . , e^{ck t}} is a basis for
the solution space of the given differential equation.
(g) Given any polynomial p(t) ∈ P(C), there exists a homogeneous lin-
ear differential equation with constant coefficients whose auxiliary
polynomial is p(t).
2. For each of the following parts, determine whether the statement is true
or false. Justify your claim with either a proof or a counterexample,
whichever is appropriate.
(a) Any finite-dimensional subspace of C∞ is the solution space of a
homogeneous linear differential equation with constant coefficients.
(b) There exists a homogeneous linear differential equation with con-
stant coefficients whose solution space has the basis {t, t2 }.
(c) For any homogeneous linear differential equation with constant
coefficients, if x is a solution to the equation, so is its derivative x'.
Given two polynomials p(t) and q(t) in P(C), if x ∈ N(p(D)) and y ∈
N(q(D)), then
(d) x + y ∈ N(p(D)q(D)).
(e) xy ∈ N(p(D)q(D)).
3. Find a basis for the solution space of each of the following differential
equations.
(a) y'' + 2y' + y = 0
(b) y' = y
(c) y^(4) − 2y^(2) + y = 0
(d) y'' + 2y' + y = 0
(e) y^(3) − y^(2) + 3y^(1) + 5y = 0
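For part (a) the answer can be checked symbolically. The sketch below verifies the expected basis {e^{-t}, te^{-t}} coming from the double root of the auxiliary polynomial:

```python
import sympy as sp

t = sp.symbols('t')

# Part (a): auxiliary polynomial r^2 + 2r + 1 = (r + 1)^2, a double root at
# r = -1, so the expected basis is {e^{-t}, t e^{-t}}.
basis = (sp.exp(-t), t * sp.exp(-t))
for y in basis:
    assert sp.simplify(sp.diff(y, t, 2) + 2 * sp.diff(y, t) + y) == 0

# The Wronskian is e^{-2t}, which never vanishes, so the solutions are
# linearly independent and span the 2-dimensional solution space.
W = sp.Matrix([[basis[0], basis[1]],
               [sp.diff(basis[0], t), sp.diff(basis[1], t)]]).det()
assert sp.simplify(W - sp.exp(-2 * t)) == 0
```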
4. Find a basis for each of the following subspaces of C∞.
(a) N(D^2 − D − I)
(b) N(D^3 − 3D^2 + 3D − I)
(c) N(D^3 + 6D^2 + 8D)
5. Show that C∞ is a subspace of F(R, C).
6. (a) Show that D : C∞ → C∞ is a linear operator.
(b) Show that any differential operator is a linear operator on C∞.
7. Prove that if {x, y} is a basis for a vector space over C, then so is
{(1/2)(x + y), (1/(2i))(x − y)}.
8. Consider a second-order homogeneous linear differential equation with
constant coefficients in which the auxiliary polynomial has distinct con-
jugate complex roots a + ib and a − ib, where a, b ∈ R. Show that
{eat cos bt, eat sin bt} is a basis for the solution space.
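Exercise 8's claim that both functions lie in the solution space can be checked symbolically (their linear independence still needs the argument via Exercise 7):

```python
import sympy as sp

t, a, b = sp.symbols('t a b', real=True)

# The auxiliary polynomial with roots a ± ib is (r - a)^2 + b^2,
# i.e. r^2 - 2ar + (a^2 + b^2).
for y in (sp.exp(a * t) * sp.cos(b * t), sp.exp(a * t) * sp.sin(b * t)):
    residual = sp.diff(y, t, 2) - 2 * a * sp.diff(y, t) + (a**2 + b**2) * y
    assert sp.simplify(residual) == 0
```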
9. Suppose that {U1, U2, . . . , Un} is a collection of pairwise commutative
linear operators on a vector space V (i.e., operators such that UiUj =
UjUi for all i, j). Prove that, for any i (1 ≤ i ≤ n),
N(Ui ) ⊆ N(U1 U2 · · · Un ).
10. Prove Theorem 2.33 and its corollary. Hint: Suppose that
b1e^{c1 t} + b2e^{c2 t} + · · · + bne^{cn t} = 0 (where the ci’s are distinct).
To show the bi’s are zero, apply mathematical induction on n as follows.
Verify the theorem for n = 1. Assuming that the theorem is true for
n − 1 functions, apply the operator D − cn I to both sides of the given
equation to establish the theorem for n distinct exponential functions.
11. Prove Theorem 2.34. Hint: First verify that the alleged basis lies in
the solution space. Then verify that this set is linearly independent by
mathematical induction on k as follows. The case k = 1 is the lemma
to Theorem 2.34. Assuming that the theorem holds for k − 1 distinct
ci ’s, apply the operator (D − ck I)nk to any linear combination of the
alleged basis that equals 0 .
12. Let V be the solution space of an nth-order homogeneous linear differential
equation with constant coefficients having auxiliary polynomial
p(t). Prove that if p(t) = g(t)h(t), where g(t) and h(t) are polynomials
of positive degree, then
N(h(D)) = R(g(DV )) = g(D)(V),
where DV : V → V is defined by DV(x) = x' for x ∈ V. Hint: First prove
g(D)(V) ⊆ N(h(D)). Then prove that the two spaces have the same
finite dimension.
13. A differential equation
y^(n) + an−1y^(n−1) + · · · + a1y^(1) + a0y = x
is called a nonhomogeneous linear differential equation with constant
coefficients if the ai ’s are constant and x is a function that is not iden-
tically zero.
(a) Prove that for any x ∈ C∞ there exists y ∈ C∞ such that y is
a solution to the differential equation. Hint: Use Lemma 1 to
Theorem 2.32 to show that for any polynomial p(t), the linear
operator p(D) : C∞ → C∞ is onto.
(b) Let V be the solution space for the homogeneous linear equation
y^(n) + an−1y^(n−1) + · · · + a1y^(1) + a0y = 0.
Prove that if z is any solution to the associated nonhomogeneous
linear differential equation, then the set of all solutions to the
nonhomogeneous linear differential equation is
{z + y : y ∈ V}.
14. Given any nth-order homogeneous linear differential equation with con-
stant coefficients, prove that, for any solution x and any t0 ∈ R, if
x(t0) = x'(t0) = · · · = x^(n−1)(t0) = 0, then x = 0 (the zero function).
Hint: Use mathematical induction on n as follows. First prove the con-
clusion for the case n = 1. Next suppose that it is true for equations of
order n − 1, and consider an nth-order differential equation with aux-
iliary polynomial p(t). Factor p(t) = q(t)(t − c), and let z = q(D)x.
Show that z(t0) = 0 and z' − cz = 0 to conclude that z = 0. Now apply
the induction hypothesis.
15. Let V be the solution space of an nth-order homogeneous linear dif-
ferential equation with constant coefficients. Fix t0 ∈ R, and define a
mapping Φ : V → C^n by
Φ(x) = (x(t0), x'(t0), . . . , x^(n−1)(t0))^t for each x in V.
(a) Prove that Φ is linear and its null space is the zero subspace of V.
Deduce that Φ is an isomorphism. Hint: Use Exercise 14.
(b) Prove the following: For any nth-order homogeneous linear dif-
ferential equation with constant coefficients, any t0 ∈ R, and any
complex numbers c0, c1, . . . , cn−1 (not necessarily distinct), there
exists exactly one solution, x, to the given differential equation
such that x(t0) = c0 and x^(k)(t0) = ck for k = 1, 2, . . . , n − 1.
16. Pendular Motion. It is well known that the motion of a pendulum is
approximated by the differential equation
θ'' + (g/l)θ = 0,
where θ(t) is the angle in radians that the pendulum makes with a
vertical line at time t (see Figure 2.8), interpreted so that θ is positive
if the pendulum is to the right and negative if the pendulum is to the
left of the vertical line as viewed by the reader. Here l is the length
of the pendulum and g is the magnitude of acceleration due to gravity.
The variable t and constants l and g must be in compatible units (e.g.,
t in seconds, l in meters, and g in meters per second per second).

[Figure 2.8: a pendulum of length l displaced from the vertical through the angle θ(t).]
(a) Express an arbitrary solution to this equation as a linear combi-
nation of two real-valued solutions.
(b) Find the unique solution to the equation that satisfies the condi-
tions
θ(0) = θ0 > 0 and θ'(0) = 0.
(The significance of these conditions is that at time t = 0 the
pendulum is released from a position displaced from the vertical
by θ0 .)
(c) Prove that it takes 2π√(l/g) units of time for the pendulum to make
one circuit back and forth. (This time is called the period of the
pendulum.)
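Parts (b) and (c) can be spot-checked with sympy; θ(t) = θ0 cos(√(g/l) t) is the expected answer to (b):

```python
import sympy as sp

t, g, l, theta0 = sp.symbols('t g l theta0', positive=True)

# Candidate solution for part (b): theta(t) = theta0 * cos(sqrt(g/l) * t).
theta = theta0 * sp.cos(sp.sqrt(g / l) * t)

# It satisfies theta'' + (g/l) theta = 0 with theta(0) = theta0, theta'(0) = 0.
assert sp.simplify(sp.diff(theta, t, 2) + (g / l) * theta) == 0
assert theta.subs(t, 0) == theta0
assert sp.diff(theta, t).subs(t, 0) == 0

# Part (c), numerically: theta repeats after the period T = 2*pi*sqrt(l/g).
vals = {g: 9.8, l: 1.0, theta0: 0.1}
f = sp.lambdify(t, theta.subs(vals))
T = float((2 * sp.pi * sp.sqrt(l / g)).subs(vals))
assert abs(f(0.3 + T) - f(0.3)) < 1e-9
```

The numeric values (g = 9.8, l = 1, θ0 = 0.1) are illustrative choices, not part of the exercise.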
17. Periodic Motion of a Spring without Damping. Find the general solu-
tion to (3), which describes the periodic motion of a spring, ignoring
frictional forces.
18. Periodic Motion of a Spring with Damping. The ideal periodic motion
described by solutions to (3) is due to the ignoring of frictional forces.
In reality, however, there is a frictional force acting on the motion that
is proportional to the speed of motion, but that acts in the opposite
direction. The modification of (3) to account for the frictional force,
called the damping force, is given by
my'' + ry' + ky = 0,
where r > 0 is the proportionality constant.
(a) Find the general solution to this equation.
(b) Find the unique solution in (a) that satisfies the initial conditions
y(0) = 0 and y'(0) = v0, the initial velocity.
(c) For y(t) as in (b), show that the amplitude of the oscillation de-
creases to zero; that is, prove that lim_{t→∞} y(t) = 0.
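A concrete instance of Exercise 18 can be checked symbolically. The constants below (m = 1, r = 2, k = 5, v0 = 3) are illustrative choices, not from the text:

```python
import sympy as sp

t = sp.symbols('t', positive=True)

# Underdamped case: the roots of r^2 + 2r + 5 are -1 ± 2i, so the solution
# with y(0) = 0 and y'(0) = 3 should be y = (3/2) e^{-t} sin(2t).
y = sp.Rational(3, 2) * sp.exp(-t) * sp.sin(2 * t)
assert sp.simplify(y.diff(t, 2) + 2 * y.diff(t) + 5 * y) == 0
assert y.subs(t, 0) == 0
assert y.diff(t).subs(t, 0) == 3

# Part (c): the damped amplitude decays, so y(t) -> 0 as t -> infinity.
assert sp.limit(y, t, sp.oo) == 0
```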
19. In our study of differential equations, we have regarded solutions as
complex-valued functions even though functions that are useful in de-
scribing physical motion are real-valued. Justify this approach.
20. The following parts, which do not involve linear algebra, are included
for the sake of completeness.
(a) Prove Theorem 2.27. Hint: Use mathematical induction on the
number of derivatives possessed by a solution.
(b) For any c, d ∈ C, prove that
e^{c+d} = e^c e^d and e^{-c} = 1/e^c.
(c) Prove Theorem 2.28.
(d) Prove Theorem 2.29.
(e) Prove the product rule for differentiating complex-valued func-
tions of a real variable: For any differentiable functions x and
y in F(R, C), the product xy is differentiable and
(xy)' = x'y + xy'.
Hint: Apply the rules of differentiation to the real and imaginary
parts of xy.
(f) Prove that if x ∈ F(R, C) and x' = 0, then x is a constant func-
tion.
