
# Reference page

Definition.  A vector space is a set V of elements, called vectors, that can be added and scaled so that the following axioms hold:

A1 (commutativity): A + B = B + A for all A, B in V.
A2 (associativity): (A + B) + C = A + (B + C) for all A, B, C in V.
A3 (zero vector): there is a unique vector 0 such that A + 0 = A for each A in V.
A4 (inverse vector): for each vector A, there is a unique vector -A such that A + (-A) = 0.
A5 (distributivity): r(A + B) = rA + rB for each number r and all A, B in V.
A6 (distributivity): (r + s)A = rA + sA for all numbers r, s and each A in V.
A7 (compatibility): (rs)A = r(sA) for all numbers r, s and each A in V.
A8 (identity): 1A = A for each A in V.
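
The axioms can be spot-checked numerically; the sketch below is our own illustration (not from the notes), taking V = R^2 with vectors as tuples of exact fractions. The helper names add and scale are ours.

```python
# Spot-check of axioms A1, A2, A5-A8 for V = R^2 on sample vectors.
from fractions import Fraction as F

def add(A, B):
    return tuple(a + b for a, b in zip(A, B))

def scale(r, A):
    return tuple(r * a for a in A)

A, B, C = (F(1), F(2)), (F(3), F(-1)), (F(0), F(5))
r, s = F(2), F(-3)

assert add(A, B) == add(B, A)                                # A1
assert add(add(A, B), C) == add(A, add(B, C))                # A2
assert scale(r, add(A, B)) == add(scale(r, A), scale(r, B))  # A5
assert scale(r + s, A) == add(scale(r, A), scale(s, A))      # A6
assert scale(r * s, A) == scale(r, scale(s, A))              # A7
assert scale(F(1), A) == A                                   # A8
```

Passing on sample vectors does not prove the axioms, of course; it only illustrates them.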

Definition.
A subset W of a vector space V is said to be a subspace of V if W is a vector space under the operations defined on V.

True or not?

1. The empty set is a subspace of every vector space.
2.  In any vector space, rA=sA implies that r=s.

### Systems of linear equations. Row reduction.

Elementary moves in the row reduction method:

1. interchange any two equations;
2. multiply any equation by a nonzero scalar;
3. add to one equation a multiple of another one.

Reduced system:

1. the first nonzero coefficient in each equation, called a pivot, is 1;
2. each pivot is the only nonzero coefficient in its column;
3. pivots form an echelon, i.e., if aij and amn are pivots and i>m, then j>n.
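
The three elementary moves can be sketched as a small program; this is our own illustration (the helper name rref is ours), using exact Fraction arithmetic on row lists of the augmented matrix:

```python
# Row reduction to the reduced form described above, using the three
# elementary moves: swap (1), scale by a nonzero scalar (2), and add a
# multiple of another row (3).
from fractions import Fraction as F

def rref(M):
    M = [[F(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    pivot_row = 0
    for col in range(cols):
        # move 1: swap up a row with a nonzero entry in this column
        pr = next((r for r in range(pivot_row, rows) if M[r][col] != 0), None)
        if pr is None:
            continue
        M[pivot_row], M[pr] = M[pr], M[pivot_row]
        # move 2: scale so the pivot becomes 1
        p = M[pivot_row][col]
        M[pivot_row] = [x / p for x in M[pivot_row]]
        # move 3: make the pivot the only nonzero entry in its column
        for r in range(rows):
            if r != pivot_row and M[r][col] != 0:
                m = M[r][col]
                M[r] = [x - m * y for x, y in zip(M[r], M[pivot_row])]
        pivot_row += 1
    return M

# augmented matrix of: x + 2y = 5, 3x + 4y = 11
print(rref([[1, 2, 5], [3, 4, 11]]))  # identity part plus solution x=1, y=2
```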

### Linear combination.

Definition.
A linear combination of vectors u1, u2, ..., un in a vector space V is any vector v in V of the form
a1u1 + a2u2 + ... + anun
for some numbers a1, ..., an.
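
In R^n a linear combination can be computed componentwise; the sketch below is our own illustration (the helper name lincomb is ours):

```python
# v = a1*u1 + ... + an*un for vectors given as tuples in R^n.
def lincomb(coeffs, vectors):
    n = len(vectors[0])
    return tuple(sum(a * v[i] for a, v in zip(coeffs, vectors))
                 for i in range(n))

# 2*(1, 0, 3) + (-1)*(4, 1, 1)
print(lincomb([2, -1], [(1, 0, 3), (4, 1, 1)]))  # (-2, -1, 5)
```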

True or not?

1. 0 is a linear combination of any nonempty set of vectors.
2. In solving systems of linear equations (by the row reduction method) we can scale any equation.
3. Every system of linear equations has a solution.
4. Let S be a subset of a vector space V. Suppose vectors in S can be added and scaled in S. Then S is a subspace of V.

5. The empty subset of a vector space is linearly independent.
6.  If a subset S of a vector space contains 0, then S is linearly dependent.
7.  Let V be a vector space. Let T be a subset of V and S a subset of T. If S is linearly dependent, then T is also linearly dependent.
8.  Let u1, u2, ..., un be vectors in a vector space V. Suppose
a1u1 +  a2u2 + ...+ anun = 0
for some numbers a1, ..., an, not all zero. Then V is linearly dependent.

### Bases

Definition.
Let V be a vector space. Vectors e1, e2, ..., en form a basis for V if
(i) e1, e2, ..., en span V, and
(ii) e1, e2, ..., en are linearly independent.

Theorem.  Let V be a vector space. Vectors e1, e2, ..., en form a basis for V if and only if
(i) e1, e2, ..., en span V, i.e., each vector v in V can be expressed as a linear combination of e1, e2, ..., en, and
(ii) for each v such an expression is unique.
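
For a concrete instance of the unique expression in (ii), here is a sketch (our own example; the basis and helper name are ours) that finds the coordinates of a vector of R^2 in the basis e1 = (1, 1), e2 = (1, -1) by solving the 2x2 system with Cramer's rule:

```python
# Solve a*e1 + b*e2 = v for the unique coordinates (a, b).
from fractions import Fraction as F

def coords_2d(e1, e2, v):
    det = e1[0] * e2[1] - e2[0] * e1[1]   # nonzero since e1, e2 form a basis
    a = F(v[0] * e2[1] - e2[0] * v[1], det)
    b = F(e1[0] * v[1] - v[0] * e1[1], det)
    return a, b

a, b = coords_2d((1, 1), (1, -1), (3, 1))
print(a, b)  # 2 1, i.e. (3, 1) = 2*(1, 1) + 1*(1, -1)
```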


Lecture 10

Definition.  A function T: V -> W is linear if it preserves sums and scalar products:
(a) T(x+y) = T(x) + T(y)  for x,y in V,
(b) T(cx) = cT(x)  for x in V and c in R.

Kernel   N(T) consists of vectors v in V with T(v)=0.
Image    R(T) consists of vectors w in W with T(v)=w for some v in V.
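
A quick hypothetical example of both sets (our choice of T, not from the notes): take T(x, y) = (x + y, 2x + 2y) on R^2.

```python
# T(x, y) = (x + y, 2x + 2y): a linear map R^2 -> R^2.
def T(v):
    x, y = v
    return (x + y, 2 * x + 2 * y)

print(T((1, -1)))  # (0, 0): the vector (1, -1) lies in the kernel N(T)
print(T((1, 0)))   # (1, 2): every value of T is a multiple of (1, 2)
```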

True or not?   (here T: V->W is a linear function)

1. N(T) is a subspace of V.
2. R(T) is a subspace of V.
3. If N(T) contains only 0, then T: V-> W is one-to-one.


Lecture 11

True or not?   (here T: V->W is a linear function)

1. If V is of dimension n and W is of dimension m, then each mxn matrix determines a linear transformation T:V -> W.
2. Linear transformation can be added and scaled. The set of linear transformations from V to W forms a vector space.       (it is denoted L(V,W)).
3. The dimension of L(V,W) is mn.
4. Suppose that {v1, v2, ..., vn} is a basis of V, and {w1, w2, ..., wn} is a basis of W.  Then the i-th row of the matrix representation of T is the coordinate vector of T(vi).

Lecture 12

True or not?

1. If I:V->V is the identity transformation, then in any basis
{v1, v2, ..., vn} of V, the matrix [I] of I is nxn and is of the form

1 0 0  ...  0 0 0
0 1 0  ...  0 0 0
0 0 1  ...  0 0 0
.........  ...  ........
0 0 0  ...  1 0 0
0 0 0  ...  0 1 0
0 0 0  ...  0 0 1

2. A function U: W->V is said to be the inverse of a linear transformation T:V->W if UT=IV.
3. L(V,W) denotes the vector space of linear transformations V->W.
dim L(V, W)  = dim L(W, V)
but L(V, W) is not the same as L(W, V).
4. A linear transformation T is invertible if and only if it is onto and one-to-one.
5. If T is invertible, then T^-1 is also invertible. Its inverse is T, i.e., (T^-1)^-1 = T.

Lecture 13

True or not?

1. A linear transformation T is invertible if and only if its matrix representation [T] is invertible.
2. T is one-to-one if and only if N(T)={0}.
3. Let T: V->W be a linear transformation of vector spaces of dimension n.
Then N(T)={0} if and only if R(T)=W.
4. If AB=I, then A and B are invertible matrices.

5. Suppose that A and B are nxn matrices. Suppose that AB=I. Then
(a) LB is one-to-one,
(b) LB is onto. So B is invertible.
(c) LA is onto,
(d) LA is one-to-one. So A is invertible.

Lecture 14

True or not?

1. A function T is one-to-one if T(v)=T(w) implies v=w   (i.e., images of two vectors v and w are the same if and only if v=w).
2. A function T: V-> W is onto if the image of T is all W (i.e., each vector w in W is the image of some vector v in V).
3. Let  {v1, v2, v3} be a basis of a vector space V. It is the same as the basis {v2, v1, v3}.
4.  Let  {v1, v2, v3} be an ordered basis of a vector space V. It is the same as the ordered basis {v2, v1, v3}.

Lecture 15

True or not?

1. Let V and W be vector spaces of dimensions n and m with fixed ordered bases. Then each mxn matrix A corresponds to a linear transformation T: V->W. The transformation T is usually denoted by LA.
2. Performing a row operation on a matrix A is equivalent to taking the product of A and an elementary matrix E; in both cases the result is a new matrix B=EA.
3. Each elementary matrix is invertible.
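
Item 2 can be checked by hand on a small example; the matrices below and the helper name matmul are our own illustration:

```python
# The row operation "add 2*(row 0) to row 1" on A equals E @ A, where E
# is obtained by applying the same operation to the identity matrix.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 2, 3], [4, 5, 6]]
E = [[1, 0], [2, 1]]                                     # elementary matrix
B = [A[0][:], [b + 2 * a for a, b in zip(A[0], A[1])]]   # row op done directly
assert matmul(E, A) == B
print(B)  # [[1, 2, 3], [6, 9, 12]]
```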

Lecture 16

True or not?

1. For each matrix A, there are invertible matrices B and C such that BAC is a matrix  D with
Dii = 1 for i<rank(A) and Dij=0 otherwise.
2.  Each matrix A can be transformed to a matrix D as above by elementary row operations.

Lecture 17

True or not?

1. Let Ax=0 be a system of m linear equations in n unknowns. Then the set of solutions forms a subspace of Rn.
2. Let Ax=b be a system of m linear equations in n unknowns. Then the set of solutions forms a subspace of Rn.

Lecture 19

True or not?

1. Let S(n)  be  statements, one for each n>0.  Suppose that
(a)    the statement S(1) is true,
(b)    for each n>1, if S(n-1) is true, then S(n) is true.
Then each statement S(n) is true.

2. Let A be an nxn matrix. Suppose that all entries in the first row of A are zero. Then det A = 0.


Lecture 20

True or not?

1. Let A be an arbitrary nxn matrix. Then
(a) the image of LA: Rn -> Rn is spanned by the columns of A,
(b) rank(A) = #(linearly independent columns of A).

2. Let {v1, v2, ..., vn} be a linearly dependent set of vectors in a vector space V, i.e.,
a1v1 + a2v2 + ... + anvn = 0
for some coefficients ai (not all zero). Then for some k,
vk = b1v1 + ... + bk-1vk-1 + bk+1vk+1 + ... + bnvn.

3. Let A be an nxn matrix. If B is obtained from A by
(a) interchanging two rows, then det(B)=(-1) det(A)
(b) scaling a row of A by a multiple k, then det(B)=k det(A)
(c)  adding a multiple of a row to another row, then det(B)=det(A)

4. If A contains a row of zeros, then det(A)=0.
5. If rank(A)=n, then A is a product of elementary matrices.
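
Rules (a)-(c) of item 3 are easy to verify on 2x2 determinants; the helper name det2 and the sample matrix are ours:

```python
# det of a 2x2 matrix, then checks of the three row-operation rules.
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 2], [3, 4]]                                        # det(A) = -2
assert det2([A[1], A[0]]) == -det2(A)                       # (a) swap two rows
assert det2([[5 * x for x in A[0]], A[1]]) == 5 * det2(A)   # (b) scale a row
assert det2([A[0], [b + 7 * a for a, b in zip(A[0], A[1])]]) == det2(A)  # (c)
```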

Lecture 20.5

True or not?

1. Let A be an invertible matrix. Then
det(A) det(A^-1) = 1.
In particular, det(A) is not zero.
2. If rank(A)<n, then det(A)=0  (size of A is n).
3. A matrix A is invertible if and only if det(A) is not zero.

4. Let T: V -> W be a linear transformation of vector spaces of dimension n. Then there are ordered bases in V and W such that the matrix of T in those bases is In.
5. Let T: V -> V be a linear transformation of a vector space of dimension n. Then in some ordered basis, the matrix of T is

a1  0   ...  0
0   a2  ...  0
... ... ... ...
0   0   ...  an

Lecture 22

True or not

1. A linear operator T on a finite dimensional vector space V is diagonalizable if and only if there is a basis of V that consists of eigenvectors of T.
2. The multiplicity m of an eigenvalue λ of T is the largest positive integer for which (t - λ)^m is a factor of the characteristic polynomial det(T - tI) of T.
3. For the eigenspace E corresponding to an eigenvalue λ of multiplicity m, there is an inequality: dim(E) ≤ m.
4. The complex conjugate of a complex number a+bi is defined to be the complex number a-bi.
5. The absolute value of a+bi is defined to be |a+bi| = sqrt(a^2 + b^2).
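
Items 4 and 5 can be checked directly with Python's built-in complex type (our illustration):

```python
# Conjugate and absolute value of a complex number a + bi.
z = 3 + 4j
assert z.conjugate() == 3 - 4j   # a - bi
assert abs(z) == 5.0             # sqrt(3^2 + 4^2)
print(z.conjugate(), abs(z))
```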

Lecture 24

True or not

1. A basis β is orthonormal if any two distinct vectors in β are perpendicular and every vector in β is of length 1.  (careful!)
2. If {v1, v2, ..., vn} is an orthonormal basis for V and
y=a1v1 + a2v2...+ anvn,
then ai is the scalar <y, vi> for each i.
3. A linearly dependent set cannot be orthogonal.
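
A numerical sketch of item 2 (our own example): in the orthonormal basis v1 = (1, 1)/sqrt(2), v2 = (1, -1)/sqrt(2) of R^2, the coefficients of y are exactly the inner products <y, vi>:

```python
import math

s = 1 / math.sqrt(2)
v1, v2 = (s, s), (s, -s)          # an orthonormal basis of R^2
y = (3.0, 1.0)
dot = lambda u, w: u[0] * w[0] + u[1] * w[1]

a1, a2 = dot(y, v1), dot(y, v2)   # candidate coefficients <y, vi>
rebuilt = (a1 * v1[0] + a2 * v2[0], a1 * v1[1] + a2 * v2[1])
assert all(abs(r - c) < 1e-12 for r, c in zip(rebuilt, y))
```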

Lecture 26

True or not

1. An inner product on a vector space V is a map V × V -> F  that satisfies
(a)   <x+y, z> = <x, z> + <y, z>
(b)   <ax, z> = a<x, z>
(c)   <x, y> = <y, x>
(d)   <x, x> ≥ 0
2.  Let T be a linear transformation on an inner product space V. Then the adjoint T* of T is a transformation that satisfies
<T(x), y>=<x, T*(y)>     for all x,y in V.
3. Let A be an nxn matrix. Then A*= At if and only if all entries of A are real numbers.
4. (T+U)*=T*+ U*
5. Let A be an nxn matrix.  Then det(A)=0 if and only if rank(A)<n.
6. Every polynomial p(t) splits over complex numbers, i.e., can be written as
p(t)= (t - a1)(t - a2) ... (t - an)
for some complex numbers a1,..., an.
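
A hand check of the adjoint identity in item 2, for a real matrix where the adjoint is given by the transpose (the matrices and helper names are our own example):

```python
# <Ax, y> = <x, At y> for a real 2x2 matrix A and its transpose At.
def dot(u, w):
    return sum(a * b for a, b in zip(u, w))

def matvec(M, x):
    return [dot(row, x) for row in M]

A = [[1, 2], [3, 4]]
At = [[1, 3], [2, 4]]
x, y = [1, -1], [2, 5]
assert dot(matvec(A, x), y) == dot(x, matvec(At, y))  # both sides equal -7
```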

Lecture 27

True or not

Everywhere T is a linear transformation on an inner product space V of dimension n.

1. If
λ is an eigenvalue of T, then λ is an eigenvalue of T*.
2. If x is an eigenvector of T, then x is an eigenvector of T*.
3. If {v1, v2, ..., vn} is an orthonormal basis for V and
y=a1v1 + a2v2...+ anvn,
then ai is the scalar <y, vi> for each i.
4. If the scalars are complex numbers, then for each T there is an orthonormal basis for V such that [T] is upper triangular.
5. If T is diagonalizable, then there is an orthonormal basis for V that consists of eigenvectors of T.
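
One favorable, self-adjoint case related to item 5 (our own example): the symmetric matrix [[2, 1], [1, 2]] has orthogonal eigenvectors, which normalize to an orthonormal eigenbasis. Whether this holds for every diagonalizable T is exactly what the question asks; the check below only illustrates this one case.

```python
# Verify two eigenpairs of a symmetric 2x2 matrix and their orthogonality.
def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

A = [[2, 1], [1, 2]]
u, w = [1, 1], [1, -1]
assert matvec(A, u) == [3, 3]          # A u = 3 u, eigenvalue 3
assert matvec(A, w) == [1, -1]         # A w = 1 w, eigenvalue 1
assert u[0] * w[0] + u[1] * w[1] == 0  # the eigenvectors are orthogonal
```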