Math 110 - Fall 05 - Lectures notes # 6 - Sep 12 (Monday)

Homework due Thursday, Sep 15:
(1) Sec 1.5: 1 (justify) (postponed from last time)
             2bd,  8, 9, 
             12 (postponed from last time)
             13, 17 
(2) Recall that the set of symmetric nxn matrices forms a subspace
    W of M_{n x n}(F). Find a basis of W. What is the dimension of W?
(3) Sec 1.6: 1 (justify), 5 (justify), 11, 12, 13, 29, 31

Goal for the day: Understand bases and dimension:
                     Express a space V in the simplest possible way:
                       where every vector in V is a unique linear
                       combination of a set of linearly independent
                       vectors called a basis
                     Show that if V has a finite basis, then all
                       bases have the same number of vectors, and
                       this number is called the dimension of V

Def: If V = span(S), and S is linearly independent, we call
     S a basis of V

Ex: V = F^n, then S = {(1,0,...,0), (0,1,0,...,0), ... , (0,...,0,1)}
    is called the standard basis
ASK & WAIT: Why is this a basis?

Ex: M_{m x n}(F): S = {E^{11}, E^{12},..., E^{ij}, ... , E^{mn} }
    where E^{ij} is a matrix where entry ij is 1 and rest 0;
    S is also called standard basis, for same reason as last example

Ex: V = F^2, S = {(1,0), (1,1)} is a basis, but not standard
ASK & WAIT: why is this a basis?
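To make the uniqueness concrete, here is a minimal Python sketch (the helper name `coords` is ours, not from the notes): solving (x,y) = a*(1,0) + b*(1,1) forces b = y and a = x - y, so every vector of F^2 has exactly one representation in S.

```python
from fractions import Fraction

# Sketch (helper name `coords` is ours): coordinates of (x, y) in the
# basis {(1,0), (1,1)}.  Solving (x,y) = a*(1,0) + b*(1,1) = (a+b, b)
# forces b = y and a = x - y, so the representation is unique.

def coords(x, y):
    b = y
    a = x - b
    return a, b

x, y = Fraction(3), Fraction(5)
a, b = coords(x, y)
assert (a + b, b) == (x, y)   # a*(1,0) + b*(1,1) reproduces (x, y)
print(a, b)  # -> -2 5
```

Since the coordinates are forced, S spans F^2 and no nontrivial combination gives 0, which is exactly the basis property.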

Ex: P_n(F) = {polynomials of degree <= n over F}
    S = {1,x,x^2,...,x^n} is standard basis

Ex: P(F) = {all polynomials over F}
    S = {1,x,x^2,...} is a basis (not finite!)

Recall Thm 1 from last time: Let V be a vector space over F, S a subset
   Then any v in span(S) can be written as a unique linear combination
   of vectors in S if and only if S is linearly independent

Corollary: a subset S of V is a basis for V if and only if 
   each v in V can be written as a unique linear combination of
   vectors in S

Proof: If S is a basis for V, then by definition V = span(S) and
       S is linearly independent. By Thm 1, this implies that each
       v in V can be written as a unique linear combination of S.

       If each v in V is a unique linear combination of S, then
       V = span(S) and by Thm 1, S is linearly independent, so
       that S is a basis.

Now we move on to constructing bases, and showing that, if finite, they
all have to have the same number of vectors (the dimension)

Thm 2: If V = span(S) and S is finite, then S contains a finite
       basis S1 of V.

Proof: If S is already independent there is nothing to show, so assume S is dependent.
       The idea of the proof is simply to start picking vectors out of
       S to put in S1, continuing as long as S1 is independent.
       As soon as putting any other vector from S into S1 would
       make S1 dependent, we will show that S1 is a basis. 
       We can pick vectors out of S in any order we like, and this will
       produce a basis (not always the same one!)

       Formally, to do an induction, 
          pick any nonzero s in S, 
          set S1 = {s}; S1 is independent (why?)
          remove s from S: S2 = S - {s} (so we can't pick it again)

          repeat
             if there exists some t in S2 such that 
                 S1 U {t} is independent, then
                    add t to S1:     S1 = S1 U {t}
                    remove t from S2:  S2 =  S2 - {t}
          until we can't find any such t

        Claim 1: This algorithm for building S1 eventually stops
            Proof: S is finite and S2 loses one vector at every
                   step, so we eventually run out of t's to try.

        Claim 2: When we stop, S1 is independent
            Proof: by construction, S1 is independent at every step

        Claim 3: When we stop, V = span(S1)
            Proof: at every step of the algorithm, S = S1 U S2.
              When we stop, S1 U {t} is dependent for every t in S2,
              so S2 is contained in span(S1), and therefore
                 span(S1) = span(S1 U S2) = span(S) = V
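The extraction loop in this proof can be sketched in Python (a minimal sketch with our own helper names `rank` and `extract_basis`; independence of a list of vectors is tested by checking that its rank equals its size, using exact Fraction arithmetic):

```python
from fractions import Fraction

# Sketch of the Thm 2 algorithm (helper names are ours): walk through a
# finite spanning set S, keeping a vector t only if S1 U {t} is still
# linearly independent.

def rank(vectors):
    """Rank of a list of row vectors, by exact Gaussian elimination."""
    rows = [list(map(Fraction, v)) for v in vectors]
    r = 0
    for c in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def extract_basis(S):
    """Keep t whenever S1 U {t} stays independent (rank grows by 1)."""
    S1 = []
    for t in S:
        if rank(S1 + [t]) == len(S1) + 1:
            S1.append(t)
    return S1

S = [(1, 2, 3), (2, 4, 6), (0, 1, 1), (1, 3, 4)]
print(extract_basis(S))  # -> [(1, 2, 3), (0, 1, 1)]
```

Here (2,4,6) and (1,3,4) are discarded because they depend on earlier picks; visiting S in a different order can give a different basis, exactly as the proof remarks.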

The next Theorem will be the main tool for showing that all bases
have the same number of vectors (the dimension)

Thm 3 (Replacement Thm): Let V be a vector space over F, 
     V generated by G, G contains n vectors. Let L be any other
     linearly independent subset of V, and suppose it contains
     m vectors. Then m <= n, and there is a subset H of G containing
     n-m vectors such that the n vectors in L U H also span V.

We briefly defer the proof in order to present

Corollary 1: Let V be a vector space over F with a finite generating set.
     Then every basis of V has the same number of vectors. 

Proof of Corollary 1: If V has a finite generating set, then it has
     a finite basis, call it G, by Thm 2. Let n be the number of 
     vectors in G. Let L be any other basis of V. By Thm 3, every
     linearly independent subset of V has at most n vectors, so L is
     finite, with m <= n vectors. Reversing the roles of G and L,
     we get n <= m. So m=n.

Def: A vector space V is called finite dimensional if it has a finite
     basis. The number of vectors in the basis is called the
     dimension of V, written dim(V) .
     (By the corollary, this number does not depend
     on the choice of basis, so the definition makes sense).
     If V does not have a finite basis, it is called 
     infinite-dimensional

Ex: dim(F^n) = n, dim(F) = 1

ASK & WAIT: if V = C, F = R, what is dim(V)?

Ex: dim(M_{m x n}(F)) = mn

Ex: P(F) is infinite dimensional

Ex: we say dim({0}) = 0

Proof of Replacement Theorem:
    We use induction on m. When m=0, L is the empty set, so m=0 <= n,
    and we can simply choose H = G to get the spanning set L U H = G
    with n vectors.

    Now assume the Thm is true for m; we need to prove it for m+1.
    This means that we assume there is a linearly independent subset L
    of V, where L contains m+1 vectors, and have to prove 2 things:
      (1) that m+1 <= n
      (2) we can find a set H of n-(m+1) vectors in G such that span(L U H)=V
    Write L = {v_1,v_2,...,v_{m+1}}. Then L' = {v_1,...,v_m} has
    just m vectors, is linearly independent too (why?), so by the 
    induction hypothesis, we can apply the Thm to L', conclude that
    m <= n, and pick n-m vectors out of G to get H' = {u_1,...,u_{n-m}}
    where L' U H' span V. Thus v_{m+1} is in span(L' U H') = V, so we
    can write v_{m+1} as a linear combination 
      (*)   v_{m+1} = a_1*v_1 + ... + a_m*v_m  +  b_1*u_1 + ... + b_{n-m}*u_{n-m}
    Not all the b_i can be zero, because then we would have 
    v_{m+1} in span(v_1,..,v_m), contradicting the fact that
    L is independent (why?). In particular, this means 
    n-m>0, or m < n, or m+1 <= n, proving the first part of the induction.
    For the second part, finding n-(m+1) = n-m-1 vectors H so that
    L U H span V, we suppose, by renumbering the u's if necessary, 
    that b_1 is nonzero. Then we can solve (*) for u_1 to get
      (**) u_1 = (-a_1/b_1)*v_1 + ... + (-a_m/b_1)*v_m  + (1/b_1)*v_{m+1}
                    + (-b_2/b_1)*u_2 + ... + (-b_{n-m}/b_1)*u_{n-m}
    i.e. u_1 is in span({v_1,...,v_{m+1},u_2,...,u_{n-m}})
    Now let H = {u_2,...,u_{n-m}} contain n-m-1 vectors. We have just shown that
     span(L U H) = span(L U H U {u_1})       since u_1 is in span(L U H)
                 = span(L U H')              since H' = H U {u_1}
                 = span(L' U {v_{m+1}} U H') since L = L' U {v_{m+1}}
              contains span(L' U H')         since we removed v_{m+1}
                 = V                         by induction
    as desired.
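The replacement step can also be tried numerically (a sketch under our own helper names `rank` and `replacement`; we simply search all subsets H of G of the right size and test spanning by comparing ranks):

```python
from fractions import Fraction
from itertools import combinations

# Sketch of the Replacement Thm (helper names are ours): given a
# generating set G of n vectors and an independent set L of m vectors,
# find H, a subset of G with n-m vectors, so that L U H still spans.

def rank(vectors):
    """Rank of a list of row vectors, by exact Gaussian elimination."""
    rows = [list(map(Fraction, v)) for v in vectors]
    r = 0
    for c in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def replacement(G, L):
    """Return H, a subset of G with len(G)-len(L) vectors, with L U H spanning."""
    n, m = len(G), len(L)
    for H in combinations(G, n - m):
        if rank(list(L) + list(H)) == rank(G):   # spans the same space as G
            return list(H)

G = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]    # generates R^3, n = 3
L = [(1, 1, 0), (0, 1, 1)]               # independent, m = 2
print(replacement(G, L))  # -> [(1, 0, 0)]
```

The theorem guarantees the search always succeeds when L is independent; this brute-force search is for illustration only, while the proof's induction finds H constructively.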

We illustrate by considering all possible subspaces of R^2 and R^3.

Ex V = R^2 = {(x,y), x and y in R}. Dim(R^2) = 2, so all subspaces W of
R^2 must have dimensions 0, 1 or 2:
   dim(W) = 0 => W = {0_V = (0_R,0_R)} (why?)
     Geometrically, W = origin in R^2
   dim(W) = 1 => W = span({s}) for some nonzero s, i.e. W = {r*s, r in R}
ASK & WAIT:     Geometrically, what is W?
   dim(W) = 2 => W = V (why?)
     Geometrically, W = V = R^2

Ex V = R^3 = {(x,y,z), x, y, z in R}. Dim(R^3) = 3, so all subspaces W of
R^3 must have dimensions 0, 1, 2 or 3:
   dim(W) = 0 => W = {0_V = (0_R,0_R,0_R)} (why?)
     Geometrically, W = origin in R^3
   dim(W) = 1 => W = span({s}) for some nonzero s, i.e. W = {r*s, r in R}
ASK & WAIT:     Geometrically, what is W?
   dim(W) = 2 => W = span({s1,s2}) for independent s1, s2,
                   i.e. W = {r1*s1+r2*s2, r1 and r2 in R}
ASK & WAIT:     Geometrically, what is W?
   dim(W) = 3 => W = V (why?)
     Geometrically, W = V = R^3

Linear algebra was invented in part to generalize this geometric intuition 
to higher-dimensional spaces like R^4 or R^27 etc., where it is harder to
visualize what is going on. So whenever you learn a definition or theorem
in this class, ask yourself what it means in R^2 and R^3.

The next corollary formalizes the idea that given a vector space V, the sets
  Gen = {all generating sets of V}
  LinInd = {all linearly independent subsets of V}
  Bases = {all bases of V} satisfy

      Bases = Gen intersect LinInd

Corollary 2: Let V be an n-dimensional vector space. Then
  (a) Any finite generating set for V has at least n vectors, and
      any finite generating set for V that has exactly n vectors is a basis.
  (b) Any linearly independent subset of V with n vectors is a basis.
  (c) Every linearly independent subset of V can be extended to a basis.

Proof:  
  (a) Let S be a generating set for V. By Thm 2, S contains a basis S1 for V.
      By Corollary 1, S1 contains n vectors. So S contains at least those n vectors.
      If S contains exactly n vectors, then S = S1 is a basis.
  (b) Any linearly independent set L with m<=n vectors can be extended to a basis by
      adding n-m vectors from any basis G, according to the Replacement Theorem.
      When m=n, L must already be a basis.
  (c) Let L be an independent set with m<n vectors, and let G be any basis.
      So G contains n vectors.  By the Replacement Theorem, we can take
      a set H of n-m vectors from G, so that the n vectors in L U H span V.
      Thus, by (a), L U H is a basis.
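The extension in (c) can be carried out greedily, just like the Thm 2 algorithm (a sketch with our own helper names `rank` and `extend_to_basis`, using exact Fraction arithmetic):

```python
from fractions import Fraction

# Sketch of Corollary 2(c) (helper names are ours): extend a linearly
# independent set L to a basis by adding vectors from a known basis G
# whenever independence is preserved.

def rank(vectors):
    """Rank of a list of row vectors, by exact Gaussian elimination."""
    rows = [list(map(Fraction, v)) for v in vectors]
    r = 0
    for c in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def extend_to_basis(L, G):
    """Add u from G whenever B U {u} stays independent (rank grows by 1)."""
    B = list(L)
    for u in G:
        if rank(B + [u]) == len(B) + 1:
            B.append(u)
    return B

L = [(1, 1, 0)]                           # independent in R^3
G = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]     # standard basis of R^3
print(extend_to_basis(L, G))  # -> [(1, 1, 0), (1, 0, 0), (0, 0, 1)]
```

Note (0,1,0) is skipped because (0,1,0) = (1,1,0) - (1,0,0) is already in the span of the first two picks.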

ASK & WAIT: Given two lines W1, W2 through the origin in R^2, what can 
     W1 intersect W2 look like?
Ex: What about V = R^4 = {(w,x,y,z)}? Dim(R^4)=4 so you get subspaces W where
    dim(W) = 0: origin
    dim(W) = 1: lines through origin
    dim(W) = 2: 2-dimensional planes through origin
    dim(W) = 3: 3-dimensional planes through origin ("hyperplanes")
    dim(W) = 4: W=V

Thm 4: If W is a subspace of finite dimensional V, then dim(W) <= dim(V).
If dim(W)=dim(V), then W=V

Proof: If W = {0}, done, since dim(W) = 0. Otherwise, choose nonzero w_1 in W,
and keep adding vectors w_2, w_3, ... from W as long as they are linearly 
independent.  By the replacement theorem, this will stop at some point w_m, 
with m <= n = dim(V).  We claim {w_1,...,w_m} is a basis for W, because it is
linearly independent by construction, and all other vectors in W are in
span({w_1,...,w_m}).  Finally, dim(W) = m <= n = dim(V).
If dim(W) = n, then by Corollary 2(b), a basis for W is also a basis for V, so W=V.

Corollary: dim(W1 intersect W2) <= min(dim(W1),dim(W2))

Ex: Consider R^3 again, all of whose subspaces W must have dimensions 0, 1, 2 or 3.
ASK & WAIT: Suppose we have 2 subspaces W1 and W2: 
what can dimensions of W1 and W2 be? What can they be geometrically?
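One pair can be checked numerically (a sketch; the helper name `rank` is ours, and the identity dim(W1 intersect W2) = dim(W1) + dim(W2) - dim(W1 + W2) is a standard fact these notes have not proved, used here only to compute the example):

```python
from fractions import Fraction

# Sketch for the corollary: two planes through the origin in R^3.
# The dimension identity used below is a standard fact, not proved here.

def rank(vectors):
    """Rank of a list of row vectors, by exact Gaussian elimination."""
    rows = [list(map(Fraction, v)) for v in vectors]
    r = 0
    for c in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

W1 = [(1, 0, 0), (0, 1, 0)]    # spans the xy-plane in R^3
W2 = [(1, 0, 0), (0, 0, 1)]    # spans the xz-plane in R^3
d1, d2 = rank(W1), rank(W2)    # dim(W1) = dim(W2) = 2
d_sum = rank(W1 + W2)          # dim(W1 + W2) = 3
d_int = d1 + d2 - d_sum        # dim of the intersection
print(d_int)                   # -> 1: the planes meet in a line (the x-axis)
assert d_int <= min(d1, d2)    # the corollary's bound
```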

The same ideas work in R^n, but they are harder to picture geometrically, 
which is why we use algebra

So far, these ideas have been limited to finite-dimensional vector spaces.
Section 1.7 shows (using the Axiom of Choice) that

Thm: Every vector space has a basis (which may be infinite)

Ex: A basis for P(F) is {1,x,x^2,x^3,...}

One can further show that all bases for V have the same "cardinality".
The cardinality of a finite set is just its number of elements.
Infinite sets can have different cardinalities (e.g. the integers are
"countable" and the reals are "uncountable", see Ma55), but we will
not consider this further in this class.