Math 110 - Fall 05 - Lecture notes # 2 - Aug 31 (Wednesday)

   Last time we talked about fields, which will be our sets of
   "scalars" that we use to do linear algebra (e.g. they will be
   the components of our vectors and matrices). Whenever we prove
   something about linear algebra, we will prove it for all
   fields (i.e. we will only use the axioms that define a field
   and theorems you can derive from these axioms). This means
   that our results will be true whether our matrices and
   vectors are filled with rational numbers, complex numbers,
   elements of Z2 ("bits"), or even rational functions.
   (Homework problems and examples in the book will assume
   fields of characteristic 0 unless otherwise noted.)

   Just as fields generalized our intuitive notion of the rules
   that arithmetic with "scalars" ought to obey, we now define
   the most basic concept of the class, a "vector space", to
   generalize our intuition about what it means to add and subtract
   2-dimensional vectors (x,y), and multiply them by scalars. 
   By writing down the rules carefully, and only using these rules
   to prove theorems, we will prove theorems that apply not just
   to the familiar vectors (x,y) where x and y are real numbers, but
   to a much wider class of useful objects.

   Definition: A vector space (or linear space) V over a field F of "scalars"
     is a set of "vectors" with two operations:  
      vector addition: +: V x V -> V  (written x+y, called "sum")
      scalar multiplication: *: F x V -> V  (written t*x or tx, called "product")
     that satisfy the following axioms:
     VS1: Commutativity of +: for all x,y in V  x+y = y+x
     VS2: Associativity of +: for all x,y,z in V (x+y)+z = x+(y+z)
     VS3: Existence of zero vector: there exists 0_V in V such that
                                    for all x in V, x+0_V = x
     VS4: Additive inverse: for all x in V, there exists y such that x+y = 0_V
                            (and we write y = -x, called "negative x")
     VS5: Multiplication by 1: for all x in V, 1*x = x
     VS6: Associativity of Multiplication:  
              for all x in V, a,b in F: a*(b*x) = (a*b)*x
     VS7: Distributivity(1): for all x,y in V, a in F: a*(x+y) = a*x + a*y
      VS8: Distributivity(2): for all x in V, a,b in F: (a+b)*x = a*x + b*x
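
   For concreteness, here is a small Python sketch that spot-checks these
   axioms numerically on a few sample vectors and scalars. The helper
   check_axioms and the choice V = F^2 with F = the rational numbers are just
   one possible illustration (and passing such checks on samples does not,
   of course, prove the axioms hold in general).

     from fractions import Fraction
     from itertools import product

     def check_axioms(add, smul, zero, neg, vectors, scalars):
         # Spot-check VS1-VS8 on the given sample vectors and scalars.
         for x in vectors:
             assert add(x, zero) == x                      # VS3
             assert add(x, neg(x)) == zero                 # VS4
             assert smul(Fraction(1), x) == x              # VS5
             for a, b in product(scalars, scalars):
                 assert smul(a, smul(b, x)) == smul(a * b, x)              # VS6
                 assert smul(a + b, x) == add(smul(a, x), smul(b, x))      # VS8
             for y in vectors:
                 assert add(x, y) == add(y, x)             # VS1
                 for z in vectors:
                     assert add(add(x, y), z) == add(x, add(y, z))         # VS2
                 for a in scalars:
                     assert smul(a, add(x, y)) == add(smul(a, x), smul(a, y))  # VS7
         return True

     # Candidate: V = F^2 with F = rationals, componentwise operations (see EX 1 below)
     add  = lambda x, y: (x[0] + y[0], x[1] + y[1])
     smul = lambda t, x: (t * x[0], t * x[1])
     zero = (Fraction(0), Fraction(0))
     neg  = lambda x: (-x[0], -x[1])
     sample_vectors = [zero, (Fraction(1), Fraction(2)), (Fraction(-3), Fraction(1, 2))]
     sample_scalars = [Fraction(0), Fraction(1), Fraction(-1, 3), Fraction(2)]
     print(check_axioms(add, smul, zero, neg, sample_vectors, sample_scalars))  # True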

  Now we do examples, first some familiar ones and then less familiar:
 
  Def: An ordered list (a_1,...,a_n) of n scalars from a field F is called an n-tuple.
       The values a_1,...,a_n are called its entries or components.
       Two n-tuples a and b are equal if all their entries are equal: a_i = b_i
       for i = 1,...,n

ASK & WAIT: What is the difference between the n-tuple (a_1,...,a_n) and
            the set {a_1,...,a_n}?

  EX 1 of a vector space: choose F and n, and let V = {all n-tuples from F}
             where a+b = (a_1+b_1,...,a_n+b_n) and t*a = (t*a_1,...,t*a_n)
             i.e. just add, multiply each component separately
  Notation: We write V = F^n
ASK & WAIT: Why is F^n a vector space?
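
   For concreteness, a minimal Python sketch of EX 1 with F = Z2 and n = 4,
   to stress that the componentwise definitions make sense over any field,
   not just the real numbers; the names vec_add and scal_mul are just
   illustrative.

     def vec_add(a, b):
         # componentwise addition in Z2 (addition mod 2, i.e. XOR of bits)
         return tuple((ai + bi) % 2 for ai, bi in zip(a, b))

     def scal_mul(t, a):
         # multiply every component by the scalar t in Z2
         return tuple((t * ai) % 2 for ai in a)

     x = (1, 0, 1, 1)
     y = (0, 1, 1, 0)
     print(vec_add(x, y))    # (1, 1, 0, 1)
     print(scal_mul(1, x))   # (1, 0, 1, 1) = x, as VS5 requires
     print(vec_add(x, x))    # (0, 0, 0, 0): over Z2 every vector is its own negative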

  Notation: sometimes we write x in F^n as a row vector (a_1,...,a_n) and
            sometimes as a column vector, denoted by stacking the entries
              (a_1)
               ...
              (a_n)
            or, more compactly, as (a_1;a_2;...;a_n)   (the latter is notation
            from Matlab)

ASK & WAIT: What is F^1?

  EX 2 of a vector space: choose F, m and n, and let
          V = {m x n matrices with entries from F}
  Notation: We write V = M_{m x n}(F)
            If m=n we say the matrix is square, else rectangular
            If A is a matrix, we let A_ij be the entry in row i, column j
            If A and B are m-by-n matrices, we say A=B if all A_ij = B_ij
            If A is a matrix, we call the A_ii the diagonal entries of A
            We call (A_i1, A_i2,...,A_in) the i-th row of A
            We call (A_1j; A_2j;...;A_mj) the j-th column of A
            If all entries of A are 0_F (in F), we call A the zero matrix
        A+B and t*A are defined by (A+B)_ij = A_ij + B_ij and (t*A)_ij = t*A_ij

ASK & WAIT: Why is M_{m x n}(F) a vector space?
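
   A short Python sketch of EX 2, storing an m x n matrix as a tuple of rows;
   mat_add and mat_scale are illustrative names for the two operations, and
   the entries are plain Python numbers standing in for elements of F.

     def mat_add(A, B):
         # (A+B)_ij = A_ij + B_ij
         return tuple(tuple(aij + bij for aij, bij in zip(rowA, rowB))
                      for rowA, rowB in zip(A, B))

     def mat_scale(t, A):
         # (t*A)_ij = t * A_ij
         return tuple(tuple(t * aij for aij in row) for row in A)

     A = ((1, 2, 3),
          (4, 5, 6))             # a 2 x 3 matrix
     Z = ((0, 0, 0),
          (0, 0, 0))             # the 2 x 3 zero matrix
     print(mat_add(A, Z) == A)   # True: the zero matrix acts as 0_V in M_{2 x 3}(F)
     print(mat_scale(2, A))      # ((2, 4, 6), (8, 10, 12))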

  Ex 3 of a vector space: Choose F and n, and let 
          V = {polynomials in x with coefficients from F, degree <= n}
            = {a(x) = sum_i=0 to n  a_i*x^i where a_i in F}
   Notation: P_n(F) = V
        Sum a(x)+b(x) of polynomials defined by adding corresponding
          coefficients: (a+b)_i = a_i + b_i
        Product t*a(x) of scalar and polynomial defined by scaling each
          coefficient: (t*a)_i = t*a_i
        
ASK & WAIT: Why is P_n(F) a vector space?
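
   A short Python sketch of Ex 3, storing a polynomial of degree <= n by its
   coefficient list (a_0, a_1, ..., a_n), so both operations are once again
   componentwise; poly_add and poly_scale are illustrative names.

     def poly_add(a, b):
         # coefficients of a(x) + b(x)
         return tuple(ai + bi for ai, bi in zip(a, b))

     def poly_scale(t, a):
         # coefficients of t * a(x)
         return tuple(t * ai for ai in a)

     a = (1, 0, 2)            # 1 + 2x^2   in P_2(R)
     b = (0, 3, -2)           # 3x - 2x^2  in P_2(R)
     print(poly_add(a, b))    # (1, 3, 0), the coefficients of 1 + 3x
     print(poly_scale(5, a))  # (5, 0, 10), the coefficients of 5 + 10x^2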

ASK & WAIT: Let P'_n(F) = {polynomials in x with coefs from F, degree = n}
            Is P'_n(F) a vector space?

So far every example can be described by a finite list of components; the next one cannot.

  Ex 4 of a vector space: P(F) = all polynomials in x with coefs from F, 
         of any degree
ASK & WAIT: why is P(F) a vector space?
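
   The sketch above for P_n(F) extends to P(F) if we first pad the shorter
   coefficient list with zeros, since two polynomials of different degrees can
   still be added; poly_add_any below is an illustrative helper built on that
   idea.

     from itertools import zip_longest

     def poly_add_any(a, b):
         # pad the shorter coefficient list with 0s, then add componentwise
         return tuple(ai + bi for ai, bi in zip_longest(a, b, fillvalue=0))

     print(poly_add_any((1, 2), (0, 0, 0, 7)))   # (1, 2, 0, 7) = (1 + 2x) + 7x^3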

  Ex 5 of a vector space: Define a sequence to be any function sigma
     from the positive integers Z+ to the field F, sometimes written
     sigma = (a_1, a_2, ...); let V = {all sequences}, with
     addition and scalar multiplication defined, as before, componentwise;
     then V is a vector space

  Ex 6 of a vector space: Let F be a field, S be any nonempty set, and let
       V = Func(S,F) = set of functions from S to F
       if f,g in V then f+g defined by (f+g)(s) = f(s) + g(s)
       if f in V, a in F then a*f defined by (a*f)(s) = a*(f(s))

ASK & WAIT: Why is Func(S,F) a vector space?
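
   A short Python sketch of Ex 6: the vectors are functions, and the two
   operations are built pointwise; func_add and func_scale are illustrative
   names. Ex 5 is the special case S = Z+, since a sequence is exactly a
   function from the positive integers to F.

     def func_add(f, g):
         # (f+g)(s) = f(s) + g(s)
         return lambda s: f(s) + g(s)

     def func_scale(a, f):
         # (a*f)(s) = a * f(s)
         return lambda s: a * f(s)

     f = lambda s: s ** 2      # a function from S = R to F = R
     g = lambda s: 3 * s
     h = func_add(f, func_scale(2, g))
     print(h(4))               # f(4) + 2*g(4) = 16 + 24 = 40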

  Ex 7 of a vector space: S = [0,1], 
       V = Func(S,R) = all real valued functions on [0,1]
        Intuitively, such a function cannot be described by any finite list
          of numbers, because there are just too many of them!

  Ex 8 of a vector space: V = real valued functions on [0,1] that are
       continuous
ASK & WAIT: Why is V a vector space? Why isn't this a special case of Ex 7?

What is true about every vector space? The next results mimic similar results
for fields, with nearly the same proofs:

Thm: (Cancellation Law) for all x,y,z in V, x+z = y+z implies x=y
ASK & WAIT: proof?

Thm (Uniqueness of 0_V and -x) The zero vector 0_V is unique.
     Given x, -x (its additive inverse) is unique.
Proof: homework!

Thm: (1) for all x in V, 0_F*x = 0_V  (here 0_F is in F and 0_V is in V)
     (2) for all x in V and a in F, -(a*x) = (-a)*x = a*(-x)
     (3) for all a in F, a*0_V = 0_V  
Proof: (1) 0_F*x = (0_F+0_F)*x   (def of 0_F)
             = 0_F*x+0_F*x   (distributivity(2))
           Also 0_F*x = 0_V + 0_F*x   (def of 0_V)
           so by Cancellation Law 0_F*x = 0_V
     (2) By last Thm -(a*x) unique vector such that a*x + -(a*x) = 0_V
         But also 0_V = 0_F*x         (part (1))
                      = (a+(-a))*x    (def of -a)
                      = a*x+(-a)*x    (distributivity(2))
         so -(a*x) = (-a)*x by uniqueness of -(a*x)
         Letting a=1 we get (-1)*x = -(1*x) = -(x) = -x  (Multiplication by 1)
         and so for general a
              (-a)*x = (a*(-1))*x   (since -a = a*(-1) in F)
                     = a*((-1)*x)   (associativity of mult)
                     = a*(-x)       (by the case a=1 above)
         so again by uniqueness a*(-x) = (-a)*x = -(a*x)
     (3) a*(0_V) = a*(0_V + 0_V)   (def of 0_V)
                 = a*0_V + a*0_V   (distributivity(1))
         and also a*(0_V) = 0_V   + a*0_V   (def of 0_V)
         so 0_V = a*0_V by Cancellation Law