Math 110 - Fall 05 - Lecture notes # 15 - Oct 3 (Monday)

Our goal is to study inverses of linear transformations.

Def: Let T: V -> W be linear. A function U: W -> V is called
an inverse of T if 
(1) UT: V -> V is the identity function UT = I_V on V,
    i.e. I_V(v) = v for all v in V 
(2) TU: W -> W is the identity function TU = I_W on W,
    i.e. I_W(w) = w for all w in W
If T has an inverse, we call T invertible, and write U = T^{-1}
As noted in App B, inverses are unique.
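
As a quick numerical illustration of the definition (a sketch in Python
with numpy; the matrices A and B below are just an example pair, with B
the inverse of A worked out by hand):

    import numpy as np

    # Example: T(v) = A v on R^2, and a candidate inverse U(w) = B w
    A = np.array([[ 1.,  2.],
                  [ 3.,  5.]])    # det = -1, so T is invertible
    B = np.array([[-5.,  2.],
                  [ 3., -1.]])    # the inverse of A, computed by hand

    def T(v): return A @ v
    def U(w): return B @ w

    v = np.array([1., -2.])
    w = np.array([0.5, 4.])
    print(np.allclose(U(T(v)), v))   # condition (1), UT = I_V, on a sample v
    print(np.allclose(T(U(w)), w))   # condition (2), TU = I_W, on a sample w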

From now on, we will assume that V and W are finite dimensional.
Recall our earlier theorem:

Thm 1: T: V -> W is invertible if and only if rank(T) = dim(W) = dim(V).

Lemma: (1)  Let T: V -> W and S: W -> Z both be invertible.
            Then ST: V -> Z is invertible and (ST)^{-1} = T^{-1} S^{-1}
       (2)  If it exists, then T^{-1}: W -> V is linear
       (3)  T^{-1}: W -> V is invertible too, and (T^{-1})^{-1} = T
Proof: (1) If T is invertible it makes a 1-to-1 correspondence between
           V and W via T(v) = w. Similarly, S creates a 1-to-1 correspondence
           between W and Z via S(w) = z. So S(T(v)) = z is a 1-to-1
           correspondence between V and Z, so ST is invertible.
           If S(T(v)) = z, then we need to show v = T^{-1}(S^{-1}(z)):
           v    = I_V (v)                 ... def of I_V
                = (T^{-1}T)(v)            ... def of T^{-1}
                = T^{-1}(T(v))            ... def of function composition
                = T^{-1}(I_W(T(v)))       ... def of I_W
                 = T^{-1}((S^{-1}S)(T(v))) ... def of S^{-1}
                = (T^{-1}S^{-1})(S(T(v))) ... associativity of composition
                = (T^{-1}S^{-1})(z)       ... def of S(T(v))

           so T^{-1}S^{-1} is the inverse of ST
       (2) We need to show T^{-1}(c*x1+x2) = c*T^{-1}(x1) + T^{-1}(x2)
           Let T^{-1}(x1) = y1 and T^{-1}(x2) = y2,
           or  T(y1) = x1      and T(y2) = x2
           Then
             T^{-1}(c*x1 + x2) = T^{-1}(c*T(y1) + T(y2))  ... def of y1,y2
                               = T^{-1}(T(c*y1+y2))       ... T linear
                               = c*y1+y2       ... T^{-1}T = I_V
                               = c*T^{-1}(x1)+T^{-1}(x2)  ... def of y1,y2
       (3) Apply def of inverse to T^{-1} to see that its inverse is T
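
A quick numerical spot check of part (1) (a sketch with numpy; S and T
below are just two arbitrary invertible 2-by-2 matrices):

    import numpy as np

    T = np.array([[1., 2.], [3., 5.]])    # invertible, det = -1
    S = np.array([[2., 1.], [1., 1.]])    # invertible, det =  1

    lhs = np.linalg.inv(S @ T)                   # (ST)^{-1}
    rhs = np.linalg.inv(T) @ np.linalg.inv(S)    # T^{-1} S^{-1}
    print(np.allclose(lhs, rhs))                 # True, as part (1) predicts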

Continuing our goal of connecting properties of linear transformations
and matrices, we define:

Def: Let A be an n-by-n matrix. Then A is invertible if there is an
     n-by-n matrix B such that AB = BA = I_n, the n x n identity matrix.
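
For a concrete instance (a sketch with numpy; np.linalg.inv is one way to
produce such a B when it exists, and the matrix A here is just an example):

    import numpy as np

    A = np.array([[2., 1.],
                  [5., 3.]])      # det = 1, so A is invertible
    B = np.linalg.inv(A)          # B = [[3, -1], [-5, 2]]
    I2 = np.eye(2)
    print(np.allclose(A @ B, I2), np.allclose(B @ A, I2))   # True True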

Thm 2: Let V and W be finite dimensional vector spaces with
ordered bases beta and gamma, resp. Let T: V -> W be linear.
Then T is invertible if and only if [T]_beta^gamma is invertible,
in which case ([T]_beta^gamma)^{-1} = [T^{-1}]_gamma^beta

Proof: Suppose T is invertible. Then by Thm 1 dim(V) = dim(W) = n
and so [T]_beta^gamma is n x n. Now T^{-1}T = I_V and so
      I_n         = [I_V]_beta^beta    ... by def of I_V
                  = [T^{-1}T]_beta^beta   
                  = [T^{-1}]_gamma^beta * [T]_beta^gamma 
                     ... by an earlier theorem
and similarly
      I_n         = [I_W]_gamma^gamma    ... by def of I_W
                  = [T T^{-1}]_gamma^gamma
                  = [T]_beta^gamma * [T^{-1}]_gamma^beta 
So [T^{-1}]_gamma^beta satisfies the definition of
the inverse of matrix [T]_beta^gamma.

Now suppose A = [T]_beta^gamma is invertible; let B be the inverse.
We need to show T is invertible and B=[T^{-1}]_gamma^beta
Let beta = {v_1,...,v_n} and gamma = {w_1,...,w_n}
and define U: W -> V by U(w_j) = sum_{i=1 to n} B_ij*v_i.
We will show that U = T^{-1}:
    TU(w_j) = T(sum_{i=1 to n} B_ij*v_i) ... by def of U(w_j)
            = sum_{i=1 to n} B_ij*T(v_i) ... since T linear
            = sum_{i=1 to n} B_ij*(sum_{k=1 to n} A_ki*w_k) 
                  ... by def of T(v_i)
            = sum_{i=1 to n} sum_{k=1 to n} B_ij*A_ki*w_k
                  ... move B_ij into summation
            = sum_{k=1 to n} sum_{i=1 to n} B_ij*A_ki*w_k
                  ... reverse order of summation
            = sum_{k=1 to n} w_k * (sum_{i=1 to n} B_ij*A_ki)
                  ... move w_k out of inner summation
            = sum_{k=1 to n} w_k * (AB)_kj   
                  ... def of matrix product AB
            = sum_{k=1 to n} w_k * (I_n)_kj   
                  ... since AB = I_n
            = w_j ... def of I_n
Since I_W(w_j) = w_j for all j too, and two linear maps that agree on a
basis are equal, we have TU = I_W.
One can similarly show UT(v_i) = v_i and so UT = I_V

Ex: T: R^2 -> R^2 where T(x1,x2) = (x1 - 2*x2, -2*x1 + x2)
beta = gamma = {e_1,e_2} standard ordered basis.
T is invertible because we can solve T(x1,x2) = (y1,y2)
for (x1,x2) given any (y1,y2) as follows:
   y1 =    x1 - 2*x2
   y2 = -2*x1 +   x2
 => 2*y1 + y2 = -3*x2 => x2 = (-2/3)*y1 + (-1/3)*y2
 => x1 = y1 + 2*x2   = (-1/3)*y1 + (-2/3)*y2
and T^{-1}(y1,y2) = ( (-1/3)*y1 + (-2/3)*y2, (-2/3)*y1 + (-1/3)*y2 )
so [T^{-1}]_gamma^beta = [ -1/3   -2/3   ;  -2/3   -1/3  ]
Also [T]_beta^gamma = [ [T(e1)]_gamma, [T(e2)]_gamma ]
                    = [ [1; -2]_gamma, [-2 ; 1]_gamma ]
                    = [ 1 -2 ; -2 1 ]
and so ([T]_beta^gamma)^{-1} = [ 1 2 ; 2 1 ]/det 
                             = [1 2 ; 2 1]/(-3)
                             = [T^{-1}]_gamma^beta  ... as expected
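
The same computation done numerically, as a sanity check (a sketch with numpy):

    import numpy as np

    A = np.array([[ 1., -2.],
                  [-2.,  1.]])     # [T]_beta^gamma from above
    Ainv = np.linalg.inv(A)        # should be [[-1/3, -2/3], [-2/3, -1/3]]
    print(Ainv)

    # compare with the formula for T^{-1} derived by hand
    def Tinv(y1, y2):
        return np.array([(-1/3)*y1 + (-2/3)*y2, (-2/3)*y1 + (-1/3)*y2])
    print(np.allclose(Ainv @ np.array([1., 2.]), Tinv(1., 2.)))   # True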

Corollary: If V = W and beta = gamma, and T : V -> W is invertible,
then ([T]_beta)^{-1} = [T^{-1}]_beta
   Proof: Follows from Thm 2.

Corollary: Let A be an n x n  matrix. Then A is invertible if and only
if L_A is invertible
   Proof: Homework!

One-to-one correspondences have a special name when they are linear:

Def: Let V and W be vector spaces. If there is an invertible 
linear transformation T: V -> W, then we say V and W are 
isomorphic vector spaces, and T is called an isomorphism.

Ex: V = R^2, W = P_1(R), T((a1,a2)) = a1 + a2*x is an isomorphism
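
A sketch of this isomorphism in Python (here a polynomial in P_1(R) is
represented as a function of x, a choice made only for illustration):

    def T(a):                        # R^2 -> P_1(R): (a1, a2) |-> a1 + a2*x
        a1, a2 = a
        return lambda x: a1 + a2*x

    def T_inv(p):                    # P_1(R) -> R^2
        a1 = p(0)                    # constant term
        a2 = p(1) - p(0)             # coefficient of x
        return (a1, a2)

    print(T_inv(T((3, -7))))         # (3, -7), i.e. T_inv T = I on R^2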

Thm: Let V and W be finite dimensional vector spaces. Then they
are isomorphic if and only if dim(V) = dim(W).
  Proof: If V and W are isomorphic, let T: V -> W be an isomorphism.
    Since T is invertible, Thm 1 gives rank(T) = dim(V) = dim(W).

    Now suppose dim(V) = dim(W) = n. Let beta = {v_1,..,v_n} and
    gamma = {w_1,...,w_n} be ordered bases for V and W, resp.
    Define T by T(v_i) = w_i for i=1,...,n. Then 
    R(T) = span(T(v_1),..,T(v_n)) = span(w_1,..,w_n) = W
    so rank(T) = n = dim(W) = dim(V), so T is an isomorphism.

Corollary: Let V be a vector space over F. Then V is isomorphic to F^n
    if and only if n = dim(V).

Thm: Let V and W have finite dimensions n and m, resp., with ordered
     bases beta and gamma, resp. Then Phi: L(V,W) -> M_{m x n}(F)
     defined by Phi(T) = [T]_beta^gamma is an isomorphism
 Proof: We have proved most of this already: We know Phi is linear.
        It is one-to-one because the property of a basis implies
        that sum_i a_i*w_i = 0 if and only if all a_i = 0, so
        the only way [T]_beta^gamma can be the zero matrix is if
        T(v_i) = 0_W for all i, implying T(v) = 0 for all v; thus the
        null space of Phi contains only the zero transformation, so
        Phi is one-to-one.
        It is onto because given a matrix A, if we define 
           T(v_i) = sum_{j=1 to m} A_ji*w_j
        then T is linear and [T]_beta^gamma = A.
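
A concrete sketch of Phi (in Python with numpy, taking V = R^n, W = R^m
with beta and gamma the standard ordered bases, so the gamma-coordinates
of a vector are just its entries):

    import numpy as np

    def Phi(T, n, m):
        # column i of [T]_beta^gamma holds the gamma-coordinates of T(v_i)
        M = np.zeros((m, n))
        for i in range(n):
            e_i = np.zeros(n)
            e_i[i] = 1.0
            M[:, i] = T(e_i)
        return M

    T = lambda v: np.array([v[0] - 2*v[1], -2*v[0] + v[1]])  # example from above
    print(Phi(T, 2, 2))                                      # [[ 1. -2.], [-2.  1.]]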