8.27.2008


Intro to PCA

 
 
 
 
 
 

1  

Intro to PCA 

Adapted from G. Piatetsky-Shapiro,

Biologically Inspired Intelligent Systems (Lecture 7) and R. Gutierrez-Osuna's Lecture

 
 
 
 
 
 


8  

Field Reduction Improves Classification 

  • Most mining algorithms look for non-linear combinations of fields -- can easily find many spurious combinations given small # of records and large # of fields
  • Classification accuracy improves if we first reduce number of fields
  • Multi-class heuristic: select equal # of fields from each class
 
 
 
 
 
 

9  

Selecting Most Relevant Fields 

  • If there are too many fields, select a subset that is most relevant
    • Can select top N fields using 1-field predictive accuracy as computed earlier
    • What is a good N?
      • Rule of thumb: keep top 50 fields
    • Other techniques exist
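
A minimal sketch of this top-N selection, assuming scikit-learn is available; the ANOVA F-score stands in for the 1-field predictive accuracy mentioned above, and the data, field counts, and N = 50 are illustrative choices, not the slides' procedure.

```python
# Keep only the N highest-scoring fields before modeling (sketch; the score
# function and the synthetic data are stand-ins for the slides' procedure).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 300))      # 200 records, 300 fields
y = rng.integers(0, 2, size=200)     # binary class labels

N = 50                               # rule of thumb from the slide
selector = SelectKBest(score_func=f_classif, k=N)
X_reduced = selector.fit_transform(X, y)
print(X_reduced.shape)               # (200, 50)
```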
 
 
 
 
 
 

10  

Attribute Construction 

  • Better to have a fair modeling method and good variables, than to have the best modeling method and poor variables
  • Examples:
    • People are eligible for pension withdrawal at age 59½. Create it as a separate Boolean variable!
    • Household income as sum of spouses' incomes in loan underwriting
  • Advanced methods exist for automatically examining variable combinations, but they are very computationally expensive!
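
A small pandas sketch of the two constructed attributes above; the column names and values are hypothetical.

```python
# Construct the new attributes explicitly (hypothetical column names).
import pandas as pd

df = pd.DataFrame({
    "age": [58.0, 61.5, 60.0],
    "income": [40_000, 55_000, 30_000],
    "spouse_income": [20_000, 0, 45_000],
})

df["pension_eligible"] = df["age"] >= 59.5                   # Boolean flag
df["household_income"] = df["income"] + df["spouse_income"]  # sum of incomes
print(df)
```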
 
 
 
 
 
 

11  

Variance 

  • A measure of the spread of the data in a data set
 
 
 
  • Variance is claimed to be the original statistical measure of spread of data.
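
The slide's formula was lost in conversion; the standard sample-variance definition it presumably showed is:

```latex
% Sample variance of one dimension X with mean \bar{X} over n points
s^2 = \frac{\sum_{i=1}^{n} (X_i - \bar{X})^2}{n - 1}
```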
 
 
 
 
 
 

12  

Covariance 

  • Variance – measure of the deviation from the mean for points in one dimension, e.g., heights
 
  • Covariance – a measure of how much each of the dimensions varies from the mean with respect to each other.
 
  • Covariance is measured between 2 dimensions to see if there is a relationship between the 2 dimensions, e.g., number of hours studied & grade obtained.
 
  • The covariance between one dimension and itself is the variance
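
For reference (the slide's own equation was lost), the sample covariance between two dimensions X and Y is defined analogously to the variance:

```latex
% Sample covariance between dimensions X and Y; cov(X, X) is the variance of X
\operatorname{cov}(X, Y) = \frac{\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})}{n - 1}
```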
 
 
 
 
 
 

13  

Covariance 
 
 
 
 
 
 
 

  • So, if you had a 3-dimensional data set (x,y,z), then you could measure the covariance between the x and y dimensions, the y and z dimensions, and the x and z dimensions.
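
A quick numpy illustration of those three pairwise covariances; the data values are made up.

```python
# Pairwise covariances of a 3-dimensional data set (x, y, z); made-up data.
import numpy as np

data = np.array([[2.5, 2.4, 1.2],
                 [0.5, 0.7, 2.9],
                 [2.2, 2.9, 0.8],
                 [1.9, 2.2, 1.1],
                 [3.1, 3.0, 0.4]])   # rows = records, columns = x, y, z

C = np.cov(data, rowvar=False)       # 3 x 3 covariance matrix
print(C[0, 1], C[1, 2], C[0, 2])     # cov(x,y), cov(y,z), cov(x,z)
```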
 
 
 
 
 
 

14  

Covariance 

  • What is the interpretation of covariance calculations?
  • Say you have a 2-dimensional data set
    • X: number of hours studied for a subject
    • Y: marks obtained in that subject
  • And assume the covariance value (between X and Y) is: 104.53
  • What does this value mean?
 
 
 
 
 
 

15  

Covariance 

  • Exact value is not as important as its sign.
 
  • A positive value of covariance indicates that both dimensions increase or decrease together, e.g., as the number of hours studied increases, the grades in that subject also increase.
 
  • A negative value indicates while one increases the other decreases, or vice-versa, e.g., active social life at BYU vs. performance in CS Dept.
 
  • If covariance is zero: the two dimensions are uncorrelated (no linear relationship, which is weaker than full independence), e.g., heights of students vs. grades obtained in a subject.
 
 
 
 
 
 

16  

Covariance 

  • Why bother with calculating (expensive) covariance when we could just plot the two variables to see their relationship?
 

    Covariance calculations are used to find relationships between dimensions in high dimensional data sets (usually greater than 3) where visualization is difficult.

 
 
 
 
 
 

17  

Covariance Matrix 

  • Representing covariance among dimensions as a matrix, e.g., for 3 dimensions:
 
 
 
  • Properties:
    • Diagonal: variances of the variables
    • cov(X,Y) = cov(Y,X), hence the matrix is symmetric about the diagonal (so only the upper triangle needs to be computed)
    • n-dimensional data will result in nxn covariance matrix
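
The 3-dimensional covariance matrix the slide refers to (its rendering was lost) has the standard form:

```latex
C = \begin{pmatrix}
  \operatorname{cov}(X,X) & \operatorname{cov}(X,Y) & \operatorname{cov}(X,Z) \\
  \operatorname{cov}(Y,X) & \operatorname{cov}(Y,Y) & \operatorname{cov}(Y,Z) \\
  \operatorname{cov}(Z,X) & \operatorname{cov}(Z,Y) & \operatorname{cov}(Z,Z)
\end{pmatrix}
```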
 
 
 
 
 
 

18  

Transformation Matrices 

  • Consider the following:
 
 
 
  • The square (transformation) matrix scales (3,2)
  • Now assume we take a multiple of (3,2)
 
 
 
 
 
 

19  

Transformation Matrices 

  • Scale vector (3,2) by a value 2 to get (6,4)
  • Multiply by the square transformation matrix
  • And we see that the result is still scaled by 4.

    WHY?

    A vector consists of both length and direction. Scaling a vector only changes its length and not its direction. This is an important observation in the transformation of matrices leading to formation of eigenvectors and eigenvalues.

    Irrespective of how much we scale (3,2) by, the result (under the given transformation matrix) is always 4 times the scaled vector.
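
The matrices on this slide and the previous one did not survive the conversion. A worked example consistent with the factor of 4 described here uses the transformation matrix below; the exact matrix on the original slides is an assumption.

```latex
% Assumed transformation matrix; (3, 2) and its multiples are scaled by 4
\begin{pmatrix} 2 & 3 \\ 2 & 1 \end{pmatrix}
\begin{pmatrix} 3 \\ 2 \end{pmatrix}
= \begin{pmatrix} 12 \\ 8 \end{pmatrix}
= 4 \begin{pmatrix} 3 \\ 2 \end{pmatrix},
\qquad
\begin{pmatrix} 2 & 3 \\ 2 & 1 \end{pmatrix}
\begin{pmatrix} 6 \\ 4 \end{pmatrix}
= \begin{pmatrix} 24 \\ 16 \end{pmatrix}
= 4 \begin{pmatrix} 6 \\ 4 \end{pmatrix}
```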

 
 
 
 
 
 

20  

Eigenvalue Problem 

  • The eigenvalue problem is any problem having the following form:

                A . v = λ . v

    A: n x n matrix

    v: n x 1 non-zero vector

    λ: scalar

  • Any value of λ for which this equation has a non-zero solution v is called an eigenvalue of A, and the corresponding vector v is called an eigenvector of A.
 
 
 
 
 
 

21  

Eigenvalue Problem 

  • Going back to our example:
 
 
 

                A   .      v           =  λ .   v  

  • Therefore, (3,2) is an eigenvector of the square matrix A and 4 is an eigenvalue of A
  • The question is:

    Given matrix A, how can we calculate the eigenvector and eigenvalues for A?

 
 
 
 
 
 

22  

Calculating Eigenvectors & Eigenvalues 

  • Simple matrix algebra shows that:

                A . v = λ . v  

           A . v - λ . I . v = 0

           (A - λ . I ). v = 0 

  • Finding the roots of |A - λ . I| = 0 gives the eigenvalues, and for each of these eigenvalues there is a corresponding eigenvector

    Example …

 
 
 
 
 
 

23  

Calculating Eigenvectors & Eigenvalues 

  • Let
 
 
  • Then:
 
 
 
 
  • And setting the determinant to 0, we obtain 2 eigenvalues:

                λ1 = -1 and λ2 = -2
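
The matrix in this example was lost in conversion. One matrix consistent with the stated eigenvalues (and with the eigenvector shapes on the next two slides) is shown below; treat it as an assumed reconstruction.

```latex
% Assumed example matrix and its characteristic equation
A = \begin{pmatrix} 0 & 1 \\ -2 & -3 \end{pmatrix},
\qquad
|A - \lambda I| =
\begin{vmatrix} -\lambda & 1 \\ -2 & -3-\lambda \end{vmatrix}
= \lambda^{2} + 3\lambda + 2 = (\lambda + 1)(\lambda + 2) = 0
\;\Rightarrow\; \lambda_1 = -1,\ \lambda_2 = -2
```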

 
 
 
 
 
 

24  

Calculating Eigenvectors & Eigenvalues 

  • For λ1 the eigenvector is:
 
 
 
 
 
  • Therefore the first eigenvector is any column vector in which the two elements have equal magnitude and opposite sign.
 
 
 
 
 
 

25  

Calculating Eigenvectors & Eigenvalues 

  • Therefore eigenvector v1 is
 
 

where k1 is some constant.

  • Similarly we find that eigenvector v2
 
 

where k2 is some constant.
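
Continuing with the matrix assumed above, the missing eigenvector calculations would run as follows:

```latex
% For \lambda_1 = -1: solve (A - \lambda_1 I)\,v = 0
\begin{pmatrix} 1 & 1 \\ -2 & -2 \end{pmatrix} v = 0
\;\Rightarrow\; v_1 = k_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix}
\qquad
% For \lambda_2 = -2: solve (A - \lambda_2 I)\,v = 0
\begin{pmatrix} 2 & 1 \\ -2 & -1 \end{pmatrix} v = 0
\;\Rightarrow\; v_2 = k_2 \begin{pmatrix} 1 \\ -2 \end{pmatrix}
```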

 
 
 
 
 
 

26  

Properties of Eigenvectors and Eigenvalues 

  • Eigenvectors can only be found for square matrices, and not every square matrix has (real) eigenvectors.
  • Given an n x n matrix that does have eigenvectors, we can find up to n of them (a symmetric matrix always has a full set of n).
  • All eigenvectors of a symmetric* matrix are perpendicular to each other, no matter how many dimensions we have.
  • In practice eigenvectors are normalized to have unit length.
 

*Note: covariance matrices are symmetric!
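
A quick numerical check of these properties on an arbitrary symmetric matrix (the values are illustrative):

```python
# Eigenvectors of a symmetric matrix: unit length and mutually perpendicular.
import numpy as np

S = np.array([[2.0, 1.0, 0.5],
              [1.0, 3.0, 0.7],
              [0.5, 0.7, 1.5]])          # symmetric, like a covariance matrix

eigvals, eigvecs = np.linalg.eigh(S)     # eigh handles symmetric matrices
print(np.linalg.norm(eigvecs, axis=0))   # each column has length 1
print(np.round(eigvecs.T @ eigvecs, 6))  # identity matrix => orthogonal columns
```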

 
 
 
 
 
 

27  

PCA 

  • Principal components analysis (PCA) is a technique that can be used to simplify a dataset
  • It is a linear transformation that chooses a new coordinate system for the data set such that
    • The greatest variance by any projection of the data set comes to lie on the first axis (then called the first principal component)
    • The second greatest variance on the second axis
    • Etc.
  • PCA can be used for reducing dimensionality by eliminating the later principal components.
 
 
 
 
 
 

28  

PCA 

  • By finding the eigenvalues and eigenvectors of the covariance matrix, we find that the eigenvectors with the largest eigenvalues correspond to the dimensions that have the strongest correlation in the dataset.
  • These are the principal components.
  • PCA is a useful statistical technique that has found application in:
    • Fields such as face recognition and image compression
    • Finding patterns in data of high dimension.
 
 
 
 
 
 

29  

PCA Process – STEP 1 

  • Subtract the mean from each of the dimensions
  • This produces a data set whose mean is zero.
  • Subtracting the mean makes variance and covariance calculation easier by simplifying their equations.
  • The variance and covariance values are not affected by the mean value.
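
A minimal numpy version of this step; the data values are made up.

```python
# STEP 1: subtract the per-dimension mean so the centered data has zero mean.
import numpy as np

data = np.array([[1.0, 2.0],
                 [2.0, 3.1],
                 [3.0, 3.9],
                 [4.0, 5.2]])        # rows = records, columns = dimensions

centered = data - data.mean(axis=0)
print(centered.mean(axis=0))         # approximately [0, 0]
```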
 
 
 
 
 
 

30  

PCA Process – STEP 1 

http://kybele.psych.cornell.edu/~edelman/Psych-465-Spring-2003/PCA-tutorial.pdf

 
 
 
 
 
 

31  

PCA Process – STEP 2 

  • Calculate the covariance matrix
 
 
 
  • Since the non-diagonal elements in this covariance matrix are positive, we should expect that both the X and Y variables increase together.
  • Since it is symmetric, we expect the eigenvectors to be orthogonal.
 
 
 
 
 
 

32  

PCA Process – STEP 3 

  • Calculate the eigenvectors and eigenvalues of the covariance matrix
 
 
 
 
 
 

33  

PCA Process – STEP 3 

  • Eigenvectors are plotted as diagonal dotted lines on the plot. (note: they are perpendicular to each other).
  • One of the eigenvectors goes through the middle of the points, like drawing a line of best fit.
  • The second eigenvector gives us the other, less important pattern in the data: all the points follow the main line but are offset from it by some amount.
 
 
 
 
 
 

34  

PCA Process – STEP 4 

  • Reduce dimensionality and form feature vector

    The eigenvector with the highest eigenvalue is the principal component of the data set. 

    In our example, the eigenvector with the largest eigenvalue is the one that points down the middle of the data.  

    Once eigenvectors are found from the covariance matrix, the next step is to order them by eigenvalue, highest to lowest. This gives the components in order of significance.  
 

 
 
 
 
 
 

35  

PCA Process – STEP 4 

    Now, if you'd like, you can decide to ignore the components of lesser significance.  

    You do lose some information, but if the eigenvalues are small, you don't lose much.

  • n dimensions in your data
  • calculate n eigenvectors and eigenvalues
  • choose only the first p eigenvectors
  • final data set has only p dimensions.
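
Putting the steps together, here is a compact numpy sketch of the whole procedure (it runs through the projection of Step 5 as well); the synthetic data, the correlation injected into it, and p = 2 are illustrative choices, not the slides' example.

```python
# PCA sketch: center, covariance, eigen-decomposition, keep the top-p components.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 5))                    # 100 records, n = 5 dimensions
data[:, 1] = 0.9 * data[:, 0] + 0.1 * data[:, 1]    # make two dimensions correlate

centered = data - data.mean(axis=0)                 # STEP 1: subtract the mean
C = np.cov(centered, rowvar=False)                  # STEP 2: covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)                # STEP 3: eigen-decomposition

order = np.argsort(eigvals)[::-1]                   # STEP 4: sort by eigenvalue
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
p = 2
feature_vector = eigvecs[:, :p]                     # eigenvectors in columns

explained = eigvals[:p].sum() / eigvals.sum()       # proportion of variance kept
print(f"variance explained by {p} components: {explained:.2f}")

# STEP 5: project the data (equivalent to RowFeatureVector x RowZeroMeanData,
# just written with data items in rows instead of columns).
final_data = centered @ feature_vector
print(final_data.shape)                             # (100, 2)
```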
 
 
 
 
 
 

36  

PCA Process – STEP 4 

  • When the λi's are sorted in descending order, the proportion of variance explained by the p principal components is:

                (λ1 + λ2 + … + λp) / (λ1 + λ2 + … + λn)

  • If the dimensions are highly correlated, there will be a small number of eigenvectors with large eigenvalues and p will be much smaller than n.
  • If the dimensions are not correlated, p will be as large as n and PCA does not help.
 
 
 
 
 
 

37  

PCA Process – STEP 4 

  • Feature Vector

          FeatureVector = (v1 v2 v3 … vp) 

(take the eigenvectors to keep from the ordered list of eigenvectors, and form a matrix with these eigenvectors in the columns) 

We can either form a feature vector with both of the eigenvectors: 

or, we can choose to leave out the smaller, less significant component and only have a single column:

 
 
 
 
 
 

38  

PCA Process – STEP 5 

  • Derive the new data

    FinalData = RowFeatureVector x RowZeroMeanData

    RowFeatureVector is the matrix with the eigenvectors in the columns transposed so that the eigenvectors are now in the rows, with the most significant eigenvector at the top.

    RowZeroMeanData is the mean-adjusted data transposed, i.e., the data items are in each column, with each row holding a separate dimension.

 
 
 
 
 
 

39  

PCA Process – STEP 5 

  • FinalData is the final data set, with data items in columns, and dimensions along rows.
  • What does this give us?

    The original data solely in terms of the vectors we chose.

  • We have changed our data from being in terms of the axes X and Y, to now be in terms of our 2 eigenvectors.
 
 
 
 
 
 

40  

PCA Process – STEP 5 

    FinalData (transpose: dimensions along columns) 

 
 
 
 
 
 

41  

PCA Process – STEP 5

 
 
 
 
 
 

42  

Reconstruction of Original Data 

  • Recall that:

    FinalData = RowFeatureVector x RowZeroMeanData

  • Then:

     RowZeroMeanData = RowFeatureVector^-1 x FinalData

  • And thus:

    RowOriginalData = (RowFeatureVector^-1 x FinalData) + OriginalMean

  • If we use unit eigenvectors, the inverse is the same as the transpose (hence, easier).
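
A self-contained numpy sketch of this reconstruction; because the eigenvectors have unit length, the inverse of the feature-vector matrix is just its transpose. The data here is synthetic.

```python
# Reconstruct an approximation of the original data from the projected data.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(50, 3))
data[:, 2] = data[:, 0] + 0.05 * data[:, 2]       # introduce correlation

mean = data.mean(axis=0)
centered = data - mean
eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
V = eigvecs[:, np.argsort(eigvals)[::-1]][:, :2]  # keep p = 2 of n = 3 components

projected = centered @ V                          # FinalData (data items in rows)
reconstructed = projected @ V.T + mean            # transpose acts as the inverse
print(np.abs(reconstructed - data).max())         # small: only the variation along
                                                  # the discarded component is lost
```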
 
 
 
 
 
 

43  

Reconstruction of Original Data 

  • If we reduce the dimensionality (i.e., p<n), obviously, when reconstructing the data we lose those dimensions we chose to discard.
  • In our example let us assume that we considered only a single eigenvector.
  • The final data is newX only and the reconstruction yields…
 
 
 
 
 
 

44  

Reconstruction of Original Data 

  • The variation along the principal component is preserved.
 
  • The variation along the other component has been lost.
 
 
 
 
 
 


46  

A Word on Factor Analysis 

  • The "reciprocal" of PCA(?)
  • PCA generates new variables (zi) that are linear combinations of the original input variables (xi).
  • FA assumes that there are factors (zi) that, when linearly combined, generate the input variables (xi).
 
 
 
 
 
 

47  

A Word on Linear Discriminant Analysis 

  • Both PCA and FA are unsupervised.
  • LDA seeks to find a dimension such that, when the data is projected onto it, the two classes* are well separated (i.e., the class means are as far apart as possible and the examples within each class are as tightly clustered as possible)
 

*This generalizes naturally to K classes yielding K-1 dimensions
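
A brief scikit-learn illustration of this contrast; the synthetic two-class data and the import are my additions, not part of the slides.

```python
# LDA: a supervised projection that maximizes class separation.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(loc=0.0, size=(50, 4)),
               rng.normal(loc=2.0, size=(50, 4))])   # two classes, 4 dimensions
y = np.array([0] * 50 + [1] * 50)

lda = LinearDiscriminantAnalysis(n_components=1)     # K classes -> at most K-1 dims
Z = lda.fit_transform(X, y)
print(Z.shape)                                       # (100, 1)
```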

 
 
 
 
 
 

48  

References