Linear Algebra for Data Science and Machine Learning

Harsh Mishra - May 31 - Dev Community

What are Matrices?

A matrix is a rectangular array of numbers arranged in rows and columns. It is denoted by a capital letter (e.g., A). An m x n matrix has m rows and n columns. Each element of the matrix is called an entry and is denoted by a_ij, where i is the row number and j is the column number.

Examples:

Square Matrix: A matrix with the same number of rows and columns.

1 2 3
4 5 6
7 8 9

Non-Square Matrix: A matrix with a different number of rows and columns.

1 2 3
4 5 6

Row Matrix (Row Vector)

A row matrix, or row vector, is a matrix with a single row and multiple columns. It is used to represent a vector in row form.

Example:

1 2 3

Column Matrix (Column Vector)

A column matrix, or column vector, is a matrix with a single column and multiple rows. It is used to represent a vector in column form.

Example:

1
2
3

Identity Matrix

An identity matrix is a square matrix with ones on the diagonal and zeros elsewhere. It serves as the multiplicative identity in matrix multiplication, meaning any matrix multiplied by the identity matrix remains unchanged.

Example:

1 0 0
0 1 0
0 0 1
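The identity property can be checked with a short sketch in plain Python (the `matmul` helper is an illustrative name, not from any particular library):

```python
# Multiplying by the identity matrix leaves a compatible matrix unchanged.
I = [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 1]]
A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]

def matmul(X, Y):
    """Multiply two matrices stored as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))]
            for i in range(len(X))]

assert matmul(I, A) == A and matmul(A, I) == A
```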

Addition of Matrices

Matrices can be added together if they have the same dimensions, meaning they have the same number of rows and columns. To add two matrices, simply add the corresponding elements together.

Example:

Consider two matrices:

Matrix A:

1 2
3 4

Matrix B:

5 6
7 8

To add these matrices, add the corresponding elements:

1+5  2+6
3+7  4+8

This results in the sum matrix:

6 8
10 12

Multiplication of Matrix by Scalar

Multiplying a matrix by a scalar involves multiplying every element of the matrix by that scalar value.

Example:

Consider the matrix:

1 2
3 4

To multiply this matrix by the scalar value 2, simply multiply each element by 2:

2*1  2*2
2*3  2*4

This results in the matrix:

2 4
6 8

Subtraction of Matrices

Matrices can be subtracted from each other if they have the same dimensions, meaning they have the same number of rows and columns. To subtract one matrix from another, simply subtract the corresponding elements.

Example:

Consider two matrices:

Matrix X:

10 8
6  4

Matrix Y:

3  2
5  1

To subtract Matrix Y from Matrix X, subtract the corresponding elements:

10-3  8-2
6-5   4-1

This results in the difference matrix:

7 6
1 3
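Addition, subtraction, and scalar multiplication all follow the same elementwise pattern, so they can be sketched together in plain Python (helper names are illustrative), using the matrices from the examples above:

```python
def elementwise(op, A, B):
    """Apply a binary op to corresponding entries of two same-shaped matrices."""
    return [[op(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(k, A):
    """Multiply every entry of A by the scalar k."""
    return [[k * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
X = [[10, 8], [6, 4]]
Y = [[3, 2], [5, 1]]

print(elementwise(lambda a, b: a + b, A, B))  # [[6, 8], [10, 12]]
print(elementwise(lambda a, b: a - b, X, Y))  # [[7, 6], [1, 3]]
print(scale(2, A))                            # [[2, 4], [6, 8]]
```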

Multiplication of Matrices

Matrix multiplication is a binary operation that produces a matrix from two matrices. For the multiplication of two matrices A and B to be defined, the number of columns in A must equal the number of rows in B.

Rules for Matrix Multiplication:

  1. Dimension Compatibility: The number of columns in the first matrix must be equal to the number of rows in the second matrix.

  2. Element Calculation: Each element c_ij of the resulting matrix C is obtained by multiplying the elements of the i-th row of matrix A by the corresponding elements of the j-th column of matrix B, and summing the products.

  3. Order of Multiplication: Matrix multiplication is not commutative, i.e., AB may not be equal to BA.

Example:

Consider two matrices:

Matrix A (2x3):

1 2 3
4 5 6

Matrix B (3x2):

7 8
9 10
11 12

To multiply Matrix A by Matrix B:

  1. Dimension compatibility: The number of columns in Matrix A (3) is equal to the number of rows in Matrix B (3), so multiplication is possible.

  2. Element calculation:

    • For element c_11 of the resulting matrix: c_11 = (1*7) + (2*9) + (3*11) = 7 + 18 + 33 = 58
    • For element c_12 of the resulting matrix: c_12 = (1*8) + (2*10) + (3*12) = 8 + 20 + 36 = 64
    • For element c_21 of the resulting matrix: c_21 = (4*7) + (5*9) + (6*11) = 28 + 45 + 66 = 139
    • For element c_22 of the resulting matrix: c_22 = (4*8) + (5*10) + (6*12) = 32 + 50 + 72 = 154

This results in the product matrix C (2x2):

58  64
139 154

The dimension of the resultant matrix in matrix multiplication is determined by the number of rows of the first matrix and the number of columns of the second matrix.
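The row-times-column rule can be written directly as nested loops; a plain-Python sketch using the matrices from the example:

```python
def matmul(A, B):
    """Matrix product: the number of columns of A must equal the rows of B."""
    assert len(A[0]) == len(B), "dimension mismatch"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2, 3], [4, 5, 6]]        # 2x3
B = [[7, 8], [9, 10], [11, 12]]   # 3x2
print(matmul(A, B))  # [[58, 64], [139, 154]]
```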

Transpose of a Matrix

The transpose of a matrix is an operation that flips a matrix over its diagonal, switching the row and column indices of the matrix. The transpose of a matrix A is often denoted as A^T or A'.

Definition:

For a matrix A with dimensions m x n, the transpose A^T will have dimensions n x m. Each element a_ij in A becomes element a_ji in A^T.

Example:

Consider the matrix A:

1 2 3
4 5 6

The transpose of matrix A, denoted as A^T, is:

1 4
2 5
3 6

Here, the rows and columns of matrix A are swapped to get the transpose.
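The transpose operation can be sketched in one line of plain Python:

```python
def transpose(A):
    """Swap rows and columns: entry (i, j) moves to position (j, i)."""
    return [list(row) for row in zip(*A)]

A = [[1, 2, 3], [4, 5, 6]]
print(transpose(A))  # [[1, 4], [2, 5], [3, 6]]
```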

Symmetric Matrix

A symmetric matrix is a square matrix that is equal to its transpose. In other words, a matrix A is symmetric if A = A^T. This means that the element at the i-th row and j-th column is the same as the element at the j-th row and i-th column for all i and j.

Definition:

A matrix A is symmetric if a_ij = a_ji for all i and j.

Example:

Consider the matrix A:

1 2 3
2 4 5
3 5 6

To verify if this matrix is symmetric, we calculate its transpose:

Transpose of matrix A:

1 2 3
2 4 5
3 5 6

Since the original matrix A is equal to its transpose, it is a symmetric matrix.
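The symmetry condition a_ij = a_ji translates directly into a check in plain Python (the helper name is illustrative):

```python
def is_symmetric(A):
    """A square matrix is symmetric when a_ij == a_ji for all i, j."""
    n = len(A)
    return all(A[i][j] == A[j][i] for i in range(n) for j in range(n))

A = [[1, 2, 3], [2, 4, 5], [3, 5, 6]]
print(is_symmetric(A))  # True
```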

Skew-Symmetric Matrix

A skew-symmetric matrix is a square matrix that is equal to the negative of its transpose. In other words, a matrix A is skew-symmetric if A = -A^T. This means that the element at the i-th row and j-th column is the negative of the element at the j-th row and i-th column for all i and j. Additionally, all the diagonal elements of a skew-symmetric matrix are zero.

Definition:

A matrix A is skew-symmetric if a_ij = -a_ji for all i and j, and a_ii = 0 for all i.

Example:

Consider the matrix A:

 0  2 -3
-2  0  4
 3 -4  0

To verify if this matrix is skew-symmetric, we calculate its transpose and compare it with the negative of the original matrix:

Transpose of matrix A:

 0 -2  3
 2  0 -4
-3  4  0

Negative of the original matrix A:

 0 -2  3
 2  0 -4
-3  4  0

Since the transpose of matrix A is equal to the negative of the original matrix A, it is a skew-symmetric matrix.
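The condition a_ij = -a_ji (which forces zeros on the diagonal, since a_ii = -a_ii) can be checked the same way:

```python
def is_skew_symmetric(A):
    """a_ij == -a_ji for all i, j; the diagonal is forced to zero."""
    n = len(A)
    return all(A[i][j] == -A[j][i] for i in range(n) for j in range(n))

A = [[0, 2, -3], [-2, 0, 4], [3, -4, 0]]
print(is_skew_symmetric(A))  # True
```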

Inverse of a Matrix

The inverse of a matrix is a matrix that, when multiplied by the original matrix, yields the identity matrix.

Definition:

A square matrix A has an inverse A^-1 if:

A * A^-1 = A^-1 * A = I

where I is the identity matrix.

Conditions for the Inverse to Exist:

  1. Square Matrix: The matrix must be square (same number of rows and columns).
  2. Non-Singular Matrix: The matrix must have a non-zero determinant.

Example:

Consider the matrix A:

A = [4  7]
    [2  6]

The inverse of A is:

A^-1 = [ 3/5  -7/10]
       [-1/5   2/5 ]
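For a 2x2 matrix [[a, b], [c, d]] the inverse has the closed form (1/det) * [[d, -b], [-c, a]] with det = ad - bc; a sketch using exact fractions (the helper name is illustrative):

```python
from fractions import Fraction

def inverse_2x2(A):
    """Closed-form inverse of a 2x2 matrix; raises if the matrix is singular."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular matrix has no inverse")
    f = Fraction(1, det)
    return [[f * d, -f * b],
            [-f * c, f * a]]

A = [[4, 7], [2, 6]]
inv = inverse_2x2(A)  # [[3/5, -7/10], [-1/5, 2/5]] as Fractions
```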

Determinant

The determinant is a scalar value that can be computed from a square matrix. It provides important properties about the matrix, such as whether the matrix is invertible, and it is used in various areas of mathematics including solving systems of linear equations, finding eigenvalues, and more.

Determinant of a Square Matrix of Order 1

For a 1x1 matrix A:

A = [a]

The determinant of A, denoted as det(A), is simply the value of the single element:

det(A) = a

Determinant of a Square Matrix of Order 2

For a 2x2 matrix A:

A = [a  b]
    [c  d]

The determinant of A, denoted as det(A), is calculated as:

det(A) = ad - bc

Example:

A = [1  2]
    [3  4]
det(A) = (1 * 4) - (2 * 3) = 4 - 6 = -2

Determinant of a Square Matrix of Order 3

For a 3x3 matrix A:

A = [a  b  c]
    [d  e  f]
    [g  h  i]

The determinant of A, denoted as det(A), is calculated using the following formula:

det(A) = a(ei - fh) - b(di - fg) + c(dh - eg)

Example:

A = [1  2  3]
    [4  5  6]
    [7  8  9]
det(A) = 1(5*9 - 6*8) - 2(4*9 - 6*7) + 3(4*8 - 5*7)
       = 1(45 - 48) - 2(36 - 42) + 3(32 - 35)
       = 1(-3) - 2(-6) + 3(-3)
       = -3 + 12 - 9
       = 0

In this case, the determinant of matrix A is 0.
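The 3x3 formula (cofactor expansion along the first row) translates directly to code; a plain-Python sketch checked against the worked example:

```python
def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

print(det3([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # 0
```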

Singular Matrix

A singular matrix is a square matrix that does not have an inverse. This happens if and only if the determinant of the matrix is zero. Singular matrices are important in linear algebra because they indicate systems of linear equations that do not have a unique solution.

Definition:

A square matrix A is singular if det(A) = 0.

Example:

Consider the 2x2 matrix A:

A = [2  4]
    [1  2]

To determine if A is singular, calculate its determinant:

det(A) = (2 * 2) - (4 * 1) = 4 - 4 = 0

Since the determinant of A is 0, matrix A is singular.

Properties of Singular Matrices:

  1. No Inverse: Singular matrices do not have an inverse.
  2. Linearly Dependent Rows/Columns: The rows or columns of a singular matrix are linearly dependent, meaning one row or column can be expressed as a linear combination of the others.
  3. Non-unique Solutions: Systems of linear equations represented by singular matrices do not have a unique solution; they may have no solution or infinitely many solutions.

Example of a Non-Singular Matrix for Comparison:

Consider the 2x2 matrix B:

B = [1  2]
    [3  4]

Calculate its determinant:

det(B) = (1 * 4) - (2 * 3) = 4 - 6 = -2

Since the determinant of B is not 0, matrix B is not singular (it is invertible).

Properties of Determinants

The determinant is a scalar value associated with a square matrix that encapsulates important properties of the matrix. Here are key properties of determinants:

  1. Equal to Its Transpose: The determinant of a matrix is equal to the determinant of its transpose.

  2. Row Interchange: Interchanging two rows of a matrix changes the sign of the determinant.

  3. Identical Rows: If a matrix has two identical rows, its determinant is zero.

  4. Scalar Multiplication: If a matrix B is obtained by multiplying every element of a row (or column) of a matrix A by a scalar k, then the determinant of B is k times the determinant of A.

  5. Zero Row: If every element of a row (or column) of a matrix is zero, its determinant is zero.

Vectors

Vectors are mathematical entities that represent quantities with both magnitude and direction. They are commonly denoted by symbols with an arrow above them or bold letters. Vectors are extensively used in various fields such as physics, engineering, and computer science to describe quantities like displacement, velocity, force, etc.

What is a Vector?

A vector consists of components that indicate the magnitude of the vector along different axes or directions. For instance, in a two-dimensional space, a vector can be represented as [x, y], where x represents the horizontal component and y represents the vertical component.

Components of a Vector

The components of a vector represent its magnitude along different directions. Each component indicates the length of the vector along a specific axis. For example, in a three-dimensional space, a vector might have components [x, y, z], representing its magnitude along the x, y, and z axes respectively.

Addition of Vectors

To add two vectors, simply add their corresponding components. For example:

v1 = [2, 3]
v2 = [1, -1]

v_sum = [2+1, 3-1] = [3, 2]

Subtraction of Vectors

To subtract one vector from another, simply subtract their corresponding components. For example:

v1 = [2, 3]
v2 = [1, -1]

v_diff = [2-1, 3-(-1)] = [1, 4]

Multiplication by Scalar

Multiplying a vector by a scalar involves multiplying each component of the vector by the scalar. For example:

v = [2, 3]
scalar = 2

v_scaled = [2*2, 3*2] = [4, 6]

Scalar or Dot Product

The scalar product, also known as the dot product, of two vectors yields a scalar quantity. It's calculated by multiplying corresponding components of the vectors and then summing the results. For example:

v1 = [2, 3]
v2 = [1, -1]

v_dot = (2*1) + (3*(-1)) = 2 - 3 = -1

The dot product can also be expressed in terms of the magnitudes of the vectors and the cosine of the angle between them:

v_dot = |a| * |b| * cos(θ)

Where |a| and |b| are the magnitudes of vectors a and b respectively, and θ is the angle between them.
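The component-wise definition of the dot product is one line of plain Python:

```python
def dot(u, v):
    """Sum of products of corresponding components of two vectors."""
    assert len(u) == len(v), "vectors must have the same length"
    return sum(a * b for a, b in zip(u, v))

print(dot([2, 3], [1, -1]))  # -1
```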

Vector or Cross Product

The vector product, also known as the cross product, of two three-dimensional vectors yields another vector that is perpendicular to the plane containing the original vectors. For two vectors written component-wise:

v1 = [a1, a2, a3]
v2 = [b1, b2, b3]

v_cross = [a2*b3 - a3*b2,  a3*b1 - a1*b3,  a1*b2 - a2*b1]

The cross product can be expressed in terms of the magnitudes of the vectors and the sine of the angle between them:

|v_cross| = |a| * |b| * sin(θ)

Where |a| and |b| are the magnitudes of vectors a and b respectively, and θ is the angle between them.
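A plain-Python sketch of the 3D cross product, checked on the standard basis (i × j = k):

```python
def cross(u, v):
    """Cross product of two 3D vectors, perpendicular to both inputs."""
    u1, u2, u3 = u
    v1, v2, v3 = v
    return [u2 * v3 - u3 * v2,
            u3 * v1 - u1 * v3,
            u1 * v2 - u2 * v1]

print(cross([1, 0, 0], [0, 1, 0]))  # [0, 0, 1]
```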

Linear Equations

A linear equation is an algebraic equation in which each term is either a constant or the product of a constant and a variable raised to the first power. These equations represent straight lines when graphed on a coordinate plane and have the general form:

ax + by + c = 0

Here, a, b, and c are constants, and x and y are variables.

Linear equations can have one or more variables, and they express relationships that are linear in nature. They are fundamental in mathematics and are used to model various real-world situations, such as finance, physics, engineering, and economics.

Linear equations can be solved using various methods, including substitution, elimination, matrices, and graphing. They form the basis of linear algebra and are essential in many areas of mathematics and science.

Solution of System of Linear Equations

Solving a system of linear equations involves finding the values of variables that satisfy all the equations simultaneously. There are several methods to solve such systems:

  1. Graphical Method:
    Graphing each equation on the coordinate plane and finding the point(s) of intersection.

  2. Substitution Method:
    Solving one equation for one variable, substituting the result into the remaining equations, and repeating until all variables are determined.

  3. Elimination Method (or Addition Method):
    Adding or subtracting multiples of equations to eliminate one variable at a time.

  4. Matrix Method (or Gaussian Elimination):
    Representing the system of equations in matrix form and performing row operations to transform the augmented matrix into row-echelon form, followed by back substitution.

Rank of a Matrix

The rank of a matrix is a fundamental concept in linear algebra that represents the maximum number of linearly independent rows or columns in the matrix. In other words, it measures the dimension of the vector space spanned by the rows or columns of the matrix.

Calculation of Rank:

  1. Row Echelon Form (REF) / Reduced Row Echelon Form (RREF):
    One common method to find the rank of a matrix is to convert it into either row echelon form (REF) or reduced row echelon form (RREF). The number of non-zero rows in the resulting form is the rank of the matrix.

  2. Using Determinants:
    The rank can also be determined from determinants of submatrices: if some k x k submatrix has a non-zero determinant and every (k+1) x (k+1) submatrix has a zero determinant, then the rank of the matrix is k.

Example:

Consider the matrix A:

1  2  3
4  5  6
7  8  9

Converting it to REF:

1  2  3
0 -3 -6
0  0  0

The number of non-zero rows is 2, so the rank of matrix A is 2.
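The REF-based rank computation can be sketched in plain Python, using exact fractions so the elimination introduces no rounding error (the helper name is illustrative):

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix via Gaussian elimination with exact arithmetic."""
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    r = 0  # index of the next pivot row
    for c in range(cols):
        # find a row at or below r with a nonzero entry in column c
        pivot = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        # eliminate the entries below the pivot
        for i in range(r + 1, rows):
            factor = A[i][c] / A[r][c]
            A[i] = [x - factor * y for x, y in zip(A[i], A[r])]
        r += 1
    return r  # number of nonzero rows in the echelon form

print(rank([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # 2
```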

Conclusion:

The rank of a matrix provides valuable insights into its properties and structure. It is used in various mathematical and computational applications, including solving systems of linear equations, finding eigenvalues, and performing matrix factorization. Understanding how to compute the rank of a matrix is essential for analyzing and manipulating matrices in linear algebra.

Elementary Operations of a Matrix

Elementary operations of a matrix refer to fundamental operations that can be performed on its rows or columns. These operations are essential in various matrix manipulations, such as solving systems of linear equations, finding determinants, and performing matrix factorization. The elementary operations include:

  1. Row Interchange (Ri <-> Rj):
    Swap two rows of the matrix.

  2. Row Scaling (Ri -> k * Ri):
    Multiply all elements of a row by a non-zero scalar.

  3. Row Replacement (Ri -> Ri + k * Rj):
    Add a scalar multiple of one row to another row, replacing the first row with the result.

These elementary operations are used to transform a matrix into a desired form, such as row echelon form or reduced row echelon form, which simplifies various matrix computations and analyses.

Row Echelon Form and Reduced Row Echelon Form

Row echelon form (REF) and reduced row echelon form (RREF) are standard forms used to simplify matrices, particularly in the context of solving systems of linear equations and computing the rank of a matrix.

To achieve these forms, we employ elementary matrix operations, including row replacement, row scaling, and row addition. These operations allow us to systematically transform a matrix into a more structured form that reveals valuable information about its properties and solutions.

Row Echelon Form (REF):

In REF, the matrix is transformed such that:

  1. All rows consisting entirely of zeros are at the bottom.
  2. The leading entry (pivot) of each nonzero row is to the right of the leading entry of the row above it.
  3. All entries in a column below a leading entry are zero.

Reduced Row Echelon Form (RREF):

RREF further refines the structure of the matrix:

  1. The leading entry of each nonzero row is 1.
  2. The leading entry of each nonzero row is the only nonzero entry in its column.

Example:

Consider the following matrix A, which is already in row echelon form:

1  2  3
0  4  5
0  0  6

To reach reduced row echelon form, scale each row so that its pivot is 1, then use row replacement to clear the entries above each pivot; for this matrix the result is the 3x3 identity. These forms reveal the structure of A and facilitate computations such as solving systems of linear equations and computing the rank of A.

Linear Transformation in Matrices

A linear transformation is a mapping between two vector spaces that preserves the operations of vector addition and scalar multiplication. In the context of matrices, a linear transformation can be represented by a matrix that, when multiplied by a vector, transforms that vector in a linear manner.

Definition:

A linear transformation T from a vector space V to a vector space W can be represented as:

T(v) = A * v

where v is a vector in V, and A is a matrix that defines the transformation.

Properties of Linear Transformations:

  1. Additivity: T(u + v) = T(u) + T(v)
  2. Homogeneity: T(c * v) = c * T(v)
    • Where u and v are vectors, and c is a scalar.

Example:

Consider a 2x2 matrix A representing a linear transformation:

A = [2  0]
    [1  3]

If v is a vector:

v = [x]
    [y]

The linear transformation T(v) = A * v is:

T(v) = [2  0] * [x] = [2x]
       [1  3]   [y]   [x + 3y]
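Applying the transformation is just a matrix-vector product; a plain-Python sketch using the matrix from the example:

```python
def transform(A, v):
    """Apply the linear map v -> A * v (matrix-vector product)."""
    return [sum(a, ) if False else sum(a * x for a, x in zip(row, v)) for row in A]

A = [[2, 0], [1, 3]]
print(transform(A, [1, 2]))  # [2, 7], i.e. [2x, x + 3y] at x=1, y=2
```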

Conclusion:

Linear transformations are fundamental in understanding how vectors change under various operations. Matrices provide a convenient way to represent and compute these transformations, preserving vector addition and scalar multiplication.

Eigenvalues and Eigenvectors

Eigenvalues and eigenvectors are fundamental concepts in linear algebra that are used in various fields such as physics, engineering, and computer science.

Definition:

For a given square matrix A, an eigenvector is a non-zero vector v that, when multiplied by A, yields a scalar multiple of itself. This can be expressed as:

A * v = λ * v

where:

  • A is the square matrix.
  • v is the eigenvector.
  • λ (lambda) is the eigenvalue corresponding to the eigenvector v.

Characteristics:

  1. Eigenvalues (λ): Scalars that indicate how much the eigenvector is stretched or shrunk during the transformation.
  2. Eigenvectors (v): Vectors that remain in the same direction after the transformation by the matrix A.

Finding Eigenvalues and Eigenvectors:

  1. Eigenvalues: Solve the characteristic equation:

det(A - λI) = 0

    • I is the identity matrix of the same dimension as A.
    • det denotes the determinant of the matrix.

  2. Eigenvectors: Once the eigenvalues are found, solve the equation:

(A - λI) * v = 0

    • This system of equations yields the eigenvectors corresponding to each eigenvalue.

Example:

Consider the matrix A:

A = [4  1]
    [2  3]
  1. Find the eigenvalues by solving:

det(A - λI) = 0

det([4-λ  1  ]
    [2    3-λ]) = 0

(4-λ)(3-λ) - (2*1) = 0
λ^2 - 7λ + 10 = 0
λ = 5, 2

  2. For λ = 5, solve (A - 5I)v = 0:

[-1  1] [x]   [0]
[ 2 -2] [y] = [0]

Solving, we get eigenvector v = [1, 1]^T

  3. For λ = 2, solve (A - 2I)v = 0:

[2  1] [x]   [0]
[2  1] [y] = [0]

Solving, we get eigenvector v = [-1, 2]^T
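For a 2x2 matrix the characteristic equation is the quadratic λ² − trace(A)·λ + det(A) = 0, so the eigenvalues can be computed directly; a sketch assuming real eigenvalues (the helper name is illustrative):

```python
import math

def eigenvalues_2x2(A):
    """Eigenvalues of a 2x2 matrix from its characteristic polynomial
    λ^2 - trace(A)*λ + det(A) = 0 (assumes real eigenvalues)."""
    (a, b), (c, d) = A
    tr = a + d
    det = a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)  # raises ValueError if complex
    return (tr + disc) / 2, (tr - disc) / 2

print(eigenvalues_2x2([[4, 1], [2, 3]]))  # (5.0, 2.0)
```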

Conclusion:

Eigenvalues and eigenvectors provide insight into the properties of a matrix, such as its stability, oscillations, and directions of stretching or shrinking. They are essential tools in solving systems of differential equations, performing principal component analysis, and more.
