
Understanding Matrix Inversion Basics

The document provides a comprehensive overview of matrix inversion, including definitions of determinants, nonsingularity, and the rank of a matrix. It explains the importance of these concepts in solving systems of equations, detailing methods such as Laplace expansion and Cramer's rule for evaluating determinants and finding inverse matrices. Examples throughout illustrate the application of these principles in linear algebra.

Matrix Inversion

Introduction to Matrix Inversion


• Definition of determinants and nonsingularity
• Importance of matrix inversion in solving
systems of equations
Determinants
• Determinant: A single number derived from a
square matrix.
• The determinant of a 2x2 matrix is called the
second-order determinant
• For a 2x2 matrix A = [a11 a12; a21 a22], the
determinant is |A| = a11a22 − a12a21
• A determinant of zero indicates a singular matrix


• Singular matrix is one in which there exists linear
dependence between at least two rows or columns
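The second-order determinant and the singularity test above can be sketched in a few lines of Python; the matrices A and B here are illustrative values of my own, not the slides' examples:

```python
def det2(m):
    """Second-order determinant of a 2x2 matrix [[a, b], [c, d]]: ad - bc."""
    (a, b), (c, d) = m
    return a * d - b * c

A = [[3, 2], [1, 4]]   # rows are linearly independent
B = [[2, 4], [3, 6]]   # row 2 = 1.5 * row 1: linear dependence

print(det2(A))  # 10 -> nonzero, so A is nonsingular
print(det2(B))  # 0  -> zero, so B is singular
```

A zero result flags exactly the linear dependence described above: in B, the second row is a multiple of the first.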
Nonsingularity
• Nonsingularity: A nonsingular matrix is one in
which all its rows and columns are linearly
independent, which is the case when |A| ≠ 0
• If linear dependence exists in a system of
equations, the system as a whole will have an
infinite number of possible solutions, making a
unique solution impossible
• Unique solutions to systems of equations
require nonsingular matrices
Rank of a Matrix
• The rank of a matrix: The maximum number of
linearly independent rows or columns in the
matrix
• The rank of a matrix also allows for a simple test
of linear dependence
• Assuming a square matrix of order n,
– If ρ(A) = n, then |A| ≠ 0, A is nonsingular, and there is no
linear dependence between any of its rows or columns
– If ρ(A) < n, then |A| = 0, A is singular, and there is linear
dependence between its rows or columns
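The rank test can be checked numerically with NumPy's matrix_rank; the example matrices are my own:

```python
import numpy as np

A = np.array([[4, 3], [2, 5]])   # |A| = 14, nonzero
B = np.array([[2, 4], [3, 6]])   # row 2 = 1.5 * row 1

# rank equal to the order n means nonsingular;
# rank below n signals linear dependence
print(np.linalg.matrix_rank(A))  # 2 -> nonsingular
print(np.linalg.matrix_rank(B))  # 1 -> singular
```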
Example 1 – Determinants and
Nonsingularity

• Since |A| ≠ 0, A is nonsingular. The rank of A is 2, ρ(A) = 2

• Since |B| = 0, B is singular. Closer inspection reveals
that row 2 and column 2 are equal to 1.5
times row 1 and column 1, respectively. Hence ρ(B) = 1
Third-order Determinants
• The determinant of a 3x3 matrix is called a third-order
determinant, and is calculated by summing three products:
1. Take the first element of the first row, a11, and mentally delete the
row and column in which it appears. Then multiply a11 by the
determinant of the remaining elements.
2. Take the second element of the first row, a12, and mentally delete
the row and column in which it appears. Then multiply a12 by −1
times the determinant of the remaining elements.
3. Take the third element of the first row, a13, and mentally delete
the row and column in which it appears. Then multiply a13 by the
determinant of the remaining elements.
Third-order Determinants

• |A| = a11(a22a33 − a23a32) − a12(a21a33 − a23a31) + a13(a21a32 − a22a31)
• Determinants of 4x4, 5x5 matrices follow


similar principles: determinant of a 4x4 matrix
is the sum of four products; determinant of a
5x5 matrix is the sum of five products
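The three-product procedure above can be written directly as a first-row expansion; the 3x3 matrix used here is an illustrative one of my own:

```python
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def det3(m):
    """Third-order determinant by expansion along the first row:
    a11*|M11| - a12*|M12| + a13*|M13|, with Mij the 2x2 minor."""
    total = 0
    for j in range(3):
        # delete row 1 and column j+1 to form the minor
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det2(minor)
    return total

A = [[2, 3, 1], [4, 1, 2], [5, 3, 4]]
print(det3(A))  # -15
```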
Example 2 – Third-order Determinants

• Since |A| ≠ 0, A is nonsingular and ρ(A) = 3
Minors and Cofactors
• The elements of a matrix remaining after the
deletion process earlier form a subdeterminant
of the matrix called a minor. Thus, a minor |Mij| is
the determinant of the submatrix formed by
deleting the ith row and jth column of the matrix

• where |M11| is the minor of a11, |M12| the minor
of a12, and |M13| the minor of a13
Minors and Cofactors
• Thus, the determinant can be written as
|A| = a11|M11| − a12|M12| + a13|M13|

• A cofactor |Cij| is a minor with a prescribed sign.
The rule for the sign of a cofactor is
|Cij| = (−1)^(i+j) |Mij|

• Thus if the sum of the subscripts i + j is an even
number, |Cij| = |Mij|, since −1 raised to an even power is
positive. If i + j is equal to an odd number, |Cij| = −|Mij|, since
−1 raised to an odd power is negative
Example 3 – Minors and Cofactors
• The cofactors (1) |C11|, (2) |C12|, and (3) |C13| for the matrix
seen before are found as follows
• |C11| = |M11|, since 1 + 1 = 2 and −1 raised to an even power is positive
• |C12| = −|M12|, since 1 + 2 = 3 and −1 raised to an odd power is negative
• |C13| = |M13|, since 1 + 3 = 4 and −1 raised to an even power is positive
Laplace Expansion and Higher-order
Determinants
• Laplace expansion is a method for evaluating determinants in terms
of cofactors. It thus simplifies matters by permitting higher-order
determinants to be established in terms of lower-order determinants.
Laplace expansion of a third-order determinant can be expressed as

• |A| = a11|C11| + a12|C12| + a13|C13|

• where |Cij| is a cofactor based on a second-order determinant. a12 is not
explicitly multiplied by −1, since by the rule of cofactors |M12| will
automatically be multiplied by −1
• Laplace expansion permits evaluation of a determinant along any row
or column. Selection of a row or column with more zeros than others
simplifies evaluation of the determinant by eliminating terms. Laplace
expansion also serves as the basis for evaluating determinants of
orders higher than three
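Applied recursively, Laplace expansion evaluates determinants of any order. This sketch expands along the first row and checks itself on a triangular 4x4 matrix of my own, whose determinant is the product of its diagonal elements:

```python
def det(m):
    """Determinant by Laplace expansion along the first row,
    applied recursively until the minors are 1x1."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

# triangular matrix: determinant = product of the diagonal
T = [[2, 1, 3, 4],
     [0, 3, 5, 1],
     [0, 0, 4, 2],
     [0, 0, 0, 5]]
print(det(T))  # 2 * 3 * 4 * 5 = 120
```

In practice one would expand along the row or column with the most zeros, as the slide notes, to skip terms; this sketch always uses the first row for simplicity.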
Example 4 – Laplace Expansion

• The determinant is found by Laplace expansion
along the third column

• Since a13 = 0 and a33 = 0, the expansion reduces to |A| = a23|C23|
• Deleting row 2 and column 3 to find |C23|,

• Then substituting in |A| = a23|C23|, where |A| ≠ 0. So A is nonsingular
and ρ(A) = 3
Example 5 – Laplace Expansion
• Laplace expansion for a fourth-order
determinant is

• |A| = a11|C11| + a12|C12| + a13|C13| + a14|C14|

• where the cofactors are third-order
subdeterminants which in turn can be reduced
to second-order subdeterminants, as above
• Fifth-order determinants and higher are
treated in similar fashion
Properties of a Determinant
• The following seven properties of determinants provide the ways in which a
matrix can be manipulated to simplify its elements or reduce part of them to
zero, before evaluating the determinant:
1. Adding or subtracting any nonzero multiple of one row (or column) from another
row (or column) will have no effect on the determinant
2. Interchanging any two rows or columns of a matrix will change the sign, but not
the absolute value, of the determinant
3. Multiplying the elements of any row or column by a constant will cause the
determinant to be multiplied by the constant
4. The determinant of a triangular matrix, i.e., a matrix with zero elements
everywhere above or below the principal diagonal, is equal to the product of the
elements on the principal diagonal
5. The determinant of a matrix equals the determinant of its transpose: |A| = |Aᵀ|
6. If all the elements of any row or column are zero, the determinant is zero
7. If two rows or columns are identical or proportional, i.e., linearly dependent, the
determinant is zero
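A few of these properties can be verified numerically; NumPy and the sample matrices here are my own choices for illustration:

```python
import numpy as np

A = np.array([[1.0, 2, 3], [4, 5, 6], [7, 8, 10]])
d = np.linalg.det(A)

# Property 1: adding a multiple of one row to another leaves |A| unchanged
B = A.copy()
B[1] += 2 * B[0]
print(np.isclose(np.linalg.det(B), d))    # True

# Property 2: interchanging two rows changes the sign of |A|
C = A[[1, 0, 2]]
print(np.isclose(np.linalg.det(C), -d))   # True

# Property 5: |A| equals the determinant of the transpose
print(np.isclose(np.linalg.det(A.T), d))  # True

# Property 7: proportional rows force a zero determinant
D = np.array([[1.0, 2], [1.5, 3]])
print(np.isclose(np.linalg.det(D), 0))    # True
```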
Cofactor and Adjoint Matrices
• A cofactor matrix is a matrix in which every
element is replaced with its cofactor
• An adjoint matrix is the transpose of a
cofactor matrix
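For a 3x3 matrix, the cofactor and adjoint matrices can be built mechanically from the minors; the sample matrix is my own:

```python
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def cofactor_matrix(m):
    """Replace each element of a 3x3 matrix with its cofactor
    C_ij = (-1)^(i+j) * |M_ij|, where M_ij is the 2x2 minor."""
    n = len(m)
    C = []
    for i in range(n):
        row = []
        for j in range(n):
            minor = [r[:j] + r[j + 1:] for k, r in enumerate(m) if k != i]
            row.append((-1) ** (i + j) * det2(minor))
        C.append(row)
    return C

def adjoint(m):
    """The adjoint matrix is the transpose of the cofactor matrix."""
    C = cofactor_matrix(m)
    return [list(col) for col in zip(*C)]

A = [[2, 3, 1], [4, 1, 2], [5, 3, 4]]
print(adjoint(A))
```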
Example 6 – Cofactor and Adjoint Matrices

• The cofactor matrix C and the adjoint matrix


Adj A are found below, given

• Replacing the elements with their cofactors


according to the laws of cofactors

• The adjoint matrix Adj A is the transpose of C


Inverse Matrices
• An inverse matrix A⁻¹, which can be found only
for a square, nonsingular matrix A, is a unique
matrix satisfying the relationship AA⁻¹ = A⁻¹A = I

• Multiplying a matrix by its inverse reduces it
to an identity matrix. Thus, the inverse matrix
in linear algebra performs much the same
function as the reciprocal in ordinary algebra.
The formula for deriving the inverse is

• A⁻¹ = (1/|A|) Adj A
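The formula A⁻¹ = (1/|A|) Adj A translates directly into code. This sketch leans on NumPy only for the determinants of the minors, and the 2x2 example matrix is my own:

```python
import numpy as np

def inverse(A):
    """A^-1 = (1/|A|) * Adj A, for a square nonsingular A."""
    A = np.asarray(A, dtype=float)
    d = np.linalg.det(A)
    if np.isclose(d, 0):
        raise ValueError("matrix is singular; no inverse exists")
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # minor: delete row i and column j
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T / d   # adjoint (transpose of cofactors) over |A|

A = np.array([[4.0, 7], [2, 6]])
Ainv = inverse(A)
print(np.allclose(A @ Ainv, np.eye(2)))  # True: product is the identity
```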
Example 7 – Inverse Matrices
• Finding the inverse of

1. Check that it is a square matrix, since
only square matrices can have inverses
2. Evaluate the determinant to be sure |A| ≠ 0, since
only nonsingular matrices can have inverses

• Matrix A is nonsingular, since |A| ≠ 0
Example 7 – Inverse Matrices (continued)

3. Find the cofactor matrix of

• Then transpose the cofactor matrix to get the


adjoint matrix
Example 7 – Inverse Matrices (continued)

4. Multiply the adjoint matrix by 1/|A| to get A⁻¹

5. To check your answer, multiply AA⁻¹ or A⁻¹A. Both
products will equal the identity matrix I if the answer is correct
Solving Linear Equations with the Inverse

• An inverse matrix can be used to solve matrix
equations. If AX = B,
• and the inverse A⁻¹ exists, multiplication of both sides
of the equation by A⁻¹, following the laws of
conformability, gives
• A⁻¹AX = A⁻¹B. Thus,
• IX = A⁻¹B. Therefore,
• X = A⁻¹B
• The solution of the equation is given by the product
of the inverse of the coefficient matrix and the
column vector of constants B
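The derivation X = A⁻¹B can be exercised on a small system of my own devising (2x1 + x2 = 5, x1 + 3x2 = 10):

```python
import numpy as np

# A X = B  =>  X = A^-1 B
A = np.array([[2.0, 1], [1, 3]])   # coefficient matrix
B = np.array([5.0, 10])            # column vector of constants

X = np.linalg.inv(A) @ B
print(X)  # x1 = 1, x2 = 3
```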
Example 8 – Solving Linear Equations with
the Inverse
• Matrix equations and the inverse are used
below to solve for the unknown vector X, given

• First, express the system of equations in


matrix form,
Example 8 – Solving Linear Equations with
the Inverse (continued)

• Substituting from the previous example and


multiplying,

• Thus,
Cramer’s Rule for Matrix Solutions
• Cramer’s rule provides a simplified method of
solving a system of linear equations through the use
of determinants. Cramer’s rule states:

• xi = |Ai| / |A|

• xi is the ith unknown variable in a series of equations
• |A| is the determinant of the coefficient matrix
• |Ai| is the determinant of a special matrix formed from
the original coefficient matrix by replacing the
column of coefficients of xi with the column vector of
constants
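Cramer’s rule is a short loop over the columns of the coefficient matrix; the 2x2 system here is an illustrative one of my own:

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's rule: x_i = |A_i| / |A|, where A_i is A
    with column i replaced by the constant vector b."""
    A = np.asarray(A, dtype=float)
    d = np.linalg.det(A)
    if np.isclose(d, 0):
        raise ValueError("coefficient matrix is singular")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b       # swap in the constants for column i
        x[i] = np.linalg.det(Ai) / d
    return x

A = [[2, 1], [1, 3]]
b = [5, 10]
print(cramer(A, b))  # x1 = 1, x2 = 3
```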
Example 9 – Cramer’s Rule
• Cramer’s rule is used to solve the system of
equations

1. Express the equations in matrix form:

2. Find the determinant of A:


Example 9 – Cramer’s Rule (continued)

3. Then to solve for x1, replace column 1, the
coefficients of x1, with the vector of constants B,
forming a new matrix A1:

Find the determinant of A1:

Use the formula for Cramer’s rule: x1 = |A1| / |A|


Example 9 – Cramer’s Rule (continued)

4. To solve for x2, replace column 2, the
coefficients of x2, from the original matrix, with
the column vector of constants B, forming a new
matrix A2

Take the determinant of A2:

Use the formula: x2 = |A2| / |A|
