Chapters 4 and 5: Linear Models and Matrix Algebra

1 Matrix Algebra
1. Give us a shorthand way of writing a large system of equations.
2. Allows us to test for the existence of solutions to simultaneous systems.
3. Allows us to solve a simultaneous system.
Drawback: Matrix algebra only works for linear systems. However, we can often convert non-linear systems to linear ones.
Example:

    y = a x^b    (taking logs)  ⇒  ln y = ln a + b ln x
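A quick numerical sketch of this log transformation (numpy assumed; the values of a and b are made up for illustration):

```python
import numpy as np

# Hypothetical parameter values, chosen only to illustrate the transformation.
a, b = 2.0, 3.0
x = np.array([1.0, 2.0, 5.0])
y = a * x**b                     # the non-linear form y = a x^b

# After taking logs, the relation is linear in ln x:
lhs = np.log(y)
rhs = np.log(a) + b * np.log(x)
print(np.allclose(lhs, rhs))     # True
```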

1.1 Matrices and Vectors


Given

y = 10 − x ⇒ x + y = 10
y = 2 + 3x ⇒ −3x + y = 2

In matrix form

    [  1  1 ] [ x ] = [ 10 ]
    [ −3  1 ] [ y ]   [  2 ]

with the matrix of coefficients multiplying the vector of unknowns to give the vector of constants.
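The same two-equation system can be handed to a linear solver; a minimal sketch, assuming numpy is available:

```python
import numpy as np

A = np.array([[1.0, 1.0],      # matrix of coefficients
              [-3.0, 1.0]])
d = np.array([10.0, 2.0])      # vector of constants

xy = np.linalg.solve(A, d)     # vector of unknowns (x, y)
print(xy)                      # [2. 8.]
```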

In general

    a11 x1 + a12 x2 + . . . + a1n xn = d1
    a21 x1 + a22 x2 + . . . + a2n xn = d2
    . . . . . . . . . . . . . . . . . . .
    am1 x1 + am2 x2 + . . . + amn xn = dm

where there are n unknowns (x1 , x2 , . . . , xn )


Matrix form

    [ a11 a12 . . . a1n ] [ x1 ] = [ d1 ]
    [ a21 a22 . . . a2n ] [ x2 ]   [ d2 ]
    [  .   .         .  ] [  . ]   [  . ]
    [ am1 am2 . . . amn ] [ xn ]   [ dm ]

A matrix is defined as a rectangular array of numbers, parameters, or variables. The members
of the array, referred to as the elements of the matrix, are usually enclosed in brackets.
Matrix shorthand
Ax = d

where

A = coefficient matrix or an array


x = vector of unknowns or an array
d = vector of constants or an array

Subscript notation
aij

is the coefficient found in the i-th row (i = 1, . . . , m) and the j-th column (j = 1, . . . , n) of matrix A. Therefore, matrix A can sometimes be written more simply as

    A = [aij]        (i = 1, 2, . . . , m;  j = 1, 2, . . . , n)

1.2 Vectors as special matrices


The number of rows and the number of columns define the dimension of a matrix.
Matrix A contains m rows and n columns, read “m by n”.
A matrix containing 1 column is called a “column vector ”.
x is an n × 1 column vector, and d is an m × 1 column vector.
If x were arranged in a horizontal array we would have a row vector.
Row vectors are denoted by a prime

x′ = [x1 , x2 , ..., xn ]

A 1 × 1 vector is known as a scalar

x = [4] is a scalar

1.3 Matrix Operations


Equality: two matrices A and B are said to be equal if and only if they have the same dimension
and have identical elements in the corresponding locations in the array. Formally,

A=B iff aij = bij

Examples.

    [ 4 3 ] = [ 4 3 ]        [ 4 3 ] ≠ [ 2 0 ]
    [ 2 0 ]   [ 2 0 ]        [ 2 0 ]   [ 4 3 ]

And if

    [ x ] = [ 7 ]
    [ y ]   [ 4 ]

this will mean that x = 7 and y = 4.

1.3.1 Addition and Subtraction of Matrices

Two matrices can be added if and only if they have the same dimension. When this dimensional
requirement is met, the matrices are said to be conformable for addition. Suppose A is an m × n
matrix and B is a p × q matrix, then addition of A and B is possible if and only if m = p and
n = q. For example,

    [ a11 a12 ] + [ b11 b12 ] = [ (a11 + b11)  (a12 + b12) ] = [ c11 c12 ]
    [ a21 a22 ]   [ b21 b22 ]   [ (a21 + b21)  (a22 + b22) ]   [ c21 c22 ]

1.3.2 Scalar Multiplication

Suppose we want to multiply a matrix by a scalar:

    k × A,    where k is (1 × 1) and A is (m × n).

We multiply every element in A by the scalar k:

    kA = [ ka11  ka12  . . .  ka1n ]
         [ ka21  ka22  . . .  ka2n ]
         [  .     .             .  ]
         [ kam1  kam2  . . .  kamn ]

Example. Let k = [3] and A = [ 6 2 ; 4 5 ], then

    kA = [ 3×6  3×2 ] = [ 18  6 ]
         [ 3×4  3×5 ]   [ 12 15 ]

1.3.3 Multiplication of Matrices

Whereas a scalar can be used to multiply a matrix of any dimension, the multiplication of two
matrices is contingent upon the satisfaction of a different dimensional requirement.
Suppose that, given two matrices A and B, we want to find the product AB. The conformability
condition for multiplication is that the column dimension of A (the “lead” matrix in AB) must be
equal to the row dimension of B (the "lag" matrix). Namely, for multiplying A and B it must be true that A has the same number of columns (n) as B has rows (n). The product matrix, C, will have the same number of rows as A and the same number of columns as B:

      A    ×    B    =    C
    (m×n)     (n×q)     (m×q)
Example.

      A    ×    B    =    C
    (1×3)     (3×4)     (1×4)

In general,

      A    ×    B    ×    C    ×    D    =    E
    (3×2)     (2×5)     (5×4)     (4×1)     (3×1)

To multiply two matrices:


(1) Multiply each element in a given row by each element in a given column
(2) Sum up their products
Example 1

    [ a11 a12 ] × [ b11 b12 ] = [ c11 c12 ]
    [ a21 a22 ]   [ b21 b22 ]   [ c21 c22 ]
where
c11 = a11 b11 + a12 b21 (sum of row 1 times column 1)
c12 = a11 b12 + a12 b22 (sum of row 1 times column 2)
c21 = a21 b11 + a22 b21 (sum of row 2 times column 1)
c22 = a21 b12 + a22 b22 (sum of row 2 times column 2)
Example 2

    [ 3 2 ] × [ 1 2 ] = [ (3 × 1) + (2 × 3)   (3 × 2) + (2 × 4) ] = [ 9 14 ]
              [ 3 4 ]

Example 3

    [ 3 2 1 ] × [ 2 ] = [ (3 × 2) + (2 × 1) + (1 × 4) ] = [ 12 ]
                [ 1 ]
                [ 4 ]
12 is the inner product of two vectors.
Suppose

    x = [ x1 ]    then    x′ = [ x1 x2 ]
        [ x2 ]

therefore

    x′x = [ x1 x2 ] [ x1 ] = [ x1² + x2² ]
                    [ x2 ]

However, xx′ is a 2 × 2 matrix:

    xx′ = [ x1 ] [ x1 x2 ] = [ x1²    x1 x2 ]
          [ x2 ]             [ x2 x1  x2²   ]

Example 4

    A = [ 1 3 ]    B = [ 5 ]
        [ 2 8 ]        [ 9 ]
        [ 4 0 ]

    AB = [ (1 × 5) + (3 × 9) ] = [ 32 ]
         [ (2 × 5) + (8 × 9) ]   [ 82 ]
         [ (4 × 5) + (0 × 9) ]   [ 20 ]

Example 5

    Ax = d

        A           x       d
    [ 6  3  1 ] [ x1 ] = [ 22 ]
    [ 1  4 −2 ] [ x2 ]   [ 12 ]
    [ 4 −1  5 ] [ x3 ]   [ 10 ]
      (3×3)     (3×1)    (3×1)

This produces

    6x1 + 3x2 + x3 = 22
    x1 + 4x2 − 2x3 = 12
    4x1 − x2 + 5x3 = 10
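As a check, multiplying the coefficient matrix by the solution vector (found later in these notes) reproduces the constants; a small numpy sketch:

```python
import numpy as np

A = np.array([[6, 3, 1],
              [1, 4, -2],
              [4, -1, 5]])
x = np.array([2, 3, 1])   # the solution derived later via the inverse of A

d = A @ x                 # each entry expands a row: 6*2 + 3*3 + 1*1 = 22, etc.
print(d)                  # [22 12 10]
```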

1.3.4 National Income Model

    Y = C + I0 + G0
    C = a + bY

Arrange as

    Y − C = I0 + G0
    −bY + C = a

Matrix form

        A        x         d
    [ 1 −1 ] [ Y ] = [ I0 + G0 ]
    [ −b 1 ] [ C ]   [    a    ]

1.3.5 Division in Matrix Algebra

In ordinary algebra,

    a/b = c

is well defined iff b ≠ 0. Since 1/b can be rewritten as b⁻¹, we have ab⁻¹ = c and also b⁻¹a = c.
But in matrix algebra,

    A/B = C

is not defined. However,

    AB⁻¹ = C

is well defined. BUT normally

    AB⁻¹ ≠ B⁻¹A

where B⁻¹ is called the inverse of B, and

    B⁻¹ ≠ 1/B

In some ways B⁻¹ has the same properties as b⁻¹, but in other ways it differs. These differences will be explored later on.

1.3.6 The Σ Notation

The summation shorthand is used for

    Σ_{j=0}^{n} xj = x0 + x1 + · · · + xn

where j is the summation index (or dummy subscript) that takes only integer values (from 0 to n in this case). The application of Σ can be readily extended to cases in which the x term is prefixed with a coefficient or in which each term in the sum is raised to some integer power. For instance,

    Σ_{j=0}^{n} a xj = ax0 + ax1 + · · · + axn = a(x0 + x1 + · · · + xn) = a Σ_{j=0}^{n} xj

    Σ_{i=0}^{n} ai x^i = a0 x^0 + a1 x^1 + a2 x^2 + · · · + an x^n
                       = a0 + a1 x + a2 x^2 + · · · + an x^n
The latter expression can in fact be used as a shorthand form of the general polynomial function.
Apply the Σ shorthand to matrix multiplication in Example 1. Each element of the product
matrix C = AB is defined as a sum of terms, which may now be rewritten as:

    c11 = a11 b11 + a12 b21 = Σ_{k=1}^{2} a1k bk1
    c12 = a11 b12 + a12 b22 = Σ_{k=1}^{2} a1k bk2
    c21 = a21 b11 + a22 b21 = Σ_{k=1}^{2} a2k bk1
    c22 = a21 b12 + a22 b22 = Σ_{k=1}^{2} a2k bk2

Extending this to the multiplication of an m × n matrix A = [aik] and an n × p matrix B = [bkj], the elements of the m × p product matrix AB = C = [cij] are now written as

    c11 = Σ_{k=1}^{n} a1k bk1        c12 = Σ_{k=1}^{n} a1k bk2        · · ·

or more generally

    cij = Σ_{k=1}^{n} aik bkj        (i = 1, 2, . . . , m;  j = 1, 2, . . . , p)

This last equation represents yet another way of stating the rule of multiplication for the matrices defined above.
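The Σ rule for cij can be implemented directly; a sketch (numpy assumed, `matmul_by_sums` is a hypothetical name) comparing the sum definition with numpy's built-in product:

```python
import numpy as np

def matmul_by_sums(A, B):
    """c_ij = sum over k of a_ik * b_kj (the sigma rule above)."""
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "column dimension of A must equal row dimension of B"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            C[i, j] = sum(A[i, k] * B[k, j] for k in range(n))
    return C

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, -1.0], [6.0, 7.0]])
print(np.allclose(matmul_by_sums(A, B), A @ B))  # True
```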

1.4 Linear Dependence


Suppose we have two equations

x1 + 2x2 = 1
3x1 + 6x2 = 3

To solve, substitute x1 = 1 − 2x2 from equation 1 into equation 2:

    3[−2x2 + 1] + 6x2 = 3
    −6x2 + 3 + 6x2 = 3
    3 = 3

There is no unique solution. These two equations are said to be linearly dependent: equation 2 is equal to three times equation 1.

    [ 1 2 ] [ x1 ] = [ 1 ]
    [ 3 6 ] [ x2 ]   [ 3 ]
Ax = d

Viewing A as two column vectors,

    U1 = [ 1 ]    and    U2 = [ 2 ]
         [ 3 ]                [ 6 ]

or as two row vectors,

    V1′ = [ 1 2 ]
    V2′ = [ 3 6 ]

column two is twice column one and row two is three times row one.
Linear Dependence Generally: A set of vectors is said to be linearly dependent if and only
if any one of them can be expressed as a linear combination of the remaining vectors.
Example: Three vectors,

    V1 = [ 2 ]    V2 = [ 1 ]    V3 = [ 4 ]
         [ 7 ]         [ 8 ]         [ 5 ]

are linearly dependent since

    3V1 − 2V2 = V3

    [ 6  ] − [ 2  ] = [ 4 ]
    [ 21 ]   [ 16 ]   [ 5 ]

or, expressed differently,

    3V1 − 2V2 − V3 = 0

General Rule

A set of vectors V1, V2, . . . , Vn is linearly dependent if there exists a set of scalars ki (i = 1, . . . , n), not all equal to zero, such that

    Σ_{i=1}^{n} ki Vi = k1 V1 + k2 V2 + . . . + kn Vn = 0
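This general rule can be verified numerically for the three vectors above (numpy assumed):

```python
import numpy as np

V1, V2, V3 = np.array([2, 7]), np.array([1, 8]), np.array([4, 5])

# Scalars (3, -2, -1), not all zero, produce the zero vector:
combo = 3 * V1 - 2 * V2 - 1 * V3
print(combo)                                                 # [0 0]

# Stacking the vectors as columns: rank 2 < 3 vectors -> linearly dependent.
print(np.linalg.matrix_rank(np.column_stack([V1, V2, V3])))  # 2
```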

1.5 Commutative, Associative, and Distributive Laws


From high-school algebra we know the commutative law of addition,

a+b=b+a

commutative law of multiplication


ab = ba

associative law of addition


(a + b) + c = a + (b + c)

associative law of multiplication


(ab)c = a(bc)

distributive law
a(b + c) = ab + ac

In matrix algebra most, but not all, of these laws are true.

1.5.1 Commutative Law of Addition

A+B =B+A

Since we are adding individual elements and aij + bij = bij + aij for all i and j.

1.5.2 Similarly Associative Law of Addition

A + (B + C) = (A + B) + C

for the same reasons.

1.5.3 Matrix Multiplication

Matrix multiplication is not commutative

    AB ≠ BA

Example 1 Let A be 2 × 3 and B be 3 × 2. Then AB is (2 × 3)(3 × 2) = (2 × 2), whereas BA is (3 × 2)(2 × 3) = (3 × 3), so the two products do not even have the same dimension.

Example 2 Let

    A = [ 1 2 ]    and    B = [ 0 −1 ]
        [ 3 4 ]               [ 6  7 ]

then

    AB = [ (1 × 0) + (2 × 6)   (1 × −1) + (2 × 7) ] = [ 12 13 ]
         [ (3 × 0) + (4 × 6)   (3 × −1) + (4 × 7) ]   [ 24 25 ]

But

    BA = [ (0 × 1) + (−1 × 3)   (0 × 2) + (−1 × 4) ] = [ −3 −4 ]
         [ (6 × 1) + (7 × 3)    (6 × 2) + (7 × 4)  ]   [ 27 40 ]

Therefore the distinction between premultiplication and postmultiplication matters:

    AB = C

where B is premultiplied by A, and A is postmultiplied by B.
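A quick numpy demonstration of non-commutativity, using the matrices of Example 2:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, -1], [6, 7]])

AB = A @ B    # B premultiplied by A
BA = B @ A    # A premultiplied by B
print(np.array_equal(AB, BA))  # False
```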

1.5.4 Associative Law

Matrix multiplication is associative

(AB)C = A(BC) = ABC

as long as their dimensions conform to our earlier rules of multiplication.

A × B × C
(m × n) (n × p) (p × q)

1.5.5 Distributive Law

Matrix multiplication is distributive

A(B + C) = AB + AC Pre multiplication


(B + C)A = BA + CA Post multiplication

10
1.6 Identity Matrices and Null Matrices
1.6.1 Identity Matrix

This is a square matrix with ones on its principal diagonal and zeros everywhere else.

    I2 = [ 1 0 ]    I3 = [ 1 0 0 ]    In = [ 1 0 . . . 0 ]
         [ 0 1 ]         [ 0 1 0 ]         [ 0 1 . . . 0 ]
                         [ 0 0 1 ]         [ .  .      . ]
                                           [ 0 0 . . . 1 ]

In scalar algebra we know that

    1 × a = a × 1 = a

In matrix algebra the identity matrix plays the same role, such that

    IA = AI = A

Example 1 Let A = [ 1 3 ; 2 4 ]

    IA = [ 1 0 ] [ 1 3 ] = [ (1 × 1) + (0 × 2)   (1 × 3) + (0 × 4) ] = [ 1 3 ]
         [ 0 1 ] [ 2 4 ]   [ (0 × 1) + (1 × 2)   (0 × 3) + (1 × 4) ]   [ 2 4 ]

Example 2 Let A = [ 1 2 3 ; 2 0 3 ]

    I2 A = [ 1 0 ] [ 1 2 3 ] = [ 1 2 3 ] = A    {I2 case}
           [ 0 1 ] [ 2 0 3 ]   [ 2 0 3 ]

    A I3 = [ 1 2 3 ] [ 1 0 0 ] = [ 1 2 3 ] = A    {I3 case}
           [ 2 0 3 ] [ 0 1 0 ]   [ 2 0 3 ]
                     [ 0 0 1 ]
Furthermore,

    AIB = (AI)B = A(IB) = AB

where A is (m × n), I is (n × n), and B is (n × p).

1.6.2 Null Matrices

A null matrix is simply a matrix where all elements equal zero.

    0 = [ 0 0 ]        0 = [ 0 0 0 ]
        [ 0 0 ]            [ 0 0 0 ]
      (2 × 2)            (2 × 3)

The rules of scalar algebra apply to matrix algebra in this case.


Example

    a + 0 = a    {scalar}

    A + 0 = [ a11 a12 ] + [ 0 0 ] = A    {matrix}
            [ a21 a22 ]   [ 0 0 ]

    A × 0 = [ a11 a12 a13 ] [ 0 ] = [ 0 ] = 0
            [ a21 a22 a23 ] [ 0 ]   [ 0 ]
                            [ 0 ]

1.7 Idiosyncrasies of Matrix Algebra


1. We know AB ≠ BA.
2. In scalar algebra, ab = 0 implies a = 0 or b = 0. In matrix algebra, this is not the case. For instance,

    AB = [ 2 4 ] [ −2  4 ] = [ 0 0 ]
         [ 1 2 ] [  1 −2 ]   [ 0 0 ]

even though neither A nor B is a null matrix.
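The same A and B can be multiplied in numpy to confirm that a zero product does not force a zero factor:

```python
import numpy as np

A = np.array([[2, 4], [1, 2]])
B = np.array([[-2, 4], [1, -2]])

# Neither A nor B is a null matrix, yet their product is:
print(np.array_equal(A @ B, np.zeros((2, 2))))  # True
```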

1.7.1 Transposes and Inverses

1. Transpose: the matrix obtained when the rows and columns of A are interchanged, denoted A′ or Aᵀ.
Example

    If A = [ 3 8 −9 ]    and    B = [ 3 4 ]
           [ 1 0  4 ]               [ 1 7 ]

then

    A′ = [  3 1 ]    and    B′ = [ 3 1 ]
         [  8 0 ]                [ 4 7 ]
         [ −9 4 ]
Symmetric Matrix

    If A = [ 1 0 4 ]    then    A′ = [ 1 0 4 ]
           [ 0 3 7 ]                 [ 0 3 7 ]
           [ 4 7 2 ]                 [ 4 7 2 ]

Since A = A′, A is a symmetric matrix.
Properties of Transposes
1. (A′ )′ = A
2. (A + B)′ = A′ + B ′
3. (AB)′ = B ′ A′
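All three transpose properties can be spot-checked numerically (numpy assumed; 2 × 2 matrices are used so that both sums and products conform):

```python
import numpy as np

A = np.array([[3, 8], [1, 0]])
B = np.array([[3, 4], [1, 7]])

assert np.array_equal(A.T.T, A)              # 1. (A')' = A
assert np.array_equal((A + B).T, A.T + B.T)  # 2. (A + B)' = A' + B'
assert np.array_equal((A @ B).T, B.T @ A.T)  # 3. (AB)' = B'A'
print("all three properties hold")
```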

Inverses
In scalar algebra, if ax = b, then x = b/a or x = ba⁻¹ for a ≠ 0.
In matrix algebra, if Ax = d, then x = A⁻¹d, where A⁻¹ is the inverse of A.
Properties of Inverses
1. Not all matrices have inverses.
Non-singular: an inverse exists; singular: no inverse exists.
2. A matrix must be square in order to have an inverse (necessary but not sufficient).
3. In scalar algebra, a/a = 1; in matrix algebra, AA⁻¹ = A⁻¹A = I.
4. If an inverse exists, then it must be unique.
Example
Let A = [ 3 1 ; 0 2 ]. Then

    A⁻¹ = [ 1/3  −1/6 ] = (1/6) [ 2 −1 ]      {factoring out the scalar 1/6}
          [  0    1/2 ]         [ 0  3 ]
0 3
Post multiplication

    AA⁻¹ = [ 3 1 ] (1/6) [ 2 −1 ] = (1/6) [ 6 0 ] = [ 1 0 ]
           [ 0 2 ]       [ 0  3 ]         [ 0 6 ]   [ 0 1 ]

Pre multiplication

    A⁻¹A = (1/6) [ 2 −1 ] [ 3 1 ] = (1/6) [ 6 0 ] = [ 1 0 ]
                 [ 0  3 ] [ 0 2 ]         [ 0 6 ]   [ 0 1 ]

Further properties: If A and B are square and non-singular, then

1. (A⁻¹)⁻¹ = A
2. (AB)⁻¹ = B⁻¹A⁻¹
3. (A′)⁻¹ = (A⁻¹)′

Solving a linear system: given

    A x = d,    with A (3 × 3), x (3 × 1), d (3 × 1),

premultiply both sides by A⁻¹:

    A⁻¹A x = A⁻¹d
    I x = A⁻¹d
    x = A⁻¹d

Example

    Ax = d

    A = [ 6  3  1 ]    x = [ x1 ]    d = [ 22 ]    A⁻¹ = (1/52) [  18 −16 −10 ]
        [ 1  4 −2 ]        [ x2 ]        [ 12 ]                 [ −13  26  13 ]
        [ 4 −1  5 ]        [ x3 ]        [ 10 ]                 [ −17  18  21 ]

then

    [ x1 ]          [  18 −16 −10 ] [ 22 ]   [ 2 ]
    [ x2 ] = (1/52) [ −13  26  13 ] [ 12 ] = [ 3 ]
    [ x3 ]          [ −17  18  21 ] [ 10 ]   [ 1 ]

    x1* = 2    x2* = 3    x3* = 1
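The same system can be solved in numpy, both with the built-in inverse and with the (1/52) · Adj A form quoted above:

```python
import numpy as np

A = np.array([[6, 3, 1],
              [1, 4, -2],
              [4, -1, 5]])
d = np.array([22, 12, 10])

x = np.linalg.inv(A) @ d
print(np.round(x, 10))           # [2. 3. 1.]

# The closed-form inverse from the notes: (1/52) times the adjoint.
adj = np.array([[18, -16, -10],
                [-13, 26, 13],
                [-17, 18, 21]])
print(np.allclose(np.linalg.inv(A), adj / 52))  # True
```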

2 Linear Dependence and Determinants


Suppose we have the following equation system

x1 + 2x2 = 1
2x1 + 4x2 = 2

where equation two is twice equation one. Therefore, there is no unique solution for x1 , x2 .
In matrix form [Ax = d]:

        A         x       d
    [ 1 2 ] [ x1 ] = [ 1 ]
    [ 2 4 ] [ x2 ]   [ 2 ]
The determinant of the coefficient matrix is

|A| = (1)(4) − (2)(2) = 0

A determinant of zero tells us that the equations are linearly dependent. Sometimes called a
“vanishing determinant.”
In general, the determinant of a square matrix, A, is written as |A| or detA.
For the two-by-two case,

    |A| = | a11 a12 | = a11 a22 − a12 a21 = k
          | a21 a22 |

where k is unique. Any k ≠ 0 implies linear independence.
Example

    A = [ 3 1 ]    |A| = (3 × 5) − (1 × 2) = 13    {non-singular}
        [ 2 5 ]

    B = [ 2  6 ]    |B| = (2 × 24) − (6 × 8) = 0    {singular}
        [ 8 24 ]
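numpy's determinant routine reproduces both results (it returns a float, so the values are rounded for display):

```python
import numpy as np

A = np.array([[3, 1], [2, 5]])
B = np.array([[2, 6], [8, 24]])

print(round(np.linalg.det(A)))  # 13 -> non-singular
print(round(np.linalg.det(B)))  # 0  -> singular ("vanishing determinant")
```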

Three-by-three case
Given

    A = [ a1 a2 a3 ]
        [ b1 b2 b3 ]
        [ c1 c2 c3 ]

then

    |A| = (a1 b2 c3) + (a2 b3 c1) + (a3 b1 c2) − (a3 b2 c1) − (a2 b1 c3) − (a1 b3 c2)

Cross-diagonals: multiply along the diagonals and add up the products. The products along the three diagonals running down to the right take a positive sign; the products along the three diagonals running down to the left take a negative sign.

2.1 Using Laplace Expansion


⇒ The cross-diagonal method does not work for matrices larger than three by three.
⇒ Laplace expansion evaluates the determinant of a matrix, A, by means of subdeterminants of A.
Subdeterminants or Minors
Given

    A = [ a1 a2 a3 ]
        [ b1 b2 b3 ]
        [ c1 c2 c3 ]
By deleting the first row and first column, we get

    M11 = [ b2 b3 ]
          [ c2 c3 ]

The determinant of this matrix is the minor of element a1 .


|Mij| is the subdeterminant obtained by deleting the i-th row and the j-th column.
Given

    A = [ a11 a12 a13 ]
        [ a21 a22 a23 ]
        [ a31 a32 a33 ]

then

    |M21| = | a12 a13 |        |M31| = | a12 a13 |
            | a32 a33 |                | a22 a23 |

2.1.1 Cofactors

A cofactor is a minor with a specific algebraic sign:

    |Cij| = (−1)^(i+j) |Mij|

therefore

    |C11| = (−1)² |M11| = |M11|
    |C21| = (−1)³ |M21| = −|M21|

The determinant by Laplace expansion
Expanding down the first column of

    A = [ a11 a12 a13 ]
        [ a21 a22 a23 ]
        [ a31 a32 a33 ]

    |A| = a11 |C11| + a21 |C21| + a31 |C31| = Σ_{i=1}^{3} ai1 |Ci1|

    |A| = a11 | a22 a23 | − a21 | a12 a13 | + a31 | a12 a13 |
              | a32 a33 |       | a32 a33 |       | a22 a23 |

Note: the minus sign on the second term comes from (−1)^(2+1).

    |A| = a11 [a22 a33 − a23 a32] − a21 [a12 a33 − a13 a32] + a31 [a12 a23 − a13 a22]

This is consistent with the result obtained by the method of cross-diagonals.
Laplace expansion can be used to expand along any row or any column.

Example
Expanding by the third row,

    |A| = a31 | a12 a13 | − a32 | a11 a13 | + a33 | a11 a12 |
              | a22 a23 |       | a21 a23 |       | a21 a22 |

Example

    A = [ 8 1 3 ]
        [ 4 0 1 ]
        [ 6 0 3 ]

1. Expanding down the first column:

    |A| = 8 | 0 1 | − 4 | 1 3 | + 6 | 1 3 |
            | 0 3 |     | 0 3 |     | 0 1 |

    |A| = (8 × 0) − (4 × 3) + (6 × 1) = −6

2. Expanding down the second column:

    |A| = −1 | 4 1 | + 0 | 8 3 | − 0 | 8 3 |
             | 6 3 |     | 6 3 |     | 4 1 |

    |A| = (−1 × 6) + (0) − (0) = −6

Suggestion: try to choose an easy row or column to expand (i.e., one with zeros in it).
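Laplace expansion down the first column translates directly into a short recursive function; a sketch assuming numpy, with `det_laplace` a hypothetical name:

```python
import numpy as np

def det_laplace(A):
    """Determinant via Laplace expansion down the first column."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0
    for i in range(n):
        # Minor M_{i1}: delete row i and the first column.
        minor = np.delete(np.delete(A, i, axis=0), 0, axis=1)
        # Cofactor sign (-1)^{(i+1)+1} becomes (-1)^i with zero-based i.
        total += (-1) ** i * A[i, 0] * det_laplace(minor)
    return total

A = np.array([[8, 1, 3],
              [4, 0, 1],
              [6, 0, 3]])
print(det_laplace(A))  # -6
```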

2.2 Basic Properties of Determinants


Property I The interchange of rows and columns does not affect the value of a determinant.
In other words, the determinant of a matrix A has the same value as that of its transpose A′ , that
is |A| = |A′ |.
Example 1

    | 4 3 | = | 4 5 | = 9
    | 5 6 |   | 3 6 |

Example 2

    | a b | = | a c | = ad − bc
    | c d |   | b d |

Property II The interchange of any two rows (or any two columns) will alter the sign,
but not the numerical value of the determinant.
a b c d
Example 3 = ad−bc, but the interchange of the two rows yields = cb−ad =
c d a b
−(ad − bc).

Example 4

    | 0 1 3 |
    | 2 5 7 | = −26,
    | 3 0 1 |

but the interchange of the first and third columns yields

    | 3 1 0 |
    | 7 5 2 | = 26.
    | 1 0 3 |

Property III The multiplication of any one row (or one column) by a scalar k will change
the value of the determinant k-fold.
Example 5

    | ka kb | = kad − kbc = k(ad − bc) = k | a b |
    | c  d  |                              | c d |

Example 6  Factoring out 3 from the first column and then 2 from the second row, we have

    | 15a 7b | = 3 | 5a 7b | = 3 × 2 × | 5a 7b | = 6 × (5ad − 14bc).
    | 12c 2d |     | 4c 2d |           | 2c d  |

Property IV The addition (subtraction) of a multiple of any row to (from) another row
will leave the value of the determinant unaltered. The same holds true if the word row is replaced
by column in the above statement.
Example 7  Adding k times the top row of the determinant in Example 3 to its second row, we end up with the original determinant:

    | a       b      | = a(d + kb) − b(c + ka) = ad − bc = | a b |
    | c + ka  d + kb |                                     | c d |

Property V If one row (or column) is a multiple of another row (or column), the value
of the determinant will be zero. As a special case of this, when two rows (or two columns) are
identical, the determinant will vanish.
Example 8

    | 2a 2b | = 2ab − 2ab = 0        | c c | = cd − cd = 0
    | a  b  |                        | d d |

2.3 Rank of a Matrix


Definition: the rank of a matrix is the maximum number of linearly independent rows in the matrix.
If A is an m × n matrix, then the rank of A is

r(A) ≤ min{m, n}

Read as: the rank of A is less than or equal to the minimum of m or n.


Using determinants to find the rank of an n × n matrix:

1. If |A| ≠ 0, then r(A) = n; if |A| = 0, the rank is smaller than n.
2. Delete one row and one column, and find the determinant of the resulting (n − 1) × (n − 1) matrix.
3. Continue this process until you reach a non-zero determinant; its order gives the rank.
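A sketch of the idea on a linearly dependent matrix (numpy assumed); numpy's `matrix_rank` agrees with the delete-and-test procedure:

```python
import numpy as np

A = np.array([[1, 2], [2, 4]])   # row two is twice row one

print(round(np.linalg.det(A)))   # 0 -> rank must be below 2
# Deleting one row and one column leaves [1], a non-zero 1x1 determinant,
# so r(A) = 1; numpy's rank routine agrees:
print(np.linalg.matrix_rank(A))  # 1
```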

3 Matrix Inversion
Given an n × n matrix A, the inverse of A is

    A⁻¹ = (1/|A|) · Adj A

where Adj A, the adjoint matrix of A, is the transpose of A's cofactor matrix; the adjoint is also an n × n matrix.
Cofactor Matrix (denoted C)
The cofactor matrix of A is a matrix whose elements are the cofactors of the elements of A:

    If A = [ a11 a12 ]    then    C = [ |C11| |C12| ] = [  a22  −a21 ]
           [ a21 a22 ]                [ |C21| |C22| ]   [ −a12   a11 ]

Example
Let A = [ 3 2 ; 1 0 ]  ⇒  |A| = −2

Step 1: Find the cofactor matrix

    C = [ |C11| |C12| ] = [  0 −1 ]
        [ |C21| |C22| ]   [ −2  3 ]

Step 2: Transpose the cofactor matrix

    Cᵀ = Adj A = [  0 −2 ]
                 [ −1  3 ]

Step 3: Multiply all the elements of Adj A by 1/|A| to find A⁻¹

    A⁻¹ = (1/|A|) · Adj A = −(1/2) [  0 −2 ] = [  0    1  ]
                                   [ −1  3 ]   [ 1/2 −3/2 ]

Step 4: Check by AA⁻¹ = I

    AA⁻¹ = [ 3 2 ] [  0    1  ] = [ (3)(0)+(2)(1/2)   (3)(1)+(2)(−3/2) ] = [ 1 0 ]
           [ 1 0 ] [ 1/2 −3/2 ]   [ (1)(0)+(0)(1/2)   (1)(1)+(0)(−3/2) ]   [ 0 1 ]

4 Cramer’s Rule
Suppose

Equation 1 a1 x1 + a2 x2 = d1
Equation 2 b1 x1 + b2 x2 = d2

or

        A           x        d
    [ a1 a2 ] [ x1 ] = [ d1 ]
    [ b1 b2 ] [ x2 ]   [ d2 ]

where |A| = a1 b2 − a2 b1 ≠ 0.
Solve for x1 by substitution.
From equation 1,

    x2 = (d1 − a1 x1) / a2

and from equation 2,

    x2 = (d2 − b1 x1) / b2

therefore

    (d1 − a1 x1) / a2 = (d2 − b1 x1) / b2
Cross multiply

    d1 b2 − a1 b2 x1 = d2 a2 − b1 a2 x1

Collect terms

    d1 b2 − d2 a2 = (a1 b2 − b1 a2) x1

    x1 = (d1 b2 − d2 a2) / (a1 b2 − b1 a2)

The denominator is the determinant of A, namely |A|. The numerator is the same as the denominator except that d1, d2 replace the first column a1, b1.

Cramer's Rule

         | d1 a2 |
         | d2 b2 |     d1 b2 − d2 a2
    x1 = --------- = ---------------
         | a1 a2 |     a1 b2 − b1 a2
         | b1 b2 |

where the d vector replaces column 1 in the A matrix.
To find x2, replace column 2 with the d vector:

         | a1 d1 |
         | b1 d2 |     a1 d2 − d1 b1
    x2 = --------- = ---------------
         | a1 a2 |     a1 b2 − b1 a2
         | b1 b2 |

Generally: to find xi, replace column i with the vector d and take the ratio of the two determinants,

    xi = |Ai| / |A|
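Cramer's rule is easy to implement by literally replacing columns; a sketch assuming numpy (`cramer` is a hypothetical name), applied to the market model of the next subsection:

```python
import numpy as np

def cramer(A, d):
    """Solve Ax = d via x_i = |A_i| / |A|, A_i having d in place of column i."""
    det_A = np.linalg.det(A)
    x = np.empty(len(d))
    for i in range(len(d)):
        Ai = A.copy()
        Ai[:, i] = d               # the d vector replaces column i
        x[i] = np.linalg.det(Ai) / det_A
    return x

A = np.array([[1.0, 1.0], [-1.0, 1.0]])
d = np.array([10.0, 2.0])
print(np.round(cramer(A, d), 10))  # [4. 6.]
```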

4.0.1 Example: The Market Model

Equation 1 Qd = 10 − P or Q + P = 10
Equation 2 Qs = P − 2 or − Q + P = 2

Matrix form

        A         x       d
    [  1 1 ] [ Q ] = [ 10 ]
    [ −1 1 ] [ P ]   [  2 ]

    |A| = (1)(1) − (−1)(1) = 2

Find Qe

          | 10 1 |
          |  2 1 |     10 − 2
    Qe = --------- = -------- = 4
              2          2

Find Pe

          |  1 10 |
          | −1  2 |     2 − (−10)
    Pe = ---------- = ----------- = 6
              2             2

Substitute P and Q into either equation 1 or equation 2 to verify

Qd = 10 − P
10 − 6 = 4

4.0.2 Example: National Income Model

    Y = C + I0 + G0    or    Y − C = I0 + G0
    C = a + bY         or    −bY + C = a

In matrix form

    [  1 −1 ] [ Y ] = [ I0 + G0 ]
    [ −b  1 ] [ C ]   [    a    ]

Solve for Ye

          | I0 + G0  −1 |
          |    a      1 |     I0 + G0 + a
    Ye = --------------- = --------------
          |  1 −1 |             1 − b
          | −b  1 |

Solve for Ce

          |  1  I0 + G0 |
          | −b     a    |     a + b(I0 + G0)
    Ce = ---------------- = ----------------
          |  1 −1 |               1 − b
          | −b  1 |
Numerical Example:
Let C = 100 + 0.75Y , I = 150 and G = 250. Then the model is

Y −C =I +G
Y − C = 400

and

C = 100 + 0.75Y
−0.75Y + C = 100

In matrix form

    [    1    −1 ] [ Y ] = [ 400 ]
    [ −0.75    1 ] [ C ]   [ 100 ]

Solve for Ye

          | 400 −1 |
          | 100  1 |       500
    Ye = ----------- = -------- = 2000
          |  1    −1 |    0.25
          | −0.75  1 |

Solve for Ce

          |  1     400 |
          | −0.75  100 |     100 + 0.75 × 400
    Ce = --------------- = ------------------ = 1600
          |  1    −1 |             0.25
          | −0.75  1 |
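A numerical cross-check of this example (numpy assumed):

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [-0.75, 1.0]])
d = np.array([400.0, 100.0])

Y, C = np.linalg.solve(A, d)
print(round(Y), round(C))      # 2000 1600

# Cross-check against the closed forms (I0+G0+a)/(1-b) and (a+b(I0+G0))/(1-b);
# s stands for I0 + G0.
b, s, a = 0.75, 400.0, 100.0
assert np.isclose(Y, (s + a) / (1 - b))
assert np.isclose(C, (a + b * s) / (1 - b))
```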
