Further Pure Mathematics 1 (Paper 1) - Chapter 1.4: Matrices

Hello and welcome to the world of Matrices! Don't worry if these rectangular arrays of numbers look intimidating at first. Matrices are one of the most powerful tools in mathematics, used everywhere from computer graphics (transforming 3D objects) to handling vast datasets and solving complex systems of equations.

In this chapter, we will build upon your basic knowledge of matrices to master advanced operations, understand their connection to geometry in 2D space, and tackle concepts like inverses and invariant lines. Let’s dive in!


1. Matrix Operations and Terminology

1.1 Understanding Matrix Order and Special Matrices

A matrix is defined by its dimensions, or order, given by (number of rows) \( \times \) (number of columns). The syllabus focuses on matrices up to \( 3 \times 3 \).

  • Example: A matrix \( A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix} \) has order \( 2 \times 3 \).

Key Terminology:

  • Square Matrix: A matrix where the number of rows equals the number of columns (e.g., \( 2 \times 2 \) or \( 3 \times 3 \)).
  • Zero Matrix (O): A matrix in which every element is zero. Under addition, and under multiplication where the orders are compatible, it behaves like the number zero.
  • Identity Matrix (I) or Unit Matrix: A square matrix that has ones on the leading diagonal and zeros everywhere else. When you multiply any matrix \( M \) by \( I \), you get \( M \) back: \( MI = IM = M \).
    Examples: $$ I_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad I_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} $$

1.2 Addition, Subtraction, and Scalar Multiplication

These operations are straightforward:

  1. Addition and Subtraction: You can only add or subtract matrices if they have the exact same order. You simply add or subtract corresponding elements.
  2. Scalar Multiplication: Multiply every element in the matrix by the constant scalar value.

Quick Review: These are element-by-element operations. If orders don't match for addition/subtraction, the operation is impossible.
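If you have Python to hand, a short NumPy sketch (the matrices are arbitrary illustrative choices) shows these element-by-element rules in action:

```python
import numpy as np

# Two matrices of the same order (2 x 2), so addition and subtraction are allowed
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

S = A + B    # add corresponding elements
D = A - B    # subtract corresponding elements
K = 3 * A    # scalar multiplication: every element times 3
```

(Be aware that NumPy silently "broadcasts" some mismatched shapes, so it does not perfectly enforce the same-order rule; checking `A.shape == B.shape` yourself is stricter.)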

1.3 Matrix Multiplication

Matrix multiplication is the trickiest operation, but essential!

Compatibility Rule:

You can multiply matrix \( A \) (order \( m \times n \)) by matrix \( B \) (order \( p \times q \)) only if \( n = p \). The resulting matrix \( AB \) will have the order \( m \times q \).

Step-by-Step Multiplication (Row-by-Column):

To find the element in the \( i \)-th row and \( j \)-th column of the product \( AB \), you multiply the elements of the \( i \)-th row of \( A \) by the corresponding elements of the \( j \)-th column of \( B \) and sum the results.

Crucial Point to Remember:

Matrix multiplication is generally not commutative: except in special cases, \( AB \neq BA \). The order of multiplication matters immensely, especially when dealing with transformations!
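You can see non-commutativity with almost any pair of matrices; here is a small NumPy sketch (matrices chosen purely for illustration):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])   # swaps coordinates

AB = A @ B   # each entry: a row of A dotted with a column of B
BA = B @ A

# AB = [[2, 1], [4, 3]]  (the columns of A swapped)
# BA = [[3, 4], [1, 2]]  (the rows of A swapped) -- a different matrix!
```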

Key Takeaway for Operations: Always check the order first! For multiplication, remember the "inner numbers must match" rule, and that order is everything.


2. Determinants and Inverses

The determinant is a scalar value associated only with square matrices. It tells us critical information about whether an inverse exists and how the matrix scales areas in geometric transformations.

2.1 The Determinant of a 2x2 Matrix

For a general \( 2 \times 2 \) matrix \( M = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \): $$ \text{det } M = ad - bc $$

Memory Aid: Multiply the elements on the leading diagonal (top-left to bottom-right) and subtract the product of the elements on the other diagonal.

2.2 The Determinant of a 3x3 Matrix

For a \( 3 \times 3 \) matrix, the determinant is calculated using a process called cofactor expansion.
For \( M = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} \), expanding along the first row (the most common method): $$ \text{det } M = a(ei - fh) - b(di - fg) + c(dh - eg) $$
Don't worry if this looks long! Remember the pattern:

  • Start with the element \( a \), multiply it by the determinant of the \( 2 \times 2 \) matrix left when you cross out \( a \)'s row and column.
  • Subtract the second element \( b \), multiplied by its corresponding \( 2 \times 2 \) determinant.
  • Add the third element \( c \), multiplied by its corresponding \( 2 \times 2 \) determinant.
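Here is the pattern carried out on a concrete (arbitrarily chosen) matrix, checked against NumPy's own determinant routine:

```python
import numpy as np

M = np.array([[2, 1, 3],
              [0, 4, 1],
              [5, 2, 0]])

a, b, c = M[0]
d, e, f = M[1]
g, h, i = M[2]

# Expand along the first row: +a, -b, +c times each 2x2 minor
det_by_hand = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)   # -> -59
det_library = np.linalg.det(M)   # floating-point check of the same value
```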

2.3 Singular and Non-Singular Matrices

A square matrix \( M \) is classified based on its determinant:

  • Non-Singular Matrix: If \( \text{det } M \neq 0 \). These matrices have an inverse \( M^{-1} \).
  • Singular Matrix: If \( \text{det } M = 0 \). These matrices do not have an inverse. Geometrically, they map an area (or volume) to zero, meaning information is lost.

2.4 Finding the Inverse Matrix (\( M^{-1} \))

The inverse \( M^{-1} \) "undoes" the effect of \( M \), such that \( M M^{-1} = M^{-1} M = I \).

Inverse of a 2x2 Matrix:

For \( M = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \), provided \( \text{det } M \neq 0 \): $$ M^{-1} = \frac{1}{\text{det } M} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix} $$

Simple Trick: Swap \( a \) and \( d \), and negate \( b \) and \( c \). Then divide the resulting matrix by the determinant.
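The trick translates directly into code; a sketch with an arbitrary non-singular matrix:

```python
import numpy as np

M = np.array([[4.0, 7.0],
              [2.0, 6.0]])

det = M[0, 0]*M[1, 1] - M[0, 1]*M[1, 0]   # ad - bc = 24 - 14 = 10

# Swap a and d, negate b and c, then divide by the determinant
M_inv = (1/det) * np.array([[ M[1, 1], -M[0, 1]],
                            [-M[1, 0],  M[0, 0]]])

# M_inv "undoes" M: both products M @ M_inv and M_inv @ M give the identity
```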

Inverse of a 3x3 Matrix:

Finding the inverse of a \( 3 \times 3 \) matrix involves calculating the matrix of cofactors, transposing it (to get the adjugate matrix), and then dividing by the determinant. This is a multi-step process: $$ M^{-1} = \frac{1}{\text{det } M} (\text{Adj } M) $$

The Inverse Product Rule:

For two non-singular matrices \( A \) and \( B \): $$ (AB)^{-1} = B^{-1}A^{-1} $$

Analogy: Think of transformations. If you first put on your socks (\( B \)) and then your shoes (\( A \)), to undo the process, you must first take off your shoes (\( A^{-1} \)) and then take off your socks (\( B^{-1} \)). The reverse order is necessary!
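The socks-and-shoes rule is easy to verify numerically (the matrices below are arbitrary non-singular choices):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])   # det = 1, non-singular
B = np.array([[1.0, 3.0],
              [0.0, 1.0]])   # det = 1, non-singular

lhs = np.linalg.inv(A @ B)                  # (AB)^-1
rhs = np.linalg.inv(B) @ np.linalg.inv(A)   # B^-1 A^-1: shoes off, then socks
wrong_order = np.linalg.inv(A) @ np.linalg.inv(B)   # A^-1 B^-1 does NOT match
```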

Key Takeaway for Determinants and Inverses: If the determinant is zero, stop! There is no inverse. The inverse rule \((AB)^{-1} = B^{-1}A^{-1}\) is essential for solving combined transformation problems.


3. Matrices and Geometric Transformations (2D)

A \( 2 \times 2 \) matrix \( M \) transforms a position vector \( \begin{pmatrix} x \\ y \end{pmatrix} \) to a new position vector \( \begin{pmatrix} x' \\ y' \end{pmatrix} \) using the equation \( M \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x' \\ y' \end{pmatrix} \).

3.1 Understanding Basic Transformations

You must be able to recognise and apply the matrices for standard 2D transformations:

  • Rotation: Rotations are defined by the angle \( \theta \) (positive angle is counter-clockwise). The matrix for rotation about the origin through angle \( \theta \) is: $$ R = \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix} $$
  • Reflection: A mirror transformation in a line (usually through the origin). Standard examples: reflection in the x-axis \( \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \), in the y-axis \( \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} \), and in the line \( y = x \), \( \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \).
  • Enlargement (Scaling): An enlargement centre the origin with scale factor \( k \) is given by: $$ E = \begin{pmatrix} k & 0 \\ 0 & k \end{pmatrix} $$
  • Stretch: A stretch parallel to an axis. Example: Stretch factor \( k \) parallel to the x-axis: \( \begin{pmatrix} k & 0 \\ 0 & 1 \end{pmatrix} \).
  • Shear: A transformation that displaces points parallel to a fixed line (the invariant line), where the displacement is proportional to the point's distance from that line. Example: Shear parallel to the x-axis: \( \begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix} \).
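To see these matrices act on actual points, here is a sketch rotating \((1, 0)\) by \(90^\circ\) counter-clockwise and shearing \((1, 1)\) parallel to the x-axis (both choices arbitrary):

```python
import numpy as np

theta = np.pi / 2   # 90 degrees, counter-clockwise
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

rotated = R @ np.array([1.0, 0.0])   # (1, 0) -> (0, 1)

S = np.array([[1.0, 2.0],
              [0.0, 1.0]])           # shear parallel to x-axis, k = 2
sheared = S @ np.array([1.0, 1.0])   # (1, 1) -> (3, 1): x shifts by 2*(its height)
```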

3.2 Composition of Transformations

If transformation \( T_1 \) is represented by matrix \( M_1 \) and transformation \( T_2 \) by \( M_2 \), and a point \( P \) is transformed first by \( T_1 \) and then by \( T_2 \):

The final transformation matrix \( M \) is the product \( M_2 M_1 \).

Remember the Order!
To build the combined matrix, write the individual matrices from right to left in the order the transformations occur: the matrix of the first transformation goes on the far right, and the leftmost matrix is the last transformation applied.
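As a sketch, take \( T_1 \) a rotation of \( 90^\circ \) counter-clockwise and \( T_2 \) a stretch factor 2 parallel to the x-axis (both choices arbitrary); applying \( T_1 \) and then \( T_2 \) means forming \( M_2 M_1 \):

```python
import numpy as np

M1 = np.array([[0.0, -1.0],
               [1.0,  0.0]])   # T1: rotation 90 deg counter-clockwise
M2 = np.array([[2.0, 0.0],
               [0.0, 1.0]])    # T2: stretch factor 2 parallel to the x-axis

M = M2 @ M1                    # T1 happens first, so M1 sits on the right

image = M @ np.array([1.0, 0.0])
# T1: (1, 0) -> (0, 1); T2 leaves (0, 1) unchanged, so image = (0, 1)
```

Reversing the product, `M1 @ M2`, would represent the stretch happening first and sends \((1, 0)\) to \((0, 2)\) instead.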

3.3 Inverse Transformations

If matrix \( A \) represents a transformation, then the inverse matrix \( A^{-1} \) represents the inverse transformation, which maps the image back onto the original object.

  • Example: If \( A \) is a rotation of \( 30^\circ \) clockwise, \( A^{-1} \) is a rotation of \( 30^\circ \) counter-clockwise.

3.4 Area Scale Factor

When a region is transformed by a matrix \( M \), the area of the image is related to the area of the original region by the determinant of \( M \).

$$ \text{Area Scale Factor} = |\text{det } M| $$

We use the absolute value of the determinant because area is always positive. A negative determinant simply implies a reflection has occurred (flipping the orientation).
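A quick numerical check on an arbitrary matrix with a negative determinant:

```python
import numpy as np

M = np.array([[3.0, 1.0],
              [0.0, -2.0]])

det = M[0, 0]*M[1, 1] - M[0, 1]*M[1, 0]   # 3*(-2) - 1*0 = -6
area_scale = abs(det)                      # area scale factor is 6

# The negative sign of det tells us the orientation flipped (a reflection
# is involved), but every transformed area is still 6 times the original.
```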

Key Takeaway for Transformations: The transformation matrix is applied \( M \mathbf{x} \). In a sequence \( T_2 \) followed by \( T_1 \), the combined matrix is \( T_1 T_2 \). The absolute determinant gives the area scale factor.


4. Invariant Points and Lines

Understanding invariant features is often tested and requires setting up and solving specific matrix equations.

4.1 Invariant Points

An invariant point is a point \( P \) that remains stationary (it does not move) after a transformation \( M \) has been applied.

If \( P = \begin{pmatrix} x \\ y \end{pmatrix} \) is an invariant point, then: $$ M P = P $$

This can be rewritten using the Identity matrix \( I \): $$ M P - I P = \mathbf{0} \quad \Rightarrow \quad (M - I) P = \mathbf{0} $$

You solve the resulting system of simultaneous equations for \( x \) and \( y \).

  • Common Scenario: For an enlargement centre the origin with scale factor \( k \neq 1 \), the only invariant point is the origin itself, \((0, 0)\).
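As a sketch, take the shear \( M = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix} \). Then \( M - I = \begin{pmatrix} 0 & 2 \\ 0 & 0 \end{pmatrix} \), so \( (M - I) P = \mathbf{0} \) forces \( 2y = 0 \) while \( x \) is free: every point on the x-axis is invariant. Checking numerically:

```python
import numpy as np

M = np.array([[1.0, 2.0],
              [0.0, 1.0]])    # shear parallel to the x-axis, k = 2

# (M - I)P = 0 reduces to 2y = 0, so invariant points are exactly (x, 0)
on_axis = np.array([5.0, 0.0])
off_axis = np.array([5.0, 1.0])

image_on = M @ on_axis     # stays at (5, 0)
image_off = M @ off_axis   # moves to (7, 1)
```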

4.2 Invariant Lines Through the Origin (\( y = mx \))

An invariant line is a line that, while points on the line might move, the line itself is mapped back onto itself.

For a line passing through the origin, \( y = mx \), we substitute the general point \( \begin{pmatrix} x \\ mx \end{pmatrix} \) into the transformation matrix \( M \).

Let \( M = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \). If the line is invariant, the transformed point \( M \begin{pmatrix} x \\ mx \end{pmatrix} \) must also lie on the line \( y = mx \).

The transformed point \( \begin{pmatrix} x' \\ y' \end{pmatrix} \) is: $$ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ mx \end{pmatrix} = \begin{pmatrix} ax + bmx \\ cx + dmx \end{pmatrix} $$

Since this new point must satisfy the gradient condition \( y' = m x' \): $$ cx + dmx = m(ax + bmx) $$

Since this must hold for any \( x \neq 0 \), we can divide by \( x \) and solve for the invariant gradient \( m \): $$ c + dm = ma + bm^2 $$ $$ bm^2 + (a - d)m - c = 0 $$

Solving this quadratic equation gives the gradient(s) \( m \) of the invariant lines through the origin.
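Here is a worked sketch with an arbitrarily chosen matrix, using NumPy's polynomial root-finder for the quadratic:

```python
import numpy as np

a, b, c, d = 1.0, 2.0, 3.0, 2.0          # M = [[1, 2], [3, 2]], chosen for illustration
M = np.array([[a, b], [c, d]])

# Invariant gradients solve b m^2 + (a - d) m - c = 0,
# here 2m^2 - m - 3 = 0, giving m = 3/2 and m = -1
m_roots = np.roots([b, a - d, -c])       # coefficients in descending powers of m

# Check one line: (1, -1) lies on y = -x, and its image (-1, 1) does too
image = M @ np.array([1.0, -1.0])
```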

4.3 Invariant Lines (Not necessarily through the origin)

If a problem asks for all invariant lines (including those not passing through the origin, \( y = mx + c \)), you must follow a similar principle: substitute the general point \( \begin{pmatrix} x \\ mx + c \end{pmatrix} \) into the matrix \( M \), and ensure the transformed coordinates \( x' \) and \( y' \) satisfy the original line equation \( y' = m x' + c \).

Common Mistake to Avoid:

Do not confuse invariant points with invariant lines. A point remaining invariant means \( P' = P \). A line remaining invariant means \( P' \) is still on the line, but \( P' \) might not equal \( P \).

Key Takeaway for Invariants: Invariant points solve \((M - I)P = \mathbf{0}\). Invariant lines through the origin solve the quadratic equation derived from ensuring the transformed point maintains the gradient \( m \).