Inverse Matrices

Reversing transformations and solving equations

The Idea of Reversibility

Imagine you apply a transformation to space—stretching, rotating, shearing. Now ask yourself: can you undo it? Can you find another transformation that takes you back exactly to where you started?

This is the central question behind inverse matrices. If a matrix $A$ represents some transformation, then its inverse $A^{-1}$ is the transformation that perfectly reverses it. Apply $A$ and then $A^{-1}$, and you end up exactly where you began.

Think of it like this: if $A$ is a recipe that transforms ingredients into a dish, then $A^{-1}$ is the reverse recipe that extracts the original ingredients from the dish. Of course, not every cooking process can be reversed—some transformations destroy information. The same is true for matrices.

Forward and Reverse

Let's see this in action. Below, we start with the standard basis vectors $\vec{i}$ and $\vec{j}$, apply a transformation $A$, and then apply its inverse $A^{-1}$ to get back to the original.

Interactive: Step through the transformation and its inverse, starting from the original unit vectors $\vec{i}$ and $\vec{j}$.

Notice how applying $A^{-1}$ after $A$ brings us right back to the identity—the original unit vectors. This is the defining property of an inverse:

$$A \cdot A^{-1} = A^{-1} \cdot A = I$$

Here, $I$ is the identity matrix, which represents doing nothing to space. When you multiply any vector by $I$, it stays exactly where it is.
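
To see the defining property in code, here is a minimal sketch in Python with NumPy (the particular matrix is just an example; any invertible matrix behaves the same way):

```python
import numpy as np

# An example transformation: a shear that also stretches along x.
A = np.array([[2.0, 1.0],
              [0.0, 1.0]])
A_inv = np.linalg.inv(A)

# Applying A and then A_inv returns the basis vectors to where they started.
i_hat = np.array([1.0, 0.0])
j_hat = np.array([0.0, 1.0])
print(A_inv @ (A @ i_hat))  # [1. 0.]
print(A_inv @ (A @ j_hat))  # [0. 1.]

# The defining property, up to floating-point rounding:
print(np.allclose(A @ A_inv, np.eye(2)))  # True
print(np.allclose(A_inv @ A, np.eye(2)))  # True
```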

Solving Systems of Equations

Why do we care about inverses? One of the most powerful applications is solving systems of linear equations. Consider the equation:

$$A\vec{x} = \vec{b}$$

This says: some unknown vector $\vec{x}$, when transformed by $A$, lands on $\vec{b}$. To find $\vec{x}$, we need to reverse the transformation. Multiply both sides by $A^{-1}$:

$$\vec{x} = A^{-1}\vec{b}$$

The inverse tells us exactly where $\vec{x}$ was before the transformation—the answer to our equation.

Interactive: Adjust $\vec{b}$ and see the solution $\vec{x} = A^{-1}\vec{b}$

Given $A = \begin{bmatrix} 2 & 1 \\ 0 & 1 \end{bmatrix}$ and $\vec{b} = \begin{bmatrix} 3.0 \\ 2.0 \end{bmatrix}$

The solution is $\vec{x} = A^{-1}\vec{b} = \begin{bmatrix} 0.50 \\ 2.00 \end{bmatrix}$

Drag the sliders to change $\vec{b}$ and watch how the solution $\vec{x}$ changes. The blue vector shows where $\vec{x}$ must be so that after transformation by $A$, it lands on the purple $\vec{b}$.
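
The same computation in NumPy, reproducing the numbers above (a sketch; note that in practice `np.linalg.solve` is preferred over explicitly forming the inverse, since it is faster and numerically more stable):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 1.0]])
b = np.array([3.0, 2.0])

# Undo the transformation: x = A^{-1} b.
x = np.linalg.inv(A) @ b
print(x)  # [0.5 2. ]

# Equivalent, without explicitly forming the inverse:
print(np.linalg.solve(A, b))  # [0.5 2. ]

# Sanity check: transforming x by A lands back on b.
print(A @ x)  # [3. 2.]
```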

When Does an Inverse Exist?

Here's the crucial question: when can a transformation be reversed? The answer connects directly to the determinant.

Remember that the determinant measures how a transformation scales area. If the determinant is zero, the transformation squashes space down to a lower dimension—a line, or even a single point. When this happens, information is lost. Multiple input vectors get mapped to the same output, so there's no way to uniquely trace back.

Interactive: Watch space collapse as $\det(A)$ approaches zero

At $k = 2.0$: $A = \begin{bmatrix} 1 & 2.0 \\ 1 & 2 \end{bmatrix}$, $\det(A) = 0.00$. Determinant ≈ 0: no inverse exists! Information is lost.

When $k = 2$, the two columns of the matrix become parallel—they point in the same direction. The entire 2D plane gets squashed onto a single line. This matrix is called singular, and it has no inverse.

The rule is simple:

$$A^{-1} \text{ exists} \iff \det(A) \neq 0$$

A non-zero determinant means the transformation preserves dimensionality—nothing is lost, so everything can be recovered.
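
To see the rule numerically, the sketch below uses the family of matrices from the collapse interactive, assuming the slider $k$ sets the top-right entry:

```python
import numpy as np

def A(k):
    # Columns [1, 1] and [k, 2]: parallel exactly when k = 2.
    return np.array([[1.0, k],
                     [1.0, 2.0]])

for k in (0.0, 1.0, 1.9, 2.0):
    print(f"k = {k}: det(A) = {np.linalg.det(A(k)):.2f}")
# det(A) = 2 - k, so it hits zero at k = 2.

# At k = 2 the plane collapses onto a line and inversion fails.
try:
    np.linalg.inv(A(2.0))
except np.linalg.LinAlgError as err:
    print("No inverse:", err)  # Singular matrix
```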

The Formula for 2×2 Inverse

For a 2×2 matrix, there's a beautiful explicit formula. Given:

$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$$

The inverse is:

$$A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$$

Notice the pattern: swap the diagonal entries ($a$ and $d$), negate the off-diagonal entries ($b$ and $c$), and divide everything by the determinant $ad - bc$.

This formula makes it clear why the determinant matters: we're dividing by it. If $ad - bc = 0$, the formula breaks down—you can't divide by zero.
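
Here is the formula as a short function (a sketch; `inverse_2x2` is a name invented for this example, and the exact zero test stands in for the tolerance check you would use with real floating-point data):

```python
import numpy as np

def inverse_2x2(A):
    # Swap the diagonal, negate the off-diagonal, divide by the determinant.
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:  # the formula breaks down: division by zero
        raise ValueError("singular matrix, no inverse")
    return np.array([[ d, -b],
                     [-c,  a]]) / det

A = np.array([[2.0, 1.0],
              [0.5, 1.5]])
print(inverse_2x2(A))
# [[ 0.6 -0.4]
#  [-0.2  0.8]]
print(np.allclose(inverse_2x2(A) @ A, np.eye(2)))  # True
```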

Interactive: Adjust the matrix and see its inverse computed

$A = \begin{bmatrix} 2.0 & 1.0 \\ 0.5 & 1.5 \end{bmatrix}$, $\det(A) = 2.50$

$A^{-1} = \frac{1}{2.50} \begin{bmatrix} 1.5 & -1.0 \\ -0.5 & 2.0 \end{bmatrix} = \begin{bmatrix} 0.60 & -0.40 \\ -0.20 & 0.80 \end{bmatrix}$

Verification: $A \cdot A^{-1} \approx \begin{bmatrix} 1.0 & 0.0 \\ 0.0 & 1.0 \end{bmatrix} = I$ ✓

Play with the matrix entries. Watch how the inverse changes, and verify that $A \cdot A^{-1} = I$. Try making the determinant approach zero and see what happens.

Geometric Intuition

Here's another way to think about the inverse. If $A$ stretches space by a factor of 2 along some direction, then $A^{-1}$ compresses it by a factor of 1/2 along that same direction. If $A$ rotates by 30° counterclockwise, $A^{-1}$ rotates by 30° clockwise.
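
Both claims are easy to check numerically (a sketch; `rotation` is a helper defined here, not a library function):

```python
import numpy as np

def rotation(theta):
    # Counterclockwise rotation by theta radians.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

# The inverse of a 30° counterclockwise rotation is a 30° clockwise one.
R = rotation(np.radians(30))
print(np.allclose(np.linalg.inv(R), rotation(np.radians(-30))))  # True

# Stretching by 2 along x is undone by compressing by 1/2 along x.
S = np.diag([2.0, 1.0])
print(np.linalg.inv(S))
# [[0.5 0. ]
#  [0.  1. ]]
```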

Every action has an equal and opposite reaction. The inverse is the "anti-transformation" that undoes exactly what the original did.

This also explains why singular matrices have no inverse: if $A$ flattens a 2D region to a 1D line, there's no way to "unflatten" it back. The information about where points were in the perpendicular direction is gone forever.

Key Takeaways

  • The inverse $A^{-1}$ is the transformation that perfectly reverses $A$
  • The defining property: $A \cdot A^{-1} = I$, the identity matrix
  • An inverse exists if and only if $\det(A) \neq 0$
  • When $\det(A) = 0$, the matrix is singular—it squashes space and loses information
  • We can solve $A\vec{x} = \vec{b}$ by computing $\vec{x} = A^{-1}\vec{b}$
  • For 2×2 matrices: swap diagonals, negate off-diagonals, divide by the determinant