1. Square matrices
- A square matrix has dimensions $n \times n$. (Thus, the number of rows equals the number of columns.)
- Square matrices linearly transform all vectors in an $n$-dimensional vector space to the same $n$-dimensional vector space (i.e. no dimensionality change).
- This means the vector space is transformed in a way that no dimensions are added or lost.
- Also, the origin remains fixed, and all grid lines remain parallel and evenly spaced.
2. Determinant, or $\det(M)$
- The determinant, or $\det(M)$, is an important property of square matrices.
2.1. Geometric intuition (From 3Blue1Brown):
- Its absolute value, $|\det(M)|$, tells you the factor by which any and all regions in the vector space increase/decrease after the transformation by matrix $M$
- A “region” can be thought of as depending on the dimensionality of the vector space:
- For a 2D vector space, a “region” is an area (e.g. of the unit square)
- For a 3D vector space, a “region” is a volume (e.g. of the unit cube)
- And so on…
- How to interpret the determinant in terms of an initial region (area/volume/etc.) in the vector space (before transformation):
- Absolute values:
- If $|\det(M)| > 1$: Any/all initial regions in the vector space are scaled up by that factor
- If $|\det(M)| = 1$: Any/all initial regions in the vector space do not change in size
- If $0 < |\det(M)| < 1$: Any/all initial regions in the vector space shrink by that factor
- If $\det(M) = 0$: it means space has dropped into fewer dimensions (i.e. information lost)
- i.e. at least 1 dimension is LOST, so all initial regions (area/volume/etc.) of the vector space are now 0
- e.g. if a 2D matrix transforms a 2D vector space into a 1D line (or a 0D point), all “areas” in the original 2D vector space no longer exist. They are squished to 0; hence $\det(M) = 0$
- This also means that the columns of $M$ are linearly dependent (i.e. they lie on the same line/plane/etc.)
- If $\det(M) < 0$:
- A negative sign tells you that the vector space was “flipped” (i.e. the basis vectors swapped sides).
- Here, $M$ is said to “invert the orientation” of the vector space.
- Absolute values: $|\det(M)|$ still gives the scaling factor, as described above.
- To prove understanding, explain why $\det(M_1 M_2) = \det(M_1)\det(M_2)$
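The three cases above (scaling, collapse, and flip) can be checked numerically with NumPy; the matrices below are illustrative examples, not from the original notes:

```python
import numpy as np
from numpy.linalg import det

# Scaling: |det| > 1, so every region's area grows by that factor
M_scale = np.array([[3.0, 1.0],
                    [1.0, 2.0]])
print(f"det(M_scale) = {det(M_scale):.1f}")        # 3*2 - 1*1 = 5 -> areas grow 5x

# Collapse: columns are linearly dependent (second column = 2 * first),
# so 2D space is squished onto a line and det = 0
M_singular = np.array([[1.0, 2.0],
                       [2.0, 4.0]])
print(f"det(M_singular) = {det(M_singular):.1f}")  # 0 -> a dimension is lost

# Flip: swapping the basis vectors inverts orientation; sizes are unchanged
M_flip = np.array([[0.0, 1.0],
                   [1.0, 0.0]])
print(f"det(M_flip) = {det(M_flip):.1f}")          # -1 -> orientation inverted
```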
2.2. Formulae for calculating the determinant of a matrix $M$:
- For a $2 \times 2$ matrix $M = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$; $\det(M)$ is: $\det(M) = ad - bc$
- For a $3 \times 3$ matrix $M = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}$; $\det(M)$ is: $\det(M) = a(ei - fh) - b(di - fg) + c(dh - eg)$
- For higher-dimensional matrices, the same approach (cofactor expansion along a row or column) can be applied recursively.
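As a quick check, the manual formulae above can be compared against `numpy.linalg.det`; the entries below are arbitrary example values:

```python
import numpy as np
from numpy.linalg import det

# 2x2 case: det([[a, b], [c, d]]) = a*d - b*c
a, b, c, d = 2.0, 3.0, 1.0, 4.0
M2 = np.array([[a, b],
               [c, d]])
det2_manual = a*d - b*c                      # 2*4 - 3*1 = 5
print(det2_manual, det(M2))

# 3x3 case: cofactor expansion along the top row
a, b, c, d, e, f, g, h, i = 1., 2., 3., 4., 5., 6., 7., 8., 10.
M3 = np.array([[a, b, c],
               [d, e, f],
               [g, h, i]])
det3_manual = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
print(det3_manual, det(M3))                  # both give -3
```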
2.2.1. Geometric derivation of the determinant formula:
- This relies on how the square matrix $M$ transforms the unit square in $\mathbb{R}^2$ into a parallelogram in $\mathbb{R}^2$.
- See A Few Useful Transformations section for a recap of how each element of the matrix affects the transformation of the unit square.
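To see this numerically: map the unit square's corners through an example matrix $M$ and compare the resulting parallelogram's area with $|\det(M)|$. The shoelace formula used here to measure the polygon's area is an assumption of this sketch, not part of the original notes:

```python
import numpy as np

# Corners of the unit square, as columns (counter-clockwise order)
square = np.array([[0, 1, 1, 0],
                   [0, 0, 1, 1]], dtype=float)

M = np.array([[3.0, 1.0],
              [1.0, 2.0]])          # det(M) = 3*2 - 1*1 = 5

parallelogram = M @ square          # transformed corners, still as columns

# Shoelace formula: area of a polygon from its ordered corner coordinates
x, y = parallelogram
area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
print(area)                         # 5.0 = |det(M)| * (unit square's area of 1)
```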
2.3. Identity matrix
- A square matrix, $I$, with ones on the diagonal and zeros everywhere else
- Multiplying a matrix $M$ with $I$ (of compatible dimensionality) will produce the same matrix $M$ (like how $x \times 1 = x$)
# Find the determinant of M, and multiply M by np.eye(4) to demonstrate M x I = M
import numpy as np
from numpy.linalg import det
M = np.array([[0, 2, 1, 3], [3, 2, 8, 1], [1, 0, 0, 3], [0, 3, 2, 1]])
print("M:\n", M)
print(f"Determinant: {det(M):.1f}")
I = np.eye(4)
print("\nI:\n", I)
print("\nM @ I:\n", M @ I)
M:
[[0 2 1 3]
[3 2 8 1]
[1 0 0 3]
[0 3 2 1]]
Determinant: -38.0
I:
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
M @ I:
[[0. 2. 1. 3.]
[3. 2. 8. 1.]
[1. 0. 0. 3.]
[0. 3. 2. 1.]]
3. Trace
- The trace of $M$, or $\operatorname{tr}(M)$, is the sum of the elements on the main diagonal (from top-left to bottom-right):
# Compute the trace of the following matrix A
import numpy as np
from numpy import trace
A = np.array([[4.1, 2.8], [9.7, 6.6]])
print("Trace(A)", trace(A))
Trace(A) 10.7