1. Recap: A few ways to multiply vectors and matrices

1.1. Vector multiplication operations (4 approaches)

Given we have 2 vectors, a and b, of same length (i.e. a, b ∈ R^n), we can “multiply” them in the following ways:

  1. Vector dot (inner) product: a•b = aᵀb = Σ_i a_i b_i, a scalar
  2. Vector outer product: a ⨂ b = abᵀ. The resultant matrix is of size n⨉n and its elements are given by: (abᵀ)_ij = a_i b_j
  3. Vector Hadamard (aka element-wise) product: a⊙b. Elements of the resultant vector are given by: (a⊙b)_i = a_i b_i
  4. Vector cross product: a⨉b, defined only for length-3 vectors; the result is a vector perpendicular to both a and b

1.2. Matrix multiplication operations (4 approaches)

Given we have an m⨉n matrix, A, the following are a few multiplication operations involving A. NB: inner dimensions must match!

  1. Matrix and some vector (examples: column vector x of length n, and row vector yᵀ of length m):
    1. Matrix times vector: Ax, where vector x ∈ R^n. Hence, resultant column vector Ax ∈ R^m
    2. Vector times matrix: yᵀA, where row vector yᵀ ∈ R^(1⨉m). Hence, resultant row vector yᵀA ∈ R^(1⨉n)
  2. Matrix and some matrix (examples: A and B matrices are of same size m⨉n, but matrix C has different size n⨉p):
    1. Matrix Hadamard (aka element-wise) product: A⊙B, where B ∈ R^(m⨉n). Elements of the resultant matrix are given by: (A⊙B)_ij = A_ij B_ij
    2. Matrix multiplication: AC, where A ∈ R^(m⨉n) and C ∈ R^(n⨉p). Hence, inner dimensions match, and resultant matrix AC ∈ R^(m⨉p)
import numpy as np  # more basic functionality
import scipy  # advanced functionality, built on numpy
 
# Find the inner and outer products of two 1D arrays (not exactly vectors, no double [[]])
a = np.array([4, 5, 6])
b = np.array([7, 8, 9])
 
print("Given vectors a:", a, "and b:", b)
 
print("\n4 types of vector multiplication")
print(
    "- Inner (aka dot) product: a•b = (a^T)b =", np.inner(a, b)
)  # dot prod; dims are: [1x3][3x1]=[1x1] <-- output dim, scalar
print("- Hadamard (elementwise) product: a⊙b", a * b)  # elementwise (or hadamard) product
print("- Cross product, a⨉b:", np.cross(a, b))
print("- Outer product, a[3⨉1] ⨂ b[1⨉3]:\n", np.outer(a, b))  # dims are [3x1][1x3]=[3x3] <-- output dim
 
Given vectors a: [4 5 6] and b: [7 8 9]
 
4 types of vector multiplication
- Inner (aka dot) product: a•b = (a^T)b = 122
- Hadamard (elementwise) product: a⊙b [28 40 54]
- Cross product, a⨉b: [-3  6 -3]
- Outer product, a[3⨉1] ⨂ b[1⨉3]:
 [[28 32 36]
 [35 40 45]
 [42 48 54]]
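The matrix operations in section 1.2 can be sketched the same way; the matrices A, B, C and vectors x, y below are made-up examples, chosen so the dimensions line up.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])       # 2x3
B = np.array([[1, 0, 1],
              [0, 1, 0]])       # 2x3, same size as A
C = np.array([[1, 2],
              [3, 4],
              [5, 6]])          # 3x2: inner dimensions match for A @ C
x = np.array([1, 1, 1])         # length-3 vector
y = np.array([1, 2])            # length-2 vector

print("Matrix times vector, Ax:", A @ x)      # [2x3][3x1] = [2x1]
print("Vector times matrix, yA:", y @ A)      # [1x2][2x3] = [1x3]
print("Hadamard product, A⊙B:\n", A * B)      # elementwise, same size as A
print("Matrix multiplication, AC:\n", A @ C)  # [2x3][3x2] = [2x2]
```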

3. Gram-Schmidt Process

Use this to orthonormalise a set of vectors, or to orthogonalise/orthonormalise the columns of a matrix

  • Orthonormalise a set of vectors v_1, …, v_k
    • to u_1, …, u_k, where each vector is in the same vector space,
      • but each vector is unit length, and
      • is mutually orthogonal with the other vectors

I.e. transform a set of vectors into a set of orthonormal vectors spanning the same vector space
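A minimal sketch of classical Gram-Schmidt in numpy; the `gram_schmidt` helper and the example vectors are illustrative, not a library function (in practice, `np.linalg.qr` is the numerically robust route).

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: orthonormalise a list of vectors (sketch)."""
    basis = []
    for v in vectors:
        # subtract v's projection onto each basis vector found so far
        w = v - sum((np.dot(v, u) * u for u in basis), np.zeros_like(v))
        if np.linalg.norm(w) > 1e-10:    # drop (near-)linearly-dependent inputs
            basis.append(w / np.linalg.norm(w))
    return np.array(basis)

vs = [np.array([3.0, 1.0]), np.array([2.0, 2.0])]
U = gram_schmidt(vs)
print("Orthonormal basis (rows):\n", U)
print("U U^T ≈ I:\n", U @ U.T)
```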

4. Matrix decompositions

4.1. Gaussian Elimination

  • Purpose: We use Gaussian Elimination to simplify a system of linear equations into row echelon form (or reduced row echelon form, which allows solving by simple inspection)

  • Application:

    • Solving linear systems Ax = b
    • Computing inverse matrices
    • Computing rank
    • Computing determinant
    • Elementary row operations: Methods by which the above are done
      • Swapping rows
      • Scaling rows
      • Adding rows to each other (i.e. creating linear combinations)
  • Row echelon form: The first non-zero element from the left in each row (aka leading coefficient, pivot) is always to the right of the first non-zero element in the row above

  • Reduced row echelon form: Row echelon form whose pivots are 1 and whose pivot columns are 0 elsewhere

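The three elementary row operations can be sketched on a small made-up augmented matrix (the system x + 2y = 5, 3x + 4y = 11):

```python
import numpy as np

M = np.array([[1.0, 2.0, 5.0],
              [3.0, 4.0, 11.0]])   # augmented matrix [A | b]

M[1] = M[1] - 3 * M[0]   # add a multiple of a row: eliminate below the pivot
M[1] = M[1] / -2         # scale a row: pivot becomes 1  (row echelon form)
M[0] = M[0] - 2 * M[1]   # eliminate above the pivot     (reduced row echelon form)
print(M)   # read off by inspection: x = 1, y = 2
```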

4.2. LU Decomposition

Like Gaussian elimination, but more computationally efficient (factor once, then solve cheaply for many right-hand sides)

Decompose any matrix (square or not) into:

  • A lower triangular matrix L
  • An upper triangular matrix U
  • Sometimes needing to reorder rows using a permutation matrix P (so A = PLU)
a = np.random.randn(3, 4)
print("A:\n", a)
 
p, l, u = scipy.linalg.lu(a)
print("\nP:\n", p)
print("\nL:\n", l)
print("\nU:\n", u)
print("\n----\n\nRecomposition: PLU = A:\n", p @ l @ u)
 
A:
 [[ 0.15901331 -0.42151338  0.89002566 -0.77368563]
 [ 0.76901951 -0.8152131  -0.28904111 -1.03463915]
 [ 0.85276297  0.99433259 -0.23377478 -0.12578455]]
 
P:
 [[0. 0. 1.]
 [0. 1. 0.]
 [1. 0. 0.]]
 
L:
 [[1.         0.         0.        ]
 [0.90179749 1.         0.        ]
 [0.18646835 0.354533   1.        ]]
 
U:
 [[ 0.85276297  0.99433259 -0.23377478 -0.12578455]
 [ 0.         -1.71189973 -0.0782236  -0.92120695]
 [ 0.          0.          0.9613501  -0.42363252]]
 
----
 
Recomposition: PLU = A:
 [[ 0.15901331 -0.42151338  0.89002566 -0.77368563]
 [ 0.76901951 -0.8152131  -0.28904111 -1.03463915]
 [ 0.85276297  0.99433259 -0.23377478 -0.12578455]]

4.3. QR Decomposition

Decompose a matrix A into:

  • an orthogonal matrix Q
  • an upper triangular matrix R

It’s used in QR algorithms to solve the linear least-squares problem.

Also, the matrix Q is sometimes what we desire after the Gram-Schmidt process (its columns are the orthonormalised vectors)

a = np.random.randn(3, 4)
print("A:\n", a)
 
q, r = np.linalg.qr(a)
print("\nQ:\n", q)
print("\nR:\n", r)
print("\n----\n\nRecomposition: QR = A:\n", q @ r)
 
A:
 [[-1.36254921 -0.77417909 -2.21614234  0.89851109]
 [ 0.04834985 -0.67077294 -2.09042296  1.6127626 ]
 [ 0.67829896  0.71423733  0.69209262 -1.74429325]]
 
Q:
 [[-0.8947565   0.14601158  0.4220088 ]
 [ 0.0317503  -0.92184032  0.38626719]
 [ 0.44542421  0.35901398  0.82018671]]
 
R:
 [[ 1.52281566  0.98954313  2.22481102 -1.52969339]
 [ 0.          0.76172761  1.85192465 -1.98174223]
 [ 0.          0.         -1.17504818 -0.42850929]]
 
----
 
Recomposition: QR = A:
 [[-1.36254921 -0.77417909 -2.21614234  0.89851109]
 [ 0.04834985 -0.67077294 -2.09042296  1.6127626 ]
 [ 0.67829896  0.71423733  0.69209262 -1.74429325]]
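A sketch of QR applied to least squares (the tall matrix A and vector b below are made-up): with A = QR, minimising ||Ax − b|| reduces to the triangular system Rx = QᵀB, solved by back substitution; `np.linalg.lstsq` serves as a cross-check.

```python
import numpy as np
import scipy.linalg

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])   # 3 equations, 2 unknowns (overdetermined)
b = np.array([1.0, 2.0, 2.0])

q, r = np.linalg.qr(A)                           # reduced QR: q is 3x2, r is 2x2
x = scipy.linalg.solve_triangular(r, q.T @ b)    # back substitution on upper-triangular r
print("QR least-squares solution:", x)
print("np.linalg.lstsq solution: ", np.linalg.lstsq(A, b, rcond=None)[0])
```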

4.4. Cholesky Decomposition

Decompose a symmetric (or Hermitian) positive-definite matrix into:

  • a lower triangular matrix L
  • and its transpose Lᵀ (or conjugate transpose), i.e. A = LLᵀ

Used in algorithms for numerical efficiency (e.g. solving positive-definite linear systems, sampling from multivariate Gaussians); when applicable, it is roughly twice as fast as LU decomposition

x = np.diagflat([[1, 2], [3, 4]])
print("x:\n", x)
 
L = np.linalg.cholesky(x)
print("\nL:\n", L)
 
print("\n----\n\nRecomposition: LL^T:\n", L @ L.T)
 
x:
 [[1 0 0 0]
 [0 2 0 0]
 [0 0 3 0]
 [0 0 0 4]]
 
L:
 [[1.         0.         0.         0.        ]
 [0.         1.41421356 0.         0.        ]
 [0.         0.         1.73205081 0.        ]
 [0.         0.         0.         2.        ]]
 
----
 
Recomposition: LL^T:
 [[1. 0. 0. 0.]
 [0. 2. 0. 0.]
 [0. 0. 3. 0.]
 [0. 0. 0. 4.]]
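As a sketch of that use, scipy's `cho_factor`/`cho_solve` solve a symmetric positive-definite system via its Cholesky factor (the 2⨉2 system below is a made-up example):

```python
import numpy as np
import scipy.linalg

a = np.array([[4.0, 2.0],
              [2.0, 3.0]])    # symmetric positive-definite
b = np.array([6.0, 5.0])

c, low = scipy.linalg.cho_factor(a)       # Cholesky factor of a
x = scipy.linalg.cho_solve((c, low), b)   # two triangular solves
print("x =", x)                           # x = [1. 1.]
```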

Questions

  • When exactly do we use decompositions?