1. Vectors

Notation:

  • A vector, $v$, is a list of numbers (scalars) arranged in a particular order.
  • By convention, vectors are implicitly column vectors. So $v \in \mathbb{R}^{n \times 1}$; or implicitly $v \in \mathbb{R}^n$.
  • To represent a column vector, $v$, as a row vector, take its transpose, $v^T$. Hence, $v^T \in \mathbb{R}^{1 \times n}$.
    • The transpose operation turns a column vector into a row vector and vice versa. For matrices, it swaps rows and columns.
  • The $i$'th element of vector $v$ is denoted $v_i$.

Properties and geometric interpretation of a vector:

  • The number of elements in a vector is called its dimension. This also tells us the vector space it belongs to.
    • For example, $v = (1, 2, 3)$ is a 3-dimensional vector: $v \in \mathbb{R}^3$.
  • A vector can be thought of as an arrow in n-dimensional space.
  • The magnitude of a vector is its length in n-dimensional space (many distance formulas can be used to compute it):
    • $L_2$ norm (Euclidean length): $\|v\|_2 = \sqrt{\sum_{i=1}^{n} v_i^2}$ (physical length of a vector in n-dim space)
    • $L_1$ norm (Manhattan distance): $\|v\|_1 = \sum_{i=1}^{n} |v_i|$
    • More generally,
      • The p-norm is $\|v\|_p = \left( \sum_{i=1}^{n} |v_i|^p \right)^{1/p}$
      • And the infinity norm, $\|v\|_\infty = \max_i |v_i|$, is the $p$-norm where $p \to \infty$
    • Generally the $L_2$ norm is implied when referring to vector length.
      • Hence $\|v\|_2$ can be written more simply as $\|v\|$ or occasionally $|v|$ (though the latter is ambiguous with the absolute value of a scalar)
import numpy as np
 
# Create a row vector and a column vector, show their shapes
row_vec = np.array([[1, -5, 3, 2, 4]])
col_vec = np.array([[1], [2], [3], [4]])
 
# Note: without the nested list-of-lists, the shapes would be (5,) and (4,)
print("row_vec =", row_vec[0], "\nrow_vec shape:", row_vec.shape)  # shape (1, 5)
print("\ncol_vec =\n", col_vec, "\ncol_vec shape:", col_vec.shape)  # shape (4, 1)
 
row_vec = [ 1 -5  3  2  4] 
row_vec shape: (1, 5)
 
col_vec =
 [[1]
 [2]
 [3]
 [4]] 
col_vec shape: (4, 1)
# Transpose row_vec and calculate L1, L2, and L_inf norms
from numpy.linalg import norm
 
print("row_vec =", row_vec[0], "\nrow_vec shape:", row_vec.shape)
 
transposed_row_vec = row_vec.T  # now a column vector
print("\ntransposed_row_vec =", transposed_row_vec, "\ntransposed_row_vec shape:", transposed_row_vec.shape, "\n")
 
norm_1 = norm(transposed_row_vec, 1)
norm_2 = norm(transposed_row_vec, 2)  # <- L2 norm is default and most common
norm_inf = norm(transposed_row_vec, np.inf)
 
print("Measures of magnitude:")
print(f"L_1 norm is: {norm_1:.1f}")
print(f"L_2 norm is: {norm_2:.1f}", "(most common)")
print(f"L_inf norm is: {norm_inf:.1f}", "(largest absolute value of any element)")  # NB: norm_inf = max_i |v_i|
 
row_vec = [ 1 -5  3  2  4] 
row_vec shape: (1, 5)
 
transposed_row_vec = [[ 1]
 [-5]
 [ 3]
 [ 2]
 [ 4]] 
transposed_row_vec shape: (5, 1) 
 
Measures of magnitude:
L_1 norm is: 15.0
L_2 norm is: 7.4 (most common)
L_inf norm is: 5.0 (largest absolute value of any element)

2. Vector addition, $v + w$

Elementwise addition; if vectors are of the same length (i.e. if $v$ and $w$ are both in $\mathbb{R}^n$) then:

  • Addition of 2 vectors: $u = v + w$ is the vector with elements $u_i = v_i + w_i$
# Sum vectors v = [10, 9, 3] and w = [2, 5, 12]
v = np.array([[10, 9, 3]])
w = np.array([[2, 5, 12]])
 
print("Vector addition: \npy: v + w =", (v + w)[0])
 
Vector addition: 
py: v + w = [12 14 15]

3. Scalar times vector, $\alpha v$

To multiply a vector $v$ by a scalar $\alpha$ (a number in $\mathbb{R}$), do it “elementwise” or “pairwise”.

  • Scalar multiplication of a vector: $u = \alpha v$ is the vector with elements $u_i = \alpha v_i$
# Multiply vector v = [10, 9, 3] by scalar alpha = 4
v = np.array([[10, 9, 3]])
alpha = 4
 
print("Scalar times vector: \npy: alpha * v =", (alpha * v)[0])
 
Scalar times vector: 
py: alpha * v = [40 36 12]

3.1. Linear Combinations

  • A linear combination of set $\{v_1, v_2, \dots, v_n\}$ is defined as $\alpha_1 v_1 + \alpha_2 v_2 + \dots + \alpha_n v_n$
    • Here the $\alpha_i$ values are the coefficients of the $v_i$ values
    • Example: Grocery bill total cost is a linear combination of items purchased: $\text{total} = \sum_i q_i p_i$
      • ($p_i$ is item cost, $q_i$ is qty. purchased)
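
The grocery-bill example can be sketched in a few lines; the prices and quantities below are made-up values for illustration.

```python
import numpy as np

# Hypothetical grocery bill (made-up numbers): the total is a linear
# combination of per-item prices, with quantities as the coefficients.
prices = np.array([3.50, 1.25, 2.00])   # p_i: cost per item
quantities = np.array([2, 4, 1])        # q_i: qty. purchased (coefficients)

total = np.dot(quantities, prices)      # 2*3.50 + 4*1.25 + 1*2.00
print(f"Total bill: {total:.2f}")       # Total bill: 14.00
```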

3.2. Linear Dependence and Independence

  • A set $\{v_1, v_2, \dots, v_n\}$ is linearly INdependent if no object in the set can be written as a lin. combination of the other objects in the set.
    • Example: the standard basis vectors $(1, 0)$ and $(0, 1)$ are linearly independent. Can you see why?
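
One way to check independence numerically is to stack the vectors as rows of a matrix and compare its rank to the number of vectors; the vectors below are illustrative choices, not from the notes.

```python
import numpy as np

# A set of vectors (stacked as matrix rows) is linearly independent
# iff the matrix rank equals the number of vectors.
A = np.array([[1, 0],
              [0, 1]])     # standard basis vectors: independent
B = np.array([[2, 4],
              [1, 2]])     # second row = 0.5 * first row: dependent

print(np.linalg.matrix_rank(A) == A.shape[0])  # True
print(np.linalg.matrix_rank(B) == B.shape[0])  # False
```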

3.2.1. Geometric interpretation:

  • If two vectors are linearly dependent, they lie on the same line/plane/hyperplane in n-dimensional space.
    • For a 2D vector space, if two vectors are linearly dependent, they lie on the same line (1D subspace).
    • For a 3D vector space, if two vectors are linearly dependent, they lie on the same plane (2D subspace).
    • And so on…

3.2.2. Example:

Below is an example of a linearly DEpendent set: $x$ is dependent on $\{v, w, u\}$, since $x = 3v - 2w + 4u$.

# Writing the vector x = [-8, -1, 4] as a linear combination of 3 vectors, v, w, and u:
v = np.array([[0, 3, 2]])
w = np.array([[4, 1, 1]])
u = np.array([[0, -2, 0]])
 
x = 3 * v - 2 * w + 4 * u
print("x is a linear combination of v, w, and u. It is linearly dependent on them: \nx =", x[0])
 
x is a linear combination of v, w, and u. It is linearly dependent on them: 
x = [-8 -1  4]

4. Vector multiplication (4 approaches)

4.1. Vector dot (inner) product, $v \cdot w$

Geometric interpretation: A measure of how similarly directed two vectors are (think shadows, or projections)

  • Definition: $v \cdot w = \sum_{i=1}^{n} v_i w_i$
    • It’s the sum of elementwise products, for $v, w \in \mathbb{R}^n$; the result is a scalar.
  • Note also, since $v$ and $w$ are both $n \times 1$ column vectors, transpose $v$ to make the inner dimensions match. The dot product can hence be rewritten as a matrix product: $v \cdot w = v^T w$

Angle between vectors, $\theta$

  • Think of dot product as “degree of alignment”:
    • $(1,1)$ and $(2,2)$ are parallel; computing the angle gives $\theta = 0$
    • $(1,1)$ and $(-1,1)$ are orthogonal (perpendicular) bc $v \cdot w = 0$ and $\theta = 90°$ (no alignment)
  • Extending the dot product idea, the angle between two vectors is $\theta$. It is defined by the formula: $\theta = \arccos\left(\dfrac{v \cdot w}{\|v\| \, \|w\|}\right)$

Recap: Many ways to express a dot product (it’s commutative!): $v \cdot w = w \cdot v = v^T w = w^T v = \|v\| \, \|w\| \cos\theta$

# Compute the dot (i.e. inner) product of vectors v = [10, 9, 3] and w = [2, 5, 12]
v = np.array([[10, 9, 3]])
w = np.array([[2, 5, 12]])
 
print("Vector dot product: v • w:")
print("py: np.dot(v, w.T) =", np.dot(v, w.T)[0][0])  # w.T to match inner dims, manually transpose w to column vector
 
# Compute the angle between vectors v and w
theta = np.arccos(np.dot(v, w.T) / (norm(v) * norm(w)))  # norm() default is L2
print("\nTheta =", theta[0][0])
 
# These methods produce the same result as np.dot(v, w)
print("\n----------------- Below methods produce same dot prod -----------------\n")
print("py: np.inner(v, w) =", np.inner(v, w)[0][0])  # same as dot, but more explicit. Can be used for higher dims.
print("py: np.matmul(v, w.T) =", np.matmul(v, w.T)[0][0])  # matrix multiply of (1, 3) by (3, 1), giving a 1x1 result
print("py: v @ w.T =", (v @ w.T)[0][0])  # @ operator is equivalent to np.matmul() for 2D arrays
 
Vector dot product: v • w:
py: np.dot(v, w.T) = 101
 
Theta = 0.979924710443726
 
----------------- Below methods produce same dot prod -----------------
 
py: np.inner(v, w) = 101
py: np.matmul(v, w.T) = 101
py: v @ w.T = 101
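
As a sanity check on the alignment examples earlier ($(1,1)$ vs $(2,2)$ parallel, $(1,1)$ vs $(-1,1)$ orthogonal), a small sketch using the same arccos formula:

```python
import numpy as np
from numpy.linalg import norm

def angle(v, w):
    # theta = arccos( v . w / (||v|| ||w||) )
    return np.arccos(np.dot(v, w) / (norm(v) * norm(w)))

print(angle(np.array([1, 1]), np.array([2, 2])))   # ~0 (parallel)
print(angle(np.array([1, 1]), np.array([-1, 1])))  # ~pi/2 (orthogonal)
```

(Note the parallel case may print a tiny nonzero value, e.g. ~2e-8, due to floating-point rounding.)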

4.2. Vector outer product, $v \otimes w$

  • Definition: $v \otimes w$ is the matrix obtained by multiplying each element of $v$ by each element of $w$.

    • Unlike with the dot product, here the vectors $v$ and $w$ may have different lengths: $v \in \mathbb{R}^m$, $w \in \mathbb{R}^n$.
    • Elements of the resultant matrix are given by: $(v \otimes w)_{ij} = v_i w_j$
    • The entire matrix is represented as: $v \otimes w = \begin{bmatrix} v_1 w_1 & \cdots & v_1 w_n \\ \vdots & \ddots & \vdots \\ v_m w_1 & \cdots & v_m w_n \end{bmatrix}$
  • Note also, since $v$ and $w$ are column vectors ($m \times 1$ and $n \times 1$), we can transpose $w$ (now a row vector, $1 \times n$).

    • $v \otimes w$ is hence equivalent to the matrix multiplication $v w^T$ (since this ensures the inner dimensions ($1$ and $1$) match). Example:
# Compute the outer product of same vectors v = [10, 9, 3] and w = [2, 5, 12]
v = np.array([[10, 9, 3]])
w = np.array([[2, 5, 12]])
# w = np.array([[2, 5, 12, 13]])  # <- uncomment to see outer product with different dimensions
 
print("Vectors:\nv =", v[0], "\nw =", w[0])
print("\nVector inner product, v • w\npy: np.inner(v, w) =", np.inner(v, w)[0][0])  # As previous code cell
print("\nVector outer product, v ⨂ w\nnp.outer(v, w) =\n", np.outer(v, w))  # np.outer flattens its inputs first
print("\nVector outer product flipped, w ⨂ v\nnp.outer(w, v) =\n", np.outer(w, v))  # order matters: result is transposed
 
Vectors:
v = [10  9  3] 
w = [ 2  5 12]
 
Vector inner product, v • w
py: np.inner(v, w) = 101
 
Vector outer product, v ⨂ w
np.outer(v, w) =
 [[ 20  50 120]
 [ 18  45 108]
 [  6  15  36]]
 
Vector outer product flipped, w ⨂ v
np.outer(w, v) =
 [[ 20  18   6]
 [ 50  45  15]
 [120 108  36]]
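
The equivalence claimed above, that the outer product equals the matrix product $v w^T$ for column vectors, can be checked directly; this sketch reuses the same v and w as the cell above.

```python
import numpy as np

v = np.array([[10, 9, 3]])   # stored as a row vector, shape (1, 3)
w = np.array([[2, 5, 12]])

# Treat v as a column vector (v.T, shape (3, 1)) and multiply by the
# row vector w (shape (1, 3)): inner dimensions are 1, output is (3, 3).
outer_via_matmul = v.T @ w
print(np.array_equal(outer_via_matmul, np.outer(v, w)))  # True
```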

4.3. Vector Hadamard (element-wise) product, $v \odot w$:

  • Definition: Element-wise product, where the vectors must be of the same dimension
    • This method also works for matrices of the same dimension.
    • Output vector/matrix is of the same dimension as the operands.
  • Notation: $v \odot w$ (sometimes $v \circ w$)
    • Elements of the resultant vector are given by: $(v \odot w)_i = v_i w_i$
  • Example: $(2, 3, 1) \odot (3, 1, 4) = (6, 3, 4)$

# Compute the Hadamard product of vectors v = [2, 3, 1] and w = [3, 1, 4]
v = np.array([[2, 3, 1]])
w = np.array([[3, 1, 4]])
 
print("Vector Hadamard product: v ⊙ w:\npy: np.multiply(v, w) =", np.multiply(v, w)[0])
 
Vector Hadamard product: v ⊙ w:
py: np.multiply(v, w) = [6 3 4]

4.4. Vector cross product, $v \times w$

Geometric interpretation: The cross product is a vector perpendicular to both and , whose length equals the area enclosed by the parallelogram created by the two vectors

  • Cross product definition: $v \times w = \|v\| \, \|w\| \sin(\theta) \, n$
  • Where:
    • $\theta$, the angle between $v$ and $w$, can be computed via the dot product
    • $n$ is a unit vector (i.e. length $1$) perpendicular to both $v$ and $w$.
# Compute the cross product of v = [0,2,0] and w = [3,0,0]
v = np.array([[0, 2, 0]])
w = np.array([[3, 0, 0]])
 
print("Vector cross product: v ⨉ w\nnp.cross(v, w) =", np.cross(v, w)[0])
 
Vector cross product: v ⨉ w
np.cross(v, w) = [ 0  0 -6]
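
The geometric claims above can be checked numerically: the cross product is perpendicular to both inputs, and its length equals the parallelogram area (here 2 × 3 = 6, since v and w are perpendicular).

```python
import numpy as np

v = np.array([0, 2, 0])
w = np.array([3, 0, 0])
c = np.cross(v, w)                  # [0, 0, -6]

# Perpendicularity: dot product with each input is zero
print(np.dot(c, v), np.dot(c, w))   # 0 0

# Magnitude equals the parallelogram area ||v|| ||w|| sin(theta) = 2*3*1
print(np.linalg.norm(c))            # 6.0
```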