Vector and Its Operations#

Learning Objectives#

  1. Understand Vector Addition and Subtraction: Grasp the concepts of vector addition and subtraction, both algebraically and geometrically, in the \(D\)-dimensional real number space \(\mathbb{R}^D\).

  2. Comprehend Algebraic Definitions: Familiarize with the algebraic definitions of vector addition, subtraction, and scalar-vector multiplication, understanding how these operations are performed component-wise.

  3. Visualize Geometric Interpretation of Vector Operations: Learn to visualize and interpret the geometric implications of vector addition and subtraction, especially in a 2D context for clarity.

  4. Recognize that Vector Addition Is Commutative: Acknowledge that vector addition is commutative, meaning that the order of addition does not affect the resultant vector.

  5. Explore Scalar-Vector Multiplication: Understand the process and implications of multiplying a vector by a scalar, both in terms of algebraic representation and geometric scaling.

  6. Understand Direction Preservation in Scalar Multiplication: Recognize that scalar multiplication of a vector alters its magnitude but preserves its direction, except in cases of negative scaling where the direction is reversed.

  7. Discover Commutativity in Scalar-Vector Multiplication: Learn about the commutative property of scalar-vector multiplication, which allows the scalar to multiply the vector from either side, resulting in the same outcome.

Vector Addition and Subtraction#

Algebraic Definition#

Definition 36 (Algebraic Definition (Vector Addition and Subtraction))

For any vectors \(\mathbf{u}, \mathbf{v} \in \mathbb{R}^D\), where \(\mathbb{R}^D\) represents the D-dimensional real number space, the operations of vector addition and vector subtraction are defined component-wise as follows:

\[\begin{split} \mathbf{u} \pm \mathbf{v} = \begin{bmatrix} u_1 \pm v_1 \\ u_2 \pm v_2 \\ \vdots \\ u_D \pm v_D \end{bmatrix}, \end{split}\]

where \(\mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_D \end{bmatrix} \) and \(\mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_D \end{bmatrix} \) are column vectors in \(\mathbb{R}^D\). Each component of the resulting vector is the sum (or difference, depending on the operation) of the corresponding components of \(\mathbf{u}\) and \(\mathbf{v}\). Specifically, the \(d\)-th component of the resultant vector \(\mathbf{u} \pm \mathbf{v}\) is given by \(u_d \pm v_d\), for each \(d \in \{1, 2, \ldots, D\}\).
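The component-wise definition translates directly into array arithmetic. Here is a minimal sketch with NumPy (an assumed dependency; the chapter's plotting cells do not use it):

```python
import numpy as np

# Two vectors in R^3 (D = 3); 1-D NumPy arrays model the column vectors.
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# Addition and subtraction are performed component-wise.
u_plus_v = u + v    # [5., 7., 9.]
u_minus_v = u - v   # [-3., -3., -3.]
```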

Geometric Interpretation#

In exploring the geometric intuition behind vector addition and subtraction, let’s consider vectors in \(D\)-dimensional space, particularly focusing on 2D for visualization.

(19)#\[\begin{split}\mathbf{u} = \begin{bmatrix} 4 \\ 7 \end{bmatrix}, \quad \mathbf{v} = \begin{bmatrix} 8 \\ 4 \end{bmatrix}\end{split}\]

Example 10 (Vector Addition)

To add vectors \(\mathbf{u}\) and \(\mathbf{v}\), start by placing both vectors with their tails at the origin as shown in Fig. 14. One approach is to extend vector \(\mathbf{u}\) by moving 8 units right (along the x-axis) and 4 units up (along the y-axis) from the head of \(\mathbf{u}\). This results in \(\mathbf{u} + \mathbf{v} = \begin{bmatrix} 12 \\ 11 \end{bmatrix}\).

Alternatively, apply the head-to-tail method by placing the tail of \(\mathbf{v}\) at the head of \(\mathbf{u}\). The key concept here is that vectors are free entities in space, characterized solely by their direction and magnitude. Hence, translating vector \(\mathbf{v}\) to the head of \(\mathbf{u}\) does not change its essence. The resultant vector from the origin to the new head position of \(\mathbf{v}\) is \(\mathbf{u} + \mathbf{v}\).
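The arithmetic behind the head-to-tail picture can be checked directly. A small NumPy sketch (NumPy assumed available) using the same \(\mathbf{u}\) and \(\mathbf{v}\):

```python
import numpy as np

# The worked 2D example: u = (4, 7), v = (8, 4).
u = np.array([4, 7])
v = np.array([8, 4])

# Translating v to the head of u lands at the head of u + v.
head = u + v  # [12, 11]
```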

# Create plot using VectorPlotter2D
fig, ax = plt.subplots(figsize=(9, 9))

plotter = VectorPlotter2D(
    fig=fig,
    ax=ax,
    ax_kwargs={
        "set_xlim": {"left": 0, "right": 15},
        "set_ylim": {"bottom": 0, "top": 15},
        "set_xlabel": {"xlabel": "x-axis", "fontsize": 16},
        "set_ylabel": {"ylabel": "y-axis", "fontsize": 16},
        "set_title": {"label": "Vector Addition", "size": 18},
    },
)

# Define vectors and colors; raw strings avoid invalid-escape warnings in LaTeX labels
vectors = [
    Vector2D(origin=(0, 0), direction=(4, 7), color="r", label=r"$\mathbf{u}$"),
    Vector2D(origin=(0, 0), direction=(8, 4), color="b", label=r"$\mathbf{v}$"),
    Vector2D(origin=(0, 0), direction=(12, 11), color="g", label=r"$\mathbf{u} + \mathbf{v}$"),
    Vector2D(origin=(4, 7), direction=(8, 4), color="b", label=r"$\mathbf{v}$"),
    Vector2D(origin=(8, 4), direction=(4, 7), color="r", label=r"$\mathbf{u}$"),
]

add_vectors_to_plotter(plotter, vectors)
add_text_annotations(plotter, vectors)

# Plot and show
plotter.plot()
save_path = Path("./assets/02-vector-operation-addition.svg")
if not save_path.exists():
    plotter.save(save_path)

Fig. 14 Vector addition; By Hongnan G.#

For completeness, let’s also see how vector addition can be plotted in 3D with our code base:

fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(111, projection="3d")

quiver_kwargs = {
    "length": 1,
    "normalize": False,
    "alpha": 0.6,
    "arrow_length_ratio": 0.08,
    "pivot": "tail",
    "linestyles": "solid",
    "linewidths": 3,
}

plotter3d = VectorPlotter3D(
    fig=fig,
    ax=ax,
    ax_kwargs={
        "set_xlim": {"left": 0, "right": 10},
        "set_ylim": {"bottom": 0, "top": 10},
        "set_zlim": {"bottom": 0, "top": 10},
    },
    quiver_kwargs=quiver_kwargs,
)
vectors = [
    Vector3D(origin=(0, 0, 0), direction=(3, 4, 5), color="red", label=r"$\mathbf{u}$"),
    Vector3D(origin=(0, 0, 0), direction=(3, 6, 3), color="green", label=r"$\mathbf{v}$"),
    Vector3D(origin=(0, 0, 0), direction=(6, 10, 8), color="blue", label=r"$\mathbf{u} + \mathbf{v}$"),
    Vector3D(origin=(3, 6, 3), direction=(3, 4, 5), color="red", label=r"$\mathbf{u}$"),
    Vector3D(origin=(3, 4, 5), direction=(3, 6, 3), color="green", label=r"$\mathbf{v}$"),
]

add_vectors_to_plotter(plotter3d, vectors)
add_text_annotations(plotter3d, vectors)
plotter3d.plot(show_ticks=True)

Example 11 (Vector Subtraction)

To conceptualize vector subtraction, consider two methods realized by the diagram in Fig. 15.

  • First Method: Recognize that \(\mathbf{u} - \mathbf{v}\) is equivalent to \(\mathbf{u} + (-\mathbf{v})\). Here, \(-1 \cdot \mathbf{v} = \begin{bmatrix} -8 \\ -4 \end{bmatrix}\). Now, apply vector addition by placing the tail of \(-\mathbf{v}\) at the head of \(\mathbf{u}\). This method corresponds to the diagram’s bottom left side.

  • Second Method: Keep both vectors in their standard positions (origin as the tail) and draw a vector from the head of \(\mathbf{v}\) to the head of \(\mathbf{u}\). This resultant vector represents \(\mathbf{u} - \mathbf{v}\). It’s not in the standard position, but it geometrically represents the difference.

A noteworthy geometric property is that \(\mathbf{u} - \mathbf{v} = -(\mathbf{v} - \mathbf{u})\). This signifies that geometrically, the vector \(\mathbf{u} - \mathbf{v}\) is the vector \(\mathbf{v} - \mathbf{u}\) rotated by 180 degrees.
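This identity is easy to verify numerically. A short NumPy sketch (NumPy assumed available) using the vectors from the figure:

```python
import numpy as np

u = np.array([4, 7])
v = np.array([8, 4])

diff = u - v  # [-4, 3]
# u - v is exactly v - u with its direction reversed.
assert np.array_equal(diff, -(v - u))
```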

# Create plot using VectorPlotter2D
fig, ax = plt.subplots(figsize=(9, 9))

plotter = VectorPlotter2D(
    fig=fig,
    ax=ax,
    ax_kwargs={
        "set_xlim": {"left": -9, "right": 9},
        "set_ylim": {"bottom": -9, "top": 9},
        "set_xlabel": {"xlabel": "x-axis", "fontsize": 16},
        "set_ylabel": {"ylabel": "y-axis", "fontsize": 16},
        "set_title": {"label": "Vector Subtraction", "size": 18},
    },
)

# Define vectors and colors; raw strings avoid invalid-escape warnings in LaTeX labels
vectors = [
    Vector2D(origin=(0, 0), direction=(4, 7), color="r", label=r"$\mathbf{u}$"),
    Vector2D(origin=(0, 0), direction=(8, 4), color="b", label=r"$\mathbf{v}$"),
    Vector2D(origin=(0, 0), direction=(-4, 3), color="g", label=r"$\mathbf{u} - \mathbf{v}$"),
    Vector2D(origin=(-8, -4), direction=(4, 7), color="r", label=r"$\mathbf{u}$"),
    Vector2D(origin=(0, 0), direction=(-8, -4), color="b", label=r"$\mathbf{-v}$"),
    Vector2D(origin=(4, 7), direction=(-8, -4), color="b", label=r"$\mathbf{-v}$"),
    Vector2D(origin=(8, 4), direction=(-4, 3), color="g", label=r"$\mathbf{u} - \mathbf{v}$"),
]

add_vectors_to_plotter(plotter, vectors)
add_text_annotations(plotter, vectors)

# Plot and show
plotter.plot()
save_path = Path("./assets/02-vector-operation-subtraction.svg")
if not save_path.exists():
    plotter.save(save_path)

Fig. 15 Vector subtraction; By Hongnan G.#

We can also use code to combine the two plots into a single figure with subplots:

fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(18, 9))

plotter_add = VectorPlotter2D(
    fig=fig,
    ax=axes[0],
    ax_kwargs={
        "set_xlim": {"left": 0, "right": 15},
        "set_ylim": {"bottom": 0, "top": 15},
        "set_xlabel": {"xlabel": "x-axis", "fontsize": 16},
        "set_ylabel": {"ylabel": "y-axis", "fontsize": 16},
        "set_title": {"label": "Vector Addition", "size": 18},
    },
)

vectors_add = [
    Vector2D(origin=(0, 0), direction=(4, 7), color="r", label=r"$\mathbf{u}$"),
    Vector2D(origin=(0, 0), direction=(8, 4), color="b", label=r"$\mathbf{v}$"),
    Vector2D(origin=(0, 0), direction=(12, 11), color="g", label=r"$\mathbf{u} + \mathbf{v}$"),
    Vector2D(origin=(4, 7), direction=(8, 4), color="b", label=r"$\mathbf{v}$"),
    Vector2D(origin=(8, 4), direction=(4, 7), color="r", label=r"$\mathbf{u}$"),
]

add_vectors_to_plotter(plotter_add, vectors_add)
add_text_annotations(plotter_add, vectors_add)

plotter_add.plot()

plotter_sub = VectorPlotter2D(
    fig=fig,
    ax=axes[1],
    ax_kwargs={
        "set_xlim": {"left": -9, "right": 9},
        "set_ylim": {"bottom": -9, "top": 9},
        "set_xlabel": {"xlabel": "x-axis", "fontsize": 16},
        "set_ylabel": {"ylabel": "y-axis", "fontsize": 16},
        "set_title": {"label": "Vector Subtraction", "size": 18},
    },
)

vectors_sub = [
    Vector2D(origin=(0, 0), direction=(4, 7), color="r", label=r"$\mathbf{u}$"),
    Vector2D(origin=(0, 0), direction=(8, 4), color="b", label=r"$\mathbf{v}$"),
    Vector2D(origin=(0, 0), direction=(-4, 3), color="g", label=r"$\mathbf{u} - \mathbf{v}$"),
    Vector2D(origin=(-8, -4), direction=(4, 7), color="r", label=r"$\mathbf{u}$"),
    Vector2D(origin=(0, 0), direction=(-8, -4), color="b", label=r"$\mathbf{-v}$"),
    Vector2D(origin=(4, 7), direction=(-8, -4), color="b", label=r"$\mathbf{-v}$"),
    Vector2D(origin=(8, 4), direction=(-4, 3), color="g", label=r"$\mathbf{u} - \mathbf{v}$"),
]

add_vectors_to_plotter(plotter_sub, vectors_sub)
add_text_annotations(plotter_sub, vectors_sub)

plotter_sub.plot()

Vector Addition is Commutative#

In the realm of linear algebra, before diving into the formal definition of a vector space over a field, it’s insightful to note a fundamental property of vector addition in such a context. Suppose we have a set of vectors \(\mathcal{V}\) defined over a field \(\mathbb{F}\). In this setting, the addition of any two vectors in \(\mathcal{V}\) is commutative, mirroring the commutativity of addition in the field given in Definition 30.

Concretely, this means that for any two vectors \(\mathbf{u}, \mathbf{v} \in \mathcal{V}\), the operation of vector addition is commutative; that is, \(\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}\). This property holds regardless of the dimension \(D\) of the vectors.
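A quick numerical sanity check of this property across several dimensions, sketched with NumPy (an assumed dependency):

```python
import numpy as np

# Commutativity holds component-wise, hence in any dimension D.
rng = np.random.default_rng(0)
for D in (2, 3, 10):
    u = rng.standard_normal(D)
    v = rng.standard_normal(D)
    assert np.allclose(u + v, v + u)
```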

Scalar-Vector Multiplication#

Algebraic Definition#

Definition 37 (Algebraic Definition (Scalar-Vector Multiplication))

Given any vector \(\mathbf{v} \in \mathbb{R}^D\) and a scalar \(\lambda \in \mathbb{R}\), the operation of multiplying the vector \(\mathbf{v}\) by the scalar \(\lambda\), denoted as \(\lambda \mathbf{v}\), is defined as:

\[\begin{split} \lambda \mathbf{v} = \begin{bmatrix} \lambda v_1 \\ \lambda v_2 \\ \vdots \\ \lambda v_D \end{bmatrix}, \end{split}\]

where \(\mathbf{v}\) is represented as \(\begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_D \end{bmatrix}\). This operation is known as scalar-vector (or, equivalently, vector-scalar) multiplication.
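In code, scalar-vector multiplication is again element-wise. A minimal NumPy sketch (NumPy assumed available):

```python
import numpy as np

lam = 2.5
v = np.array([1.0, -2.0, 4.0])

# Each component of v is multiplied by the scalar lam.
scaled = lam * v  # [2.5, -5.0, 10.0]
```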

Geometrical Definition#

Positive Scaling#

Scaling a vector positively is straightforward. For instance, consider the vector \(\mathbf{u} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}\). When scaled by a positive scalar, say \(\lambda = 3\), the vector becomes:

\[\begin{split} 3\mathbf{u} = 3 \begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 3 \\ 6 \end{bmatrix}. \end{split}\]

In this case, the magnitude of \(\mathbf{u}\) increases by a factor of \(3\), while its direction remains unchanged. The resulting vector, \(3\mathbf{u}\), points in the same direction as the original but is three times longer.

Negative Scaling#

Conversely, taking the same vector \(\mathbf{u} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}\) and scaling it by \(\lambda = -1\) yields:

\[\begin{split} -1\mathbf{u} = -1 \begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} -1 \\ -2 \end{bmatrix}. \end{split}\]

Here, the negatively scaled vector \(-\mathbf{u}\) points in the opposite direction to \(\mathbf{u}\). However, it’s important to note that the “orientation” of \(\mathbf{u}\) in a broader sense remains unchanged; the line along which \(\mathbf{u}\) lies is preserved, and all scalar multiples of \(\mathbf{u}\), regardless of the sign, will lie on this line.
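Both behaviors can be confirmed numerically. A short NumPy sketch (NumPy assumed available) checking magnitude and direction for the example vector:

```python
import numpy as np

u = np.array([1.0, 2.0])

# Positive scaling: the norm triples, the unit direction is unchanged.
assert np.isclose(np.linalg.norm(3 * u), 3 * np.linalg.norm(u))
assert np.allclose((3 * u) / np.linalg.norm(3 * u), u / np.linalg.norm(u))

# Negative scaling: same line, direction reversed.
assert np.allclose((-1 * u) / np.linalg.norm(-1 * u), -u / np.linalg.norm(u))
```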

Let’s see how the vector \(\mathbf{u}\) and its scaled versions look in 2D space:

import matplotlib.pyplot as plt

# Create a subplot for positive and negative scaling
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(18, 9))

# Plot for Positive Scaling
plotter_pos = VectorPlotter2D(
    fig=fig,
    ax=axes[0],
    ax_kwargs={
        "set_xlim": {"left": -8, "right": 8},
        "set_ylim": {"bottom": -8, "top": 8},
        "set_xlabel": {"xlabel": "x-axis", "fontsize": 16},
        "set_ylabel": {"ylabel": "y-axis", "fontsize": 16},
        "set_title": {"label": "Positive Vector-Scalar Multiplication", "size": 18},
    },
)

original_vector = Vector2D(origin=(0, 0), direction=(1, 2), color="r", label=r"$\mathbf{u}$")
scaled_vector = Vector2D(origin=(0, 0), direction=(3, 6), color="g", label=r"$3 \cdot \mathbf{u}$")

add_vectors_to_plotter(plotter_pos, [original_vector, scaled_vector])
add_text_annotations(plotter_pos, [original_vector, scaled_vector])
plotter_pos.plot()

# Plot for Negative Scaling
plotter_neg = VectorPlotter2D(
    fig=fig,
    ax=axes[1],
    ax_kwargs={
        "set_xlim": {"left": -8, "right": 8},
        "set_ylim": {"bottom": -8, "top": 8},
        "set_xlabel": {"xlabel": "x-axis", "fontsize": 16},
        "set_ylabel": {"ylabel": "y-axis", "fontsize": 16},
        "set_title": {"label": "Negative Vector-Scalar Multiplication", "size": 18},
    },
)

negatively_scaled_vector = Vector2D(origin=(0, 0), direction=(-1, -2), color="b", label=r"$-1 \cdot \mathbf{u}$")

add_vectors_to_plotter(plotter_neg, [original_vector, negatively_scaled_vector])
add_text_annotations(plotter_neg, [original_vector, negatively_scaled_vector])
plotter_neg.plot()

Vector-Scalar Multiplication is Invariant under Rotation#

Vector-scalar multiplication, though conceptually straightforward, plays a fundamental role in various applications within linear algebra, such as in the context of eigendecomposition.

As we have seen in Definition 37, this operation scales a vector by a scalar factor, altering the vector’s magnitude without changing the line along which it points (as the geometric picture above also shows). The result, \(\lambda \mathbf{v}\), is a vector along the same direction as \(\mathbf{v}\) (reversed when \(\lambda < 0\)) but scaled in magnitude by \(|\lambda|\). This operation is significant for several reasons:

  1. Preservation of Direction: Unlike many other transformations, vector-scalar multiplication maintains the original direction of the vector, either stretching or compressing it along its existing line of action.

  2. Eigenvectors and Eigenvalues: In the study of eigenvectors and eigenvalues, which are central to eigendecomposition, vector-scalar multiplication illustrates how certain vectors (eigenvectors) change only in magnitude, not direction, when a linear transformation is applied.
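The eigenvector connection can be illustrated with a small example. The matrix below is a hypothetical choice for illustration, not taken from the text:

```python
import numpy as np

# A small symmetric matrix chosen for illustration (an assumption).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eigh(A)  # eigenvalues 1 and 3, ascending

# On an eigenvector, applying the linear map A reduces to
# scalar-vector multiplication by the corresponding eigenvalue.
for lam, x in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ x, lam * x)
```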

Vector-Scalar Multiplication is Commutative#

In linear algebra, the commutative property often associated with scalar operations also applies to vector-scalar multiplication, but with a nuanced understanding. Specifically, for any vector \(\mathbf{v} \in \mathbb{R}^D\) and scalar \(\lambda \in \mathbb{R}\), the operation of multiplying the vector by the scalar exhibits a form of commutativity. This can be expressed as:

\[\begin{split} \lambda \mathbf{v} = \begin{bmatrix} \lambda v_1 \\ \lambda v_2 \\ \vdots \\ \lambda v_D \end{bmatrix} = \begin{bmatrix} v_1 \lambda \\ v_2 \lambda \\ \vdots \\ v_D \lambda \end{bmatrix} = \mathbf{v} \lambda. \end{split}\]

Here, \(\lambda \mathbf{v}\) and \(\mathbf{v} \lambda\) are mathematically equivalent, indicating that the scalar can multiply the vector from either the left or the right, yielding the same result. This property is particularly important because it simplifies the manipulation and transformation of vectors in various linear algebra applications, such as in matrix-vector multiplication and transformations.

It’s essential to note that while the scalar multiplication operation is commutative, the multiplication of vectors (if defined, such as in dot or cross products) does not necessarily follow the commutative property. Thus, the commutativity in vector-scalar multiplication is a specific case, pertaining only to the interaction between a scalar and a vector, not between two vectors.
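The two-sided commutativity claim admits a one-line numerical confirmation, sketched with NumPy (an assumed dependency):

```python
import numpy as np

lam = 3.0
v = np.array([1.0, 2.0, 3.0])

# The scalar can multiply the vector from either side with the same result.
assert np.array_equal(lam * v, v * lam)
```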

References and Further Readings#

  • Axler, S. (1997). Linear Algebra Done Right. Springer New York. (Chapter 1.A \(\mathbb{R}^N\) and \(\mathbb{C}^N\)).