
Section 11.3 Concepts

Subsection 11.3.1 Vectors

A directed line segment (or arrow) could be thought of dynamically as describing a change in position, from the initial point to the terminal point. A two-dimensional vector in the plane or a three-dimensional vector in space captures just the change part of “change in position,” leaving the position part (that is, the initial and terminal points) unspecified. For example, in the plane, the instructions “move two units right and three units down” describe a way to change positions, but don’t actually specify from where or to where the change in position is occurring. So a vector corresponds to an infinite number of directed line segments, where each of these directed line segments has a different initial point but all of them require the same “change” to change positions from initial point to terminal point. Continuing our example, every change in position between some initial and terminal points in the plane that requires moving two units right and three units down can be represented by the same vector.
Example of a vector associated to several equivalent directed line segments in the plane.
A diagram consisting of four right triangles in the \(xy\)-plane, each with a directed line segment for the hypotenuse. The triangles are all identical, both in that they are all congruent but also that they are similarly oriented in the plane. Each triangle can identically be described as follows. A directed line segment extends downwards and rightwards from an initial point \(P_i\) to a terminal point \(Q_i\) (with \(i = 1, 2, 3, 4\) indexing the triangles), representing a vector labelled \(\uvec{u}\text{.}\) (This vector label is identical, without indexing subscript, for each triangle.) A dashed line segment extends horizontally \(2\) units rightwards of \(P_i\text{,}\) and from there another dashed line segment extends vertically \(3\) units downwards where it precisely meets \(Q_i\) to complete the triangle. These two line segments necessarily meet at a right angle.
Figure 11.3.1. A specific vector can be associated to many equivalent directed line segments.
We describe a two-dimensional vector in the plane with a pair of numerical components: the change in \(x\) and the change in \(y\text{.}\) If \(P(x_1,y_1)\) and \(Q(x_2,y_2)\) are points in the plane, then the vector associated to the directed line segment \(\abray{PQ}\) has components \(\uvec{v} = (\change{x},\change{y}) = (x_2 - x_1, y_2 - y_1)\text{.}\) (A three-dimensional vector in space requires a third component as well: the change in \(z\text{.}\))
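For instance (with points chosen here purely for illustration), taking \(P(1,4)\) and \(Q(3,1)\) gives
\begin{equation*} \uvec{v} = (\change{x},\change{y}) = (3 - 1, 1 - 4) = (2,-3) \text{,} \end{equation*}
which is exactly the “two units right and three units down” example above.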
Notice what happens when we use the origin \(O(0,0)\) as the initial point and an arbitrary point \(R(x,y)\) as the terminal point in a directed line segment: the vector associated to \(\abray{OR}\) is then \((x-0,y-0) = (x,y)\text{.}\) So when the initial point is the origin, the components of the vector are exactly the coordinates of the terminal point. In Discovery 11.1 we saw that this works in reverse as well. That is, if we have a vector \(\uvec{v} = (v_1,v_2)\text{,}\) then we could consider the numbers \(v_1,v_2\) as coordinates of a point \(R(x,y)\) with \(x=v_1\) and \(y=v_2\text{,}\) and then the vector associated to \(\abray{OR}\) is just \(\uvec{v}\) again. In this way, every vector corresponds to one unique directed line segment with initial point at the origin, and so that is sort of the “natural position” of the vector as a directed line segment. But we will find that it is often convenient to consider other directed line segments that correspond to a particular vector.
We live in a three-dimensional world (or, at least, it appears that way to us), and our little human brains cannot visualize points or arrows in four- or higher-dimensional spaces. However, we can still describe such imaginary objects using our experience from two- and three-dimensional points and vectors. For example, if we had two points \(P\) and \(Q\) in an imaginary four-dimensional space, they would each require four coordinates, so we would describe them as \(P(w_1,x_1,y_1,z_1)\) and \(Q(w_2,x_2,y_2,z_2)\text{.}\) Then the vector corresponding to the directed line segment \(\abray{PQ}\) would have four components and we would compute it as \(\uvec{v} = (\change{w}, \change{x}, \change{y}, \change{z}) = (w_2 - w_1, x_2 - x_1, y_2 - y_1, z_2 - z_1)\text{.}\)
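As an illustration (with coordinates invented just for this example), for the points \(P(1,0,2,-1)\) and \(Q(3,1,0,2)\) in four-dimensional space we would compute
\begin{equation*} \uvec{v} = (3 - 1, 1 - 0, 0 - 2, 2 - (-1)) = (2, 1, -2, 3) \text{.} \end{equation*}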

Subsection 11.3.2 Vector addition

A vector triangle with the initial point of the second vector placed at the terminal point of the first, and an unknown third vector extending from the initial point of the first to the terminal point of the second completing the triangle.
A diagram of three vectors in a triangular configuration. Three points are plotted: \(P\) in the lower-left of the diagram, \(Q\) at the top of the diagram, just left of centre, and \(R\) at the far right of the diagram, slightly above centre. Between these points are drawn three directed line segments: \(\abray{PQ}\) representing a vector labelled \(\uvec{u}\text{,}\) \(\abray{QR}\) representing a vector labelled \(\uvec{v}\text{,}\) and \(\abray{PR}\) drawn with a dashed-line shaft and labelled with a question mark.
Figure 11.3.2. Traversing the plane by “chaining” vectors.
A vector describes a change in position. If we chain two changes in position together, by making the initial point of the second vector the same as the terminal point of the first vector, then we could consider the overall change in position.
If these are points and vectors in the plane, then clearly the change in position from \(P\) to \(R\) will be described by the total net change in \(x\) and the total net change in \(y\text{,}\) as we discovered in Discovery 11.2. So, in Figure 11.3.2, we obtain the components for the vector corresponding to \(\abray{PR}\) by adding corresponding components of \(\uvec{u}\) and \(\uvec{v}\text{.}\) That is, if \(\uvec{u} = (u_1,u_2)\) and \(\uvec{v} = (v_1,v_2)\text{,}\) then the components of the dashed arrow labelled with a question mark are \((u_1 + v_1, u_2 + v_2)\text{,}\) as illustrated in Figure 11.3.3. For obvious reasons, we call this the sum of \(\uvec{u}\) and \(\uvec{v}\text{.}\)
Diagram of a vector-addition triangle.
A diagram of three vectors arranged in a triangular configuration to illustrate vector addition. The diagram of Figure 11.3.2 is reproduced, except that \(\abray{PR}\) is drawn with a solid-line shaft and represents the sum vector \(\uvec{u} + \uvec{v}\text{.}\)
Also, below the vector triangle appears a horizontal bar extending from the horizontal position of \(P\) to the horizontal position of \(Q\) and labelled \(u_1\text{,}\) and adjacent to it appears another bar extending from the horizontal position of \(Q\) to the horizontal position of \(R\) and labelled \(v_1\text{.}\) Below these is a third horizontal bar extending from the horizontal position of \(P\) all the way to the horizontal position of \(R\) and labelled \(u_1 + v_1\text{.}\)
Similarly, to the right of the vector triangle appears a vertical bar extending from the vertical position of \(Q\) to the vertical position of \(R\) and labelled \(\abs{v_2}\text{,}\) and adjacent to it appears another bar extending from the vertical position of \(R\) to the vertical position of \(P\) and labelled \(u_2 + v_2\text{.}\) Further to the right is a third vertical bar extending from the vertical position of \(Q\) all the way down to the vertical position of \(P\) and labelled \(u_2\text{.}\)
Figure 11.3.3. A vector-addition triangle. Note that in this configuration, \(v_2\) would be a negative value.
Diagram of the commutativity of vector sums, consisting three arrows in a triangular vector sum configuration, along with a mirror image vector sum triangle, with the two sum vectors aligned together as a common diagonal in a parallelogram formed by the four outside vectors.
A diagram of six vectors arranged into a pair of mirror-image vector-addition triangles. Four points \(P, Q, R, Q'\) are plotted so that they form the vertices of a parallelogram \(\abcdquad{PQRQ'}\text{.}\) Six directed line segments are drawn between these points:
  • Parallel segments \(\abray{PQ}\) and \(\abray{Q'R}\) forming opposite sides of parallelogram \(\abcdquad{PQRQ'}\text{,}\) both representing the same vector \(\uvec{u}\text{.}\)
  • Parallel segments \(\abray{PQ'}\) and \(\abray{QR}\) forming opposite sides of parallelogram \(\abcdquad{PQRQ'}\text{,}\) both representing the same vector \(\uvec{v}\text{.}\)
  • The directed line segment \(\abray{PR}\text{,}\) a diagonal in parallelogram \(\abcdquad{PQRQ'}\text{,}\) is drawn twice: one copy representing the sum vector \(\uvec{u} + \uvec{v}\) as part of the vector-addition triangle \(\abctriangle{PQR}\text{,}\) and the other copy representing the sum vector \(\uvec{v} + \uvec{u}\) as part of the vector-addition triangle \(\abctriangle{PQ'R}\text{.}\)
Figure 11.3.4. The parallelogram rule for vector addition.
In Discovery 11.2 we also considered the result of interchanging the order of a “chained” pair of vectors. In Figure 11.3.4, the vector for \(\abray{Q'R}\) is the same as that for \(\abray{PQ}\text{,}\) because they represent the same change in position, just with different initial and terminal points. Accordingly, we have labelled both directed line segments with \(\uvec{u}\text{.}\) And the same applies to \(\uvec{v}\) with respect to \(\abray{PQ'}\) and \(\abray{QR}\text{.}\)
The diagram in Figure 11.3.4 illustrates that if we start at \(P\) and chain together the change-of-position instructions contained in vectors \(\uvec{u}\) and \(\uvec{v}\text{,}\) the order in which we do so does not matter: the overall change in position will be from \(P\) to \(R\text{.}\) Thus, the order of vector addition doesn’t matter. Algebraically, we could have predicted that this would be the case because it doesn’t matter in what order we add components: the identities \(u_1 + v_1 = v_1 + u_1\) and \(u_2 + v_2 = v_2 + u_2\) are both valid. But it’s useful conceptually to have these geometric pictures of vector addition because, whether you believe this about yourself or not, humans are spatial thinkers. And the geometric version of the vector identity \(\uvec{v} + \uvec{u} = \uvec{u} + \uvec{v}\) makes a pretty picture of a parallelogram, so we call it the parallelogram rule.
For three-dimensional vectors, we can imagine diagrams like the ones in Figure 11.3.3 and Figure 11.3.4 floating in space, and the parallelogram rule would hold there as well. In higher dimensions, we cannot draw pictures, but we could imagine that they are similar. At any rate, the algebra of vector addition is the same in any dimension: for \(\uvec{u} = (u_1,u_2,\dotsc,u_n)\) and \(\uvec{v} = (v_1,v_2,\dotsc,v_n)\) in \(\R^n\text{,}\) we have
\begin{equation*} \uvec{u} + \uvec{v} = (u_1+v_1,u_2+v_2,\dotsc,u_n+v_n) \text{.} \end{equation*}
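For example (with components chosen just for illustration), for \(\uvec{u} = (1,-2,4)\) and \(\uvec{v} = (3,5,-1)\) in \(\R^3\) we have
\begin{equation*} \uvec{u} + \uvec{v} = (1 + 3, -2 + 5, 4 + (-1)) = (4,3,3) \text{.} \end{equation*}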

Subsection 11.3.3 The zero vector

There is one special change in position that is unlike any other — the one where the initial and terminal points are the same, so that there is actually no change in position. In two dimensions, this means there is no change in either \(x\) or \(y\text{,}\) so the components are \((0,0)\text{.}\) Similarly, in any number of dimensions we have the zero vector \(\zerovec = (0,0,\dotsc,0)\text{.}\)
As we explored in Discovery 11.3, if we chain together a vector \(\uvec{v}\text{,}\) representing some change in position, with the zero vector, which represents no change, then the net result is just the change of \(\uvec{v}\text{.}\) That is, \(\uvec{v} + \zerovec = \uvec{v}\text{,}\) and also \(\zerovec + \uvec{v} = \uvec{v}\text{.}\)
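In components (using an example vector chosen for illustration), with \(\uvec{v} = (4,-7)\) in the plane we have
\begin{equation*} \uvec{v} + \zerovec = (4 + 0, -7 + 0) = (4,-7) = \uvec{v} \text{.} \end{equation*}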

Subsection 11.3.4 Vector negatives and vector subtraction

Subsubsection 11.3.4.1 Negatives

If we move from \(P\) to \(Q\text{,}\) and from there move from \(Q\) back to \(P\text{,}\) the net result is no change in position, which is represented by the zero vector. This means if we add the vector corresponding to \(\abray{PQ}\) to the one corresponding to \(\abray{QP}\text{,}\) the result is \(\zerovec\text{.}\) So if we label the vector for \(\abray{PQ}\) as \(\uvec{v}\text{,}\) it seems reasonable to label the vector for \(\abray{QP}\) as \(-\uvec{v}\text{,}\) the negative of \(\uvec{v}\text{,}\) so that we have \(\uvec{v} + (-\uvec{v}) = \zerovec\text{.}\) (See Figure 11.3.5.(a).)
If we are to have \(\uvec{v} + (-\uvec{v}) = \zerovec\text{,}\) and the components of \(\zerovec\) are all \(0\text{,}\) then since we add vectors by adding corresponding components, the components of \(-\uvec{v}\) must be the negatives of the components of \(\uvec{v}\text{.}\) For example, if \(\uvec{v} = (v_1,v_2)\) in the plane, then \(-\uvec{v} = (-v_1,-v_2)\text{.}\) In any dimension, we have
\begin{align*} \uvec{v} \amp= (v_1,v_2,\dotsc,v_n) \amp \amp\implies \amp -\uvec{v} \amp= (-v_1,-v_2,\dotsc,-v_n)\text{.} \end{align*}
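For instance (components chosen for illustration), if \(\uvec{v} = (3,-1,2)\) then \(-\uvec{v} = (-3,1,-2)\text{,}\) and indeed
\begin{equation*} \uvec{v} + (-\uvec{v}) = (3 + (-3), -1 + 1, 2 + (-2)) = (0,0,0) = \zerovec \text{.} \end{equation*}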
This relationship between the components of \(\uvec{v}\) and \(-\uvec{v}\) will later lead to an identity between the negative and a certain scalar multiple of a vector in Subsection 11.3.5.
Diagram of a vector and its negative arranged in a vector-addition triangle.
A diagram illustrating a negative vector as the vector that sums with the original to the zero vector. Points \(P\) and \(Q\) are plotted, and directed line segment \(\abray{PQ}\) is drawn, representing a vector labelled \(\uvec{v}\text{.}\) The directed line segment \(\abray{QP}\) is drawn in parallel, representing the negative vector \(- \uvec{v}\text{.}\) The point \(P\text{,}\) as both the initial and terminal point of the chain, can be thought of as representing the zero vector, completing the vector addition “triangle” \(\abctriangle{PQP}\text{.}\)
(a) A vector and its negative summing to the zero vector.
Diagram of a vector arrow with the opposite vector extending in the reverse direction from a shared initial point.
A diagram illustrating a negative vector as an oppositely directed line segment. Collinear points \(R', O, R\) are plotted so that \(O\) is the midpoint between \(R\) and \(R'\text{.}\) Directed line segments \(\abray{OR}\) and \(\abray{OR'}\) are drawn, with \(\abray{OR}\) representing a vector \(\uvec{v}\) and \(\abray{OR'}\) representing the negative vector \(- \uvec{v}\text{.}\)
(b) A negative vector as an “opposite”.
Figure 11.3.5. Two visualizations of a negative vector.
Remembering that the “natural” position for a vector is with its tail at the origin, it’s useful to visualize negatives as in Figure 11.3.5.(b). That is, the negative of a vector will change positions by the same distance but in the opposite direction.

Subsubsection 11.3.4.2 Subtraction

Diagram of vector subtraction, consisting of three arrows arranged in a vector-addition triangle, along with a pair of oppositely directed arrows emanating from one of the vertices of the triangle.
A diagram illustrating vector subtraction as vector addition of a vector and the negative of another. Collinear points \(Q', O, Q\) are plotted so that \(O\) is the midpoint between \(Q\) and \(Q'\text{.}\) Directed line segments \(\abray{OQ}\) and \(\abray{OQ'}\) are drawn, with \(\abray{OQ}\) representing a vector \(\uvec{v}\) and \(\abray{OQ'}\) representing the negative vector \(- \uvec{v}\text{.}\) External to the line through those three points are plotted two more points \(P\) and \(R\) so that \(\abcdquad{O P R Q'}\) is a parallelogram. The directed line segment \(\abray{PR}\) is drawn, again representing the negative vector \(- \uvec{v}\text{,}\) as it is the side of parallelogram \(\abcdquad{O P R Q'}\) opposite \(\abray{OQ'}\text{.}\) The directed line segment \(\abray{OP}\) is also drawn, forming another side of parallelogram \(\abcdquad{O P R Q'}\) and representing a vector labelled \(\uvec{u}\text{.}\) Finally, the directed line segment \(\abray{OR}\) is drawn, forming a diagonal of parallelogram \(\abcdquad{O P R Q'}\) and completing the vector-addition triangle \(\abctriangle{OPR}\text{.}\) This final directed line segment represents the difference vector \(\uvec{u} - \uvec{v}\text{.}\)
Figure 11.3.6. Vector subtraction via vector addition with a negative.
To subtract vectors, we add to a vector the negative of another. In Figure 11.3.6, the vector arrows forming triangle \(\abctriangle{OPR}\) represent a vector-addition triangle, so that the vector labelled \(\uvec{u} - \uvec{v}\) is obtained by adding \(\uvec{u}\) and \(-\uvec{v}\text{.}\)
As explored in Discovery 11.5, we get an interesting pattern if we draw in another copy of the vector labelled \(\uvec{u} - \uvec{v}\) with its initial point at \(Q\text{.}\) In Figure 11.3.7.(a), triangle \(\abctriangle{QP'P}\) creates the vector addition pattern \(\uvec{u} + (-\uvec{v}) = \uvec{u} - \uvec{v}\text{.}\) But notice that \(\abctriangle{OQP}\) creates a vector addition pattern starting at \(O\) and ending at \(P\text{,}\) by \(\uvec{v} + (\uvec{u} - \uvec{v}) = \uvec{u}\text{.}\) So we can think of a difference of two vectors as a vector that runs between the terminal points of the two vectors in the difference when they share the same initial point. Algebraically, we can think of the \(\uvec{v}\) and \(-\uvec{v}\) cancelling in the expression \(\uvec{v} + (\uvec{u} - \uvec{v})\text{,}\) leaving just \(\uvec{u}\text{.}\)
A parallelogram split into two triangles by a diagonal representing a difference vector, illustrating both vector subtraction as vector addition with a negative vector in one triangle, and vector subtraction as displacement between terminal points in the other triangle.
A diagram using a parallelogram to simultaneously illustrate vector subtraction interpreted as vector addition with a negative vector and as the displacement between the two terminal points of the vectors in the subtraction operation. Four points \(O,P,P',Q\) are plotted so that \(\abcdquad{OPP'Q}\) is a parallelogram. Parallel directed line segments \(\abray{OP}\) and \(\abray{QP'}\) are drawn, forming two sides of the parallelogram, and both labelled as equivalently representing the vector \(\uvec{u}\text{.}\) Parallel directed line segments \(\abray{OQ}\) and \(\abray{P'P}\) are also drawn, forming the remaining two sides of the parallelogram, but with the former labelled as representing vector \(\uvec{v}\) and the latter labelled as representing the negative vector \(- \uvec{v}\text{.}\) Finally, the directed line segment \(\abray{QP}\) is drawn, forming a diagonal of the parallelogram, and labelled as representing the difference vector \(\uvec{u} - \uvec{v}\text{.}\)
(a)
A parallelogram split into two triangles by a diagonal representing the oppositely-oriented difference vector as in a previous diagram.
A diagram using a parallelogram to simultaneously illustrate vector subtraction interpreted as vector addition with a negative vector and as the displacement between the two terminal points of the vectors in the subtraction operation. Four points \(O,P,P',Q\) are plotted so that \(\abcdquad{OPP'Q}\) is a parallelogram. Parallel directed line segments \(\abray{OQ}\) and \(\abray{PP'}\) are drawn, forming two sides of the parallelogram, and both labelled as equivalently representing the vector \(\uvec{v}\text{.}\) Parallel directed line segments \(\abray{OP}\) and \(\abray{P'Q}\) are also drawn, forming the remaining two sides of the parallelogram, but with the former labelled as representing vector \(\uvec{u}\) and the latter labelled as representing the negative vector \(- \uvec{u}\text{.}\) Finally, the directed line segment \(\abray{PQ}\) is drawn, forming a diagonal of the parallelogram, and labelled as representing the difference vector \(\uvec{v} - \uvec{u}\text{.}\)
(b)
Figure 11.3.7. Two vector-subtraction triangles, one each for the different orders of subtraction. Vector subtraction can be interpreted as the displacement between the two terminal points of the vectors in the subtraction operation.
Of course, there are two vectors that run between the terminal points of \(\uvec{u}\) and \(\uvec{v}\text{,}\) namely \(\uvec{u} - \uvec{v}\) and its negative. In Figure 11.3.7.(b), \(\abctriangle{PP'Q}\) creates a vector addition pattern starting at \(P\) and ending at \(Q\text{,}\) so that \(\uvec{v} + (-\uvec{u}) = \uvec{v} - \uvec{u}\text{.}\) But also \(\abctriangle{OPQ}\) creates a vector addition pattern starting at \(O\) and ending at \(Q\text{,}\) so that \(\uvec{u} + (\uvec{v} - \uvec{u}) = \uvec{v}\text{.}\) And finally, the fact that \(\uvec{u} - \uvec{v}\) and \(\uvec{v} - \uvec{u}\) both run between \(P\) and \(Q\text{,}\) but in opposite directions, verifies geometrically that \(-(\uvec{u} - \uvec{v}) = \uvec{v} - \uvec{u}\text{,}\) as we would expect algebraically.
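In components, subtraction works entry by entry, just like addition. For example (with vectors chosen for illustration), if \(\uvec{u} = (5,2)\) and \(\uvec{v} = (3,-1)\) then
\begin{align*} \uvec{u} - \uvec{v} \amp= (5 - 3, 2 - (-1)) = (2,3), \amp \uvec{v} - \uvec{u} \amp= (3 - 5, -1 - 2) = (-2,-3) = -(\uvec{u} - \uvec{v}), \end{align*}
confirming algebraically that the two difference vectors are negatives of each other.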

Subsection 11.3.5 Scalar multiplication

Geometrically, when we scalar multiply a vector we “stretch” or scale its length by the scale factor. (If this scale factor is negative, then we also flip the vector around in the opposite direction.) Here are some examples.
Diagram of a vector and several example scalar multiples of it.
A point is plotted near the centre of the diagram, and from it emanate several parallel directed line segments:
  • A directed line segment representing a vector labelled \(\uvec{v}\text{.}\)
  • A directed line segment in the same direction as \(\uvec{v}\) but twice as long, representing the scalar multiple vector \(2 \uvec{v}\text{.}\)
  • A directed line segment in the same direction as \(\uvec{v}\) but half as long, representing the scalar multiple vector \(\frac{1}{2} \uvec{v}\text{.}\)
  • A directed line segment the same length as \(\uvec{v}\) but in the opposite direction, representing the scalar multiple vector \((-1) \uvec{v}\text{.}\)
  • A directed line segment in the opposite direction to \(\uvec{v}\) and close to one-and-a-half times as long, representing the scalar multiple vector \((-\sqrt{2}) \uvec{v}\text{.}\)
Figure 11.3.8. Examples of scalar multiples of a vector.
Notice how each of the vectors in Figure 11.3.8 either points in the same direction as or in the opposite direction to \(\uvec{v}\text{.}\) In particular, they are all parallel to one another. This happens precisely when the vectors are scalar multiples of one another.
Thinking of the vectors in Figure 11.3.8 as vectors in the plane, if we scale \(\uvec{v}\) by a factor of \(2\text{,}\) then our knowledge of similar triangles tells us that the change in both \(x\) and \(y\) must be double. That is, for \(\uvec{v} = (v_1,v_2)\) we should have \(2\uvec{v} = (2 v_1, 2 v_2)\text{.}\)
Diagram illustrating the scaling of the components of a two-dimensional vector when the vector is scaled by 2.
A directed line segment representing a vector labelled \(\uvec{v}\) runs upwards and rightwards from a point at the left edge of the diagram, just below centre, to a point just above and left of the centre of the diagram. A second directed line segment representing the scalar multiple vector \(2 \uvec{v}\) runs parallel to and from the same initial point as \(\uvec{v}\text{,}\) but is twice as long, reaching to the top of the diagram.
A dashed horizontal line runs from the initial point of \(\uvec{v}\) to the point directly below the terminal point of the scalar multiple \(2 \uvec{v}\text{.}\) A dashed vertical line runs from each of the terminal points of \(\uvec{v}\) and \(2 \uvec{v}\) down to this horizontal line, creating two right triangles, one inside the other, with the vectors \(\uvec{v}\) and \(2 \uvec{v}\) as the hypotenuses. The horizontal and vertical legs of the smaller right triangle (with \(\uvec{v}\) as hypotenuse) are labelled \(\change{x}\) and \(\change{y}\text{,}\) respectively, and the horizontal and vertical legs of the larger right triangle (with \(2 \uvec{v}\) as hypotenuse) are labelled \(2 \change{x}\) and \(2 \change{y}\text{,}\) respectively.
Figure 11.3.9. Geometric confirmation that the components of a scalar multiple vector are just the scaled components of the original vector, in the case of a plane vector and a scale factor of \(2\text{.}\)
This relationship between original vector \(\uvec{v}\) and scaled vector \(k\uvec{v}\) holds in general, in any dimension, and even for negative \(k\text{:}\)
\begin{equation*} \uvec{v} = (v_1,v_2,\dotsc,v_n) \qquad\implies\qquad k \uvec{v} = (k v_1, k v_2, \dotsc, k v_n)\text{.} \end{equation*}
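For example (with scale factor and components chosen for illustration), for \(k = -3\) and \(\uvec{v} = (2,-1,4)\) we have
\begin{equation*} (-3) \uvec{v} = ((-3)(2), (-3)(-1), (-3)(4)) = (-6, 3, -12) \text{,} \end{equation*}
a vector three times as long as \(\uvec{v}\) and pointing in the opposite direction.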
In the case that \(k = -1\text{,}\) we obtain the identity \((-1)\uvec{v} = -\uvec{v}\text{,}\) as promised earlier.

Remark 11.3.10.

It may seem redundant to write \((-1)\uvec{v} = -\uvec{v}\text{.}\) Don’t both sides mean the same thing? In terms of the effect on components of \(\uvec{v}\text{,}\) yes they are the same. However, when we explore abstract vectors in Chapter 15, we won’t have components or the geometric notion of “opposite direction” as means of seeing this equality, and so there will initially be a subtle difference between the idea of a vector having an additive negative (so that \(\uvec{v} + (-\uvec{v}) = \zerovec\)) and the operation of scalar multiplying a vector by the particular scalar \(-1\text{.}\)
We can connect scalar multiplication to addition, as we explored in Discovery 11.6. If we add a vector to itself, then the sum vector will be twice as long as the original, so that \(\uvec{v} + \uvec{v} = 2 \uvec{v}\text{.}\)
Diagram illustrating how adding a vector to itself relates scalar multiplication to vector addition.
A diagram of a vector added to itself. Collinear points \(P, Q, R\) are plotted so that \(Q\) is the midpoint of \(P\) and \(R\text{.}\) Directed line segments \(\abray{PQ}\) and \(\abray{QR}\) are plotted, both representing the same vector \(\uvec{v}\text{.}\) The directed line segment \(\abray{PR}\) is drawn in parallel, completing the vector addition “triangle” \(\abctriangle{PQR}\) (which is not actually a triangle as the three points are collinear). This third directed line segment represents both the sum vector \(\uvec{v} + \uvec{v}\) and the scalar multiple vector \(2 \uvec{v}\text{.}\)
Figure 11.3.11. Adding a vector to itself produces the same result as multiplying the vector by scale factor \(2\text{.}\)
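In components (with an example vector chosen for illustration), if \(\uvec{v} = (3,1)\) then
\begin{equation*} \uvec{v} + \uvec{v} = (3 + 3, 1 + 1) = (6,2) = 2 \uvec{v} \text{,} \end{equation*}
in agreement with the general formula for scalar multiplication.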

Subsection 11.3.6 Vector algebra

We have already discovered a few rules of vector algebra, such as
\begin{align*} \uvec{v} + \uvec{u} \amp= \uvec{u} + \uvec{v}, \amp -(\uvec{v} - \uvec{u}) \amp= \uvec{u} - \uvec{v}, \amp \uvec{v} + \uvec{v} \amp= 2\uvec{v}, \amp (-1)\uvec{v} \amp= -\uvec{v}. \end{align*}
In Discovery 11.7, we explored a version of the distributive rule \(k (\uvec{u} + \uvec{v}) = k \uvec{u} + k \uvec{v}\) in the case \(k=2\text{.}\)
Diagram illustrating that scalar multiplication distributes over vector addition in the case of a scale factor of 2, via two vector-addition triangles and a parallelogram sitting inside a large vector-addition triangle.
Points \(P, Q', R'\) are arranged in a triangular configuration. Points \(Q, S, R\) are plotted so that \(Q\) is the midpoint between \(P\) and \(Q'\text{,}\) \(S\) is the midpoint between \(Q'\) and \(R'\text{,}\) and \(R\) is the midpoint between \(P\) and \(R'\text{.}\) Directed line segments \(\abray{PQ}\text{,}\) \(\abray{QR}\text{,}\) and \(\abray{PR}\) are drawn to form a vector-addition triangle, with \(\abray{PQ}\) representing a vector labelled \(\uvec{u}\text{,}\) \(\abray{QR}\) representing a vector labelled \(\uvec{v}\text{,}\) and \(\abray{PR}\) representing the sum vector \(\uvec{u} + \uvec{v}\text{.}\) An identical vector-addition triangle is formed by directed line segments \(\abray{RS}\text{,}\) \(\abray{SR'}\text{,}\) and \(\abray{RR'}\text{,}\) with \(\abray{RS}\) again representing \(\uvec{u}\text{,}\) \(\abray{SR'}\) again representing \(\uvec{v}\text{,}\) and \(\abray{RR'}\) again representing the sum vector \(\uvec{u} + \uvec{v}\text{.}\) Directed line segments \(\abray{QQ'}\) and \(\abray{Q'S}\) are drawn to complete the parallelogram \(\abcdquad{QQ'SR}\text{,}\) with \(\abray{QQ'}\) again representing \(\uvec{u}\) and \(\abray{Q'S}\) again representing \(\uvec{v}\text{.}\)
Directed line segments \(\abray{PQ'}\text{,}\) \(\abray{Q'R'}\text{,}\) and \(\abray{PR'}\) are drawn to represent the scalar multiple vectors \(2 \uvec{u}\text{,}\) \(2 \uvec{v}\text{,}\) and \(2 (\uvec{u} + \uvec{v})\text{:}\)
  • Points \(P, Q, Q'\) are collinear, with \(Q\) the midpoint between the other two points, and directed segments \(\abray{PQ}\) and \(\abray{QQ'}\) both represent the same vector \(\uvec{u}\text{.}\) Hence directed segment \(\abray{PQ'}\) represents the scalar multiple vector \(2 \uvec{u}\text{.}\)
  • Points \(Q', S, R'\) are collinear, with \(S\) the midpoint between the other two points, and directed segments \(\abray{Q'S}\) and \(\abray{SR'}\) both represent the same vector \(\uvec{v}\text{.}\) Hence directed segment \(\abray{Q'R'}\) represents the scalar multiple vector \(2 \uvec{v}\text{.}\)
  • Points \(P, R, R'\) are collinear, with \(R\) the midpoint between the other two points, and directed segments \(\abray{PR}\) and \(\abray{RR'}\) both represent the sum vector \(\uvec{u} + \uvec{v}\text{.}\) Hence directed segment \(\abray{PR'}\) represents the scalar multiple vector \(2 (\uvec{u} + \uvec{v})\text{.}\)
With \(\abray{PQ'}\) and \(\abray{Q'R'}\) representing vectors \(2 \uvec{u}\) and \(2 \uvec{v}\text{,}\) triangle \(\abctriangle{PQ'R'}\) now forms a vector-addition triangle, so that directed segment \(\abray{PR'}\) represents sum vector \(2 \uvec{u} + 2 \uvec{v}\) (as well as representing \(2 (\uvec{u} + \uvec{v})\)).
Figure 11.3.12. Geometric confirmation that scalar multiplication distributes over vector addition, in the case of scale factor \(2\text{.}\)
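We can also check the distributive rule in components (with example vectors chosen for illustration): for \(k = 2\text{,}\) \(\uvec{u} = (1,3)\text{,}\) and \(\uvec{v} = (4,-1)\text{,}\)
\begin{align*} 2 (\uvec{u} + \uvec{v}) \amp= 2 (5,2) = (10,4), \amp 2 \uvec{u} + 2 \uvec{v} \amp= (2,6) + (8,-2) = (10,4), \end{align*}
and the two results agree.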
We will provide more rules of vector algebra as Proposition 11.5.1 in Subsection 11.5.1. In Discovery 11.12, we decided that the algebra of vectors is the same as the algebra of column matrices (which we have already been referring to as column vectors), so we should be able to anticipate a number of the vector algebra rules that will appear in that proposition.


Warning 11.3.13.

There is no multiplication operation for vectors!
Algebraically, vectors in \(\R^n\) are the same as column matrices, and you cannot multiply two column matrices together because their sizes do not match up (except in \(\R^1\text{,}\) but let’s ignore that for now). This also means that you cannot square a vector, you cannot square-root a vector, you cannot invert a vector, and you cannot divide by a vector. Do not try to use any of these operations in vector algebra! In Chapters 12–13, we will encounter some operations tied to the geometry of vectors that we will call “vector products” and for which we will use multiplication-like notation, but they will be for very specific geometric purposes and do not really correspond to our idea of multiplication in the algebra of numbers.

Subsection 11.3.7 The standard basis vectors

In Discovery 11.8, we encountered two very special vectors in the plane, \(\uvec{e}_1 = (1,0)\) and \(\uvec{e}_2 = (0,1)\text{.}\) These two vectors could be considered the fundamental changes in position in the plane — \(\uvec{e}_1\) represents a change by one unit right, and \(\uvec{e}_2\) represents a change by one unit up.
Any change in position can be built out of these two fundamental changes in position. Using the example in Discovery 11.8, the vector \(\uvec{v} = (5,2)\) represents a change in position by \(5\) units right and \(2\) units up. We can achieve the “\(5\) units right” part with \(5 \uvec{e}_1 = (5,0)\) and the “\(2\) units up part” with \(2 \uvec{e}_2 = (0,2)\text{.}\) To get the total change in position represented by \(\uvec{v}\text{,}\) we can combine these two building blocks in the linear combination \(\uvec{v} = 5 \uvec{e}_1 + 2 \uvec{e}_2\text{.}\)
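Checking this decomposition in components confirms the pattern:
\begin{equation*} 5 \uvec{e}_1 + 2 \uvec{e}_2 = (5,0) + (0,2) = (5,2) = \uvec{v} \text{.} \end{equation*}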
Diagram of the standard two-dimensional basis vectors, drawn as directed line segments in the plane with initial points at the origin.
The standard basis vectors \(\uvec{e}_1 = (1,0)\) and \(\uvec{e}_2 = (0,1)\) are drawn as directed line segments on a set of \(xy\)-axes, with initial points at the origin and terminal points at \((1,0)\) and \((0,1)\text{,}\) respectively, so that \(\uvec{e}_1\) lies along the \(x\)-axis and \(\uvec{e}_2\) lies along the \(y\)-axis.
Figure 11.3.14. The standard basis vectors in \(\R^2\text{.}\)
Diagram illustrating a vector decomposition as a linear combination of the standard two-dimensional basis vectors. A right triangle is drawn with a directed line segment (representing a vector) as the hypotenuse, and head-to-tail chains of directed line segments as the legs.
Diagram illustrating a vector decomposition as a linear combination of standard basis vectors \(\uvec{e}_1\) and \(\uvec{e}_2\) in \(\R^2\text{.}\) A right triangle appears, with a slant hypotenuse and the right angle between a horizontal leg and a vertical leg. The hypotenuse is a directed line segment representing the vector \(\uvec{v} = (5,2)\text{,}\) so that its terminal point is \(5\) units to the right and \(2\) units up from its initial point. The horizontal leg is a chain of five directed line segments, each representing the standard basis vector \(\uvec{e}_1\text{,}\) where the first in the chain shares its initial point with \(\uvec{v}\text{,}\) and each subsequent segment in the chain has its initial point at the terminal point of the previous. Similarly, the vertical leg is a chain of two directed line segments, each representing the standard basis vector \(\uvec{e}_2\text{,}\) where the first has its initial point at the terminal point of the last segment in the horizontal chain, and the second has its initial point at the terminal point of the first and shares its terminal point with \(\uvec{v}\text{.}\)
Figure 11.3.15. An example of decomposing a vector in \(\R^2\) as a linear combination of the standard basis vectors.
As you can imagine, every vector in the plane can be decomposed into a linear combination of \(\uvec{e}_1\) and \(\uvec{e}_2\) in this manner: for \(\uvec{v} = (v_1,v_2)\text{,}\) we have \(\uvec{v} = v_1\,\uvec{e}_1 + v_2\,\uvec{e}_2\text{.}\) For this reason, the two vectors \(\uvec{e}_1,\uvec{e}_2\) together are called the standard basis vectors in \(\R^2\text{,}\) as they form a basis from which every other vector can be constructed. To use an analogy with chemistry, these two vectors are the basic atoms of \(\R^2\text{,}\) and every other vector in \(\R^2\) is a molecule built out of a specific combination of these atoms. Since there are only two fundamental directions in \(\R^2\) (right/left and up/down), it is not surprising that we need only two basis vectors to represent all possible directions. This is the reason we call vectors in \(\R^2\) two-dimensional vectors.


In \(\R^3\text{,}\) there are three fundamental directions, two horizontal and one vertical. We might use navigational terminology for the two horizontal directions and describe them as north/south and east/west, and then we can refer to the vertical direction as up/down. So we need three standard basis vectors in \(\R^3\text{,}\)
\begin{align*} \uvec{e}_1 \amp= (1,0,0), \amp \uvec{e}_2 \amp= (0,1,0), \amp \uvec{e}_3 \amp= (0,0,1), \end{align*}
which we can visualize as in Figure 11.3.16.
Diagram of the standard three-dimensional basis vectors, drawn as directed line segments in three-dimensional space with initial points at the origin.
The standard basis vectors \(\uvec{e}_1 = (1,0,0)\text{,}\) \(\uvec{e}_2 = (0,1,0)\text{,}\) and \(\uvec{e}_3 = (0,0,1)\) are drawn as directed line segments on a set of \(xyz\)-axes, with initial points at the origin and terminal points at \((1,0,0)\text{,}\) \((0,1,0)\text{,}\) and \((0,0,1)\text{,}\) respectively, so that \(\uvec{e}_1\) lies along the \(x\)-axis, \(\uvec{e}_2\) lies along the \(y\)-axis, and \(\uvec{e}_3\) lies along the \(z\)-axis.
Figure 11.3.16. The standard basis vectors in \(\R^3\text{.}\) This diagram is three-dimensional; view the \(z\)-axis as coming straight up out of the \(xy\)-plane.
As before, any vector in \(\R^3\) can be decomposed as a linear combination of these three fundamental vectors. For example, the vector \((1,-1,2)\) decomposes as
\begin{equation*} (1,-1,2) = 1\uvec{e}_1 + (-1)\uvec{e}_2 + 2\uvec{e}_3 \text{.} \end{equation*}
And we can repeat all this in \(\R^n\) for any value of \(n\text{,}\) where there are \(n\) standard basis vectors,
\begin{align*} \uvec{e}_1 \amp= (1,0,0,\dotsc,0), \amp \uvec{e}_2 \amp= (0,1,0,\dotsc,0), \amp \amp \dotsc, \amp \uvec{e}_n \amp= (0,0,\dotsc,0,1). \end{align*}
In fact, given a vector \(\uvec{v} = (v_1,v_2,\dotsc,v_n)\) in \(\R^n\) (whether \(n=2\) or \(n=3\) or higher), when we try to decompose
\begin{equation*} \uvec{v} = \fillinmath{XX}\uvec{e}_1 + \fillinmath{XX}\uvec{e}_2 + \dotsb + \fillinmath{XX}\uvec{e}_n\text{,} \end{equation*}
we find that there is only one unique combination of scalar values that can fill in the blanks and create an equality between \(\uvec{v}\) on the left and the linear combination on the right:
\begin{equation*} \uvec{v} = v_1\uvec{e}_1 + v_2\uvec{e}_2 + \dotsb + v_n\uvec{e}_n\text{.} \end{equation*}
