Section 11.3 Concepts
Subsection 11.3.1 Vectors
A directed line segment (or arrow) could be thought of dynamically as describing a change in position, from the initial point to the terminal point. A two-dimensional vector in the plane or a three-dimensional vector in space captures just the change part of “change in position,” leaving the position part (that is, the initial and terminal points) unspecified. For example, in the plane, the instructions “move two units right and three units down” describe a way to change positions, but don’t actually specify from where or to where the change in position is occurring. So a vector corresponds to an infinite number of directed line segments, where each of these directed line segments has a different initial point but all of them require the same “change” to change positions from initial point to terminal point. Continuing our example, every change in position between some initial and terminal points in the plane that requires moving two units right and three units down can be represented by the same vector.
Example of a vector associated to several equivalent directed line segments.
We describe a two-dimensional vector in the plane with a pair of numerical components: the change in $x$ and the change in $y$. If $P(x_1, y_1)$ and $Q(x_2, y_2)$ are points in the plane, then the vector associated to the directed line segment $\overrightarrow{PQ}$ has components $(x_2 - x_1,\; y_2 - y_1)$. (A three-dimensional vector in space requires a third component as well: the change in $z$.)
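For example, with the sample points $P(1, 4)$ and $Q(3, 1)$ (chosen here to match the “two units right and three units down” instructions above), the associated vector is
\[ \overrightarrow{PQ} = (3 - 1,\; 1 - 4) = (2, -3). \]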
Notice what happens when we use the origin $O(0, 0)$ as the initial point and an arbitrary point $P(x, y)$ as the terminal point in a directed line segment: the vector associated to $\overrightarrow{OP}$ is then $(x - 0,\; y - 0) = (x, y)$. So when the initial point is the origin, the components of the vector are exactly the coordinates of the terminal point. In Discovery 11.1 we saw that this works in reverse as well. That is, if we have a vector $\vec{v} = (v_1, v_2)$, then we could consider the numbers $v_1, v_2$ as coordinates of a point $P$, with $x = v_1$ and $y = v_2$, and then the vector associated to $\overrightarrow{OP}$ is just $\vec{v}$ again. In this way, every vector corresponds to one unique directed line segment with initial point at the origin, and so that is sort of the “natural position” of the vector as a directed line segment. But we will find that it is often convenient to consider other directed line segments that correspond to a particular vector.
We live in a three-dimensional world (or, at least, it appears that way to us), and our little human brains cannot visualize points or arrows in four- or higher-dimensional spaces. However, we can still describe such imaginary objects using our experience from two- and three-dimensional points and vectors. For example, if we had two points $P$ and $Q$ in an imaginary four-dimensional space, they would each require four coordinates, so we would describe them as $P(p_1, p_2, p_3, p_4)$ and $Q(q_1, q_2, q_3, q_4)$. Then the vector corresponding to the directed line segment $\overrightarrow{PQ}$ would have four components, and we would compute it as
\[ \overrightarrow{PQ} = (q_1 - p_1,\; q_2 - p_2,\; q_3 - p_3,\; q_4 - p_4). \]
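For instance, with the sample points $P(1, 0, 2, -1)$ and $Q(3, 1, 2, 1)$ in such an imaginary four-dimensional space, we would compute
\[ \overrightarrow{PQ} = (3 - 1,\; 1 - 0,\; 2 - 2,\; 1 - (-1)) = (2, 1, 0, 2). \]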
Subsection 11.3.2 Vector addition
A vector describes a change in position. If we chain two changes in position together, by making the initial point of the second vector the same as the terminal point of the first vector, then we could consider the overall change in position.
Moving directly from the initial point of one vector to the terminal point of another vector, when the two vectors are “chained” together, tail-to-head.
If these are points and vectors in the plane, then clearly the change in position from the initial point of the first vector to the terminal point of the second will be described by the total net change in $x$ and the total net change in $y$, as we discovered in Discovery 11.2. So, in the diagram above, we obtain the components for the vector corresponding to this overall change in position by adding corresponding components of $\vec{u}$ and $\vec{v}$. That is, if $\vec{u} = (u_1, u_2)$ and $\vec{v} = (v_1, v_2)$, then the components of the dashed arrow labelled with a question mark are $(u_1 + v_1,\; u_2 + v_2)$. For obvious reasons, we call this the sum of $\vec{u}$ and $\vec{v}$, and write it as $\vec{u} + \vec{v}$.
Diagram of a vector sum.
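For example, with the sample vectors $\vec{u} = (2, -3)$ and $\vec{v} = (1, 4)$, chaining the two changes in position together gives
\[ \vec{u} + \vec{v} = (2 + 1,\; -3 + 4) = (3, 1). \]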
In Discovery 11.2 we also considered the result of interchanging the order of a pair of vectors that have been chained together.
Diagram of the commutativity of vector sums.
In the diagram above, two of the directed line segments represent the same change in position, just with different initial points. Accordingly, we have labelled both of these vectors with $\vec{u}$. And the same applies to the two segments labelled $\vec{v}$.
The diagram illustrates that if we start at a common initial point and chain together the change-of-position instructions contained in vectors $\vec{u}$ and $\vec{v}$, the order in which we do so does not matter: the overall change in position will be the same. Thus, the order of vector addition doesn’t matter, and we have $\vec{u} + \vec{v} = \vec{v} + \vec{u}$. Algebraically, we could have predicted that this would be the case because it doesn’t matter in what order you add components: the identities $u_1 + v_1 = v_1 + u_1$ and $u_2 + v_2 = v_2 + u_2$ are both valid. But it’s useful conceptually to have the above geometric picture of vector addition because, whether you believe this about yourself or not, humans are spatial thinkers. And the geometric version of the vector identity $\vec{u} + \vec{v} = \vec{v} + \vec{u}$ makes a pretty picture of a parallelogram, so we call it the parallelogram rule.
For three-dimensional vectors, we can imagine diagrams like the ones we have drawn above floating in space, and the parallelogram rule would hold there as well. In higher dimensions, we cannot draw pictures, but we could imagine that they are similar. At any rate, the algebra of vector addition is the same in any dimension: for $\vec{u} = (u_1, u_2, \dots, u_n)$ and $\vec{v} = (v_1, v_2, \dots, v_n)$ in $\mathbb{R}^n$, we have
\[ \vec{u} + \vec{v} = (u_1 + v_1,\; u_2 + v_2,\; \dots,\; u_n + v_n). \]
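For instance, with the sample vectors $\vec{u} = (1, 0, -2, 5)$ and $\vec{v} = (3, 3, 2, -5)$ in $\mathbb{R}^4$, we have
\[ \vec{u} + \vec{v} = (1 + 3,\; 0 + 3,\; -2 + 2,\; 5 + (-5)) = (4, 3, 0, 0). \]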
Subsection 11.3.3 The zero vector
There is one special change in position that is unlike any other: the one where the initial and terminal points are the same, so that there is actually no change in position. In two dimensions, this means there is no change in either $x$ or $y$, so the components are $(0, 0)$. Similarly, in any number of dimensions we have the zero vector
\[ \vec{0} = (0, 0, \dots, 0). \]
As we explored in Discovery 11.3, if we chain together a vector $\vec{v}$ representing some change in position with the zero vector, which represents no change, then the net result is just the change of $\vec{v}$. That is, $\vec{v} + \vec{0} = \vec{v}$, and also $\vec{0} + \vec{v} = \vec{v}$.
Subsection 11.3.4 Vector negatives and vector subtraction
If we move from $P$ to $Q$, and from there move from $Q$ back to $P$, the net result is no change in position, which is represented by the zero vector. This means if we add the vector corresponding to $\overrightarrow{PQ}$ to the one corresponding to $\overrightarrow{QP}$, the result is $\vec{0}$. So if we label the vector for $\overrightarrow{PQ}$ as $\vec{v}$, it seems reasonable to label the vector for $\overrightarrow{QP}$ as $-\vec{v}$, the negative of $\vec{v}$, so that we have
\[ \vec{v} + (-\vec{v}) = \vec{0}. \]
Diagram of a vector and its negative.
If we are to have $\vec{v} + (-\vec{v}) = \vec{0}$, and the components of $\vec{0}$ are all $0$, then since we add vectors by adding corresponding components, the components of $-\vec{v}$ must be the negatives of the components of $\vec{v}$. For example, if $\vec{v} = (v_1, v_2)$ in the plane, then $-\vec{v} = (-v_1, -v_2)$. In any dimension, we have
\[ -(v_1, v_2, \dots, v_n) = (-v_1, -v_2, \dots, -v_n). \]
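For instance, with the sample vector $\vec{v} = (2, -3)$, the negative is $-\vec{v} = (-2, 3)$, and indeed $\vec{v} + (-\vec{v}) = (2 + (-2),\; -3 + 3) = (0, 0) = \vec{0}$.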
This relationship between the components of $\vec{v}$ and those of $-\vec{v}$ will lead to an identity between the negative and a certain scalar multiple of a vector in Subsection 11.3.5 below.
Remembering that the “natural” position for a vector is with its tail at the origin, it’s useful to visualize negatives in the following manner.
Diagram of a vector and its negative.
That is, the negative of a vector will change positions by the same distance but in the opposite direction.
To subtract vectors, we add to a vector the negative of another: $\vec{u} - \vec{v} = \vec{u} + (-\vec{v})$.
Diagram of vector subtraction.
Here, the diagonal vector labelled $\vec{u} - \vec{v}$ is obtained by adding $\vec{u}$ and $-\vec{v}$. As we explored in Discovery 11.5, we get an interesting pattern if we draw in another copy of the vector labelled $\vec{u} - \vec{v}$, with its initial point at the terminal point of $\vec{v}$.
Diagram showing the connection between vector subtraction, addition, and negative.
The lower triangle creates the vector addition pattern $\vec{u} + (-\vec{v}) = \vec{u} - \vec{v}$. But notice that the upper triangle creates a vector addition pattern starting at the common initial point and ending up at the terminal point of $\vec{u}$, by $\vec{v} + (\vec{u} - \vec{v}) = \vec{u}$. So we can think of a difference of two vectors as a vector that runs between the heads of the two vectors in the difference, when they share the same initial point. Algebraically, we can think of the $\vec{v}$ and $-\vec{v}$ cancelling in the expression $\vec{v} + (\vec{u} - \vec{v})$, leaving just $\vec{u}$.
Diagram showing the connection between vector subtraction, addition, and negative.
Now one triangle creates a vector addition pattern starting at the common initial point and ending up at the terminal point of $\vec{u}$, so that $\vec{v} + (\vec{u} - \vec{v}) = \vec{u}$. But the other triangle also creates a vector addition pattern, starting at the common initial point and ending up at the terminal point of $\vec{v}$, so that $\vec{u} + (\vec{v} - \vec{u}) = \vec{v}$. And finally, the fact that $\vec{u} - \vec{v}$ and $\vec{v} - \vec{u}$ both run between the heads of $\vec{u}$ and $\vec{v}$, but in opposite directions, verifies geometrically that
\[ \vec{v} - \vec{u} = -(\vec{u} - \vec{v}), \]
as we would expect algebraically.
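We can check all of these patterns with the sample vectors $\vec{u} = (5, 1)$ and $\vec{v} = (2, 3)$:
\[ \vec{u} - \vec{v} = (5 - 2,\; 1 - 3) = (3, -2), \qquad \vec{v} - \vec{u} = (-3, 2) = -(\vec{u} - \vec{v}), \]
and the addition pattern works out as $\vec{v} + (\vec{u} - \vec{v}) = (2 + 3,\; 3 + (-2)) = (5, 1) = \vec{u}$.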
Subsection 11.3.5 Scalar multiplication
Geometrically, when we scalar multiply a vector we “stretch” or scale its length by the scale factor. (If this scale factor is negative, then we also flip the vector around in the opposite direction.) Here are some examples.
Diagram of a scalar multiple of a vector.
Notice how each of these vectors points either in the same direction as $\vec{v}$ or in the opposite direction to $\vec{v}$. In particular, they are all parallel to one another. This happens precisely when the vectors are scalar multiples of one another.
Thinking of the vectors in the diagram above as vectors in the plane, if we scale $\vec{v}$ by a factor of $2$, then our knowledge of similar triangles tells us that the change in both $x$ and $y$ must also double.
Diagram of scalar multiplication as stretching the length of a vector.
So if $\vec{v} = (v_1, v_2)$, then $2\vec{v} = (2v_1, 2v_2)$. This relationship between original vector and scaled vector holds in general, in any dimension, and even for negative scale factors: for a scalar $k$,
\[ k(v_1, v_2, \dots, v_n) = (kv_1, kv_2, \dots, kv_n). \]
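For example, with the sample vector $\vec{v} = (2, -3)$, we have
\[ 2\vec{v} = (4, -6), \qquad \tfrac{1}{2}\vec{v} = \big(1, -\tfrac{3}{2}\big), \qquad (-1)\vec{v} = (-2, 3), \]
and notice that the components of $(-1)\vec{v}$ are exactly those of the negative $-\vec{v}$ from Subsection 11.3.4.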
Remark 11.3.1.
It may seem redundant to write $(-1)\vec{v} = -\vec{v}$; don’t both sides mean the same thing? In terms of the effect on the components of $\vec{v}$, yes, they are the same. However, when we explore abstract vectors in Chapter 15, we won’t have components or the geometric notion of “opposite direction” as means of seeing this equality, and so there will initially be a subtle difference between the idea of a vector having an additive negative (so that $\vec{v} + (-\vec{v}) = \vec{0}$) and the operation of scalar multiplying a vector by the particular scalar $-1$.
We can connect scalar multiplication to addition, as we explored in Discovery 11.6. If we add a vector to itself, then the sum vector will be twice as long as the original.
Diagram relating scalar multiplication to vector addition.
So we have $\vec{v} + \vec{v} = 2\vec{v}$.
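In components, this is just the scalar multiplication rule from above: for $\vec{v} = (v_1, v_2)$, we have $\vec{v} + \vec{v} = (v_1 + v_1,\; v_2 + v_2) = (2v_1, 2v_2) = 2\vec{v}$, and similarly $\vec{v} + \vec{v} + \vec{v} = 3\vec{v}$, and so on.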
Subsection 11.3.6 Vector algebra
We have already discovered a few rules of vector algebra, such as the commutative rule $\vec{u} + \vec{v} = \vec{v} + \vec{u}$ and the rule that scalar multiplication distributes over vector addition, $k(\vec{u} + \vec{v}) = k\vec{u} + k\vec{v}$, illustrated below.
Diagram illustrating distribution of scalar multiplication over vector addition.
We will provide more rules of vector algebra as Proposition 11.5.1 in Subsection 11.5.1. In Discovery 11.12, we decided that the algebra of vectors is the same as the algebra of column matrices (which we have already been referring to as column vectors), so we should be able to anticipate a number of the vector algebra rules that will appear in that proposition.
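For example, we can verify the distributive rule above on the sample values $k = 2$, $\vec{u} = (1, 2)$, $\vec{v} = (3, -1)$:
\[ 2\big((1, 2) + (3, -1)\big) = 2(4, 1) = (8, 2), \qquad 2(1, 2) + 2(3, -1) = (2, 4) + (6, -2) = (8, 2). \]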
Warning 11.3.2.
There is no multiplication operation for vectors!
Algebraically, vectors in $\mathbb{R}^n$ are the same as column matrices, and you cannot multiply two column matrices together because their sizes do not match up (except in $\mathbb{R}^1$, but let’s ignore that for now). This also means that you cannot square a vector, you cannot square-root a vector, you cannot invert a vector, and you cannot divide by a vector. Do not try to use any of these operations in vector algebra! In Chapters 12–13, we will encounter some operations tied to the geometry of vectors that we will call “vector products” and for which we will use multiplication-like notation, but they will be for very specific geometric purposes and do not really correspond to our idea of multiplication in the algebra of numbers.
Subsection 11.3.7 The standard basis vectors
In Discovery 11.8, we encountered two very special vectors in the plane, $\vec{e}_1 = (1, 0)$ and $\vec{e}_2 = (0, 1)$. These two vectors could be considered the fundamental changes in position in the plane: $\vec{e}_1$ represents a change by one unit right, and $\vec{e}_2$ represents a change by one unit up.
Diagram of the standard basis vectors in $\mathbb{R}^2$.
Any change in position can be built out of these two fundamental changes in position. As in Discovery 11.8, a vector $(a, b)$ represents a change in position by $a$ units right and $b$ units up. We can achieve the “$a$ units right” part with $a\vec{e}_1$, and the “$b$ units up” part with $b\vec{e}_2$. To get the total change in position represented by $(a, b)$, we can combine these two building blocks in the linear combination
\[ (a, b) = a\vec{e}_1 + b\vec{e}_2. \]
Diagram of a vector decomposition in terms of the standard basis vectors in $\mathbb{R}^2$.
As you can imagine, every vector in the plane can be decomposed into a linear combination of $\vec{e}_1$ and $\vec{e}_2$ in this manner: for $\vec{v} = (v_1, v_2)$, we have $\vec{v} = v_1\vec{e}_1 + v_2\vec{e}_2$. For this reason, the two vectors together are called the standard basis vectors in $\mathbb{R}^2$, as they form a basis from which every other vector can be constructed. To use an analogy with chemistry, these two vectors are the basic atoms of $\mathbb{R}^2$, and every other vector in $\mathbb{R}^2$ is a molecule built out of a specific combination of these atoms. Since there are only two fundamental directions in $\mathbb{R}^2$ (right/left and up/down), it is not surprising that we need only two basis vectors to represent all possible directions. This is the reason we call vectors in $\mathbb{R}^2$ two-dimensional vectors.
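For example, the sample vector $(3, -2)$ decomposes as
\[ (3, -2) = 3\vec{e}_1 + (-2)\vec{e}_2 = 3(1, 0) - 2(0, 1). \]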
In $\mathbb{R}^3$ there are three fundamental directions, two horizontal and one vertical. We might use navigational terminology for the two horizontal directions and describe them as north/south and east/west, and then we can refer to the vertical direction as up/down. So we need three standard basis vectors in $\mathbb{R}^3$,
\[ \vec{e}_1 = (1, 0, 0), \qquad \vec{e}_2 = (0, 1, 0), \qquad \vec{e}_3 = (0, 0, 1), \]
which we can visualize as below.
Diagram of the standard basis vectors in $\mathbb{R}^3$.
As before, any vector in $\mathbb{R}^3$ can be decomposed as a linear combination of these three fundamental vectors. For example, a vector $\vec{v} = (v_1, v_2, v_3)$ decomposes as $\vec{v} = v_1\vec{e}_1 + v_2\vec{e}_2 + v_3\vec{e}_3$. Indeed, if we set up the decomposition with blank coefficients,
\[ (v_1, v_2, v_3) = \underline{\qquad}\,\vec{e}_1 + \underline{\qquad}\,\vec{e}_2 + \underline{\qquad}\,\vec{e}_3, \]
we find that there is only one unique combination of scalar values that can fill in the blanks and create an equality between the vector on the left and the linear combination on the right: the scalars must be exactly the components $v_1, v_2, v_3$.
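As a concrete check, the sample vector $(4, -1, 3)$ decomposes as
\[ (4, -1, 3) = 4\vec{e}_1 + (-1)\vec{e}_2 + 3\vec{e}_3 = 4(1, 0, 0) - (0, 1, 0) + 3(0, 0, 1), \]
and no other choice of scalars produces equality in every component.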