We write ‖v‖ to mean the length of the vector v in the plane. Keep in mind in all that follows that ‖v‖ is always a single number, since it measures a length. If v has components v = (v₁, v₂) (where v₁ = Δx and v₂ = Δy), then solving for ℓ in the Pythagorean equation above gives us

‖v‖ = √(v₁² + v₂²).
The word length ceases to have any meaning in R⁴, so in general we refer to ‖v‖ as the norm of v in any dimension. We imagine that if we were able to somehow measure length in Rⁿ for n ≥ 4, then the pattern where we used length in R² to help us compute length in R³ would be repeated, and we would be able to use length in R³ to help us compute "length" in R⁴, and then we would be able to use "length" in R⁴ to help us compute "length" in R⁵, and so on. So it seems reasonable to define the norm of a vector v = (v₁, v₂, …, vₙ) in Rⁿ to be

‖v‖ = √(v₁² + v₂² + ⋯ + vₙ²).
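To make the definition concrete, here is a minimal sketch in plain Python (the helper name `norm` is ours, purely for illustration): a vector in Rⁿ is represented as a list of components, and its norm is the square root of the sum of the squares of those components.

```python
import math

def norm(v):
    """Norm of a vector in R^n, given as a list of components:
    the square root of the sum of the squares of the components."""
    return math.sqrt(sum(x ** 2 for x in v))

print(norm([3, 4]))        # 5.0 -- the familiar 3-4-5 right triangle in R^2
print(norm([1, 2, 2, 4]))  # 5.0 -- a "length" in R^4, same formula
```

The same one-line formula works in every dimension, which is exactly the point of the definition above.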
We explored some other basic properties of the norm in Discovery 13.3. First, when we take the square root of a positive number, we always take the positive square root, so a norm is never a negative number. This property agrees with our conception of norm as a length in R² and R³, since in geometry we usually require lengths to be nonnegative.
The zero vector clearly has norm 0, and it is the only vector that does: as soon as one of the components of a vector is nonzero, the sum of squares under the square root sign in the norm formula will be a positive number. There is no possibility of cancellation to zero under the square root, even if a vector has a mix of positive and negative components, because squaring the components never produces negative results.
Finally, we considered the effect of a scalar multiplication on norm. Geometrically, in R² and R³ we think of scalar multiplication as scaling a vector's length by some scale factor k, so we should expect the numerical norm of a vector to be multiplied by the scale factor. And that is (almost) exactly what happens:

‖kv‖ = √((kv₁)² + (kv₂)² + ⋯ + (kvₙ)²) = √(k²(v₁² + v₂² + ⋯ + vₙ²)) = √(k²) ‖v‖.
We need to be a little careful with the last step, because it is not always true that √(k²) = k. In particular, the result of √(k²) is never negative, so if k is negative then it is impossible for √(k²) to be equal to k. The proper formula for all values of k is √(k²) = |k|, so our norm formula becomes

‖kv‖ = |k| ‖v‖.
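A quick numerical sanity check of the formula ‖kv‖ = |k| ‖v‖, using a negative scale factor to see why the absolute value matters (the helpers `norm` and `scale` are ours, not from the text):

```python
import math

def norm(v):
    return math.sqrt(sum(x ** 2 for x in v))

def scale(k, v):
    """Scalar multiple kv, componentwise."""
    return [k * x for x in v]

v = [1, 2, 2]   # norm 3
k = -2

print(norm(scale(k, v)))   # 6.0 -- the norm of kv
print(abs(k) * norm(v))    # 6.0 -- |k| times the norm of v
print(k * norm(v))         # -6.0 -- k times the norm: wrong, norms can't be negative
```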
In the plane or in space, a vector with length 1 is convenient geometrically because it can be used as a "meter stick": every scalar multiple of that vector will have length equal to the (absolute value of the) scale factor. For example, if u has length 1, then both 3u and −3u have length 3. The same pattern will hold in any dimension when we replace the word "length" with "norm." A vector with norm 1 is called a unit vector. One of the reasons the standard basis vectors are so special is that each of them is a unit vector, as we saw in Discovery 13.4. Thus each standard basis vector can be used as a "meter stick" along the corresponding axis.
We also explored how to scale a nonzero vector to a unit vector in Discovery 13.4. For example, if a vector has norm 1/2, then we can scale it up to a unit vector by multiplying it by 2 to double its norm. Conversely, if a vector has norm 2, we can scale it down to a unit vector by multiplying it by 1/2 to halve its norm. In general, we can scale any nonzero vector v in Rⁿ up or down to a unit vector by multiplying it by the scale factor k = 1/‖v‖, since then

‖kv‖ = |k| ‖v‖ = (1/‖v‖) ‖v‖ = 1.
In the above, we have used the formula for the norm of a scalar multiple, ‖kv‖ = |k| ‖v‖, with k = 1/‖v‖. The absolute value brackets on this particular scalar k can be removed because norms are never negative, and so |k| = k in this case.
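The normalization recipe above can be sketched in a few lines of Python (again, `norm` and `normalize` are illustrative names of our own):

```python
import math

def norm(v):
    return math.sqrt(sum(x ** 2 for x in v))

def normalize(v):
    """Scale the nonzero vector v by k = 1/||v|| to produce a unit vector."""
    k = 1 / norm(v)
    return [k * x for x in v]

u = normalize([3, 4])
print(u)         # the components of the scaled-down vector
print(norm(u))   # approximately 1.0 (up to floating-point rounding)
```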
As we saw in Subsection 12.3.4, if we position u and v to share the same initial points, then the difference vectors u − v and v − u run between the terminal points of u and v.
So we can measure the distance between the terminal points of u and v by computing ‖u − v‖ or ‖v − u‖, as we discovered in Discovery 13.5. This process is even more straightforward when the common initial point of u and v is chosen to be the origin, so that the components of u and v are the same as the coordinates of their respective terminal points.
The analysis above illustrates a useful strategy to compute distances in the plane or in space: determine some vector that traverses the distance in question, and then compute the norm of that vector to obtain the desired distance. Combined with some of the vector geometry that we will develop in the next few chapters, this strategy is often easier than trying to determine the coordinates of the points at the endpoints of the desired distance. You should remember this strategy when we explore the geometry of lines and planes in Chapters 14–15.
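The strategy of "form a difference vector, then take its norm" is short enough to sketch directly (the helper `distance` is our own illustrative name):

```python
import math

def norm(v):
    return math.sqrt(sum(x ** 2 for x in v))

def distance(p, q):
    """Distance between points p and q: the norm of the difference vector p - q."""
    return norm([a - b for a, b in zip(p, q)])

print(distance((1, 2), (4, 6)))   # 5.0 -- difference vector (-3, -4) has norm 5
```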
We only need to know one of these two angles, since the other can be computed from the knowledge that the sum of the two angles is 2π radians. We generally prefer to avoid ambiguity in math, so it would be nice to have a systematic way to choose one of the two angles between a pair of vectors that we can refer to as the angle between the vectors. We will not distinguish between clockwise and counterclockwise, because those terms will become meaningless when we move up a dimension. Instead we will always choose the smaller angle to be the angle between the two vectors.
Thus, the angle between two vectors in the plane will always be between 0 and π radians. Note that it is possible for the angle to be exactly 0 radians or exactly π radians, in the case that the two vectors are parallel.
In space, two vectors that are positioned to share the same initial point can be completed to a triangle, and that triangle will lie in a plane. The angle between the two vectors can then be taken to be the smaller of the two angles between the two vectors in that shared plane.
In Discovery 13.7, we combined vector geometry with some high school geometry to determine a formula for the (cosine of the) angle between two plane vectors. Recall from Subsection 12.3.4 that a vector that runs between the terminal points of two vectors that share an initial point is a difference vector.
Using the expression on the right above for the left-hand side of the equality a² + b² − c² = 2ab cos θ, solving for cos θ, and then substituting a = ‖u‖ and b = ‖v‖ leads to

cos θ = (u₁v₁ + u₂v₂) / (‖u‖ ‖v‖).   (∗)
The expression on the left and the denominator on the right are both familiar: we have the ordinary cosine function from trigonometry and we have some vector norms. However, before we worked through Discovery 13.7, the expression in the numerator on the right-hand side was unknown.
Earlier in this chapter, we mentioned how two vectors in space with their initial points at the origin lie inside a common flat plane (see Figure 13.3.2). If we repeated the above geometric analysis of vector angle in this flat surface inside space, we would come to a similar conclusion:

cos θ = (u₁v₁ + u₂v₂ + u₃v₃) / (‖u‖ ‖v‖).   (∗∗)
There is an obvious pattern to the numerators on the right-hand sides of equations (∗) and (∗∗). And it seems that the value that these numerator formulas compute is important, since it provides a link between the two most important quantities in geometry: length and angle. So we give it a name, the dot product (or the Euclidean inner product), and use the symbol · between two vectors to represent this quantity. The formula can obviously be extended to higher dimensions than just the plane R² and space R³, so we will do just that:

u · v = u₁v₁ + u₂v₂ + ⋯ + uₙvₙ.
The result of the computation u · v is a number, which is important to keep in mind if you are working algebraically with an expression containing a dot product. See Proposition 13.5.3 in Subsection 13.5.1 for algebraic rules involving the dot product.
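The "sum of products of corresponding components" pattern is one line of code (the helper name `dot` is ours, for illustration), and the result is a plain number, not a vector:

```python
def dot(u, v):
    """Dot product in R^n: the sum of products of corresponding components."""
    return sum(a * b for a, b in zip(u, v))

print(dot([1, 2, 3], [4, 5, 6]))   # 1*4 + 2*5 + 3*6 = 32
```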
Even though we can't "see" geometry in Rⁿ for n > 3, we have already seen that we can perform computations related to geometry in these spaces. We can attach the number ‖v‖ to a vector v in Rⁿ that can be interpreted as its "length." And for two vectors u and v in Rⁿ, we can compute the number u · v that is somehow related to the geometric relationship between u and v. We have seen that in the plane and in space, u · v links the lengths of u and v to the angle between them. But do higher-dimensional vectors have angles between them? Is there some number that we can attach to u and v that "measures" the angle between them, even if we can't see or measure this angle directly?
The equalities in (∗) and (∗∗) suggest a pattern we can copy into Rⁿ in general. We define the angle between u and v to be the unique angle θ, between 0 and π, that makes

u · v = ‖u‖ ‖v‖ cos θ.   (∗∗∗)
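This definition translates directly into a computation: solve for cos θ and apply the inverse cosine, which always returns an angle in the required range 0 ≤ θ ≤ π. A sketch (the helper names are ours):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    return math.sqrt(dot(v, v))

def angle(u, v):
    """The unique angle theta in [0, pi] with u . v = ||u|| ||v|| cos(theta)."""
    return math.acos(dot(u, v) / (norm(u) * norm(v)))

print(angle([1, 0], [0, 1]))              # pi/2: perpendicular axis vectors
print(angle([1, 0, 0, 0], [1, 1, 0, 0])) # approximately pi/4, even in R^4
```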
For every pair of vectors u and v in Rⁿ, can we always determine a suitable angle θ in the domain 0 ≤ θ ≤ π that works (i.e. that makes the two quantities in (∗∗∗) equal)?
For some pair of vectors u and v in Rⁿ, might it be possible that there are several values of θ in the domain 0 ≤ θ ≤ π that work?
Fortunately, for a pair of (nonzero) plane vectors or space vectors, there is exactly one number (once we restrict to the domain 0 ≤ θ ≤ π) that gets to call itself the angle between the vectors. It would not bode well for the possibility of somehow doing geometry in higher-dimensional spaces if there were sometimes two numbers that could be reasonably called "the angle" between a pair of vectors, or sometimes none at all. Luckily neither of these is possible.
Second, looking at the provided graph of y = cos θ, there are no instances in the domain 0 ≤ θ ≤ π where cos θ computes to the same value for two different values of θ.
On this domain, we call the graph one-to-one. So a pair of vectors in Rⁿ can never have two angles in the domain 0 ≤ θ ≤ π between them, because there are never two solutions to the equation

cos θ = (u · v) / (‖u‖ ‖v‖).   (†)
But is there always some solution to equation (†)? No matter what domain you work on, cos θ never evaluates to a number greater than 1 or less than −1. Perhaps if we tried hard enough we could discover some unlucky pair of vectors u and v in R¹³ where

(u · v) / (‖u‖ ‖v‖)
computed to a number greater than 1 or to a number less than −1. In that case, it would be impossible for cos θ to be equal to that number, and u and v would have no angle between them. It turns out that finding such an unlucky pair of vectors is impossible, and we know this courtesy of a couple of dead guys.
Since the graph y = cos θ passes through every possible y-value in the range −1 ≤ y ≤ 1, and does so only once, equation (†) always has one unique solution for a pair of nonzero vectors.
We have already seen that the dot product is intimately tied to the geometry of Rⁿ, acting as a link between norm (length) and angle. But as we discovered in Discovery 13.9, it is also directly linked to the norm by the observation

v · v = ‖v‖², or equivalently ‖v‖ = √(v · v).
Really, this "new" link between dot product and norm is just the special case of equation (†) where u is taken to be equal to v, since in this case the angle between u and v (i.e. between v and itself) is zero, and cos 0 = 1.
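A quick numerical check of v · v = ‖v‖², using a vector with a mix of positive and negative components (helper names ours):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    return math.sqrt(dot(v, v))

v = [1, -2, 3, -4]
print(dot(v, v))      # 30 -- the sum of squares, exactly
print(norm(v) ** 2)   # also 30, up to floating-point rounding
```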
The pattern in the formula for the dot product of two vectors should look vaguely familiar to you: it is a sum of products, which is exactly the pattern of the left-hand side of a linear equation, and so also the pattern in our "row-times-column" view of matrix multiplication in Subsection 4.3.7. In fact, the dot product can be defined in terms of matrix multiplication if we take our vectors to be column vectors and use the transpose to turn one of the columns into a row. Indeed, for u and v in Rⁿ regarded as n × 1 column vectors, we have

u · v = vᵀu.
Technically, the result of multiplying the 1 × n matrix vᵀ and the n × 1 matrix u should be a 1 × 1 matrix. But algebraically there is no difference between numbers and 1 × 1 matrices with respect to the operations of addition, subtraction, and multiplication, so it is common to think of a 1 × 1 matrix as just a number, as we did above.
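To see the row-times-column view in action, here is a sketch that computes vᵀu with a generic matrix-multiplication helper (our own, written over lists of rows) and compares it with the componentwise dot product:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matmul(A, B):
    """Multiply matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

u = [[1], [2], [3]]    # u as a 3 x 1 column vector
v = [[4], [5], [6]]    # v as a 3 x 1 column vector
v_T = [[4, 5, 6]]      # the transpose of v, a 1 x 3 row vector

print(matmul(v_T, u))            # [[32]] -- a 1 x 1 matrix
print(dot([1, 2, 3], [4, 5, 6])) # 32 -- the same number
```

The matrix product produces the 1 × 1 matrix [[32]], which we read off as the number 32, exactly as the paragraph above describes.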
It might seem more natural to define the dot product as u · v = uᵀv (as we did in Discovery 13.11), instead of with the seemingly pointless reversal of order in the formula u · v = vᵀu. However, in Chapter 36, we will see that this reversal of order is necessary when studying complex vectors (that is, vectors where the components are complex numbers). Since this reversal of order is harmless here, we will start using it now so as to avoid confusion later.