Section 13.3 Concepts
Subsection 13.3.1 Values of $\vec{u} \cdot \vec{v}$
Recall that the dot product and the angle between two vectors are related by
$$\cos\theta = \frac{\vec{u} \cdot \vec{v}}{\|\vec{u}\|\,\|\vec{v}\|}, \tag{✶}$$
where $\theta$ is the angle between nonzero vectors $\vec{u}$ and $\vec{v}$. On the right of equation (✶), the denominator is always positive, so whether the whole fraction is positive, negative, or zero depends entirely on the dot product in the numerator. On the left, the cosine function is positive, negative, or zero precisely when the angle is acute, obtuse, or right. So we come to the following conclusions.
| angle between $\vec{u}$ and $\vec{v}$ | $\cos\theta$ | $\vec{u} \cdot \vec{v}$ |
| --- | --- | --- |
| acute | positive | positive |
| right | zero | zero |
| obtuse | negative | negative |
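For instance, the sign test above is easy to carry out numerically. Here is a minimal sketch using NumPy; the helper name angle_type and the sample vectors are our own choices, not part of the text.

```python
import numpy as np

def angle_type(u, v):
    """Classify the angle between nonzero vectors u and v by the sign of u . v."""
    d = np.dot(u, v)
    if d > 0:
        return "acute"
    elif d < 0:
        return "obtuse"
    else:
        return "right"

print(angle_type([1, 2], [3, 1]))    # acute:  1*3 + 2*1 = 5 > 0
print(angle_type([1, 2], [-2, 1]))   # right:  1*(-2) + 2*1 = 0
print(angle_type([1, 2], [-3, -1]))  # obtuse: 1*(-3) + 2*(-1) = -5 < 0
```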
Subsection 13.3.2 Orthogonal vectors
Right angles are extremely important in geometry, and from Figure 13.3.1 we see that the dot product gives us a very convenient way to tell when the angle between two nonzero vectors $\vec{u}$ and $\vec{v}$ is right: we have $\theta = \pi/2$ precisely when $\vec{u} \cdot \vec{v} = 0$. In the plane or in space, nonzero vectors $\vec{u}$ and $\vec{v}$ will be perpendicular precisely when $\vec{u} \cdot \vec{v} = 0$. Since we can’t “see” right angles and perpendicular lines in higher dimensions, in general we say that $\vec{u}$ and $\vec{v}$ are orthogonal when $\vec{u} \cdot \vec{v} = 0$.
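The orthogonality test works the same way in any dimension. The following is a small sketch using NumPy; the function name is_orthogonal and the sample vectors in $\mathbb{R}^4$ are ours, chosen only for illustration.

```python
import numpy as np

def is_orthogonal(u, v, tol=1e-12):
    """Return True when the dot product of u and v is (numerically) zero."""
    return abs(np.dot(u, v)) < tol

# A pair of orthogonal vectors in R^4, where we cannot "see" the right angle.
u = np.array([1.0, 2.0, 0.0, -1.0])
v = np.array([2.0, -1.0, 5.0, 0.0])
print(is_orthogonal(u, v))  # True, since 1*2 + 2*(-1) + 0*5 + (-1)*0 = 0
```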
Subsubsection 13.3.2.1 Orthogonal vectors in $\mathbb{R}^2$
In Discovery 13.2, we tried to find a pattern to the task of choosing some vector that is orthogonal to a given one in the plane. Rather than struggle with the geometry, we unleash the power of algebra: given vector $\vec{u} = (a, b)$, we are looking for a vector $\vec{x}$ so that $\vec{u} \cdot \vec{x} = 0$. Expanding out the dot product, we are looking to fill in the blanks in the following equation with components for $\vec{x}$:
$$a \cdot \underline{\phantom{xx}} + b \cdot \underline{\phantom{xx}} = 0.$$
Two numbers add to zero only if one is the negative of the other. We can make both terms in the sum the same number by entering $b$ in the first blank and $a$ in the second, so we can make the sum cancel to zero by also flipping the sign of one of those entries. For example,
$$a(-b) + b(a) = 0,$$
so that $\vec{x} = (-b, a)$ is orthogonal to $\vec{u} = (a, b)$.
We have now answered the question in Discovery 13.2.c.
Pattern 13.3.2. Orthogonal vectors in the plane.
Given vector $\vec{u} = (a, b)$ in the plane, two examples of vectors that are orthogonal to $\vec{u}$ are $(-b, a)$ and $(b, -a)$, and every vector that is orthogonal to $\vec{u}$ is some scalar multiple of one of these examples.
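A quick numerical illustration of this pattern follows; it is a sketch in NumPy, and perp is our own name for the $(a, b) \mapsto (-b, a)$ construction.

```python
import numpy as np

def perp(u):
    """Given u = (a, b), return the orthogonal vector (-b, a) from Pattern 13.3.2."""
    a, b = u
    return np.array([-b, a])

u = np.array([3.0, 4.0])
w = perp(u)
print(w)             # [-4.  3.]
print(np.dot(u, w))  # 0.0, confirming orthogonality
```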
Note 13.3.3.
Subsection 13.3.3 Orthogonal projection
Orthogonal projection is a vector solution to a problem in geometry.
Question 13.3.4.
Given a line through the origin in the plane, and a point not on the line, what point on the line is closest to the given point?
In Question 13.3.4, write $\ell$ for the line through the origin and $Q$ for the point not on that line. Consider the point $P$ on $\ell$ at the foot of the perpendicular to $\ell$ from $Q$. Any other point $P'$ on $\ell$ will form a right triangle with $P$ and $Q$, making it farther from $Q$ than $P$ is, since the distance from $P'$ to $Q$ is the length of the hypotenuse in the right triangle.
Diagram illustrating the shortest distance from a point to a line in the plane.
All we know about $P$ is that it is on line $\ell$ and that it sits at the vertex of the right angle formed by $\ell$ and the segment joining $P$ to $Q$. But if we introduce some vectors to help tackle this problem, then maybe we can use what we know about the dot product and right angles to help determine $P$.
Diagram illustrating orthogonal projection onto a line through the origin.
In this diagram, $\vec{u}$ is the vector corresponding to the directed line segment $\overrightarrow{OQ}$, and $\vec{p}$ is the vector corresponding to the directed line segment $\overrightarrow{OP}$, where $P$ is our unknown closest point. Since $\vec{p}$ is placed with its tail at the origin, the components of $\vec{p}$ are precisely the coordinates of $P$. So determining $\vec{p}$ will solve the problem.
We are assuming that the line $\ell$ is known, and it would be nice to also have a vector means of describing it. But the vectors created by the points on this line (using the origin as a universal tail point) will all be parallel to each other, so (as we discovered in Discovery 13.3.a) line $\ell$ could be described as all scalar multiples of a particular vector $\vec{a}$. This vector can be arbitrarily chosen as any vector parallel to the line. Once we have chosen $\vec{a}$, we have reduced our problem from determining the two unknown components of the vector $\vec{p}$ to determining a single unknown scalar $k$ so that $\vec{p} = k\vec{a}$.
As mentioned, since $P$ is the closest point, the directed line segment $\overrightarrow{PQ}$ must be perpendicular to $\ell$. On the diagram above, we have used the vector $\vec{n}$ to represent this directed line segment. As in Discovery 13.3.b, we know that $\vec{a} \cdot \vec{n}$ must be zero — this is the perpendicular condition. However, the vector $\vec{n}$ is unknown as well, since we don’t know its initial point $P$. But we can also use the triangle formed by $\vec{u}$, $\vec{p} = k\vec{a}$, and $\vec{n}$ to replace $\vec{n}$ by the expression $\vec{n} = \vec{u} - k\vec{a}$.
Replacing $\vec{n}$ by this expression in the condition $\vec{a} \cdot \vec{n} = 0$ gives us an equation of numbers that we can solve for the unknown scale factor $k$, as we did in Discovery 13.3.d:
$$\vec{a} \cdot (\vec{u} - k\vec{a}) = 0 \qquad\Longrightarrow\qquad k = \frac{\vec{u} \cdot \vec{a}}{\vec{a} \cdot \vec{a}} \qquad\Longrightarrow\qquad \vec{p} = k\vec{a} = \frac{\vec{u} \cdot \vec{a}}{\vec{a} \cdot \vec{a}}\,\vec{a}.$$
This vector, pointing from the origin to the desired closest point, is called the projection of $\vec{u}$ onto $\vec{a}$, or sometimes the vector component of $\vec{u}$ parallel to $\vec{a}$, and we write $\operatorname{proj}_{\vec{a}} \vec{u}$ to represent it.
Procedure 13.3.5. Closest point on a line (orthogonal projection).
Given a line $\ell$ through the origin and point $Q$ that does not lie on $\ell$, compute the point $P$ on $\ell$ that is closest to $Q$ as follows.
- Choose any point $A$ on the line $\ell$ (excluding the origin), and form the parallel vector $\vec{a} = \overrightarrow{OA}$.
- Form the vector $\vec{u} = \overrightarrow{OQ}$.
- Compute the projection vector $\operatorname{proj}_{\vec{a}} \vec{u} = \dfrac{\vec{u} \cdot \vec{a}}{\vec{a} \cdot \vec{a}}\,\vec{a}$.

This projection vector is parallel to the line $\ell$ and will now point from the origin to the desired closest point $P$, so that the components of $\operatorname{proj}_{\vec{a}} \vec{u}$ are precisely the coordinates of $P$. (A computational sketch of this procedure appears below.)
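Here is a minimal computational sketch of Procedure 13.3.5 using NumPy; the function names proj and closest_point_on_line, and the specific numbers, are our own choices for illustration.

```python
import numpy as np

def proj(u, a):
    """Orthogonal projection of u onto a: (u . a / a . a) a."""
    return (np.dot(u, a) / np.dot(a, a)) * a

def closest_point_on_line(a, q):
    """Closest point to q on the line through the origin parallel to a."""
    return proj(np.array(q, dtype=float), np.array(a, dtype=float))

# Line through the origin parallel to a = (1, 2); point Q = (3, 1).
p = closest_point_on_line([1, 2], [3, 1])
print(p)  # [1. 2.]  since (u . a)/(a . a) = 5/5 = 1
```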
Remark 13.3.6.
It is not actually necessary that $Q$ be external to the line. If you were to carry out the procedure above in the case that $Q$ lies on $\ell$, the calculations would end up with $\operatorname{proj}_{\vec{a}} \vec{u} = \vec{u}$, confirming that $Q$ was already the point on the line that is closest to itself.
The normal vector $\vec{n}$ in the diagram above is sometimes called the vector component of $\vec{u}$ orthogonal to $\vec{a}$. Together, the projection vector and corresponding normal vector are called components of $\vec{u}$ (relative to $\vec{a}$) because they represent an orthogonal decomposition of $\vec{u}$:
$$\vec{u} = \operatorname{proj}_{\vec{a}} \vec{u} + \vec{n},$$
where $\operatorname{proj}_{\vec{a}} \vec{u}$ is parallel to $\vec{a}$ and $\vec{n}$ is orthogonal to $\vec{a}$. While this decomposition is relative to $\vec{a}$, it is really only the direction of $\vec{a}$ that matters — if $\vec{a}'$ is parallel to $\vec{a}$ (even possibly opposite to $\vec{a}$), then both
$$\operatorname{proj}_{\vec{a}'} \vec{u} = \operatorname{proj}_{\vec{a}} \vec{u} \qquad \text{and} \qquad \vec{u} - \operatorname{proj}_{\vec{a}'} \vec{u} = \vec{n}$$
will be true.
Procedure 13.3.7. Shortest distance to a line.
Given a line $\ell$ through the origin and point $Q$ that does not lie on $\ell$, compute the shortest distance from $Q$ to the line as follows.
- Compute the projection vector $\operatorname{proj}_{\vec{a}} \vec{u}$ as in Procedure 13.3.5.
- Compute the normal vector $\vec{n} = \vec{u} - \operatorname{proj}_{\vec{a}} \vec{u}$.
- Compute the norm $\|\vec{n}\|$; this norm is the desired shortest distance. (See the sketch following this procedure.)
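The following is a minimal sketch of Procedure 13.3.7 in NumPy, reusing the proj helper from the previous sketch; again, the names and numbers are our own.

```python
import numpy as np

def proj(u, a):
    """Orthogonal projection of u onto a."""
    return (np.dot(u, a) / np.dot(a, a)) * a

def distance_to_line(a, q):
    """Shortest distance from point q to the line through the origin parallel to a."""
    u = np.array(q, dtype=float)
    a = np.array(a, dtype=float)
    n = u - proj(u, a)        # normal component of u relative to a
    return np.linalg.norm(n)  # its norm is the shortest distance

# Same line and point as before: a = (1, 2), Q = (3, 1).
print(distance_to_line([1, 2], [3, 1]))  # 2.2360..., that is, sqrt(5)
```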
Remark 13.3.8.
- These procedures and calculations can be easily modified to work for lines that do not pass through the origin: simply choose some arbitrary “initial” point on the line to “act” as the origin.
- All of these calculations can be performed in higher dimensions as well. In higher dimensions, it is true that there is no longer one unique perpendicular direction to a given vector $\vec{a}$, but the calculation of $\vec{n} = \vec{u} - \operatorname{proj}_{\vec{a}} \vec{u}$ as above will pick out the correct direction to extend from the line to the point $Q$ at a right angle to the line. (A sketch illustrating both points follows this remark.)
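The following sketch combines both points of the remark: it projects relative to an arbitrary base point $B$ on the line, and it works in $\mathbb{R}^3$. The helper closest_point_on_line_through and the particular numbers are hypothetical, chosen only to illustrate the idea.

```python
import numpy as np

def proj(u, a):
    """Orthogonal projection of u onto a."""
    return (np.dot(u, a) / np.dot(a, a)) * a

def closest_point_on_line_through(b, a, q):
    """Closest point to q on the line through point b parallel to a (any dimension)."""
    b, a, q = (np.array(x, dtype=float) for x in (b, a, q))
    u = q - b              # treat b as the "origin" for the projection
    return b + proj(u, a)  # shift the projection back onto the actual line

# Line in R^3 through B = (1, 0, 0) parallel to a = (0, 1, 1); point Q = (1, 2, 0).
p = closest_point_on_line_through([1, 0, 0], [0, 1, 1], [1, 2, 0])
print(p)  # [1. 1. 1.]
```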
Subsection 13.3.4 Normal vectors of lines in the plane
Consider a line through the origin in the plane, described algebraically by a homogeneous equation $ax + by = 0$. To check whether a point $(x_0, y_0)$ lies on this line, we substitute its coordinates into the equation and verify that
$$a x_0 + b y_0 = 0 \tag{✶✶}$$
holds. The left-hand side of this calculation looks a lot like a dot product — we could reinterpret equation (✶✶) as
$$(a, b) \cdot (x_0, y_0) = 0.$$
So verifying that the point $(x_0, y_0)$ is on the line is equivalent to checking that the corresponding vector $\vec{x}_0 = (x_0, y_0)$ (with its tail at the origin) is orthogonal to the vector $(a, b)$ whose components are the coefficients from our line equation.
Diagram of a normal vector to a line in the plane (homogeneous case).
Every other point on the line satisfies the same relationship, as the equation for the line could be rewritten in a vector form as
$$(a, b) \cdot (x, y) = 0. \tag{✶✶✶}$$
The vector $\vec{n} = (a, b)$ is called a normal vector for the line. Note that normal vectors for a line are not unique — every nonzero scalar multiple of $\vec{n}$ will also be normal to the line, and this is equivalent to noting that we could multiply the equation $ax + by = 0$ by any nonzero factor to obtain a different equation that represents the same line in the plane.
In Discovery 13.7 we considered a line defined by a nonhomogeneous equation $ax + by = c$. This line has the same slope as the line defined by $ax + by = 0$ that we investigated above, and so the vector $\vec{n} = (a, b)$ obtained from the coefficients on $x$ and $y$ in the equation must still be normal. The constant $c$ just changes the $y$-intercept.
Diagram of a normal vector to a line in the plane (nonhomogeneous case).
In the homogeneous case, vectors from the origin determined by a point on the line were also parallel to the line. Since things have shifted away from the origin in the nonhomogeneous case, to get a vector parallel to the line we need to consider two vectors from the origin to points on the line. Choosing two convenient points on this line, say $P_1$ and $P_2$, with corresponding vectors $\vec{x}_1$ and $\vec{x}_2$, the difference vector
$$\vec{x}_2 - \vec{x}_1$$
is parallel to the line, as in the diagram above. In fact, this vector is parallel to the corresponding line through the origin in the diagram for the homogeneous case above, so we know it satisfies
$$\vec{n} \cdot (\vec{x}_2 - \vec{x}_1) = 0.$$
Is there a way to use the normal vector $\vec{n}$ to create a vector condition by which we can tell if a vector $\vec{x}$ represents a point on the line, as we did with equation (✶✶✶) in the homogeneous case? We need two points on the line to create a parallel difference vector, but we could compare the variable vector $\vec{x}$ with an arbitrarily chosen fixed vector $\vec{x}_0$ representing a known point on the line (like $\vec{x}_1$ above, say).
Diagram of a normal vector to a line in the plane (nonhomogeneous case).
Every such difference vector $\vec{x} - \vec{x}_0$ is parallel to the line and hence orthogonal to the normal vector $\vec{n}$, so that we can describe the line as all points where the corresponding vector $\vec{x}$ satisfies
$$\vec{n} \cdot (\vec{x} - \vec{x}_0) = 0. \tag{†}$$
This is called the point-normal form for the line, referring to the point on the line at the terminal point of $\vec{x}_0$ and the normal vector $\vec{n}$.
Pattern 13.3.9. Point-normal form for a line in $\mathbb{R}^2$.
If $\vec{x}_0 = (x_0, y_0)$ is a point on the line $ax + by = c$ (that is, $a x_0 + b y_0 = c$ is true), then the line can alternatively be described as all points whose corresponding vector $\vec{x} = (x, y)$ satisfies
$$(a, b) \cdot (\vec{x} - \vec{x}_0) = 0.$$
Remark 13.3.10.
It may seem like the line parameter $c$ has disappeared in converting from algebraic form to point-normal form. But it has merely been replaced by the point $\vec{x}_0$, since $(a, b) \cdot \vec{x}_0 = c$. In fact, if we use the algebraic properties of the dot product to expand the left-hand side of the point-normal form equation, we can recover the original algebraic equation:
$$(a, b) \cdot (\vec{x} - \vec{x}_0) = 0 \qquad\Longrightarrow\qquad ax + by - (a x_0 + b y_0) = 0 \qquad\Longrightarrow\qquad ax + by = c.$$
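As a small numerical check of this equivalence, here is a sketch for a hypothetical line $2x + 3y = 6$; the numbers and the helper name on_line are our own choices.

```python
import numpy as np

n = np.array([2.0, 3.0])   # normal vector from the coefficients of 2x + 3y = 6
x0 = np.array([3.0, 0.0])  # a known point on the line, since 2*3 + 3*0 = 6

def on_line(x):
    """Point-normal test: x is on the line exactly when n . (x - x0) = 0."""
    return np.isclose(np.dot(n, np.array(x, dtype=float) - x0), 0.0)

print(on_line([0, 2]))  # True:  2*0 + 3*2 = 6
print(on_line([1, 1]))  # False: 2*1 + 3*1 = 5 != 6
```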
Subsection 13.3.5 Normal vectors of planes in space
A similar analysis can be made for an equation $ax + by + cz = d$ describing a plane in space. The coefficients form a normal vector $\vec{n} = (a, b, c)$ for the plane. For vectors $\vec{x}_0$ and $\vec{x}$ that both have initial point at the origin and terminal points on the plane, the difference vector $\vec{x} - \vec{x}_0$ is parallel to the plane, hence orthogonal to $\vec{n}$. If we keep a fixed choice of $\vec{x}_0$ but replace $\vec{x}$ by a variable vector, we can describe the plane as all points whose difference with $\vec{x}_0$ is orthogonal to $\vec{n}$, giving us a point-normal form for a plane just as in equation (†).
Pattern 13.3.11. Point-normal form for a plane in $\mathbb{R}^3$.
If $\vec{x}_0 = (x_0, y_0, z_0)$ is a point on the plane $ax + by + cz = d$ (that is, $a x_0 + b y_0 + c z_0 = d$ is true), then the plane can alternatively be described as all points whose corresponding vector $\vec{x} = (x, y, z)$ satisfies
$$(a, b, c) \cdot (\vec{x} - \vec{x}_0) = 0.$$
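A short sketch mirroring the previous one, this time for a hypothetical plane $x + 2y + 2z = 4$ in $\mathbb{R}^3$; again the numbers and the name on_plane are ours.

```python
import numpy as np

n = np.array([1.0, 2.0, 2.0])   # normal vector from the coefficients of x + 2y + 2z = 4
x0 = np.array([4.0, 0.0, 0.0])  # a known point on the plane: 1*4 + 2*0 + 2*0 = 4

def on_plane(x):
    """Point-normal test: x lies on the plane exactly when n . (x - x0) = 0."""
    return np.isclose(np.dot(n, np.array(x, dtype=float) - x0), 0.0)

print(on_plane([0, 0, 2]))  # True:  1*0 + 2*0 + 2*2 = 4
print(on_plane([1, 1, 1]))  # False: 1*1 + 2*1 + 2*1 = 5 != 4
```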
Remark 13.3.12.
A line in space does not have a point-normal form, because it does not have one unique normal “direction” like a line in the plane or a plane in space does. To describe a line in space in a similar fashion you would need two normal vectors. We will see several more convenient ways to describe a line in space in the next chapter.
Subsection 13.3.6 The cross product
Seeing how the algebraic equation for a plane in $\mathbb{R}^3$ is connected to a normal vector to the plane, a basic problem is how to quickly obtain a normal vector. If we know two vectors that are parallel to the plane in question, the problem reduces to the following.
Question 13.3.13.
Given two nonzero, nonparallel vectors $\vec{u}$ and $\vec{v}$ in $\mathbb{R}^3$, determine a third vector $\vec{w}$ that is orthogonal to each of the first two.
Diagram of the setup for the cross product problem (Question 13.3.13 above).
We want both $\vec{u} \cdot \vec{w} = 0$ and $\vec{v} \cdot \vec{w} = 0$ to be true for the unknown vector $\vec{w} = (w_1, w_2, w_3)$. Expanding out the dot products, we get (surprise!) a system of linear equations:
$$\begin{aligned} u_1 w_1 + u_2 w_2 + u_3 w_3 &= 0, \\ v_1 w_1 + v_2 w_2 + v_3 w_3 &= 0. \end{aligned}$$
Specifically, we get a homogeneous system of two equations in the three unknown coordinates $w_1, w_2, w_3$. Now, since this system is homogeneous, it is consistent. But its general solution will also require at least one parameter, since its rank is at most $2$ while we have three variables. In the diagram above, we can see what the “freedom” of a parameter corresponds to — we can make $\vec{w}$ longer or shorter, or turn it around to be opposite of the way it is pictured, and it will remain orthogonal to $\vec{u}$ and $\vec{v}$. Our end goal is a calculation formula and procedure that will compute one particular solution to this problem, so let’s introduce a somewhat arbitrary additional equation to eliminate the need for a parameter in the solution:
$$c_1 w_1 + c_2 w_2 + c_3 w_3 = 1.$$
In matrix form, this system can be expressed as $A\vec{w} = \vec{b}$ with
$$A = \begin{bmatrix} c_1 & c_2 & c_3 \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{bmatrix}, \qquad \vec{b} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}. \tag{††}$$
Assuming that $\det A \neq 0$, Cramer’s rule tells us the solution to this system:
$$w_1 = \frac{\det A_1}{\det A}, \qquad w_2 = \frac{\det A_2}{\det A}, \qquad w_3 = \frac{\det A_3}{\det A},$$
where $A_j$ is obtained from $A$ by replacing its $j$th column with $\vec{b}$. Since $\vec{b}$ has a $1$ in its first entry and zeros below, each $\det A_j$ is exactly the $(1, j)$ cofactor of $A$, so that
$$w_1 = \frac{1}{\det A}\,(u_2 v_3 - u_3 v_2), \qquad w_2 = \frac{1}{\det A}\,(u_3 v_1 - u_1 v_3), \qquad w_3 = \frac{1}{\det A}\,(u_1 v_2 - u_2 v_1).$$
Now, each of $w_1, w_2, w_3$ has a common factor of $\frac{1}{\det A}$, and all this common factor does is scale the length of our solution vector without affecting orthogonality with $\vec{u}$ and $\vec{v}$. Even worse, $\det A$ depends on that extra equation we threw in, and we would like our solution to depend only on $\vec{u}$ and $\vec{v}$. So let’s remove it and use solution
$$\vec{w} = (u_2 v_3 - u_3 v_2,\; u_3 v_1 - u_1 v_3,\; u_1 v_2 - u_2 v_1).$$
We call this the cross product of $\vec{u}$ and $\vec{v}$, and write $\vec{u} \times \vec{v}$ instead of $\vec{w}$. There is a trick to remembering how to compute the cross product: if we replace the top row of $A$ by the standard basis vectors $\vec{e}_1, \vec{e}_2, \vec{e}_3$ in $\mathbb{R}^3$, then the cross product will be equal to its determinant expanded by cofactors along the first row. That is, setting up the matrix with these three rows and expanding its determinant along the first row yields
$$\vec{u} \times \vec{v} = \begin{vmatrix} \vec{e}_1 & \vec{e}_2 & \vec{e}_3 \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{vmatrix} = (u_2 v_3 - u_3 v_2)\,\vec{e}_1 + (u_3 v_1 - u_1 v_3)\,\vec{e}_2 + (u_1 v_2 - u_2 v_1)\,\vec{e}_3, \tag{†††}$$
as desired. See Example 13.4.4 in Subsection 13.4.3 for an example of using formula (†††) to compute cross products.
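As a quick computational illustration, the sketch below uses NumPy’s built-in np.cross on vectors of our own choosing and checks that the result is orthogonal to both factors.

```python
import numpy as np

u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 1.0, 3.0])

w = np.cross(u, v)   # formula (†††): (u2*v3 - u3*v2, u3*v1 - u1*v3, u1*v2 - u2*v1)
print(w)             # [ 6. -3.  1.]
print(np.dot(u, w))  # 0.0, so w is orthogonal to u
print(np.dot(v, w))  # 0.0, so w is orthogonal to v
```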
The cross product follows the right-hand rule — if you orient your right hand so that your fingers point in the direction of $\vec{u}$ and curl towards $\vec{v}$, then your thumb will point in the direction of $\vec{u} \times \vec{v}$.
Computing $\vec{v} \times \vec{u}$ instead of $\vec{u} \times \vec{v}$ should still produce a vector that is orthogonal to both $\vec{u}$ and $\vec{v}$, but the right-hand rule tells us that the two should be opposite to each other. From equation (†††) we can be even more specific. Computing $\vec{v} \times \vec{u}$ would swap the second and third rows of the special matrix in equation (†††), and we know that the resulting determinant would be the negative of that for the matrix for computing $\vec{u} \times \vec{v}$, and so
$$\vec{v} \times \vec{u} = -(\vec{u} \times \vec{v}).$$
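NumPy confirms this sign change on the same hypothetical vectors used in the previous sketch:

```python
import numpy as np

u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 1.0, 3.0])

print(np.cross(u, v))  # [ 6. -3.  1.]
print(np.cross(v, u))  # [-6.  3. -1.], the negative of u x v
```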
Remark 13.3.14.
There is one more thing to say about our development of the cross product — Cramer’s rule can only be applied if $\det A$ is not zero, where $A$ is the matrix in (††). However, the coefficients in the extra equation we introduced did not figure into our final solution. So if $\det A$ ended up being zero for some particular vectors $\vec{u}$ and $\vec{v}$, we could just change the variable coefficients in that extra equation (but keep the $1$ in the equals column) so that $\det A$ is not zero, and we would still come to the same formula for $\vec{u} \times \vec{v}$. And it follows from concepts we will learn in Chapter 20 that it is always possible to fill in the top row of this matrix so that its determinant is nonzero, as long as we start with nonparallel vectors $\vec{u}$ and $\vec{v}$.