
Section 13.5 Theory

Subsection 13.5.1 Properties of orthogonal vectors and orthogonal projection

First we record a few properties of orthogonal vectors and orthogonal projection.

Proof.

These properties of orthogonal vectors follow directly from the definition of orthogonality (i.e. dot product equals \(0\)) and from the algebraic properties of the dot product listed in Proposition 12.5.3, so we will omit detailed proofs.
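
For example (an illustrative calculation with specific vectors, not part of the proposition itself), writing plane vectors in components, the vectors \(\uvec{u} = (1,-2)\) and \(\uvec{v} = (2,1)\) satisfy
\begin{equation*} \dotprod{\uvec{u}}{\uvec{v}} = (1)(2) + (-2)(1) = 0 \text{,} \end{equation*}
so they are orthogonal, and then
\begin{equation*} \dotprod{(3\uvec{u})}{\uvec{v}} = 3 (\dotprod{\uvec{u}}{\uvec{v}}) = 0 \end{equation*}
as well, illustrating how orthogonality is preserved by scalar multiples.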

Proof of Rule 4.

Starting with the formula we determined for orthogonal projection, and using Rule 3 of Proposition 12.5.1 and Rule 7 of Proposition 12.5.3, we have
\begin{align*} \proj_{(k\uvec{a})} \uvec{u} \amp = \frac{\dotprod{\uvec{u}}{(k\uvec{a})}}{\norm{k\uvec{a}}^2} \; (k\uvec{a})\\ \amp = \frac{k(\udotprod{u}{a})}{\abs{k}^2\norm{\uvec{a}}^2} \; (k\uvec{a})\\ \amp = \frac{\cancel{k^2}(\udotprod{u}{a})}{\cancel{k^2}\norm{\uvec{a}}^2} \; \uvec{a}\\ \amp = \frac{\udotprod{u}{a}}{\norm{\uvec{a}}^2} \; \uvec{a}\\ \amp = \proj_{\uvec{a}} \uvec{u}\text{.} \end{align*}
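
As a numerical illustration of Rule 4 (with vectors and scalar chosen purely for this example), take \(\uvec{u} = (3,4)\text{,}\) \(\uvec{a} = (1,2)\text{,}\) and \(k = -3\text{.}\) Then \(\udotprod{u}{a} = 11\) and \(\unorm{a}^2 = 5\text{,}\) while \(\dotprod{\uvec{u}}{(-3\uvec{a})} = -33\) and \(\norm{-3\uvec{a}}^2 = 45\text{,}\) so that
\begin{align*} \proj_{\uvec{a}} \uvec{u} \amp = \frac{11}{5} \, (1,2) = \left(\tfrac{11}{5}, \tfrac{22}{5}\right), \amp \proj_{(-3\uvec{a})} \uvec{u} \amp = \frac{-33}{45} \, (-3,-6) = \left(\tfrac{11}{5}, \tfrac{22}{5}\right), \end{align*}
and the two projections agree, as the rule predicts.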

Proof of Rule 5.

If \(\uvec{u}\) is parallel to \(\uvec{a}\text{,}\) then it is a scalar multiple of \(\uvec{a}\text{:}\) \(\uvec{u} = k\uvec{a}\) for some scalar \(k\text{.}\) Then, using Rule 6 and Rule 8 of Proposition 12.5.3, we have
\begin{align*} \proj_{\uvec{a}} \uvec{u} \amp = \frac{\dotprod{\uvec{u}}{\uvec{a}}}{\unorm{a}^2} \; \uvec{a}\\ \amp = \frac{\dotprod{(k\uvec{a})}{\uvec{a}}}{\unorm{a}^2} \; \uvec{a}\\ \amp = \frac{k(\dotprod{\uvec{a}}{\uvec{a}})}{\unorm{a}^2} \; \uvec{a}\\ \amp = k\frac{\unorm{a}^2}{\unorm{a}^2} \; \uvec{a}\\ \amp = k \uvec{a}\\ \amp = \uvec{u}\text{.} \end{align*}
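
For a concrete instance of Rule 5 (with vectors chosen only to illustrate), take \(\uvec{a} = (1,2,2)\) and \(\uvec{u} = 2\uvec{a} = (2,4,4)\text{.}\) Then \(\udotprod{u}{a} = 18\) and \(\unorm{a}^2 = 9\text{,}\) so
\begin{equation*} \proj_{\uvec{a}} \uvec{u} = \frac{18}{9} \; \uvec{a} = 2 \uvec{a} = \uvec{u} \text{,} \end{equation*}
exactly as the rule predicts.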

Proofs of other rules.

The rest of these properties of orthogonal projection follow from the properties of the dot product in Proposition 12.5.3 and from the formula
\begin{equation*} \proj_{\uvec{a}} \uvec{u} = \frac{\udotprod{u}{a}}{\unorm{a}^2}\;\uvec{a} \text{,} \end{equation*}
so we will leave the remaining proofs to you, the reader.
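
As a sample of how such an argument might go (whether or not this particular identity is stated among the rules above, it follows directly from the formula and the distributive property of the dot product), here is a verification that projection onto \(\uvec{a}\) respects vector addition:
\begin{align*} \proj_{\uvec{a}} (\uvec{u} + \uvec{v}) \amp = \frac{\dotprod{(\uvec{u} + \uvec{v})}{\uvec{a}}}{\unorm{a}^2} \; \uvec{a}\\ \amp = \frac{\udotprod{u}{a} + \udotprod{v}{a}}{\unorm{a}^2} \; \uvec{a}\\ \amp = \frac{\udotprod{u}{a}}{\unorm{a}^2} \; \uvec{a} + \frac{\udotprod{v}{a}}{\unorm{a}^2} \; \uvec{a}\\ \amp = \proj_{\uvec{a}} \uvec{u} + \proj_{\uvec{a}} \uvec{v} \text{.} \end{align*}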

Subsection 13.5.2 Decomposition of a vector into orthogonal components

The following fact says that the decomposition of one vector into components (parallel and orthogonal) relative to another vector is unique.

Proof.

Clearly such a decomposition exists — see Remark 13.5.4 below. But suppose we have two such decompositions,
\begin{align*} \uvec{u} \amp= \uvec{p}_{\uvec{a}} + \uvec{n}_{\uvec{a}}, \amp \uvec{u} \amp= \uvec{p}_{\uvec{a}}' + \uvec{n}_{\uvec{a}}', \end{align*}
where both \(\uvec{p}_{\uvec{a}},\uvec{p}_{\uvec{a}}'\) are parallel to \(\uvec{a}\) and both \(\uvec{n}_{\uvec{a}},\uvec{n}_{\uvec{a}}'\) are orthogonal to \(\uvec{a}\text{.}\) Then each of \(\uvec{n}_{\uvec{a}},\uvec{n}_{\uvec{a}}'\) is also orthogonal to each of \(\uvec{p}_{\uvec{a}},\uvec{p}_{\uvec{a}}'\) (Rule 1 of Proposition 13.5.1).
We can use the two decompositions to obtain two expressions for each of \(\dotprod{\uvec{p}_{\uvec{a}}}{\uvec{u}}\) and \(\dotprod{\uvec{p}_{\uvec{a}}'}{\uvec{u}}\text{:}\)
\begin{align*} \dotprod{\uvec{p}_{\uvec{a}}}{\uvec{u}} \amp= \dotprod{\uvec{p}_{\uvec{a}}}{(\uvec{p}_{\uvec{a}} + \uvec{n}_{\uvec{a}})} \amp \dotprod{\uvec{p}_{\uvec{a}}'}{\uvec{u}} \amp= \dotprod{\uvec{p}_{\uvec{a}}'}{(\uvec{p}_{\uvec{a}}' + \uvec{n}_{\uvec{a}}')}\\ \amp= \dotprod{\uvec{p}_{\uvec{a}}}{\uvec{p}_{\uvec{a}}} + \dotprod{\uvec{p}_{\uvec{a}}}{\uvec{n}_{\uvec{a}}} \amp \amp= \dotprod{\uvec{p}_{\uvec{a}}'}{\uvec{p}_{\uvec{a}}'} + \dotprod{\uvec{p}_{\uvec{a}}'}{\uvec{n}_{\uvec{a}}'}\\ \amp= \norm{\uvec{p}_{\uvec{a}}}^2 + 0 \amp \amp= \norm{\uvec{p}_{\uvec{a}}'}^2 + 0\\ \amp= \norm{\uvec{p}_{\uvec{a}}}^2, \amp \amp= \norm{\uvec{p}_{\uvec{a}}'}^2,\\ \\ \dotprod{\uvec{p}_{\uvec{a}}}{\uvec{u}} \amp= \dotprod{\uvec{p}_{\uvec{a}}}{(\uvec{p}_{\uvec{a}}' + \uvec{n}_{\uvec{a}}')} \amp \dotprod{\uvec{p}_{\uvec{a}}'}{\uvec{u}} \amp= \dotprod{\uvec{p}_{\uvec{a}}'}{(\uvec{p}_{\uvec{a}} + \uvec{n}_{\uvec{a}})}\\ \amp= \dotprod{\uvec{p}_{\uvec{a}}}{\uvec{p}_{\uvec{a}}'} + \dotprod{\uvec{p}_{\uvec{a}}}{\uvec{n}_{\uvec{a}}'} \amp \amp= \dotprod{\uvec{p}_{\uvec{a}}'}{\uvec{p}_{\uvec{a}}} + \dotprod{\uvec{p}_{\uvec{a}}'}{\uvec{n}_{\uvec{a}}}\\ \amp= \dotprod{\uvec{p}_{\uvec{a}}}{\uvec{p}_{\uvec{a}}'} + 0 \amp \amp= \dotprod{\uvec{p}_{\uvec{a}}}{\uvec{p}_{\uvec{a}}'} + 0\\ \amp= \dotprod{\uvec{p}_{\uvec{a}}}{\uvec{p}_{\uvec{a}}'}, \amp \amp= \dotprod{\uvec{p}_{\uvec{a}}}{\uvec{p}_{\uvec{a}}'}. \end{align*}
Since the bottom two calculations yield the same result, the quantities they begin with must be equal:
\begin{equation*} \dotprod{\uvec{p}_{\uvec{a}}}{\uvec{u}} = \dotprod{\uvec{p}_{\uvec{a}}'}{\uvec{u}}\text{.} \end{equation*}
But these are also the beginning quantities of the top two calculations, so those two calculations must have the same result,
\begin{equation*} \norm{\uvec{p}_{\uvec{a}}}^2 = \norm{\uvec{p}_{\uvec{a}}'}^2 \text{.} \end{equation*}
Therefore, \(\uvec{p}_{\uvec{a}}\) and \(\uvec{p}_{\uvec{a}}'\) have the same length. Since these two vectors are also parallel to each other (because each is parallel to \(\uvec{a}\)), they must be either the same vector or negatives of each other. However, if they were negatives of each other (i.e. \(\uvec{p}_{\uvec{a}}' = -\uvec{p}_{\uvec{a}}\)), tracing through the two calculations of \(\dotprod{\uvec{p}_{\uvec{a}}}{\uvec{u}}\) above would tell us that
\begin{equation*} \norm{\uvec{p}_{\uvec{a}}}^2 = \dotprod{\uvec{p}_{\uvec{a}}}{\uvec{u}} = \dotprod{\uvec{p}_{\uvec{a}}}{\uvec{p}_{\uvec{a}}'} = \dotprod{\uvec{p}_{\uvec{a}}}{(-\uvec{p}_{\uvec{a}})} = -(\dotprod{\uvec{p}_{\uvec{a}}}{\uvec{p}_{\uvec{a}}}) = -\norm{\uvec{p}_{\uvec{a}}}^2\text{,} \end{equation*}
which is only possible if \(\norm{\uvec{p}_{\uvec{a}}} = 0\text{,}\) in which case \(\uvec{p}_{\uvec{a}} = \zerovec\text{,}\) and then also \(\uvec{p}_{\uvec{a}}' = -\uvec{p}_{\uvec{a}} = \zerovec\text{.}\) Thus, in every case we have \(\uvec{p}_{\uvec{a}}' = \uvec{p}_{\uvec{a}}\text{.}\) But then
\begin{equation*} \uvec{n}_{\uvec{a}}' = \uvec{u} - \uvec{p}_{\uvec{a}}' = \uvec{u} - \uvec{p}_{\uvec{a}} = \uvec{n}_{\uvec{a}} \text{.} \end{equation*}
So the two decompositions we started with are in fact the same, and hence there cannot be more than one such decomposition.

Remark 13.5.4.

Clearly, in this decomposition we have \(\uvec{p}_{\uvec{a}} = \proj_{\uvec{a}} \uvec{u}\) and \(\uvec{n}_{\uvec{a}} = \uvec{u} - \proj_{\uvec{a}} \uvec{u}\text{.}\)
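
To make the decomposition concrete (with vectors chosen just for illustration), take \(\uvec{a} = (1,2)\) and \(\uvec{u} = (3,4)\text{.}\) Then
\begin{align*} \uvec{p}_{\uvec{a}} \amp = \proj_{\uvec{a}} \uvec{u} = \frac{11}{5} \, (1,2) = \left(\tfrac{11}{5}, \tfrac{22}{5}\right), \amp \uvec{n}_{\uvec{a}} \amp = \uvec{u} - \proj_{\uvec{a}} \uvec{u} = \left(\tfrac{4}{5}, -\tfrac{2}{5}\right), \end{align*}
and one can check directly that \(\uvec{p}_{\uvec{a}}\) is a scalar multiple of \(\uvec{a}\) while \(\dotprod{\uvec{n}_{\uvec{a}}}{\uvec{a}} = \tfrac{4}{5} - \tfrac{4}{5} = 0\text{.}\)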

Subsection 13.5.3 Properties of the cross product

Finally, we record a few properties of the cross product.

Proof idea.

The first two statements just reflect the design goal in inventing the cross product: we were looking for a vector that was orthogonal to each of the two input vectors. The rest of the statements follow easily from the determinant formula (†††) for the cross product expressed in Subsection 13.3.6 combined with the properties of the determinant contained in Proposition 9.4.2. We leave detailed proofs to you, the reader.
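
As a quick check of the first two statements (with specific vectors chosen only to illustrate), take \(\uvec{a} = (1,2,3)\) and \(\uvec{b} = (4,5,6)\text{.}\) The cross product works out to \(\uvec{a} \times \uvec{b} = (-3,6,-3)\text{,}\) and then
\begin{align*} \dotprod{(\uvec{a} \times \uvec{b})}{\uvec{a}} \amp = (-3)(1) + (6)(2) + (-3)(3) = 0, \amp \dotprod{(\uvec{a} \times \uvec{b})}{\uvec{b}} \amp = (-3)(4) + (6)(5) + (-3)(6) = 0, \end{align*}
so this particular cross product is indeed orthogonal to each of the two input vectors.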