Draw the vector \(\uvec{u} = (3,2)\) in the \(xy\)-plane, then draw a representation of the decomposition \(\uvec{u} = 3 \uvec{e}_1 + 2 \uvec{e}_2\text{,}\) where \(\uvec{e}_1\) and \(\uvec{e}_2\) are the standard basis vectors in \(\R^2\text{.}\)
Then call on some dead Greek dude to help you compute the length of \(\uvec{u}\text{.}\)
(b)
Does the same method work to determine the length of \(\uvec{w} = (3,-2)\text{?}\) (And what is the point of checking this case?)
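As a quick numerical sanity check (an addition, not part of the original activity), Pythagoras' formula can be evaluated directly in Python; note that squaring a component erases its sign:

```python
import math

# Length of u = (3, 2) via Pythagoras: sqrt(3^2 + 2^2)
u_len = math.sqrt(3**2 + 2**2)

# Same computation for w = (3, -2): the squared components are identical
w_len = math.sqrt(3**2 + (-2)**2)

print(u_len)           # sqrt(13), approximately 3.6056
print(u_len == w_len)  # True: the sign of a component doesn't affect length
```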
(c)
In general, the formula for the length of a two-dimensional vector \(\uvec{v} = (v_1,v_2)\) is \(\fillinmath{XXXXXXXXXX}\text{.}\)
(d)
The same sort of formula works in three or more dimensions. Fill in the general formulas below.
The length of \(\uvec{v} = (v_1,v_2,v_3)\) is \(\fillinmath{XXXXXXXXXX}\text{.}\)
The “length” of \(\uvec{v} = (v_1,v_2,v_3,v_4)\) is \(\fillinmath{XXXXXXXXXX}\text{.}\)
The “length” of \(\uvec{v} = (v_1,v_2,\dotsc,v_n)\) is \(\fillinmath{XXXXXXXXXX}\text{.}\)
The quantity for which we developed formulas in Discovery 12.1 is called the norm of \(\uvec{v}\text{,}\) and is denoted \(\unorm{v}\text{.}\) (We don’t use the word “length” for \(n>3\) — how do you measure length in four dimensions?)
Describe the pattern of your formula for \(\unorm{v}^2\) in words without using any letter variables:
\begin{equation*}
\textit{the square of the norm of a vector is equal to} \quad \fillinmath{XXXXXXXXXXXXXXXXXXXX} \text{.}
\end{equation*}
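Once you've settled on the pattern, you can check your general formula numerically; a minimal Python sketch (my addition, using the sum-of-squares pattern developed above):

```python
import math

def norm(v):
    # Square root of the sum of the squares of the components, in any dimension
    return math.sqrt(sum(x**2 for x in v))

print(norm((3, 4)))        # 5.0 in two dimensions
print(norm((1, 2, 2)))     # 3.0 in three dimensions
print(norm((1, 1, 1, 1)))  # 2.0 in four dimensions
```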
Discovery 12.3.
In this activity, make sure you can answer the questions for all dimensions, and make sure you can justify your answer using the formula for norm from Discovery 12.2, not just geometrically.
(a)
Can \(\unorm{v}\) ever be negative?
(b)
What is \(\unorm{0}\text{?}\) Is \(\zerovec\) the only vector that has this value for its norm?
Discovery 12.4.
A unit vector is one whose norm is equal to \(1\text{.}\)
(a)
Verify that the standard basis vectors are all unit vectors, in all dimensions.
(b)
Fill in the blanks with an appropriate scalar multiple.
If \(\unorm{u} = 1/2\text{,}\) then \(\fillinmath{XX}\uvec{u}\) is a unit vector.
If \(\unorm{w} = 2\text{,}\) then \(\fillinmath{XX}\uvec{w}\) is a unit vector.
For every nonzero \(\uvec{v}\text{,}\) \(k\uvec{v}\) is a unit vector for both \(k=\fillinmath{XXX}\) and \(k=\fillinmath{XXX}\text{.}\)
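After you've filled in the blanks, a numerical experiment can confirm that your scalars really produce unit vectors; a Python sketch (my addition, with an arbitrarily chosen vector):

```python
import math

def norm(v):
    # Sum-of-squares norm from Discovery 12.2
    return math.sqrt(sum(x**2 for x in v))

v = (3, 4)       # an arbitrary nonzero vector; norm(v) is 5.0
k = 1 / norm(v)  # candidate scalar built from the norm
u = tuple(k * x for x in v)

print(norm(u))   # 1.0, up to floating-point rounding
```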
Discovery 12.5.
Plot points \(P(1,3)\) and \(Q(4,-1)\) in the \(xy\)-plane. Now draw in the vectors \(\uvec{u}\) and \(\uvec{v}\) that correspond to \(\abray{OP}\) and \(\abray{OQ}\text{.}\) Complete the triangle by drawing a vector between \(P\) and \(Q\text{.}\) Do you remember how to express this vector as a combination of \(\uvec{u}\) and \(\uvec{v}\text{?}\) Now compute the distance between \(P\) and \(Q\) by computing the norm of this third vector.
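A numerical double-check of this computation (my addition) in Python:

```python
import math

P = (1, 3)
Q = (4, -1)

# Components of the vector that runs from P to Q
v = (Q[0] - P[0], Q[1] - P[1])

# The distance between P and Q is the norm of that vector
dist = math.sqrt(v[0]**2 + v[1]**2)
print(dist)  # 5.0
```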
Discovery 12.6.
Recall that in math we measure angles in radians. Here are some common conversions:
\begin{equation*}
\degree{360} = 2\pi \text{,} \quad \degree{180} = \pi \text{,} \quad \degree{90} = \frac{\pi}{2} \text{,} \quad \degree{45} = \frac{\pi}{4} \text{.}
\end{equation*}
(a)
In the \(xy\)-plane, what is the angle between \(\uvec{e}_1\) and \(\uvec{e}_2\text{?}\) … between \(\uvec{e}_1\) and \(\uvec{u} = (1,1)\text{?}\) … between \(\uvec{e}_1\) and \(2\uvec{e}_1\text{?}\) … between \(\uvec{e}_1\) and \(-\uvec{e}_2\text{?}\) … between \(\uvec{e}_1\) and \(\uvec{v} = (1,-1)\text{?}\) … between \(\uvec{e}_1\) and \(-\uvec{e}_1\text{?}\)
(b)
Fill in the blanks: an angle \(\theta\) between a pair of two-dimensional vectors should satisfy \(\fillinmath{XX} \le \theta \le \fillinmath{XX} \text{.}\)
Discovery 12.7.
In the diagram below, consider \(\uvec{u}\) and \(\uvec{v}\) to be two-dimensional vectors. Label the third vector with the appropriate combination of \(\uvec{u}\) and \(\uvec{v}\text{,}\) just as you did in Discovery 12.5.
There is a version of Pythagoras that applies here even though \(\theta\neq\degree{90}\text{,}\) called the law of cosines:
\begin{equation*}
a^2 + b^2 - c^2 = 2 a b \cos \theta \text{,}
\end{equation*}
where \(a\) is the length of \(\uvec{u}\text{,}\) \(b\) is the length of \(\uvec{v}\text{,}\) and \(c\) is the length of the “hypotenuse” across from \(\theta\text{.}\) (If \(\theta\) were \(\degree{90}\text{,}\) the right-hand side of this equality would be zero and this law would “collapse” to the same equality as Pythagoras.)
Use the formulas from Discovery 12.2 to rewrite the left-hand side of the law of cosines in terms of the components of \(\uvec{u} = (u_1,u_2)\) and \(\uvec{v} = (v_1,v_2)\text{,}\) then simplify until you get
\begin{equation*}
2 \times \bigl( \fillinmath{XXXXXXXXXX} \bigr) \text{.}
\end{equation*}
Using the new expression \(2\times(\text{simple formula})\) from Discovery 12.7 as the left-hand side in the law of cosines, and dividing both sides by \(2 a b\text{,}\) we get
\begin{equation*}
\cos \theta = \frac{\text{simple formula}}{a b} \text{.}
\end{equation*}
(Remember that \(a\) and \(b\) are the lengths of \(\uvec{u}\) and \(\uvec{v}\text{,}\) respectively.)
The “simple formula” part of this angle formula turns out to be an important one — it is called the Euclidean inner product or standard inner product (or simply the dot product) of \(\uvec{u}\) and \(\uvec{v}\text{,}\) and is written \(\udotprod{u}{v}\text{.}\)
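As a sanity check of the angle formula (my addition; it implements the dot product componentwise, which is the pattern Discovery 12.8 asks you to write down), here is a Python computation of one of the angles from the earlier tasks:

```python
import math

def norm(v):
    return math.sqrt(sum(x**2 for x in v))

def dot(u, v):
    # Componentwise "simple formula": sum of products of corresponding components
    return sum(x * y for x, y in zip(u, v))

e1 = (1, 0)
u = (1, 1)

# cos(theta) = (u . v) / (a b), where a and b are the two norms
cos_theta = dot(e1, u) / (norm(e1) * norm(u))
theta = math.acos(cos_theta)
print(theta, math.pi / 4)  # theta is pi/4 (up to rounding), i.e. 45 degrees
```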
Discovery 12.8.
Let’s extend the computational pattern from Discovery 12.7. In the two-dimensional case in Task a below, you should just enter the “simple formula” you discovered above. In the subsequent tasks in higher dimensions, use the pattern from the two-dimensional case to create a similar higher-dimensional formula.
(a) In two dimensions.
For \(\uvec{u} = (u_1,u_2)\text{,}\) \(\uvec{v} = (v_1,v_2)\text{:}\) \(\quad \udotprod{u}{v} = \fillinmath{XXXXXXXXXX} \text{.}\)
(b) In three dimensions.
For \(\uvec{u} = (u_1,u_2,u_3)\text{,}\) \(\uvec{v} = (v_1,v_2,v_3)\text{:}\) \(\quad \udotprod{u}{v} = \fillinmath{XXXXXXXXXX} \text{.}\)
(c) In four dimensions.
For \(\uvec{u} = (u_1,u_2,u_3,u_4)\text{,}\) \(\uvec{v} = (v_1,v_2,v_3,v_4)\text{:}\) \(\quad \udotprod{u}{v} = \fillinmath{XXXXXXXXXX} \text{.}\)
(d) Arbitrary dimension.
For \(\uvec{u} = (u_1,u_2,\dotsc,u_n)\text{,}\) \(\uvec{v} = (v_1,v_2,\dotsc,v_n)\text{:}\) \(\quad \udotprod{u}{v} = \fillinmath{XXXXXXXXXX} \text{.}\)
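The arbitrary-dimension pattern is a one-line computation; a Python sketch (my addition) you can use to test specific cases once you've filled in the blanks:

```python
def dot(u, v):
    # Sum of products of corresponding components; works in any dimension
    assert len(u) == len(v), "vectors must have the same dimension"
    return sum(x * y for x, y in zip(u, v))

print(dot((1, 2), (3, 4)))              # 1*3 + 2*4 = 11
print(dot((1, 2, 3), (4, 5, 6)))        # 4 + 10 + 18 = 32
print(dot((1, 0, 0, 2), (5, 6, 7, 8)))  # 5 + 0 + 0 + 16 = 21
```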
Discovery 12.9.
What is the formula for the dot product of a vector with itself?
For \(\uvec{v} = (v_1,v_2,\dotsc,v_n)\text{,}\) \(\udotprod{v}{v} = \fillinmath{XXXXXXXXXXXX}\text{.}\)
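Whatever formula you enter, you can verify numerically that it agrees with the square of the norm; a Python check (my addition):

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def norm(v):
    return math.sqrt(sum(x**2 for x in v))

v = (1, 2, 2)
print(dot(v, v))     # 9: the sum of the squared components
print(norm(v) ** 2)  # 9.0: the same number, as the square of the norm
```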
Discovery 12.10.
Do you think all these properties will still be true for higher-dimensional vectors?
Discovery 12.11.
(a)
For two-dimensional column vectors \(\uvec{u} = \left[\begin{smallmatrix}u_1\\u_2\end{smallmatrix}\right]\) and \(\uvec{v} = \left[\begin{smallmatrix}v_1\\v_2\end{smallmatrix}\right]\text{,}\) compute the matrix product \((\utrans{\uvec{u}})\uvec{v}\text{.}\)
What do you notice? Do you think the same will happen for higher-dimensional column vectors?
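A small Python sketch of the computation in Task a (my addition), representing column vectors as n×1 nested lists so the transpose and the matrix product are explicit:

```python
def transpose(A):
    # Rows become columns
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    # Standard matrix product of nested-list matrices
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

u = [[1], [2]]  # a 2-dimensional column vector
v = [[3], [4]]

# (u^T) v is a 1x1 matrix; its single entry is u1*v1 + u2*v2
print(matmul(transpose(u), v))  # [[11]]
```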
(b)
Suppose \(\uvec{u}\) and \(\uvec{v}\) are \(n\)-dimensional column vectors and \(A\) is an \(n\times n\) matrix. Use what you discovered in Task a to fill in the blank:
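As a numerical experiment toward the blank (my addition; it checks the standard transpose identity for the dot product, which I'm assuming is the pattern this task is after), with an arbitrarily chosen matrix and vectors:

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def matvec(A, v):
    # A v, treating v as a plain list of components
    return [dot(row, v) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 2], [3, 4]]  # arbitrary 2x2 matrix, chosen for illustration
u = [5, 6]
v = [7, 8]

# (A u) . v equals u . (A^T v)
print(dot(matvec(A, u), v))             # 431
print(dot(u, matvec(transpose(A), v)))  # 431
```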