Suppose \(V\) is an inner product space, and \(U\) is a subspace of \(V\text{.}\) The collection of all vectors orthogonal to \(U\) is called the orthogonal complement of \(U\text{,}\) and is denoted \(\orthogcmp{U}\text{.}\) That is, \(\orthogcmp{U}\) consists of all vectors in \(V\) that are orthogonal to every vector in \(U\text{.}\)
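In symbols, this definition can be recorded as
\begin{equation*}
\orthogcmp{U} = \left\{ \uvec{v} \text{ in } V \,:\, \inprod{\uvec{v}}{\uvec{u}} = 0 \text{ for every } \uvec{u} \text{ in } U \right\} \text{.}
\end{equation*}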
Note. In the first two tasks, keep in mind that if a directed line segment is translated, the vector associated to the translated segment is equal to the vector associated to the original segment.
Suppose \(U = \Span \{\uvec{u}_1,\uvec{u}_2,\uvec{u}_3\}\) is a subspace of an inner product space \(V\text{.}\) Convince yourself that a vector \(\uvec{v}\) is in \(\orthogcmp{U}\) if and only if \(\uvec{v}\) is orthogonal to each of \(\uvec{u}_1,\uvec{u}_2,\uvec{u}_3\text{.}\)
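One way to see the less obvious direction of this equivalence is to use the linearity of the inner product: if \(\uvec{v}\) is orthogonal to each of \(\uvec{u}_1,\uvec{u}_2,\uvec{u}_3\text{,}\) and \(\uvec{u} = c_1 \uvec{u}_1 + c_2 \uvec{u}_2 + c_3 \uvec{u}_3\) is an arbitrary vector in \(U\text{,}\) then
\begin{equation*}
\inprod{\uvec{v}}{\uvec{u}} = c_1 \inprod{\uvec{v}}{\uvec{u}_1} + c_2 \inprod{\uvec{v}}{\uvec{u}_2} + c_3 \inprod{\uvec{v}}{\uvec{u}_3} = 0 \text{.}
\end{equation*}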
Since \(\orthogcmp{U}\) is defined by a homogeneous condition (inner product equals \(0\)), we expect it to be a subspace. The orthogonality condition can be used to determine a basis for \(\orthogcmp{U}\text{.}\)
Consider \(V = \matrixring_{2 \times 2}(\R)\) as an inner product space with \(\inprod{A}{B} = \trace(\utrans{B} A)\text{.}\) Let \(U\) be the subspace consisting of all upper triangular matrices whose upper-right entry is equal to the trace of the matrix.
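As one possible way to begin, the condition defining \(U\) says that a typical element has the form
\begin{equation*}
\begin{bmatrix} a & a + d \\ 0 & d \end{bmatrix}
= a \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}
+ d \begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix} \text{,}
\end{equation*}
which exhibits a spanning set for \(U\) against which the orthogonality condition can be tested.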
A set of vectors in an inner product space is called an orthogonal set if each vector in the set is orthogonal to every other vector in the set. A set of vectors is called an orthonormal set if it is an orthogonal set where every member is a unit vector.
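In symbols: a set \(\{\uvec{v}_1, \uvec{v}_2, \dotsc, \uvec{v}_k\}\) is orthogonal when
\begin{equation*}
\inprod{\uvec{v}_i}{\uvec{v}_j} = 0 \qquad \text{whenever } i \neq j \text{,}
\end{equation*}
and orthonormal when, in addition, \(\inprod{\uvec{v}_i}{\uvec{v}_i} = 1\) for each \(i\text{.}\)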
Geometrically we think of linearly independent vectors as “pointing in different directions,” so it is reasonable to expect an orthogonal set of vectors to be independent.
Suppose \(\{ \uvec{v}_1, \uvec{v}_2, \uvec{v}_3 \}\) is an orthogonal set of nonzero vectors in an inner product space. To test for independence, we start with the homogeneous vector equation
\begin{equation*}
k_1 \uvec{v}_1 + k_2 \uvec{v}_2 + k_3 \uvec{v}_3 = \zerovec \text{.}
\end{equation*}
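As a check on your reasoning, here is a sketch of the key step: take the inner product of each side of this equation with \(\uvec{v}_1\text{.}\) Orthogonality eliminates all but one term on the left, leaving
\begin{equation*}
k_1 \inprod{\uvec{v}_1}{\uvec{v}_1} = 0 \text{,}
\end{equation*}
and since \(\uvec{v}_1\) is nonzero we have \(\inprod{\uvec{v}_1}{\uvec{v}_1} \neq 0\text{,}\) which forces \(k_1 = 0\text{.}\) The same argument applies to \(k_2\) and \(k_3\text{.}\)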
Suppose \(\basisfont{B} = \{\uvec{e}_1,\uvec{e}_2,\uvec{e}_3\}\) is both a basis and an orthogonal set in an inner product space \(V\text{.}\) Since \(\basisfont{B}\) is a basis, every vector \(\uvec{v}\) in \(V\) has a unique expression
\begin{equation*}
\uvec{v} = k_1 \uvec{e}_1 + k_2 \uvec{e}_2 + k_3 \uvec{e}_3 \text{.} \tag{✶✶}
\end{equation*}
Substitute (✶✶) into \(\inprod{\uvec{v}}{\uvec{e}_1}\) to obtain an expression for \(\inprod{\uvec{v}}{\uvec{e}_1}\) in terms of the \(k_j\) and \(\uvec{e}_j\text{.}\) Then isolate \(k_1\text{.}\)
Similar to Task a, use (✶✶) in \(\inprod{\uvec{v}}{\uvec{e}_2}\) to obtain a formula for \(k_2\text{,}\) and in \(\inprod{\uvec{v}}{\uvec{e}_3}\) to obtain a formula for \(k_3\text{.}\)
If \(V\) has dimension \(n\) (instead of dimension \(3\)), then the coordinates of a vector \(\uvec{v}\) relative to an orthogonal basis \(\basisfont{B} = \{\uvec{e}_1,\uvec{e}_2,\dotsc,\uvec{e}_n\}\) are \(\underline{\hspace{6em}}\text{.}\) (Compare with the sketch following these tasks.)
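As a sketch of the pattern in these tasks (not a substitute for carrying out the computations yourself): substituting (✶✶) into \(\inprod{\uvec{v}}{\uvec{e}_1}\) and using the orthogonality of \(\basisfont{B}\) gives
\begin{equation*}
\inprod{\uvec{v}}{\uvec{e}_1}
= k_1 \inprod{\uvec{e}_1}{\uvec{e}_1} + k_2 \inprod{\uvec{e}_2}{\uvec{e}_1} + k_3 \inprod{\uvec{e}_3}{\uvec{e}_1}
= k_1 \inprod{\uvec{e}_1}{\uvec{e}_1} \text{,}
\end{equation*}
so that \(k_1 = \inprod{\uvec{v}}{\uvec{e}_1} / \inprod{\uvec{e}_1}{\uvec{e}_1}\text{,}\) with similar formulas for the other coordinates.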
To keep it simple, let’s suppose \(V\) has dimension \(3\text{.}\) The beginning ingredient for our procedure is some (probably nonorthogonal) basis \(\basisfont{B}_0 = \{\uvec{v}_1,\uvec{v}_2,\uvec{v}_3\}\text{,}\) and the end result should be some definitely orthogonal basis \(\basisfont{B} = \{\uvec{e}_1,\uvec{e}_2,\uvec{e}_3\}\text{.}\)
To get the process started, we might as well first choose \(\uvec{e}_1 = \uvec{v}_1\text{,}\) since we don't yet have any other \(\uvec{e}_j\) vectors chosen to which \(\uvec{e}_1\) needs to be orthogonal. (A summary of the resulting pattern appears after the tasks below.)
If we already knew the answer \(\basisfont{B} = \{\uvec{e}_1,\uvec{e}_2,\uvec{e}_3\}\text{,}\) and we expanded \(\uvec{v}_2 = k_1\uvec{e}_1 + k_2 \uvec{e}_2 + k_3 \uvec{e}_3\) relative to \(\basisfont{B}\text{,}\) what would the coefficient \(k_1\) be?
Draw a diagram of \(\uvec{v}_2\text{,}\) \(\uvec{e}_1\text{,}\) and \(k_1 \uvec{e}_1\) as if these were vectors in \(\R^n\text{,}\) keeping in mind that \(k_1 \uvec{e}_1\) should be exactly that part of \(\uvec{v}_2\) that is parallel to \(\uvec{e}_1\text{.}\) (Does this diagram remind you of some previous concept?)
For the expansion \(\uvec{v}_3 = k_1\uvec{e}_1 + k_2\uvec{e}_2 + k_3\uvec{e}_3\) relative to \(\basisfont{B}\text{,}\) what would be the coordinates \(k_1\) and \(k_2\text{?}\)
Draw a diagram of \(\uvec{v}_3\text{,}\) \(\uvec{e}_1\text{,}\) \(\uvec{e}_2\text{,}\) and \(k_1 \uvec{e}_1 + k_2\uvec{e}_2\text{,}\) keeping in mind that \(k_1 \uvec{e}_1 + k_2\uvec{e}_2\) should be exactly that part of \(\uvec{v}_3\) that is “parallel” to \(\Span\{\uvec{e}_1,\uvec{e}_2\}\text{.}\)
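For reference after completing the tasks above, here is one way to record the resulting pattern in the three-dimensional case (a sketch, using the coefficient formulas from the previous discovery activity):
\begin{align*}
\uvec{e}_1 &= \uvec{v}_1 \text{,} \\
\uvec{e}_2 &= \uvec{v}_2 - \frac{\inprod{\uvec{v}_2}{\uvec{e}_1}}{\inprod{\uvec{e}_1}{\uvec{e}_1}} \, \uvec{e}_1 \text{,} \\
\uvec{e}_3 &= \uvec{v}_3 - \frac{\inprod{\uvec{v}_3}{\uvec{e}_1}}{\inprod{\uvec{e}_1}{\uvec{e}_1}} \, \uvec{e}_1
- \frac{\inprod{\uvec{v}_3}{\uvec{e}_2}}{\inprod{\uvec{e}_2}{\uvec{e}_2}} \, \uvec{e}_2 \text{.}
\end{align*}
In each case, the subtracted terms are precisely the part of the new vector that is parallel to the span of the previously chosen \(\uvec{e}_j\) vectors, so what remains is orthogonal to each of them.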
What do you think the result would be if you unknowingly applied the procedure of Discovery 37.6 to a starting basis \(\basisfont{B}_0\) that was already orthogonal?
is such an enlarged basis for \(V\text{,}\) so that the \(\uvec{u}_j\) form a basis for \(U\text{.}\) If we apply the procedure of Discovery 37.6 to \(\basisfont{B}_0\) to obtain an orthogonal basis