Section 37.4 Examples

Subsection 37.4.1 Orthogonal complements

Example 37.4.1. Determining a basis for an orthogonal complement.

Let's carry out the example explored in Discovery 37.3, where we considered the space \(V = \matrixring_{2 \times 2}(\R)\) equipped with the inner product \(\inprod{A}{B} = \trace(\utrans{B} A) \text{,}\) and the subspace \(U\) of all upper triangular matrices whose upper-right entry is equal to the trace of the matrix.

A typical element of \(U\) can be described parametrically as

\begin{equation*} \begin{bmatrix} x \amp x + y \\ 0 \amp y \end{bmatrix} \text{.} \end{equation*}

The parameters \(x,y\) have no further dependence relation between them, so they each have an associated basis vector:

\begin{equation*} U = \Span\left\{ \begin{bmatrix} 1 \amp 1 \\ 0 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 1 \\ 0 \amp 1 \end{bmatrix} \right\}\text{.} \end{equation*}

To be an element of \(\orthogcmp{U}\text{,}\) an arbitrary element

\begin{equation*} B = \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix} \end{equation*}

of \(\matrixring_{2 \times 2} (\R)\) must be orthogonal to each of the basis vectors \(A_1,A_2\) for \(U\) above. We have

\begin{align*} \inprod{B}{A_1} \amp = \trace(\utrans{A_1} B) \amp \inprod{B}{A_2} \amp = \trace(\utrans{A_2} B)\\ \amp = \trace\left( \begin{bmatrix} 1 \amp 0 \\ 1 \amp 0 \end{bmatrix} \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix} \right) \amp \amp = \trace\left( \begin{bmatrix} 0 \amp 0 \\ 1 \amp 1 \end{bmatrix} \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix} \right)\\ \amp = \trace \begin{bmatrix} a \amp b \\ a \amp b \end{bmatrix} \amp \amp = \trace \begin{bmatrix} 0 \amp 0 \\ a + c \amp b + d \end{bmatrix}\\ \amp = a + b \text{,} \amp \amp = b + d \text{.} \end{align*}

For orthogonality, we need both of these results to be zero, leading to homogeneous system

\begin{equation*} \left\{ \begin{array}{cc} a + b \amp = 0, \\ b + d \amp = 0. \end{array} \right. \end{equation*}

Since parameter \(b\) appears in both equations, we choose to leave it free, and then have

\begin{equation*} a = d = -b \text{.} \end{equation*}

Parameter \(c\) does not appear in the system, hence must be free as well. So a typical element of \(\orthogcmp{U}\) can be described parametrically as

\begin{equation*} \left[\begin{array}{rr} -b \amp b \\ c \amp -b \end{array}\right] \text{,} \end{equation*}

leading to

\begin{equation*} \orthogcmp{U} = \Span\left\{ \left[\begin{array}{rr} -1 \amp 1 \\ 0 \amp -1 \end{array}\right], \begin{bmatrix} 0 \amp 0 \\ 1 \amp 0 \end{bmatrix} \right\}\text{.} \end{equation*}
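
For readers who want to check this computation by machine, here is a minimal SymPy sketch (the helper `inprod` is our own name for the trace inner product above, not a library routine) confirming that each spanning matrix of \(\orthogcmp{U}\) is orthogonal to each basis matrix of \(U\text{:}\)

```python
import sympy as sp

# trace inner product on M_2x2(R): <A, B> = trace(B^T A)
def inprod(A, B):
    return (B.T * A).trace()

# basis matrices A_1, A_2 for U, and the computed spanning matrices for U-perp
A1 = sp.Matrix([[1, 1], [0, 0]])
A2 = sp.Matrix([[0, 1], [0, 1]])
C1 = sp.Matrix([[-1, 1], [0, -1]])
C2 = sp.Matrix([[0, 0], [1, 0]])

# every pairing of a U-perp matrix with a U basis matrix should be 0
for C in (C1, C2):
    for A in (A1, A2):
        print(inprod(C, A))  # prints 0 four times
```
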
Example 37.4.2. Determining a basis for an orthogonal complement in a complex space.

Consider space \(\C^4\) with the complex dot product, and subspace

\begin{equation*} W = \Span \bigl\{ (1,1,\ci,\ci), (1,-1,\ci,\ci) \bigr\} \text{.} \end{equation*}

Let \(\uvec{w}_1,\uvec{w}_2\) represent the two spanning vectors above.

To be in \(\orthogcmp{W}\text{,}\) an arbitrary vector \(\uvec{z} = (z_1,z_2,z_3,z_4)\) in \(\C^4\) must be orthogonal to each of \(\uvec{w}_1,\uvec{w}_2\text{.}\) So compute

\begin{align*} \dotprod{\uvec{z}}{\uvec{w}_1} \amp = z_1 + z_2 - z_3 \ci - z_4 \ci \text{,} \\ \dotprod{\uvec{z}}{\uvec{w}_2} \amp = z_1 - z_2 - z_3 \ci - z_4 \ci \text{.} \end{align*}

Setting the above expressions equal to zero for orthogonality leads to a homogeneous system, whose coefficient matrix reduces as

\begin{equation*} \left[\begin{array}{rrrr} 1 \amp 1 \amp -\ci \amp -\ci \\ 1 \amp -1 \amp -\ci \amp -\ci \end{array}\right] \qquad\rowredarrow\qquad \left[\begin{array}{rrrr} 1 \amp 0 \amp -\ci \amp -\ci \\ 0 \amp 1 \amp 0 \amp 0 \end{array}\right]\text{.} \end{equation*}

So \(z_3,z_4\) are free, \(z_2 = 0\text{,}\) and

\begin{equation*} z_1 = z_3 \ci + z_4 \ci \text{,} \end{equation*}

so that a typical vector in \(\orthogcmp{W}\) can be described parametrically as

\begin{equation*} \uvec{z} = (s \ci + t \ci, 0, s, t) \text{.} \end{equation*}

Finally, the above parametric expression for elements of \(\orthogcmp{W}\) leads to

\begin{equation*} \orthogcmp{W} = \Span \bigl\{ (\ci, 0, 1, 0), (\ci, 0, 0, 1) \bigr\} \text{.} \end{equation*}
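
If we would like to verify this complement numerically, a short NumPy sketch suffices (note that NumPy's `vdot` conjugates its first argument, so our complex dot product \(\dotprod{\uvec{z}}{\uvec{w}}\) corresponds to `np.vdot(w, z)`):

```python
import numpy as np

w1 = np.array([1, 1, 1j, 1j])   # spanning vectors for W
w2 = np.array([1, -1, 1j, 1j])
z1 = np.array([1j, 0, 1, 0])    # computed spanning vectors for W-perp
z2 = np.array([1j, 0, 0, 1])

# complex dot product <z, w> = sum of z_k * conj(w_k)
for z in (z1, z2):
    for w in (w1, w2):
        print(np.vdot(w, z))  # prints 0j four times
```
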

Subsection 37.4.2 Expansion relative to an orthogonal basis

Here we will provide an example of using the pattern of Discovery 37.5 (discussed in Subsection 37.3.2) to express a vector as a linear combination of orthogonal basis vectors.

Example 37.4.3.

Consider \(V = \poly_2(\R)\) equipped with inner product

\begin{equation*} \inprod{p}{q} = \integral{-1}{1}{p(x) q(x)}{x} \text{.} \end{equation*}

The collection

\begin{align*} p_1(x) \amp = x + 1, \amp p_2(x) \amp = x^2 - x, \amp p_3(x) \amp = 5 x^2 + x - 2 \end{align*}

forms an orthogonal basis \(\basisfont{B}\) for \(V\text{.}\)

What is the coordinate vector \(\rmatrixOf{q}{B}\) for \(q(x) = 3 x^2 + 3 x + 3 \text{?}\)

Compute

\begin{align*} \inprod{q}{p_1} \amp = \integral{-1}{1}{(3 x^2 + 3 x + 3)(x + 1)}{x} = 10 \text{,} \\ \inprod{q}{p_2} \amp = \integral{-1}{1}{(3 x^2 + 3 x + 3)(x^2 - x)}{x} = \frac{6}{5} \text{,} \\ \inprod{q}{p_3} \amp = \integral{-1}{1}{(3 x^2 + 3 x + 3)(5 x^2 + x - 2)}{x} = 2 \text{.} \end{align*}

We also need the norms

\begin{align*} \norm{p_1}^2 \amp = \inprod{p_1}{p_1} = \integral{-1}{1}{(x + 1)^2}{x} = \frac{8}{3} \text{,} \\ \norm{p_2}^2 \amp = \inprod{p_2}{p_2} = \integral{-1}{1}{(x^2 - x)^2}{x} = \frac{16}{15} \text{,} \\ \norm{p_3}^2 \amp = \inprod{p_3}{p_3} = \integral{-1}{1}{(5 x^2 + x - 2)^2}{x} = \frac{16}{3} \text{.} \end{align*}

Putting these together, we have

\begin{align*} q(x) \amp = \frac{\inprod{q}{p_1}}{\norm{p_1}^2} \, p_1(x) + \frac{\inprod{q}{p_2}}{\norm{p_2}^2} \, p_2(x) + \frac{\inprod{q}{p_3}}{\norm{p_3}^2} \, p_3(x)\\ \amp = \frac{10}{8/3} \cdot p_1(x) + \frac{6/5}{16/15} \cdot p_2(x) + \frac{2}{16/3} \cdot p_3(x)\\ \amp = \frac{15}{4} \cdot p_1(x) + \frac{9}{8} \cdot p_2(x) + \frac{3}{8} \cdot p_3(x)\text{,} \end{align*}

so that

\begin{equation*} \rmatrixOf{q}{B} = \left( \frac{15}{4}, \frac{9}{8}, \frac{3}{8} \right) \text{.} \end{equation*}
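
The integrals and the final coordinate vector can be double-checked with a minimal SymPy sketch (again, `inprod` is our own helper matching the integral inner product above):

```python
import sympy as sp

x = sp.symbols('x')

# inner product on P_2(R): <p, q> = integral of p(x) q(x) over [-1, 1]
def inprod(p, q):
    return sp.integrate(p * q, (x, -1, 1))

p1, p2, p3 = x + 1, x**2 - x, 5*x**2 + x - 2
q = 3*x**2 + 3*x + 3

# coordinates relative to an orthogonal basis: <q, p_i> / ||p_i||^2
coords = [inprod(q, p) / inprod(p, p) for p in (p1, p2, p3)]
print(coords)  # [15/4, 9/8, 3/8]

# sanity check: the linear combination reproduces q
combo = sum(c * p for c, p in zip(coords, (p1, p2, p3)))
print(sp.expand(combo - q))  # 0
```
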

Subsection 37.4.3 Using the Gram-Schmidt orthogonalization process

Example 37.4.4. Producing an orthogonal basis.

Consider \(V = \poly_2(\R)\) equipped with inner product

\begin{equation*} \inprod{p}{q} = \integral{0}{1}{p(x) q(x)}{x} \text{.} \end{equation*}

Begin with the standard basis

\begin{equation*} \basisfont{S} = \{ 1, x, x^2 \} \text{.} \end{equation*}

Following the Gram-Schmidt orthogonalization process, set

\begin{equation*} e_1(x) = 1 \text{.} \end{equation*}

In order to compute \(e_2(x)\text{,}\) first compute

\begin{gather*} \inprod{x}{e_1} = \integral{0}{1}{x \cdot 1}{x} = \frac{1}{2} \text{,} \\ \norm{e_1}^2 = \inprod{e_1}{e_1} = \integral{0}{1}{1^2}{x} = 1 \text{.} \end{gather*}

Then we set

\begin{equation*} e_2(x) = x - \frac{\inprod{x}{e_1}}{\norm{e_1}^2} \, e_1(x) = x - \frac{1/2}{1} \cdot 1 = x - \frac{1}{2}\text{.} \end{equation*}

Computing \(e_3(x)\) will require quite a few more calculations:

\begin{gather*} \inprod{x^2}{e_1} = \integral{0}{1}{x^2 \cdot 1}{x} = \frac{1}{3} \text{,} \\ \inprod{x^2}{e_2} = \integral{0}{1}{x^2 \left(x - \frac{1}{2}\right)}{x} = \frac{1}{12} \text{,} \\ \norm{e_2}^2 = \inprod{e_2}{e_2} = \integral{0}{1}{\left(x - \frac{1}{2}\right)^2}{x} = \frac{1}{12} \text{.} \end{gather*}

Using these results, compute

\begin{align*} e_3(x) \amp = x^2 - \left( \frac{\inprod{x^2}{e_1}}{\norm{e_1}^2} \, e_1(x) + \frac{\inprod{x^2}{e_2}}{\norm{e_2}^2} \, e_2(x) \right)\\ \amp = x^2 - \left( \frac{1/3}{1} \cdot 1 + \frac{1/12}{1/12} \cdot \left(x - \frac{1}{2}\right) \right)\\ \amp = x^2 - x + \frac{1}{6}\text{.} \end{align*}

We now have orthogonal basis

\begin{equation*} \basisfont{B} = \left\{ 1, x - \frac{1}{2}, x^2 - x + \frac{1}{6} \right\} \text{.} \end{equation*}

If we would like an orthonormal basis, we need one further calculation:

\begin{equation*} \norm{e_3}^2 = \inprod{e_3}{e_3} = \integral{0}{1}{\left(x^2 - x + \frac{1}{6}\right)^2}{x} = \frac{1}{180}\text{.} \end{equation*}

Using \(\sqrt{12} = 2\sqrt{3}\) and \(\sqrt{180} = 6\sqrt{5}\text{,}\) we can normalize to

\begin{equation*} \basisfont{B}' = \left\{ \frac{e_1(x)}{\norm{e_1}}, \frac{e_2(x)}{\norm{e_2}}, \frac{e_3(x)}{\norm{e_3}} \right\} = \{ 1, 2 \sqrt{3} \, x - \sqrt{3}, 6\sqrt{5} \, x^2 - 6\sqrt{5} \, x + \sqrt{5} \}\text{.} \end{equation*}
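
The whole process is mechanical enough to automate. The following SymPy sketch (a hand-rolled loop, not a library routine) reproduces both \(\basisfont{B}\) and \(\basisfont{B}'\text{:}\)

```python
import sympy as sp

x = sp.symbols('x')

# inner product on P_2(R): <p, q> = integral of p(x) q(x) over [0, 1]
def inprod(p, q):
    return sp.integrate(p * q, (x, 0, 1))

ortho = []
for v in [sp.Integer(1), x, x**2]:
    # subtract from v its projection onto each vector already constructed
    for e in ortho:
        v = v - inprod(v, e) / inprod(e, e) * e
    ortho.append(sp.expand(v))

print(ortho)  # [1, x - 1/2, x**2 - x + 1/6]

# normalize to obtain the orthonormal basis B'
print([sp.simplify(v / sp.sqrt(inprod(v, v))) for v in ortho])
```
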
Example 37.4.5. Producing an orthogonal basis for a complex inner product space.

Let's apply the Gram-Schmidt orthogonalization process to \(\C^4\text{,}\) but starting with the basis

\begin{equation*} \basisfont{B}_0 = \bigl\{ (1,1,\ci,\ci), (1,-1,\ci,\ci), (\ci, 0, 1, 0), (\ci, 0, 0, 1) \bigr\} \text{,} \end{equation*}

which is made up of the basis vectors for \(W\) and \(\orthogcmp{W}\) from Example 37.4.2.

Let \(\uvec{w}_1,\uvec{w}_2,\uvec{w}_3,\uvec{w}_4\) represent the four initial basis vectors above. Set \(\uvec{e}_1 = \uvec{w}_1\text{,}\) and compute

\begin{gather*} \inprod{\uvec{w}_2}{\uvec{e}_1} = \dotprod{\uvec{w}_2}{\uvec{e}_1} = 1 - 1 - \ci^2 - \ci^2 = 2 \text{,}\\ \norm{\uvec{e}_1}^2 = \dotprod{\uvec{e}_1}{\uvec{e}_1} = 1 + 1 - \ci^2 - \ci^2 = 4\text{.} \end{gather*}

Normally, we would now set \(\uvec{e}_2\) to be

\begin{equation*} \uvec{w}_2 - \frac{\inprod{\uvec{w}_2}{\uvec{e}_1}}{\norm{\uvec{e}_1}^2} \, \uvec{e}_1 = (1,-1,\ci,\ci) - \frac{2}{4} (1,1,\ci,\ci) = \left(\frac{1}{2}, -\frac{3}{2}, \frac{\ci}{2}, \frac{\ci}{2} \right)\text{.} \end{equation*}

However, scalar multiples do not affect orthogonality, so let's clear the fractions by multiplying by \(2\text{,}\) giving

\begin{equation*} \uvec{e}_2 = ( 1, -3, \ci, \ci ) \text{.} \end{equation*}

From here, we already know from Example 37.4.2 that \(\uvec{w}_3,\uvec{w}_4\) are in \(\orthogcmp{W}\text{,}\) so they will produce inner products of \(0\) against both \(\uvec{e}_1,\uvec{e}_2\) (each of which is a linear combination of \(\uvec{w}_1,\uvec{w}_2\text{,}\) hence lies in \(W\)). This means the formulas for \(\uvec{e}_3,\uvec{e}_4\) in the Gram-Schmidt orthogonalization process will reduce to

\begin{align*} \uvec{e}_3 \amp = \uvec{w}_3 - (\zerovec + \zerovec) \text{,} \amp \uvec{e}_4 \amp = \uvec{w}_4 - \left( \zerovec + \zerovec + \frac{\inprod{\uvec{w}_4}{\uvec{e}_3}}{\norm{\uvec{e}_3}^2} \, \uvec{e}_3 \right) \text{.} \end{align*}

So using \(\uvec{e}_3 = \uvec{w}_3\text{,}\) calculate

\begin{gather*} \inprod{\uvec{w}_4}{\uvec{e}_3} = \dotprod{\uvec{w}_4}{\uvec{e}_3} = -\ci^2 + 0 + 0 + 0 = 1 \text{,}\\ \norm{\uvec{e}_3}^2 = \dotprod{\uvec{e}_3}{\uvec{e}_3} = -\ci^2 + 1 = 2\text{,} \end{gather*}

from which we would normally set \(\uvec{e}_4\) to be

\begin{align*} \uvec{e}_4 \amp = \uvec{w}_4 - \frac{\inprod{\uvec{w}_4}{\uvec{e}_3}}{\norm{\uvec{e}_3}^2} \, \uvec{e}_3\\ \amp = (\ci, 0, 0, 1) - \frac{1}{2} \, (\ci, 0, 1, 0)\\ \amp = \left( \frac{\ci}{2}, 0, - \frac{1}{2}, 1 \right)\text{,} \end{align*}

but again we will clear fractions to obtain

\begin{equation*} \uvec{e}_4 = (\ci, 0, -1, 2) \text{.} \end{equation*}

Putting all four vectors together gives us orthogonal basis

\begin{equation*} \basisfont{B} = \bigl\{ (1,1,\ci,\ci), ( 1, -3, \ci, \ci ), (\ci, 0, 1, 0), (\ci, 0, -1, 2) \bigr\} \text{.} \end{equation*}
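
As a check, the following SymPy sketch runs the same process on \(\basisfont{B}_0\) with a hand-rolled complex dot product. Since the sketch does not clear fractions, it produces scalar multiples of \(\uvec{e}_2,\uvec{e}_4\) rather than the cleared versions above:

```python
import sympy as sp

# complex dot product: <u, v> = sum of u_k * conjugate(v_k)
def inprod(u, v):
    return sum(a * sp.conjugate(b) for a, b in zip(u, v))

I = sp.I
ws = [sp.Matrix([1, 1, I, I]), sp.Matrix([1, -1, I, I]),
      sp.Matrix([I, 0, 1, 0]), sp.Matrix([I, 0, 0, 1])]

es = []
for w in ws:
    e = w
    for f in es:
        e = e - (inprod(w, f) / inprod(f, f)) * f
    es.append(e)

# es[1], es[3] come out as (1/2, -3/2, I/2, I/2) and (I/2, 0, -1/2, 1);
# doubling each gives the fraction-cleared vectors above
for e in es:
    print(e.T)

# confirm pairwise orthogonality
print([sp.simplify(inprod(es[i], es[j]))
       for i in range(4) for j in range(i + 1, 4)])  # six zeros
```
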

Subsection 37.4.4 Obtaining an orthogonal complement using the Gram-Schmidt process

As discussed in Subsection 37.3.4, orthogonal bases tell us about subspaces and their complements. Since the Gram-Schmidt process is our main tool for obtaining orthogonal bases, we can use the process to determine orthogonal complements.

Example 37.4.6.

Let's revisit Example 37.4.1, where we considered subspace

\begin{equation*} U = \Span\left\{ \begin{bmatrix} 1 \amp 1 \\ 0 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 1 \\ 0 \amp 1 \end{bmatrix} \right\} \end{equation*}

of \(V = \matrixring_{2 \times 2}(\R)\text{.}\) (We equip \(V\) with the standard inner product \(\inprod{A}{B} = \trace (\utrans{B} A)\text{.}\))

The basis for \(U\) can be enlarged into a basis for \(V\) by including a couple of standard basis vectors:

\begin{equation*} \basisfont{B}_0 = \left\{ \begin{bmatrix} 1 \amp 1 \\ 0 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 1 \\ 0 \amp 1 \end{bmatrix}, \begin{bmatrix} 0 \amp 1 \\ 0 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 0 \\ 1 \amp 0 \end{bmatrix} \right\}\text{.} \end{equation*}

We won't go through all the calculations this time, but applying the Gram-Schmidt orthogonalization process to \(\basisfont{B}_0\) (and clearing fractions along the way) results in orthogonal basis

\begin{equation*} \basisfont{B} = \left\{ \begin{bmatrix} 1 \amp 1 \\ 0 \amp 0 \end{bmatrix}, \left[\begin{array}{rc} -1 \amp 1 \\ 0 \amp 2 \end{array}\right], \left[\begin{array}{rc} -1 \amp 1 \\ 0 \amp -1 \end{array}\right], \begin{bmatrix} 0 \amp 0 \\ 1 \amp 0 \end{bmatrix} \right\}\text{.} \end{equation*}

As \(\dim U\) was \(2\) in the first place, the first two vectors in \(\basisfont{B}\) form a basis for \(U\text{.}\) And as the entire basis \(\basisfont{B}\) is an orthogonal set, the last two vectors in \(\basisfont{B}\) must form a basis for \(\orthogcmp{U}\text{.}\) That is, we can split \(\basisfont{B}\) in two to obtain an orthogonal basis for each of \(U,\orthogcmp{U}\text{:}\)

\begin{align*} \basisfont{B}_U \amp = \left\{ \begin{bmatrix} 1 \amp 1 \\ 0 \amp 0 \end{bmatrix}, \left[\begin{array}{rc} -1 \amp 1 \\ 0 \amp 2 \end{array}\right] \right\} \text{,} \amp \basisfont{B}_{\orthogcmp{U}} \amp = \left\{ \left[\begin{array}{rc} -1 \amp 1 \\ 0 \amp -1 \end{array}\right], \begin{bmatrix} 0 \amp 0 \\ 1 \amp 0 \end{bmatrix} \right\}\text{.} \end{align*}
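
Although we skipped the calculations, they are straightforward to automate. Here is a minimal SymPy sketch (with our own `inprod` helper implementing the trace inner product) that reproduces \(\basisfont{B}\) up to the fraction-clearing scalars:

```python
import sympy as sp

# trace inner product on M_2x2(R): <A, B> = trace(B^T A)
def inprod(A, B):
    return (B.T * A).trace()

B0 = [sp.Matrix([[1, 1], [0, 0]]), sp.Matrix([[0, 1], [0, 1]]),
      sp.Matrix([[0, 1], [0, 0]]), sp.Matrix([[0, 0], [1, 0]])]

ortho = []
for A in B0:
    E = A
    for F in ortho:
        E = E - (inprod(A, F) / inprod(F, F)) * F
    ortho.append(E)

# the second and third matrices print as scalar multiples (by 1/2 and 1/3)
# of the corresponding fraction-cleared matrices in B
for E in ortho:
    print(E)
```
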
Example 37.4.7.

Now we'll revisit Example 37.4.2, where we considered the subspace

\begin{equation*} W = \Span \bigl\{ (1,1,\ci,\ci), (1,-1,\ci,\ci) \bigr\} \end{equation*}

of \(\C^4\text{.}\)

In Example 37.4.5, we created an orthogonal basis for \(\C^4\) by applying the Gram-Schmidt process to an initial basis formed by joining a basis for \(W\) with a basis for \(\orthogcmp{W}\text{.}\) So we could obtain orthogonal bases for both \(W,\orthogcmp{W}\) by splitting that orthogonal \(\C^4\) basis apart. However, in this example we will proceed as if we did not initially know any basis for \(\orthogcmp{W}\text{.}\)

The above basis for \(W\) can be enlarged into a basis for \(\C^4\) by including two of the standard basis vectors:

\begin{equation*} \basisfont{B}_0 = \bigl\{ (1,1,\ci,\ci), (1,-1,\ci,\ci), (0,0,1,0), (0,0,0,1) \bigr\}\text{.} \end{equation*}

Applying the Gram-Schmidt orthogonalization process to \(\basisfont{B}_0\) (and clearing fractions along the way) results in orthogonal basis

\begin{equation*} \basisfont{B} = \bigl\{ (1,1,\ci,\ci), ( 1, -3, \ci, \ci ), (\ci, 0, 2, -1), (\ci, 0, 0, 1) \bigr\}\text{.} \end{equation*}

As \(\dim W\) was \(2\) in the first place, similarly to Example 37.4.6 we can split \(\basisfont{B}\) in two to obtain an orthogonal basis for each of \(W,\orthogcmp{W}\text{:}\)

\begin{align*} \basisfont{B}_W \amp = \bigl\{ (1,1,\ci,\ci), ( 1, -3, \ci, \ci ) \bigr\} \text{,} \amp \basisfont{B}_{\orthogcmp{W}} \amp = \bigl\{ (\ci, 0, 2, -1), (\ci, 0, 0, 1) \bigr\}\text{.} \end{align*}
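
A quick symbolic check (again with a hand-rolled complex dot product helper) confirms that the four vectors of \(\basisfont{B}\) are mutually orthogonal, which justifies splitting it as above:

```python
import sympy as sp

# complex dot product: <u, v> = sum of u_k * conjugate(v_k)
def inprod(u, v):
    return sum(a * sp.conjugate(b) for a, b in zip(u, v))

I = sp.I
B = [sp.Matrix([1, 1, I, I]), sp.Matrix([1, -3, I, I]),
     sp.Matrix([I, 0, 2, -1]), sp.Matrix([I, 0, 0, 1])]

# all six distinct pairings should vanish
print([sp.simplify(inprod(B[i], B[j]))
       for i in range(4) for j in range(i + 1, 4)])  # six zeros
```
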

Subsection 37.4.5 An infinite orthogonal set

In Example 36.5.7, we showed that the sine and cosine functions are orthogonal with respect to a certain inner product on a space of continuous functions. We can extend that orthogonal pair into an infinite orthogonal set by varying the frequency of the sine and cosine functions.

Example 37.4.8. An infinite orthogonal set of sines and cosines.

Consider the space \(V = C[0,1]\) of all continuous functions defined on the interval \(0 \le x \le 1\text{,}\) equipped with the inner product

\begin{equation*} \inprod{f}{g} = \integral{0}{1}{f(x) g(x)}{x} \text{.} \end{equation*}

Then the infinite set

\begin{equation*} S = \bigl\{ \sin (2 \pi x), \cos (2 \pi x), \sin (4 \pi x), \cos (4 \pi x), \dotsc, \sin (2 \pi n x), \cos (2 \pi n x), \dotsc \bigr\} \end{equation*}

is an infinite orthogonal set in \(V\text{.}\)

To verify this, first assume \(m,n\) are positive integers with \(m \neq n\text{,}\) and set \(k = m - n\) and \(K = m + n\) for convenience. Then we could calculate the first three inner products below, while the fourth covers the remaining case of a sine and a cosine of equal frequency:

\begin{gather*} \inprod{\sin (2 \pi m x)}{\sin (2 \pi n x)} = \frac{K \sin (2 \pi k) - k \sin (2 \pi K)} {4 \pi k K} \text{,}\\ \inprod{\cos (2 \pi m x)}{\cos (2 \pi n x)} = \frac{K \sin (2 \pi k) + k \sin (2 \pi K)} {4 \pi k K} \text{,}\\ \inprod{\sin (2 \pi m x)}{\cos (2 \pi n x)} = \frac{2 m - K \cos (2 \pi k) - k \cos (2 \pi K)} {4 \pi k K} \text{,}\\ \inprod{\sin (2 \pi n x)}{\cos (2 \pi n x)} = \frac{1 - \cos^2 (2 \pi n)}{4 \pi n} \text{.} \end{gather*}

The first two expressions evaluate to zero because sine evaluates to zero at every integer multiple of \(2 \pi\text{.}\) On the other hand, cosine evaluates to one at every integer multiple of \(2 \pi\text{,}\) so the numerator of the third expression becomes

\begin{equation*} 2 m - K - k = 2 m - (m + n) - (m - n) = 0 \text{,} \end{equation*}

and the numerator of the fourth expression is \(1 - \cos^2 (2 \pi n) = 1 - 1 = 0\) as well.

Since every pairing of two different functions from \(S\) evaluates to zero in the inner product, \(S\) is an orthogonal set.
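
These inner product computations can also be spot-checked symbolically. The sketch below verifies orthogonality for a few concrete frequencies (a spot check for small \(m,n\text{,}\) not a proof for all of them):

```python
import sympy as sp

x = sp.symbols('x')

# inner product on C[0,1]: <f, g> = integral of f(x) g(x) over [0, 1]
def inprod(f, g):
    return sp.integrate(f * g, (x, 0, 1))

for m in range(1, 4):
    for n in range(1, 4):
        # sine vs cosine pairings vanish for all m, n (including m == n)
        assert inprod(sp.sin(2*sp.pi*m*x), sp.cos(2*sp.pi*n*x)) == 0
        if m != n:
            # like pairings vanish only for distinct frequencies
            assert inprod(sp.sin(2*sp.pi*m*x), sp.sin(2*sp.pi*n*x)) == 0
            assert inprod(sp.cos(2*sp.pi*m*x), sp.cos(2*sp.pi*n*x)) == 0

print("all pairings orthogonal")
```
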