
Section 38.3 Examples

Subsection 38.3.1 Orthogonal projection

Example 38.3.1. Using the Expansion theorem to compute an orthogonal decomposition.

Equip the space \(V = \matrixring_{2 \times 2}(\R)\) with the standard inner product \(\inprod{A}{B} = \trace (\utrans{B} A)\text{,}\) and consider the subspace \(U\) of \(V\) consisting of upper-triangular matrices with \((1,2)\) entry equal to the trace of the matrix.

We explored this subspace in Discovery 37.3 and in Example 37.4.1, and we produced orthogonal basis

\begin{equation*} \basisfont{B}_U = \left\{ \begin{bmatrix} 1 \amp 1 \\ 0 \amp 0 \end{bmatrix}, \left[\begin{array}{rc} -1 \amp 1 \\ 0 \amp 2 \end{array}\right] \right\} \end{equation*}

for \(U\) in Example 37.4.6. Write \(A_1\) and \(A_2\) for these orthogonal basis vectors, respectively.

Using this orthogonal basis, we can (for example) compute \(\proj_U I\text{,}\) where \(I\) is the \(2 \times 2\) identity matrix.

Let's separately compute:

\begin{gather*} \inprod{I}{A_1} = \trace{\utrans{A_1} I} = 1 \text{,} \\ \inprod{I}{A_2} = \trace{\utrans{A_2} I} = 1 \text{,} \\ \norm{A_1}^2 = \inprod{A_1}{A_1} = \trace{\utrans{A_1} A_1} = \trace \begin{bmatrix} 1 \amp 1 \\ 1 \amp 1 \end{bmatrix} = 2 \text{,}\\ \norm{A_2}^2 = \inprod{A_2}{A_2} = \trace{\utrans{A_2} A_2} = \trace \left[\begin{array}{rr} 1 \amp -1 \\ -1 \amp 5 \end{array}\right] = 6 \text{.} \end{gather*}

With these values in hand, we can compute

\begin{align*} \proj_U I \amp = \frac{\inprod{I}{A_1}}{\norm{A_1}^2} \, A_1 + \frac{\inprod{I}{A_2}}{\norm{A_2}^2} \, A_2\\ \amp = \frac{1}{2} \begin{bmatrix} 1 \amp 1 \\ 0 \amp 0 \end{bmatrix} + \frac{1}{6} \left[\begin{array}{rc} -1 \amp 1 \\ 0 \amp 2 \end{array}\right]\\ \amp = \begin{bmatrix} \frac{1}{3} \amp \frac{2}{3} \\ 0 \amp \frac{1}{3} \end{bmatrix} \text{,}\\ \\ \proj_{\orthogcmp{U}} I \amp = I - \proj_U I\\ \amp = \left[\begin{array}{cr} \frac{2}{3} \amp -\frac{2}{3} \\ 0 \amp \frac{2}{3} \end{array}\right] \text{.} \end{align*}

By inspection we can see that

\begin{equation*} \proj_U I + \proj_{\orthogcmp{U}} I = I \text{,} \end{equation*}

and a quick calculation confirms

\begin{equation*} \inprod{\proj_U I}{\proj_{\orthogcmp{U}} I} = 0 \text{.} \end{equation*}
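For an independent check of this example, here is a minimal computational sketch (not part of the original text) that redoes the calculations with NumPy; the names A1, A2, and inprod are choices made for the sketch.

```python
import numpy as np

# The orthogonal basis vectors A_1, A_2 of U, and the identity matrix I.
A1 = np.array([[1.0, 1.0], [0.0, 0.0]])
A2 = np.array([[-1.0, 1.0], [0.0, 2.0]])
I = np.eye(2)

def inprod(A, B):
    """Standard inner product <A, B> = trace(B^T A) on 2x2 real matrices."""
    return np.trace(B.T @ A)

# Expansion-theorem formula for the orthogonal projection onto U.
proj_U = (inprod(I, A1) / inprod(A1, A1)) * A1 + (inprod(I, A2) / inprod(A2, A2)) * A2
proj_U_perp = I - proj_U

print(proj_U)        # [[1/3, 2/3], [0, 1/3]]
print(proj_U_perp)   # [[2/3, -2/3], [0, 2/3]]
print(inprod(proj_U, proj_U_perp))   # 0 (up to rounding), confirming orthogonality
```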

Subsection 38.3.2 Best approximation

Example 38.3.2. Approximating sine with a quadratic polynomial.

Consider the horizontally compressed sine function

\begin{equation*} f(x) = \sin (\pi x) \text{.} \end{equation*}

What quadratic polynomial best approximates \(f(x)\) over the interval \(0 \le x \le 1\text{?}\)

Looking at the graph of \(f(x)\text{,}\) one “naive” guess would be to choose the parabola that passes through the points

\begin{equation*} (0,0) \text{,} \quad (1/2, 1) \text{,} \quad (1, 0) \text{.} \end{equation*}

We can determine this parabola using the methods of Subsection 3.2.4 to be

\begin{equation*} q(x) = 4 x - 4 x^2 \text{.} \end{equation*}
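As a quick aside (a sketch of our own, not the worked method of Subsection 3.2.4), the coefficients of this interpolating parabola can also be recovered numerically by solving the corresponding \(3 \times 3\) interpolation system:

```python
import numpy as np

# Find q(x) = a + b x + c x^2 passing through (0,0), (1/2,1), (1,0).
xs = np.array([0.0, 0.5, 1.0])
ys = np.array([0.0, 1.0, 0.0])
V = np.vander(xs, 3, increasing=True)   # columns: 1, x, x^2
a, b, c = np.linalg.solve(V, ys)
print(a, b, c)   # 0.0, 4.0, -4.0, i.e. q(x) = 4 x - 4 x^2
```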

Let's plot \(f(x)\) and \(q(x)\) on the same set of axes to compare.

A naive approximation of sine by a quadratic.

Let's see what kind of result we can get using best approximation. Our domain of interest is \(0 \le x \le 1\text{,}\) so we are working in the vector space \(C[0,1]\text{.}\) But we can constrain our problem to the finite-dimensional subspace

\begin{equation*} V = \Span \{ 1, x, x^2, \sin (\pi x) \} \text{.} \end{equation*}

And to focus on our domain of interest, let's use inner product

\begin{equation*} \inprod{p}{q} = \integral{0}{1}{p(x) q(x)}{x} \text{.} \end{equation*}

Our best approximation will be \(\proj_U f\text{,}\) where \(U\) is the subspace \(\poly_2(\R)\) of \(V\text{.}\) Conveniently, we have already used the Gram-Schmidt orthogonalization process to produce orthogonal basis

\begin{equation*} \basisfont{B}_U = \left\{ 1, x - \frac{1}{2}, x^2 - x + \frac{1}{6} \right\} \end{equation*}

of \(U\) in Example 37.4.4. As in that example, write \(e_1(x),e_2(x),e_3(x)\) for these three basis polynomials, respectively.

To compute \(\proj_U f\text{,}\) first separately calculate

\begin{align*} \inprod{f}{e_1} \amp = \integral{0}{1}{f(x) e_1(x)}{x} = \frac{2}{\pi} \text{,} \\ \inprod{f}{e_2} \amp = \integral{0}{1}{f(x) e_2(x)}{x} = 0 \text{,} \\ \inprod{f}{e_3} \amp = \integral{0}{1}{f(x) e_3(x)}{x} = \frac{1}{3 \pi} - \frac{4}{\pi^3} \text{,} \\ \norm{e_1}^2 \amp = \inprod{e_1}{e_1} = \integral{0}{1}{\bigl[e_1(x)\bigr]^2}{x} = 1 \text{,} \\ \norm{e_3}^2 \amp = \inprod{e_3}{e_3} = \integral{0}{1}{\bigl[e_3(x)\bigr]^2}{x} = \frac{1}{180} \text{.} \end{align*}

Note that we skipped calculating \(\norm{e_2}^2\) since \(\inprod{f}{e_2} = 0\text{.}\)

Now compute

\begin{align*} \proj_U f \amp = \frac{\inprod{f}{e_1}}{\norm{e_1}^2} \, e_1(x) + \frac{\inprod{f}{e_2}}{\norm{e_2}^2} \, e_2(x) + \frac{\inprod{f}{e_3}}{\norm{e_3}^2} \, e_3(x)\\ \amp = \frac{2/\pi}{1} \cdot 1 + 0 + \frac{1/3\pi - 4/\pi^3}{1/180} \, \left( x^2 - x + \frac{1}{6} \right)\\ \amp = \frac{12}{\pi^3} \bigl[ \pi^2 (5 x^2 - 5 x + 1) - 10 (6 x^2 - 6 x + 1) \bigr]\text{.} \end{align*}
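If you would like to verify these integrals and the final formula symbolically, the following SymPy sketch (our own check; the variable names are not from the text) assembles the same projection:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(sp.pi * x)
# Orthogonal basis e_1, e_2, e_3 of U = P_2(R) from Example 37.4.4.
e = [sp.Integer(1), x - sp.Rational(1, 2), x**2 - x + sp.Rational(1, 6)]

def ip(p, q):
    """Inner product <p, q> = integral of p(x) q(x) over [0, 1]."""
    return sp.integrate(p * q, (x, 0, 1))

print([ip(f, ei) for ei in e])   # the three inner products: 2/pi, 0, 1/(3*pi) - 4/pi**3
proj = sum(ip(f, ei) / ip(ei, ei) * ei for ei in e)
print(sp.expand(proj))           # algebraically equal to the quadratic computed above
```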

Again, let's plot \(f(x)\) and \((\proj_U f)(x)\) on the same set of axes to compare.

A best approximation of sine by a quadratic.

You can see that while this parabola does not hit the same peak as the sine graph, it stays much closer to the curve over our domain than the previous “naive” approximation.
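To put a number on "stays much closer," here is a short SymPy sketch (our own comparison; the values in the comments are rounded approximations) measuring each approximation's distance from \(f\) in the norm induced by the inner product above:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(sp.pi * x)
q = 4*x - 4*x**2                                    # the "naive" parabola
p = 12/sp.pi**3 * (sp.pi**2*(5*x**2 - 5*x + 1)      # the best approximation proj_U f
                   - 10*(6*x**2 - 6*x + 1))

def dist(g):
    """Distance ||f - g|| in the integral inner product on [0, 1]."""
    return sp.sqrt(sp.integrate((f - g)**2, (x, 0, 1)))

print(sp.N(dist(q)))   # approx 0.036
print(sp.N(dist(p)))   # approx 0.017, roughly half the error of the naive parabola
```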

Subsection 38.3.3 Least-squares solutions to an inconsistent system

Example 38.3.3.

The system

\begin{equation*} \left\{\begin{array}{rcrcrcr} x \amp \amp \amp + \amp z \amp = \amp 1, \\ x \amp - \amp y \amp \amp \amp = \amp 1, \\ \amp \amp 2y \amp + \amp 2z \amp = \amp 1, \\ x \amp + \amp y \amp + \amp 2z \amp = \amp 1, \end{array}\right. \end{equation*}

is inconsistent. What values of \(x,y,z\) are closest to being a solution?

We have coefficient matrix and vector of constants

\begin{align*} A \amp = \left[\begin{array}{crc} 1 \amp 0 \amp 1 \\ 1 \amp -1 \amp 0 \\ 0 \amp 2 \amp 2 \\ 1 \amp 1 \amp 2 \end{array}\right] \text{,} \amp \uvec{b} \amp = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}\text{.} \end{align*}

The inconsistent system

\begin{equation*} A \uvec{x} = \uvec{b} \end{equation*}

has associated normal system

\begin{equation*} \utrans{A} A \uvec{x} = \utrans{A} \uvec{b} \text{,} \end{equation*}

where

\begin{align*} \utrans{A} A \amp = \begin{bmatrix} 3 \amp 0 \amp 3 \\ 0 \amp 6 \amp 6 \\ 3 \amp 6 \amp 9 \end{bmatrix} \text{,} \amp \utrans{A} \uvec{b} \amp = \begin{bmatrix} 3 \\ 2 \\ 5 \end{bmatrix}\text{.} \end{align*}

Row reduce:

\begin{equation*} \left[\begin{array}{ccc|c} 3 \amp 0 \amp 3 \amp 3 \\ 0 \amp 6 \amp 6 \amp 2 \\ 3 \amp 6 \amp 9 \amp 5 \end{array}\right] \qquad\rowredarrow\qquad \left[\begin{array}{ccc|c} 1 \amp 0 \amp 1 \amp 1 \\ 0 \amp 1 \amp 1 \amp \frac{1}{3} \\ 0 \amp 0 \amp 0 \amp 0 \end{array}\right]\text{.} \end{equation*}
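(As a side check, a sketch like the following, using SymPy's rref, reproduces this row reduction; it is not part of the text's computation.)

```python
import sympy as sp

# Augmented matrix of the normal system, reduced to RREF.
M = sp.Matrix([[3, 0, 3, 3],
               [0, 6, 6, 2],
               [3, 6, 9, 5]])
rref, pivots = M.rref()
print(rref)     # [[1, 0, 1, 1], [0, 1, 1, 1/3], [0, 0, 0, 0]]
print(pivots)   # (0, 1): x and y are leading variables, z is free
```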

Setting the free variable \(z\) to be a parameter \(t\text{,}\) we have general solution

\begin{equation*} \uvec{x} = \begin{bmatrix} 1 - t \\ \frac{1}{3} - t \\ t \end{bmatrix} \text{.} \end{equation*}

So there is a whole line of approximate solutions, and each \(\uvec{x}\) on this line satisfies

\begin{equation*} A \uvec{x} = \begin{bmatrix} 1 \\ \frac{2}{3} \\ \frac{2}{3} \\ \frac{4}{3} \end{bmatrix} \approx \uvec{b} \text{.} \end{equation*}
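As a numerical cross-check (a sketch with NumPy, not part of the text), np.linalg.lstsq returns the minimum-norm point on this line of least-squares solutions:

```python
import numpy as np

A = np.array([[1.0,  0.0, 1.0],
              [1.0, -1.0, 0.0],
              [0.0,  2.0, 2.0],
              [1.0,  1.0, 2.0]])
b = np.ones(4)

# For a rank-deficient A, lstsq (SVD-based) returns the minimum-norm
# least-squares solution, which is the point t = 4/9 on the line above.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x_ls)      # approx [0.556, -0.111, 0.444]
print(A @ x_ls)  # approx [1, 2/3, 2/3, 4/3], the vector closest to b
```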

Let's compare by calculating \(\proj_U \uvec{b}\text{,}\) where \(U\) is the column space of \(A\text{.}\) By inspection, we can see that the third column of \(A\) is the sum of the first two, and that the first two columns of \(A\) are orthogonal to each other. So the first two columns \(\uvec{a}_1,\uvec{a}_2\) form an orthogonal basis for the column space of \(A\text{.}\) If we then compute

\begin{align*} \inprod{\uvec{b}}{\uvec{a}_1} \amp = 3 \text{,} \amp \norm{\uvec{a}_1}^2 \amp = \inprod{\uvec{a}_1}{\uvec{a}_1} = 3 \text{,}\\ \inprod{\uvec{b}}{\uvec{a}_2} \amp = 2 \text{,} \amp \norm{\uvec{a}_2}^2 \amp = \inprod{\uvec{a}_2}{\uvec{a}_2} = 6 \text{,} \end{align*}

we then have

\begin{equation*} \proj_U \uvec{b} = \frac{\inprod{\uvec{b}}{\uvec{a}_1}}{\norm{\uvec{a}_1}^2} \, \uvec{a}_1 + \frac{\inprod{\uvec{b}}{\uvec{a}_2}}{\norm{\uvec{a}_2}^2} \, \uvec{a}_2 = \uvec{a}_1 + \frac{1}{3} \, \uvec{a}_2 = \begin{bmatrix} 1 \\ \frac{2}{3} \\ \frac{2}{3} \\ \frac{4}{3} \end{bmatrix}\text{,} \end{equation*}

as expected.
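The same column-space projection can also be carried out numerically from the orthogonal columns (a sketch; \(\uvec{a}_1, \uvec{a}_2, \uvec{b}\) as above):

```python
import numpy as np

a1 = np.array([1.0,  1.0, 0.0, 1.0])   # first column of A
a2 = np.array([0.0, -1.0, 2.0, 1.0])   # second column of A
b = np.ones(4)

proj = (b @ a1) / (a1 @ a1) * a1 + (b @ a2) / (a2 @ a2) * a2
print(proj)   # [1, 2/3, 2/3, 4/3], agreeing with A x for any least-squares x
```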