Section B.7 Best approximation
Subsection B.7.1 Approximating a matrix
Here we will use Sage to carry out the calculations in Example 38.3.1.
We carry out the whole process, including the Gram-Schmidt process to obtain an orthogonal basis. The subspace \(U\) consists of those upper-triangular matrices whose \((1,2)\) entry is equal to the trace of the matrix. Parametrically, we can describe \(U\) as
\begin{equation*}
U = \left\{ \begin{bmatrix} a & a + b \\ 0 & b \end{bmatrix} \right\} \text{,}
\end{equation*}
where \(a\) and \(b\) are free parameters. This leads to the (nonorthogonal) basis
\begin{equation*}
\left\{
\begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix},
\begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix}
\right\} \text{,}
\end{equation*}
where these matrices correspond to parameter choices \(\{a = 1, b = 0\}\) and \(\{a = 0, b = 1\}\text{,}\) respectively.
Set up.
First let’s load our initial basis vectors for \(U\) into Sage.
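The live Sage cell from the original page is not reproduced in this extract. As a stand-in, here is a plain-Python sketch of the same setup, using exact rationals from the standard library in place of Sage’s `matrix(QQ, ...)`:

```python
from fractions import Fraction as F

# Stand-ins for the Sage matrices matrix(QQ, [[1,1],[0,0]]) and
# matrix(QQ, [[0,1],[0,1]]): 2x2 matrices as nested tuples of Fractions.
E1 = ((F(1), F(1)),
      (F(0), F(0)))   # parameter choice a = 1, b = 0
E2 = ((F(0), F(1)),
      (F(0), F(1)))   # parameter choice a = 0, b = 1

print(E1)
print(E2)
```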
As in Section B.6, let’s make life easier by creating a Python procedure for our inner product.
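The Sage cell defining the procedure isn’t shown here. The sketch below assumes the inner product in play is the standard trace inner product \(\langle A, B \rangle = \operatorname{trace}(B^{\mathrm{T}} A)\), which for real matrices is just the sum of entrywise products (in Sage, something like `(A * B.T).trace()`):

```python
from fractions import Fraction as F

def inner(A, B):
    # Assumed trace inner product <A, B> = trace(B^T * A); for real 2x2
    # matrices this is the sum of entrywise products A[i][j] * B[i][j].
    return sum(A[i][j] * B[i][j] for i in range(2) for j in range(2))

E1 = ((F(1), F(1)), (F(0), F(0)))
E2 = ((F(0), F(1)), (F(0), F(1)))

print(inner(E1, E2))  # nonzero, so the initial basis is not orthogonal
```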
Note: B.T is a Sage “shortcut” for
B.transpose().
Gram-Schmidt That Thing.
It’s the dead duo’s time to shine.
Ugh, fractions. Since scaling doesn’t affect orthogonality, let’s scale \(E_2\) to clear the fractions.
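A plain-Python sketch of the Gram-Schmidt step and the rescaling (the helper names are ours, not Sage’s, and the trace inner product is an assumption):

```python
from fractions import Fraction as F

def inner(A, B):
    # assumed trace inner product: sum of entrywise products
    return sum(A[i][j] * B[i][j] for i in range(2) for j in range(2))

def madd(A, B):
    return tuple(tuple(A[i][j] + B[i][j] for j in range(2)) for i in range(2))

def mscale(c, A):
    return tuple(tuple(c * A[i][j] for j in range(2)) for i in range(2))

M1 = ((F(1), F(1)), (F(0), F(0)))
M2 = ((F(0), F(1)), (F(0), F(1)))

# Gram-Schmidt: E1 = M1, then E2 = M2 - (<M2,E1>/<E1,E1>) E1.
E1 = M1
E2 = madd(M2, mscale(-inner(M2, E1) / inner(E1, E1), E1))
print(E2)  # ((-1/2, 1/2), (0, 1)) -- ugh, fractions

# Scaling doesn't affect orthogonality, so clear the fractions.
E2 = mscale(F(2), E2)
print(E2)             # ((-1, 1), (0, 2))
print(inner(E1, E2))  # 0 -- still orthogonal
```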
Projection time.
The point of this example is to compute the matrix in \(U\) that is closest to being the identity matrix. For that, we want to compute \(\operatorname{proj}_U I\).
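A sketch of the projection computation in plain Python, assuming the trace inner product and the scaled orthogonal basis \(\{E_1, E_2\}\) from the Gram-Schmidt step:

```python
from fractions import Fraction as F

def inner(A, B):
    # assumed trace inner product: sum of entrywise products
    return sum(A[i][j] * B[i][j] for i in range(2) for j in range(2))

def madd(A, B):
    return tuple(tuple(A[i][j] + B[i][j] for j in range(2)) for i in range(2))

def mscale(c, A):
    return tuple(tuple(c * A[i][j] for j in range(2)) for i in range(2))

# Orthogonal basis for U produced by Gram-Schmidt (after scaling).
E1 = ((F(1), F(1)), (F(0), F(0)))
E2 = ((F(-1), F(1)), (F(0), F(2)))
I  = ((F(1), F(0)), (F(0), F(1)))

# proj_U I = (<I,E1>/<E1,E1>) E1 + (<I,E2>/<E2,E2>) E2
proj = madd(mscale(inner(I, E1) / inner(E1, E1), E1),
            mscale(inner(I, E2) / inner(E2, E2), E2))
print(proj)  # ((1/3, 2/3), (0, 1/3))
```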
As expected, this is an upper-triangular matrix with the \((1,2)\) entry equal to the trace. In other words, the result does indeed lie in the subspace \(U\).
Let’s verify that \(\operatorname{proj}_U I\) and \(\operatorname{proj}_{U^\perp} I = I - \operatorname{proj}_U I\) are orthogonal.
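A plain-Python sketch of the verification; the value of \(\operatorname{proj}_U I\) used below comes from our own computation under the assumed trace inner product:

```python
from fractions import Fraction as F

def inner(A, B):
    # assumed trace inner product: sum of entrywise products
    return sum(A[i][j] * B[i][j] for i in range(2) for j in range(2))

def madd(A, B):
    return tuple(tuple(A[i][j] + B[i][j] for j in range(2)) for i in range(2))

def mscale(c, A):
    return tuple(tuple(c * A[i][j] for j in range(2)) for i in range(2))

I    = ((F(1), F(0)), (F(0), F(1)))
proj = ((F(1, 3), F(2, 3)), (F(0), F(1, 3)))  # proj_U I from the previous step

perp = madd(I, mscale(F(-1), proj))  # proj_{U-perp} I = I - proj_U I
print(perp)               # ((2/3, -2/3), (0, 2/3))
print(inner(proj, perp))  # 0 -- orthogonal, as expected
```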
Yep, they’re orthogonal. Which means we can use the projection onto \(U^\perp\) to compute the distance from \(I\) to the space \(U\).
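A sketch of the distance computation; the entries of \(\operatorname{proj}_{U^\perp} I\) below (all \(\pm 2/3\)) come from our own computation under the assumed trace inner product:

```python
import math
from fractions import Fraction as F

def inner(A, B):
    # assumed trace inner product: sum of entrywise products
    return sum(A[i][j] * B[i][j] for i in range(2) for j in range(2))

# proj_{U-perp} I = I - proj_U I, computed in the previous step
perp = ((F(2, 3), F(-2, 3)),
        (F(0),    F(2, 3)))

# dist(I, U) = norm of the component of I orthogonal to U
dist_sq = inner(perp, perp)
print(dist_sq)             # 4/3
print(math.sqrt(dist_sq))  # 1.1547... = 2/sqrt(3)
```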
Subsection B.7.2 Approximating a function
Here we will use Sage to carry out the calculations in Example 38.3.2.
Since we have already demonstrated using Sage to apply the Gram-Schmidt process to our initial basis for this problem in Section B.6, we’ll skip that part this time and proceed immediately to entering our function, our orthogonal basis, and our inner product function.
Note: A backslash at the end of a line is a Python line-continuation character, used so that separate lines can be joined into one command for better readability.
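The function, orthogonal basis, and inner product of Example 38.3.2 are not reproduced in this extract. Purely as an illustration of the same pattern, with made-up stand-in data, the sketch below projects \(f(x) = x^2\) onto the polynomials of degree at most one on \([0,1]\), using the inner product \(\langle f, g \rangle = \int_0^1 f(x) g(x) \, dx\) and the orthogonal basis \(\{1,\ x - 1/2\}\):

```python
from fractions import Fraction as F

def polymul(p, q):
    # product of polynomials given as coefficient lists [c0, c1, ...]
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def polyadd(p, q):
    n = max(len(p), len(q))
    p = list(p) + [F(0)] * (n - len(p))
    q = list(q) + [F(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def polyscale(c, p):
    return [c * a for a in p]

def inner(f, g):
    # <f, g> = integral of f*g over [0,1]; integral of x^k is 1/(k+1)
    return sum(c / (k + 1) for k, c in enumerate(polymul(f, g)))

f  = [F(0), F(0), F(1)]   # f(x) = x^2 (made-up stand-in, not the example's f)
p0 = [F(1)]               # 1
p1 = [F(-1, 2), F(1)]     # x - 1/2, orthogonal to p0 on [0,1]

proj = polyadd(polyscale(inner(f, p0) / inner(p0, p0), p0),
               polyscale(inner(f, p1) / inner(p1, p1), p1))
print(proj)  # [-1/6, 1] -> best degree-<=1 approximation is x - 1/6
```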
Huh, that was pretty easy. Who knew that with the right set-up, a computer could turn a tedious calculation into something so simple?
Finally, let’s calculate the “error” in our approximation.
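Example 38.3.2’s actual data isn’t reproduced here, so as a self-contained illustration with made-up inputs, take \(f(x) = x^2\) on \([0,1]\) with best degree-one approximation \(x - 1/6\); the “error” is the norm of the difference \(f - \operatorname{proj}_U f\):

```python
import math
from fractions import Fraction as F

def polymul(p, q):
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def polyadd(p, q):
    n = max(len(p), len(q))
    p = list(p) + [F(0)] * (n - len(p))
    q = list(q) + [F(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def polyscale(c, p):
    return [c * a for a in p]

def inner(f, g):
    # <f, g> = integral of f*g over [0,1]
    return sum(c / (k + 1) for k, c in enumerate(polymul(f, g)))

f    = [F(0), F(0), F(1)]  # x^2 (made-up stand-in)
proj = [F(-1, 6), F(1)]    # x - 1/6, its projection onto degree <= 1

err = polyadd(f, polyscale(F(-1), proj))  # f - proj_U f
print(inner(err, err))                    # 1/180
print(math.sqrt(inner(err, err)))         # about 0.0745
```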
How does this compare to the “error” in the “naive” approximation described at the beginning of Example 38.3.2?
So the orthogonal projection is better by a factor of \(2\). (Though this judgement is relative to the definition of the inner product.)