
Section 36.5 Examples

Subsection 36.5.1 Inner products on familiar spaces

Example 36.5.1. An inner product on \(\poly_n(\R)\).

In Discovery 36.2, we verified the four real inner product axioms for an example inner product on \(\poly_2(\R)\text{,}\) the space of polynomials with real coefficients of degree \(2\) or less. We can mimic this example to create an inner product on \(\poly_n(\R)\) for any \(n\text{:}\) choose \(n+1\) distinct real numbers \(c_0, c_1, \dotsc, c_n\text{,}\) and using them create the pairing

\begin{equation*} \inprod{p}{q} = p(c_0) q(c_0) + p(c_1) q(c_1) + \dotsb + p(c_n) q(c_n) \text{.} \end{equation*}

Checking Axiom RIP 1, Axiom RIP 2, and Axiom RIP 3 is straightforward. And the Fundamental Theorem of Algebra (Real Version) guarantees that no nonzero polynomial \(p\) of degree \(n\) or less can evaluate to zero at more than \(n\) input values, hence not all of the terms in the pairing expression

\begin{equation*} \inprod{p}{p} = \bigl[p(c_0)\bigr]^2 + \bigl[p(c_1)\bigr]^2 + \dotsb + \bigl[p(c_n)\bigr]^2 \end{equation*}

can be zero, which verifies Axiom RIP 4.
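For readers who like to experiment, this pairing is easy to check numerically. Here is a minimal sketch in Python with NumPy for the case \(n = 2\text{;}\) the function name `inprod` and the particular evaluation points and polynomials are illustrative choices, not part of the text.

```python
import numpy as np

# Choose n+1 = 3 distinct evaluation points for P_2(R)
c = np.array([-1.0, 0.0, 1.0])

def inprod(p, q):
    """Pairing <p, q> = p(c_0)q(c_0) + p(c_1)q(c_1) + p(c_2)q(c_2),
    with p, q given as coefficient arrays (highest degree first,
    as np.polyval expects)."""
    return float(np.sum(np.polyval(p, c) * np.polyval(q, c)))

p = np.array([1.0, 0.0, -1.0])   # p(x) = x^2 - 1
q = np.array([0.0, 2.0, 3.0])    # q(x) = 2x + 3

# Symmetry (Axiom RIP 1) on these samples
assert inprod(p, q) == inprod(q, p)

# Positivity (Axiom RIP 4): p is nonzero, so <p, p> > 0
assert inprod(p, p) > 0
```

Note that \(p(x) = x^2 - 1\) vanishes at two of the three chosen points, but not at all three, which is exactly what the root-counting argument above guarantees.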

Example 36.5.2. The standard inner product on \(\matrixring_{m\times n}(\R)\).

An \(m \times n\) matrix is just \(mn\) “components” (i.e. entries) arranged in a grid instead of in a column. So we would expect the pairing

\begin{equation*} \inprod{A}{B} = a_{11} b_{11} + a_{12} b_{12} + \dotsb + a_{mn} b_{mn} \text{,} \end{equation*}

which is really just a “dot product” of matrices, to create an inner product on \(\matrixring_{m\times n}(\R)\text{.}\) And it does.

We can wrap this pairing up in the compact formula

\begin{equation*} \inprod{A}{B} = \trace (\utrans{B} A) \text{.} \end{equation*}

(Again, the reversal of order is in preparation for the complex version.)
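A quick numerical check that the compact formula reproduces the entrywise pairing; this is a sketch in Python with NumPy, where the random matrices simply stand in for a general \(A\) and \(B\text{.}\)

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))
B = rng.standard_normal((3, 2))

entrywise = np.sum(A * B)       # a_11 b_11 + a_12 b_12 + ... + a_mn b_mn
compact = np.trace(B.T @ A)     # trace(B^T A)

# The compact formula agrees with the "dot product" of matrices
assert np.isclose(entrywise, compact)

# Symmetry (Axiom RIP 1): trace(A^T B) = trace(B^T A)
assert np.isclose(np.trace(A.T @ B), compact)
```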

Let's verify Axiom RIP 1:

\begin{align*} \inprod{B}{A} \amp = \trace (\utrans{A} B) \amp \amp \text{(i)} \\ \amp = \trace \left(\utrans{A} \utrans{(\utrans{B})}\right) \amp \amp \text{(ii)} \\ \amp = \trace \utrans{(\utrans{B} A)} \amp \amp \text{(iii)} \\ \amp = \trace (\utrans{B} A) \amp \amp \text{(iv)} \\ \amp = \inprod{A}{B} \amp \amp \text{(v)} \text{,} \end{align*}

with justifications

  1. definition of the pairing;
  2. Rule 5.a of Proposition 4.5.1;
  3. Rule 5.d of Proposition 4.5.1;
  4. transpose does not change the diagonal entries, so trace remains the same; and
  5. definition of the pairing.

Axiom RIP 2 and Axiom RIP 3 are also easily verified using the properties of transpose and trace. So let's finish this example by verifying Axiom RIP 4. Consider a matrix \(A\) as being made up of column vectors in \(\R^n\text{:}\)

\begin{equation*} A = \begin{bmatrix} | \amp | \amp \amp | \\ \uvec{a}_1 \amp \uvec{a}_2 \amp \cdots \amp \uvec{a}_n \\ | \amp | \amp \amp | \end{bmatrix}\text{.} \end{equation*}

Then the diagonal entries of \(\utrans{A} A\) are of the form

\begin{equation*} \utrans{\uvec{a}}_j \uvec{a}_j = \dotprod{\uvec{a}_j}{\uvec{a}_j} = \norm{\uvec{a}_j}^2 \text{.} \end{equation*}

If \(A \neq \zerovec\text{,}\) then at least one of its columns \(\uvec{a}_j\) must be nonzero, and that column will contribute the positive value \(\norm{\uvec{a}_j}^2\) to

\begin{equation*} \inprod{A}{A} = \trace (\utrans{A} A) = \norm{\uvec{a}_1}^2 + \norm{\uvec{a}_2}^2 + \dotsb + \norm{\uvec{a}_n}^2 \text{.} \end{equation*}
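This column-norm decomposition of \(\inprod{A}{A}\) can be checked numerically; here is a sketch in Python with NumPy, using a specific nonzero matrix as illustration.

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [1.0, -1.0]])

# ||a_j||^2 for each column a_j of A
col_norms_sq = np.sum(A**2, axis=0)

# trace(A^T A) is the sum of the squared column norms
assert np.isclose(np.trace(A.T @ A), col_norms_sq.sum())

# Axiom RIP 4: A is nonzero, so <A, A> > 0
assert np.trace(A.T @ A) > 0
```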

Example 36.5.3. The standard inner product on \(\matrixring_{m\times n}(\C)\).

Similar to the real case, we can effectively make a complex matrix “dot product” by setting

\begin{equation*} \inprod{A}{B} = a_{11} \cconj{b}_{11} + a_{12} \cconj{b}_{12} + \dotsb + a_{mn} \cconj{b}_{mn} \text{.} \end{equation*}

Again, we can achieve this result with the compact formula

\begin{equation*} \inprod{A}{B} = \trace (\adjoint{B} A) \text{.} \end{equation*}

We leave it to you, the reader, to verify that this pairing satisfies the axioms for a complex inner product.
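As in the real case, the compact formula can be checked numerically before verifying the axioms by hand; here is a sketch in Python with NumPy, with random complex matrices standing in for a general \(A\) and \(B\text{.}\)

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))
B = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))

entrywise = np.sum(A * B.conj())     # a_11 conj(b_11) + ... + a_mn conj(b_mn)
compact = np.trace(B.conj().T @ A)   # trace(B^* A), using the adjoint of B
assert np.isclose(entrywise, compact)

# Conjugate symmetry: <B, A> is the complex conjugate of <A, B>
assert np.isclose(np.trace(A.conj().T @ B), np.conj(compact))

# <A, A> is real and positive for nonzero A
self_pairing = np.trace(A.conj().T @ A)
assert np.isclose(self_pairing.imag, 0.0)
assert self_pairing.real > 0
```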

Example 36.5.4. An inner product for continuous functions.

Let \(C[a,b]\) represent the space of all continuous functions on the closed interval \(a \le x \le b\text{.}\) Since adding continuous functions or vertically scaling a continuous function always results in a continuous function, this is a subspace of \(F[a,b]\text{,}\) the space of all functions defined on domain \(a \le x \le b\text{.}\)

Define a pairing on \(C[a,b]\) by

\begin{equation*} \inprod{f}{g} = \integral{a}{b}{f(x) g(x)}{x} \text{.} \end{equation*}

A product of two continuous functions is also continuous, and a continuous function on a closed interval is always integrable, so this pairing is defined for every pair of vectors in \(C[a,b]\text{.}\)

This pairing obviously satisfies Axiom RIP 1, and the basic properties of definite integrals tell us that Axiom RIP 2 and Axiom RIP 3 are also satisfied. For Axiom RIP 4, consider that

\begin{equation*} \inprod{f}{f} = \integral{a}{b}{\bigl[f(x)\bigr]^2}{x} \end{equation*}

must at least be nonnegative because the integrand is. But if \(f\) is not the zero function, then by continuity \(\bigl[f(x)\bigr]^2\) remains positive on some subinterval around any point where \(f\) is nonzero, forcing this integral to evaluate to a positive number.
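This integral pairing can also be explored symbolically; here is a sketch in Python with SymPy on the interval \([0,1]\text{,}\) where the sample functions are illustrative choices.

```python
import sympy as sp

x = sp.symbols('x')
a, b = 0, 1

def inprod(f, g):
    """<f, g> = definite integral of f(x) g(x) over [a, b] (here [0, 1])."""
    return sp.integrate(f * g, (x, a, b))

f = x          # f(x) = x
g = 1 - x      # g(x) = 1 - x

assert inprod(f, g) == inprod(g, f)                     # Axiom RIP 1
assert inprod(f + g, g) == inprod(f, g) + inprod(g, g)  # Axiom RIP 2
assert inprod(f, f) > 0                                 # Axiom RIP 4: f nonzero
```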

Subsection 36.5.2 Geometry in inner product spaces

Example 36.5.5. The “length” of a matrix.

Let's use the inner product

\begin{equation*} \inprod{A}{B} = \trace (\utrans{B} A) \end{equation*}

on \(\matrixring_{2 \times 2}(\R)\) to compute the norm of the vector

\begin{equation*} A = \begin{bmatrix} 2 \amp 3 \\ 1 \amp -1 \end{bmatrix} \text{.} \end{equation*}

We have

\begin{align*} \inprod{A}{A} \amp = \trace\left( \begin{bmatrix} 2 \amp 1 \\ 3 \amp -1 \end{bmatrix} \begin{bmatrix} 2 \amp 3 \\ 1 \amp -1 \end{bmatrix} \right)\\ \amp = \trace \begin{bmatrix} 5 \amp 5 \\ 5 \amp 10 \end{bmatrix}\\ \amp = 15\text{,} \end{align*}

and so

\begin{equation*} \norm{A} = \sqrt{15} \text{.} \end{equation*}

What unit vectors in \(\matrixring_{2 \times 2}(\R)\) are “parallel” to \(A\text{?}\) Just as in \(\R^n\text{,}\) we can normalize a vector to a unit vector by dividing by its norm. So

\begin{equation*} U = \frac{1}{\sqrt{15}} \begin{bmatrix} 2 \amp 3 \\ 1 \amp -1 \end{bmatrix} \end{equation*}

is one unit vector that is “parallel” to \(A\text{,}\) and \(-U\) is another.
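These computations are easy to reproduce numerically; here is a sketch in Python with NumPy.

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [1.0, -1.0]])

# ||A|| = sqrt(<A, A>) = sqrt(trace(A^T A))
norm_A = np.sqrt(np.trace(A.T @ A))
assert np.isclose(norm_A, np.sqrt(15))

# Normalize A to a unit vector U "parallel" to A
U = A / norm_A
assert np.isclose(np.trace(U.T @ U), 1.0)
```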

Example 36.5.6. Angle between matrices.

What is the angle between

\begin{align*} A \amp = \begin{bmatrix} 3 \amp 5 \\ 0 \amp 4 \end{bmatrix}, \amp B \amp = \left[\begin{array}{rr} 0 \amp 1 \\ -1 \amp 0 \end{array}\right] \end{align*}

in \(\matrixring_{2 \times 2}(\R)\) when using the inner product \(\inprod{X}{Y} = \trace (\utrans{Y} X)\text{?}\)

Compute:

\begin{gather*} \inprod{A}{B} = \trace\left( \left[\begin{array}{rr} 0 \amp -1 \\ 1 \amp 0 \end{array}\right] \begin{bmatrix} 3 \amp 5 \\ 0 \amp 4 \end{bmatrix} \right) = \trace\left( \left[\begin{array}{rr} 0 \amp -4 \\ 3 \amp 5 \end{array}\right] \right) = 5 \text{,}\\ \\ \norm{A}^2 = \trace\left( \begin{bmatrix} 3 \amp 0 \\ 5 \amp 4 \end{bmatrix} \begin{bmatrix} 3 \amp 5 \\ 0 \amp 4 \end{bmatrix} \right) = \trace\left( \begin{bmatrix} 9 \amp 15 \\ 15 \amp 41 \end{bmatrix} \right) = 50 \text{,}\\ \\ \norm{B}^2 = \trace\left( \left[\begin{array}{rr} 0 \amp -1 \\ 1 \amp 0 \end{array}\right] \left[\begin{array}{rr} 0 \amp 1 \\ -1 \amp 0 \end{array}\right] \right) = \trace\left( \begin{bmatrix} 1 \amp 0 \\ 0 \amp 1 \end{bmatrix} \right) = 2 \text{.} \end{gather*}

Put these calculations together in the formula

\begin{equation*} \theta = \inv{\cos} \left( \frac{\inprod{A}{B}}{\norm{A}\norm{B}} \right) = \inv{\cos} \left( \frac{5}{\sqrt{50}\sqrt{2}} \right) = \inv{\cos} \left( \frac{1}{2} \right) = \frac{\pi}{3}\text{.} \end{equation*}
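The same angle computation can be carried out in Python with NumPy as a sanity check.

```python
import numpy as np

A = np.array([[3.0, 5.0],
              [0.0, 4.0]])
B = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

ip = np.trace(B.T @ A)                 # <A, B> = trace(B^T A)
norm_A = np.sqrt(np.trace(A.T @ A))    # ||A|| = sqrt(50)
norm_B = np.sqrt(np.trace(B.T @ B))    # ||B|| = sqrt(2)

theta = np.arccos(ip / (norm_A * norm_B))
assert np.isclose(ip, 5.0)
assert np.isclose(theta, np.pi / 3)
```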

Example 36.5.7. Orthogonal functions.

The functions \(f(x) = \sin x\) and \(g(x) = \cos x\) are continuous, and so are vectors in \(C[0,2\pi]\text{.}\) If we use the inner product of Example 36.5.4 to compute

\begin{equation*} \inprod{f}{g} = \integral{0}{2\pi}{\sin(x) \cos(x)}{x} = 0 \text{,} \end{equation*}

we find that the angle between these functions is

\begin{equation*} \theta = \inv{\cos} \left( \frac{\inprod{f}{g}}{\unorm{f}\unorm{g}} \right) = \inv{\cos} 0 = \frac{\pi}{2}\text{,} \end{equation*}

so that \(f\) and \(g\) are at a right angle to each other.
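This orthogonality is easy to confirm symbolically; here is a sketch in Python with SymPy.

```python
import sympy as sp

x = sp.symbols('x')

# <f, g> = definite integral of sin(x) cos(x) over [0, 2*pi]
ip = sp.integrate(sp.sin(x) * sp.cos(x), (x, 0, 2 * sp.pi))

# sin and cos are orthogonal in C[0, 2*pi] under this inner product
assert ip == 0
```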

Subsection 36.5.3 Skewing geometry in \(\R^n\)

In the usual geometry of \(\R^2\) (i.e. relative to the standard inner product), the unit circle consists of those points that are a distance \(1\) from the origin.

The unit circle in \(\R^2\text{.}\)

What happens if we skew this geometry using a different inner product? The matrix

\begin{equation*} A = \begin{bmatrix} 1 \amp 0 \\ 0 \amp 2 \end{bmatrix} \end{equation*}

is symmetric and satisfies

\begin{equation*} \begin{bmatrix} x \amp y \end{bmatrix} \begin{bmatrix} 1 \amp 0 \\ 0 \amp 2 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = x^2 + 2 y^2 > 0 \end{equation*}

for all \((x,y) \neq (0,0)\text{.}\) Therefore,

\begin{equation*} \uvecinprod{u}{v} = \utrans{\uvec{v}} A \uvec{u} \end{equation*}

defines an inner product on \(\R^2\text{.}\)

What is the unit circle for this inner product? That is, what vectors \((x,y)\) in \(\R^2\) will satisfy

\begin{equation*} \begin{bmatrix} x \amp y \end{bmatrix} \begin{bmatrix} 1 \amp 0 \\ 0 \amp 2 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = 1\text{?} \end{equation*}

Using our calculation above, this occurs precisely when

\begin{equation*} x^2 + 2 y^2 = 1 \text{,} \end{equation*}

which is the equation of an ellipse.

A distorted unit circle in \(\R^2\text{.}\)

So, by using a different inner product, we can treat an ellipse as if it were a circle.
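As a final check, points on the ellipse really do have norm \(1\) under this skewed inner product; here is a sketch in Python with NumPy, where the matrix is named `M` to avoid clashing with the text's \(A\text{.}\)

```python
import numpy as np

M = np.array([[1.0, 0.0],
              [0.0, 2.0]])

def inprod(u, v):
    """Skewed inner product <u, v> = v^T M u on R^2."""
    return float(v @ M @ u)

# Points satisfying x^2 + 2 y^2 = 1 are "unit vectors" for this inner product
for u in [np.array([1.0, 0.0]),
          np.array([0.0, 1.0 / np.sqrt(2)]),
          np.array([np.sqrt(0.5), 0.5])]:
    assert np.isclose(inprod(u, u), 1.0)
```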