Section 41.4 Examples
Subsection 41.4.1 Quadratic forms represented by matrices
Example 41.4.1. Determining the quadratic polynomial associated to a symmetric matrix.
The symmetric matrix
creates a quadratic form
We can use the correspondence between matrix entries and polynomial coefficients discussed in Subsection 41.3.1 to obtain
without having to multiply out the matrix product \(\utrans{\uvec{x}} A \uvec{x} \text{.}\)
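In general, for a \(2 \times 2\) symmetric matrix, expanding that product shows why the correspondence works:
\begin{equation*}
\begin{bmatrix} x_1 & x_2 \end{bmatrix}
\begin{bmatrix} a & b \\ b & c \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= a x_1^2 + 2 b x_1 x_2 + c x_2^2 \text{,}
\end{equation*}
so each diagonal entry is the coefficient of a squared term, while each symmetric pair of off-diagonal entries combines into a single cross term.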
Example 41.4.2. Determining the symmetric matrix associated to a quadratic form.
The quadratic form
can be represented by matrix multiplication as
\begin{equation*}
q(\uvec{x}) = \utrans{\uvec{x}} A \uvec{x}
\end{equation*}
for a symmetric matrix \(A\text{.}\)
To create the matrix \(A\text{,}\) make each diagonal entry the coefficient of the corresponding squared term in the quadratic polynomial, and each off-diagonal entry one-half of the coefficient of the corresponding cross term.
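For instance (an illustrative form, not the one above), the three-variable quadratic form
\begin{equation*}
q(x_1,x_2,x_3) = 2 x_1^2 - x_2^2 + 5 x_3^2 + 4 x_1 x_2 - 6 x_1 x_3 + 8 x_2 x_3
\end{equation*}
corresponds to
\begin{equation*}
A = \begin{bmatrix} 2 & 2 & -3 \\ 2 & -1 & 4 \\ -3 & 4 & 5 \end{bmatrix} \text{.}
\end{equation*}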
Subsection 41.4.2 Level curves of two-variable quadratic forms
Example 41.4.3. An ellipse in the \(xy\)-plane.
Let's work through the example of Discovery 41.5. The symmetric matrix
\begin{equation*}
A = \begin{bmatrix} 13 & -5 \\ -5 & 13 \end{bmatrix}
\end{equation*}
creates quadratic form
\begin{equation*}
q_A(x,y) = 13 x^2 - 10 x y + 13 y^2 \text{,}
\end{equation*}
using variables \(x,y\) instead of \(x_1,x_2\text{.}\) It's not clear from this polynomial expression what shape a level curve \(q_A(\uvec{x}) = k\) will take. But since \(A\) is symmetric, we can diagonalize it to remove the cross term in the polynomial for \(q_A\text{.}\)
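Recall why this works: if \(P\) orthogonally diagonalizes \(A\text{,}\) so that \(\utrans{P} A P = D\) is diagonal, then the substitution \(\uvec{x} = P \uvec{w}\) gives
\begin{equation*}
q_A(P \uvec{w}) = \utrans{(P \uvec{w})} A (P \uvec{w}) = \utrans{\uvec{w}} (\utrans{P} A P) \uvec{w} = \utrans{\uvec{w}} D \uvec{w} \text{,}
\end{equation*}
a quadratic form with no cross terms, since \(D\) has no nonzero off-diagonal entries.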
The eigenvalues of \(A\) are \(\lambda_1 = 8\) and \(\lambda_2 = 18\text{.}\) Calculate bases for the eigenspaces. First \(\lambda_1 = 8\text{:}\) row reducing
\begin{equation*}
A - 8 I = \begin{bmatrix} 5 & -5 \\ -5 & 5 \end{bmatrix}
\qquad \longrightarrow \qquad
\begin{bmatrix} 1 & -1 \\ 0 & 0 \end{bmatrix}
\end{equation*}
yields basis vector
\begin{equation*}
\uvec{v}_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \text{.}
\end{equation*}
Now \(\lambda_2 = 18\text{:}\) row reducing
\begin{equation*}
A - 18 I = \begin{bmatrix} -5 & -5 \\ -5 & -5 \end{bmatrix}
\qquad \longrightarrow \qquad
\begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}
\end{equation*}
yields basis vector
\begin{equation*}
\uvec{v}_2 = \begin{bmatrix} -1 \\ 1 \end{bmatrix} \text{.}
\end{equation*}
These two eigenvectors are orthogonal as expected (Statement 2 of Theorem 40.6.1), but they need to be normalized:
\begin{equation*}
\uvec{u}_1 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad
\uvec{u}_2 = \frac{1}{\sqrt{2}} \begin{bmatrix} -1 \\ 1 \end{bmatrix} \text{.}
\end{equation*}
Use the normalized eigenvectors to create an orthogonal transition matrix
\begin{equation*}
P = \begin{bmatrix} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{bmatrix} \text{,}
\end{equation*}
which diagonalizes \(A\text{:}\)
\begin{equation*}
\inv{P} A P = \utrans{P} A P = \begin{bmatrix} 8 & 0 \\ 0 & 18 \end{bmatrix} \text{.}
\end{equation*}
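Note that \(\inv{P} = \utrans{P}\) here precisely because the columns of \(P\) are orthonormal:
\begin{equation*}
\utrans{P} P
= \begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{bmatrix}
\begin{bmatrix} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{bmatrix}
= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \text{.}
\end{equation*}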
Using the change of variables
\begin{equation*}
\uvec{x} = P \uvec{w}, \qquad
\uvec{x} = \begin{bmatrix} x \\ y \end{bmatrix}, \quad
\uvec{w} = \begin{bmatrix} w \\ z \end{bmatrix} \text{,}
\end{equation*}
the transformed quadratic form is
\begin{equation*}
q_{\inv{P} A P}(\uvec{w}) = 8 w^2 + 18 z^2 \text{.}
\end{equation*}
A level curve \(q_{\inv{P} A P}(\uvec{w}) = k\) (\(k \gt 0\)) is clearly an ellipse. For example, the curve
\begin{equation*}
8 w^2 + 18 z^2 = 72
\end{equation*}
is an ellipse with \(w\)-intercepts at \(w = \pm 3\) and \(z\)-intercepts at \(z = \pm 2\text{.}\)
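To see the intercepts, divide both sides by \(72\) to put the ellipse in standard form:
\begin{equation*}
\frac{w^2}{3^2} + \frac{z^2}{2^2} = 1 \text{.}
\end{equation*}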
Just as the orthonormal basis formed by the standard vectors \(\uvec{e}_1\) and \(\uvec{e}_2\) represents the \(x\)- and \(y\)-axes respectively, the orthonormal basis formed by the columns of \(P\) represents the \(w\)- and \(z\)-axes, giving us a way to plot the ellipse \(q_A(\uvec{x}) = 72\) on the \(xy\)-axes.
Relative to how it appears on the \(wz\)-axes, the ellipse has clearly been rotated by \(\pi/4\) counter-clockwise. This is consistent with viewing \(P\) as a rotation matrix:
\begin{equation*}
P = \begin{bmatrix} \cos(\pi/4) & -\sin(\pi/4) \\ \sin(\pi/4) & \cos(\pi/4) \end{bmatrix} \text{.}
\end{equation*}
Also notice how the fact that the columns of \(P\) are orthonormal allows us to use multiples of those vectors to determine distances and positions along each of the \(w\)- and \(z\)-axes.
As our change of variables was
\begin{equation*}
\uvec{x} = P \uvec{w} \text{,}
\end{equation*}
points in \(wz\)-coordinates must be rotated by \(\pi/4\) to be converted into \(xy\)-coordinates.
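For example, the \(w\)-intercept \((w,z) = (3,0)\) of the ellipse converts to
\begin{equation*}
\uvec{x} = P \begin{bmatrix} 3 \\ 0 \end{bmatrix} = \begin{bmatrix} \frac{3}{\sqrt{2}} \\ \frac{3}{\sqrt{2}} \end{bmatrix} \text{,}
\end{equation*}
a point on the line \(y = x\) in the \(xy\)-plane, exactly where a \(\pi/4\) counter-clockwise rotation of the point \((3,0)\) should land.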
Example 41.4.4. A hyperbola in the \(xy\)-plane.
Consider the quadratic form
\begin{equation*}
q(x,y) = 3 x^2 + 26 \sqrt{3} x y - 23 y^2 \text{.}
\end{equation*}
This form corresponds to the symmetric matrix
\begin{equation*}
A = \begin{bmatrix} 3 & 13 \sqrt{3} \\ 13 \sqrt{3} & -23 \end{bmatrix} \text{.}
\end{equation*}
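As a quick check on what follows, the characteristic polynomial of this matrix is
\begin{equation*}
\det(A - \lambda I) = (3 - \lambda)(-23 - \lambda) - (13\sqrt{3})^2 = \lambda^2 + 20\lambda - 576 = (\lambda - 16)(\lambda + 36) \text{.}
\end{equation*}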
The eigenvalues of \(A\) are \(\lambda_1 = 16\) and \(\lambda_2 = -36\text{,}\) so even without computing the transition matrix we know that the decoupled quadratic form is
\begin{equation*}
16 w^2 - 36 z^2 \text{,}
\end{equation*}
and so the level curves of \(q\) will be hyperbolas. For example, the hyperbola
\begin{equation*}
16 w^2 - 36 z^2 = 4
\end{equation*}
has \(w\)-intercepts \(w = \pm \frac{1}{2}\) and asymptotes \(z = \pm \frac{2}{3} w\text{.}\)
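Dividing both sides by \(4\) puts this hyperbola in standard form,
\begin{equation*}
\frac{w^2}{(1/2)^2} - \frac{z^2}{(1/3)^2} = 1 \text{,}
\end{equation*}
from which we can read off the \(w\)-intercepts \(w = \pm \frac{1}{2}\) and the asymptote slopes \(\pm \frac{1/3}{1/2} = \pm \frac{2}{3} \text{.}\)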
But to determine how to plot this hyperbola on a set of \(xy\)-axes, we need the transition matrix, so calculate bases for the eigenspaces. First \(\lambda_1 = 16\text{:}\) row reducing
\begin{equation*}
A - 16 I = \begin{bmatrix} -13 & 13 \sqrt{3} \\ 13 \sqrt{3} & -39 \end{bmatrix}
\qquad \longrightarrow \qquad
\begin{bmatrix} 1 & -\sqrt{3} \\ 0 & 0 \end{bmatrix}
\end{equation*}
yields basis vector
\begin{equation*}
\uvec{v}_1 = \begin{bmatrix} \sqrt{3} \\ 1 \end{bmatrix} \text{.}
\end{equation*}
Now \(\lambda_2 = -36\text{:}\) row reducing
\begin{equation*}
A + 36 I = \begin{bmatrix} 39 & 13 \sqrt{3} \\ 13 \sqrt{3} & 13 \end{bmatrix}
\qquad \longrightarrow \qquad
\begin{bmatrix} 1 & \frac{1}{\sqrt{3}} \\ 0 & 0 \end{bmatrix}
\end{equation*}
yields basis vector
\begin{equation*}
\uvec{v}_2 = \begin{bmatrix} 1 \\ -\sqrt{3} \end{bmatrix} \text{.}
\end{equation*}
Again, these eigenvectors should be normalized in order to create the orthogonal transition matrix
\begin{equation*}
P = \begin{bmatrix} \frac{\sqrt{3}}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{\sqrt{3}}{2} \end{bmatrix} \text{.}
\end{equation*}
This transition matrix is of the form
\begin{equation*}
\begin{bmatrix} \cos \theta & \sin \theta \\ \sin \theta & -\cos \theta \end{bmatrix}
\end{equation*}
(here with \(\theta = \pi/6\)), which represents a rotation followed by a reflection.
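Concretely, one way to exhibit the rotation and the reflection is the factorization
\begin{equation*}
P = \begin{bmatrix} \cos(\pi/3) & \sin(\pi/3) \\ \sin(\pi/3) & -\cos(\pi/3) \end{bmatrix}
\begin{bmatrix} \cos(\pi/6) & -\sin(\pi/6) \\ \sin(\pi/6) & \cos(\pi/6) \end{bmatrix} \text{,}
\end{equation*}
where the right-hand factor rotates by \(\pi/6\) and the left-hand factor then reflects across the line at angle \(\pi/6\) (the rotated \(w\)-axis).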
Notice how the \(wz\)-axes are rotated \(\pi/6\) counter-clockwise, but then the \(z\)-axis has been reflected to the other side of the \(w\)-axis. However, we could have chosen our eigenvectors so that \(P\) represented only a rotation, if we had used
\begin{equation*}
P = \begin{bmatrix} \frac{\sqrt{3}}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{\sqrt{3}}{2} \end{bmatrix}
\end{equation*}
instead (see Theorem 41.5.2).
Remark 41.4.5.
Note that in both examples in this subsection, it was not actually necessary to compute the second eigenspace. Since we are working in two dimensions, and we know that symmetric matrices have orthogonal eigenspaces (Statement 2 of Theorem 40.6.1), once we had a basis vector for one eigenspace we could easily obtain a basis vector for the other eigenspace using our pattern of orthogonality in the plane from Discovery 14.2.c (also discussed in Subsection 14.3.2).
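For instance, in Example 41.4.3: a vector orthogonal to \((a,b)\) in the plane is \((-b,a)\text{,}\) so from the basis vector \(\uvec{v}_1 = (1,1)\) for the eigenspace of \(\lambda_1 = 8\) we could have immediately taken \(\uvec{v}_2 = (-1,1)\) for the eigenspace of \(\lambda_2 = 18\text{,}\) with no second row reduction required.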