Section B.8 Orthogonal/unitary diagonalization
Subsection B.8.1 Orthogonally diagonalizing a symmetric matrix
Here we will use Sage to carry out the calculations in Example 40.5.1.
First let's load our matrix into Sage.
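As a stand-in for the matrix of Example 40.5.1, the sketch below uses a symmetric matrix with the eigenvalue pattern described in this subsection (eigenvalues \(2\text{,}\) \(-2\text{,}\) and \(0\text{,}\) the last repeated); the commands are the same for the matrix in the example.

    # stand-in symmetric matrix with eigenvalues 2, -2, and 0 (repeated)
    S = matrix(QQ, [
        [0, 1, 0, 1],
        [1, 0, 1, 0],
        [0, 1, 0, 1],
        [1, 0, 1, 0],
    ])
    S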
Eigenvalues and eigenvectors.
First we need to carry out the diagonalization procedure (see Subsection 25.4.3).
First let's analyze \(\lambda = 2\text{.}\)
As expected from the algebraic multiplicity of \(\lambda = 2\text{,}\) only one parameter is required, so we only get one eigenvector. Assigning \(x_4 = t\) leads to eigenvector \(\uvec{p}_1 = (1,1,1,1)\text{.}\)
Notice that \(S \uvec{p}_1 = 2 \uvec{p}_1\text{,}\) as expected.
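With the stand-in matrix above, the cell for this eigenvalue might look like the following.

    # row reduce (2I - S) to determine the eigenspace for lambda = 2
    print((2*identity_matrix(4) - S).rref())

    # assign x4 = 1 to get eigenvector p1, then verify S p1 = 2 p1
    p1 = vector(QQ, [1, 1, 1, 1])
    print(S*p1 == 2*p1)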
Next we'll analyze \(\lambda = - 2\text{.}\)
Again, only one parameter is required, as the algebraic multiplicity of \(\lambda = -2\) is \(1\text{.}\) Assigning \(x_4 = t\) leads to eigenvector \(\uvec{p}_2 = (-1,1,-1,1)\text{.}\)
Notice that \(S \uvec{p}_2 = - 2 \uvec{p}_2\text{,}\) as expected.
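The corresponding cell for \(\lambda = -2\) might look like this.

    # row reduce (-2I - S) to determine the eigenspace for lambda = -2
    print((-2*identity_matrix(4) - S).rref())

    # assign x4 = 1 to get eigenvector p2, then verify S p2 = -2 p2
    p2 = vector(QQ, [-1, 1, -1, 1])
    print(S*p2 == -2*p2)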
Finally we analyze \(\lambda = 0\text{.}\) For this eigenvalue, there is no need to work with \(\lambda I - S\text{,}\) as solving \((0 I - S) \uvec{x} = \zerovec\) is equivalent to solving \(S \uvec{x} = \zerovec\text{.}\)
Here the geometric multiplicity is equal to the algebraic multiplicity, so \(S\) is indeed diagonalizable. Setting parameters \(x_3 = s\) and \(x_4 = t\) leads to eigenvectors
The second output line verifies that both vectors are in the null space of \(S\text{,}\) and hence in eigenspace \(E_0(S)\text{.}\)
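With the stand-in matrix, a cell along these lines produces the two eigenvectors; treat \(\uvec{p}_3\) and \(\uvec{p}_4\) below as illustrative, since the particular vectors depend on the matrix.

    # for lambda = 0, just row reduce S itself
    print(S.rref())

    # eigenvectors from (s, t) = (1, 0) and (s, t) = (0, 1), for the stand-in matrix
    p3 = vector(QQ, [-1, 0, 1, 0])
    p4 = vector(QQ, [0, -1, 0, 1])
    print(S*p3, S*p4)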
The transition matrix.
We want an orthogonal transition matrix, so let's check which of our eigenvectors are orthogonal to the others. We know that for a symmetric matrix like \(S\text{,}\) eigenvectors from different eigenspaces should automatically be orthogonal:
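For instance, with the eigenvectors computed above, each of the following dot products should come out zero.

    # eigenvectors from different eigenspaces
    print(p1.dot_product(p2))
    print(p1.dot_product(p3), p1.dot_product(p4))
    print(p2.dot_product(p3), p2.dot_product(p4))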
So both \(\uvec{p}_1\) and \(\uvec{p}_2\) are orthogonal to the eigenspace \(E_0(S)\text{,}\) as predicted by our theory, and it only remains to ensure that we have an orthogonal basis for that \(2\)-dimensional eigenspace, \(E_0(S)\text{.}\)
Conveniently, this eigenspace basis is already orthogonal, so there is no need for Gram-Schmidt. However, for a matrix to be orthogonal, its columns need to be orthonormal, so we need to create our transition matrix out of normalized eigenvectors.
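A cell along the following lines checks the \(\lambda = 0\) pair and assembles the normalized eigenvectors into a transition matrix.

    # the lambda = 0 pair is already orthogonal
    print(p3.dot_product(p4))

    # build the transition matrix from normalized eigenvectors
    P = column_matrix(SR, [
        p1/p1.norm(), p2/p2.norm(), p3/p3.norm(), p4/p4.norm(),
    ])
    P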
Finally, let's double-check both that \(P\) is orthogonal, and that it diagonalizes \(S\text{.}\)
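For example:

    # P^T P should be the identity, and P^T S P should be diagonal
    print(P.transpose()*P)
    print(P.transpose()*S*P)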
Yep, it all worked out. And notice that in our calculation of the diagonal form matrix, we used \(\utrans{P}\) in place of \(\inv{P}\) — since \(P\) is orthogonal, the two are equal.
Subsection B.8.2 Unitarily diagonalizing a normal matrix
Here we will use Sage to carry out the calculations in Example 40.5.4.
Set up.
First, let's create a convenience inner product function to minimize having to type conjugate a lot.
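One way to set this up is sketched below; note that this version conjugates the second argument, so swap the conjugate if your preferred convention conjugates the first.

    # convenience complex inner product
    def inprod(u, v):
        return u.dot_product(v.conjugate())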
Now let's load our matrix into Sage. Remember that Sage uses I to represent the imaginary root of \(x^2 + 1\text{.}\)
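As in the previous subsection, the matrix below is a stand-in with the eigenvalue pattern described in this example (eigenvalue \(3 \ci\) of multiplicity one and eigenvalue \(3\) of multiplicity two), not necessarily the exact matrix of Example 40.5.4.

    # stand-in normal matrix with eigenvalues 3*I (once) and 3 (twice)
    N = matrix([
        [ 2 + I, 1 - I,  1 + I],
        [ 1 - I, 2 + I, -1 - I],
        [-1 - I, 1 + I,  2 + I],
    ])
    N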
Let's double-check that \(N\) is normal. We can use the Sage method conjugate_transpose to compute adjoints. Rather than check entry-by-entry that \(N \adjoint{N}\) and \(\adjoint{N} N\) are equal, we can subtract the two — if they are equal, their difference should be the zero matrix.
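For example:

    # should be the zero matrix exactly when N is normal
    N*N.conjugate_transpose() - N.conjugate_transpose()*N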
Great, \(N\) is normal, and so the unitary diagonalization procedure should succeed.
Eigenvalues and eigenvectors.
Our first step is to carry out the diagonalization procedure (see Subsection 25.4.3).
First let's analyze \(\lambda = 3 \ci\text{.}\)
The geometric multiplicity is \(1\text{,}\) as expected. Assigning parameter \(x_3 = t\) leads to eigenvector \(\uvec{u}_1 = (-\ci, \ci, 1) \text{.}\)
Notice that \(N \uvec{u}_1 = 3 \ci \uvec{u}_1\text{,}\) as expected.
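With the stand-in matrix, this analysis might look like the following; note the use of identity_matrix, since I already denotes the imaginary unit.

    # row reduce (3i I - N) to determine the eigenspace for lambda = 3i
    print((3*I*identity_matrix(3) - N).rref())

    # assign x3 = 1 to get u1, then check that N u1 - 3i u1 is the zero vector
    u1 = vector([-I, I, 1])
    print(N*u1 - 3*I*u1)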
Now let's analyze \(\lambda = 3\text{.}\)
Since the geometric multiplicity of this eigenvalue is \(2\text{,}\) we have confirmed that this matrix is at least diagonalizable. Assigning parameters \(x_2 = s\) and \(x_3 = t\) leads to eigenvectors
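With the stand-in matrix, the row reduction and a pair of eigenvectors from these parameters are sketched below; as before, treat \(\uvec{v}_2\) and \(\uvec{v}_3\) as illustrative.

    # row reduce (3I - N) to determine the eigenspace for lambda = 3
    print((3*identity_matrix(3) - N).rref())

    # eigenvectors from (s, t) = (1, 0) and (s, t) = (0, 1), for the stand-in matrix
    v2 = vector([1, 1, 0])
    v3 = vector([I, 0, 1])
    print(N*v2 - 3*v2, N*v3 - 3*v3)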
The transition matrix.
First let's form a transition matrix with the eigenvectors we currently have, to check that it diagonalizes.
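For example:

    # transition matrix from the raw eigenvectors; P^(-1) N P should be diagonal
    P = column_matrix([u1, v2, v3])
    P.inverse()*N*P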
It worked, as expected. But does it unitarily diagonalize?
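We can check by computing \(\adjoint{P} P\text{,}\) which would be the identity matrix if \(P\) were unitary.

    # compare the result to the identity matrix
    P.conjugate_transpose()*P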
No, our transition matrix is not unitary. From the theory, we expect at least that the eigenvector \(\uvec{u}_1\text{,}\) which came from the eigenspace \(E_{3 \ci}(N)\text{,}\) should be orthogonal to the other two, which came from the eigenspace \(E_3(N)\text{.}\)
But what about the two vectors within \(E_3(N)\text{?}\)
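With the stand-in eigenvectors, their inner product comes out nonzero.

    # inner product of the two lambda = 3 eigenvectors
    inprod(v2, v3)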
So we'll need to Gram-Schmidt these two. But just these two — we should not include the eigenvector from \(E_{3 \ci}(N)\) in the process.
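A sketch of this step, using the inprod function defined earlier:

    # replace v3 by its component orthogonal to v2 (one Gram-Schmidt step)
    v3 = v3 - (inprod(v3, v2)/inprod(v2, v2))*v2

    # still an eigenvector for lambda = 3, and now orthogonal to v2
    print(3*v3 - N*v3)
    print(inprod(v2, v3))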
The calculation \(3 \uvec{v}_3 - N \uvec{v}_3 = \zerovec\) verifies that our new third vector is still an eigenvector for \(\lambda = 3\text{,}\) while the calculation \(\inprod{\uvec{v}_2}{\uvec{v}_3} = 0\) verifies that our new third vector is orthogonal to the second.
Last step: to be unitary, a matrix must have orthonormal columns. So we normalize each vector as we enter it into a new transition matrix.
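Computing the norms from inprod keeps everything exact; a sketch:

    # squared norms of the eigenvectors; each is a positive rational number
    n1 = QQ(inprod(u1, u1))
    n2 = QQ(inprod(v2, v2))
    n3 = QQ(inprod(v3, v3))

    # build the new transition matrix from normalized eigenvectors
    U = column_matrix(SR, [u1/sqrt(n1), v2/sqrt(n2), v3/sqrt(n3)])

    # U^* U should be the identity, and U^* N U should be diagonal
    print(U.conjugate_transpose()*U)
    print(U.conjugate_transpose()*N*U)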
So our new transition matrix is unitary, as we have verified that \(\adjoint{U} U = I\text{,}\) and also our new transition matrix still diagonalizes. And notice that in our calculation of the diagonal form matrix, we used \(\adjoint{U}\) in place of \(\inv{U}\) — since \(U\) is unitary, the two are equal.