Section B.8 Orthogonal/unitary diagonalization

Subsection B.8.1 Orthogonally diagonalizing a symmetric matrix

Here we will use Sage to carry out the calculations in Example 40.5.1.

First let's load our matrix into Sage.
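
Here is a sketch of what that might look like. The matrix S below is reconstructed from the eigenvalues and eigenvectors quoted later in this subsection, so double-check it against the matrix in Example 40.5.1 before relying on it.

    # symmetric matrix with eigenvalues 2, -2, 0 (reconstructed from the data below)
    S = matrix(QQ, [
        [0, 1, 0, 1],
        [1, 0, 1, 0],
        [0, 1, 0, 1],
        [1, 0, 1, 0],
    ])
    S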

Eigenvalues and eigenvectors.

First we need to carry out the diagonalization procedure (see Subsection 25.4.3).
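
For example, the factored characteristic polynomial shows the eigenvalues and their algebraic multiplicities. Something like:

    # for the matrix S above, this should factor as x^2 * (x - 2) * (x + 2)
    S.charpoly('x').factor()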

First let's analyze λ=2.
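
For example, we can row reduce the matrix 2I−S:

    # one free variable (x4); solutions should be the multiples of (1,1,1,1)
    (2*identity_matrix(4) - S).rref()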

As expected from the algebraic multiplicity of λ=2, only one parameter is required, so we only get one eigenvector. Assigning x4=t leads to eigenvector p1=(1,1,1,1).
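
In Sage, something like:

    p1 = vector([1, 1, 1, 1])
    S*p1, 2*p1    # the two outputs should agree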

Notice that Sp1=2p1, as expected.

Next we'll analyze λ=−2.
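
Again we row reduce, this time with −2I−S:

    # one free variable (x4); solutions should be the multiples of (-1,1,-1,1)
    (-2*identity_matrix(4) - S).rref()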

Again, only one parameter is required, as the algebraic multiplicity of λ=−2 is 1. Assigning x4=t leads to eigenvector p2=(−1,1,−1,1).
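
In Sage, something like:

    p2 = vector([-1, 1, -1, 1])
    S*p2, -2*p2    # the two outputs should agree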

Notice that Sp2=−2p2, as expected.

Finally we analyze λ=0. For this eigenvalue, there is no need to work with λI−S, as solving (0I−S)x=0 is equivalent to solving Sx=0.
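
So we can row reduce S itself, for example:

    # two free variables (x3 and x4)
    S.rref()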

Here the geometric multiplicity is equal to the algebraic multiplicity, so S is indeed diagonalizable. Setting parameters x3=s and x4=t leads to eigenvectors

p3=(−1,0,1,0), p4=(0,−1,0,1).
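
In Sage, something like the following; the second output line is the one discussed below.

    p3 = vector([-1, 0, 1, 0])
    p4 = vector([0, -1, 0, 1])
    print([p3, p4])
    print([S*p3, S*p4])    # both products should be the zero vector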

The second output line verifies that both vectors are in the null space of S, and hence in eigenspace E0(S).

The transition matrix.

We want an orthogonal transition matrix, so let's check which of our eigenvectors are orthogonal to the others. We know that for a symmetric matrix like S, eigenvectors from different eigenspaces should automatically be orthogonal:
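
For example, checking p1 and p2 against the basis vectors of E0(S) found above:

    # all four dot products should be 0
    [p1.dot_product(p3), p1.dot_product(p4), p2.dot_product(p3), p2.dot_product(p4)]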

So both p1 and p2 are orthogonal to the eigenspace E0(S), as predicted by our theory, and it only remains to ensure that we have an orthogonal basis for that 2-dimensional eigenspace, E0(S).
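
Checking within E0(S):

    p3.dot_product(p4)    # should be 0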

Conveniently, this eigenspace basis is already orthogonal, so no need for Gram-Schmidt. However, for a matrix to be orthogonal, its columns need to be orthonormal, so we need to create our transition matrix out of normalized eigenvectors.
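
Here is a sketch of building P, normalizing each eigenvector with Sage's norm method:

    # transition matrix with orthonormal columns
    P = column_matrix([p1/p1.norm(), p2/p2.norm(), p3/p3.norm(), p4/p4.norm()])
    P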

Finally, let's double-check both that P is orthogonal, and that it diagonalizes S.
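
For example:

    print(P.transpose()*P)      # should be the 4x4 identity matrix
    print(P.transpose()*S*P)    # should be diagonal with entries 2, -2, 0, 0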

Yep, it all worked out. And notice that in our calculation of the diagonal form matrix, we used P^T in place of P^(-1); since P is orthogonal, the two are equal.

Subsection B.8.2 Unitarily diagonalizing a normal matrix

Here we will use Sage to carry out the calculations in Example 40.5.4.

Set up.

First, let's create a convenience inner product function to minimize having to type conjugate a lot.
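
Here is one way to do it. The name ip and the convention of conjugating the second argument are choices made here; the example may use a different name or convention.

    # complex inner product: sum of x_j times the conjugate of y_j
    def ip(x, y):
        return sum(a*conjugate(b) for a, b in zip(x, y))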

Now let's load our matrix into Sage. Remember that Sage uses I to represent the imaginary root of x^2 + 1.
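
Here is a sketch of that step. As with S above, the matrix N below is reconstructed from the eigenvalues and eigenvectors quoted later in this subsection, so double-check it against Example 40.5.4.

    # normal matrix with eigenvalues 3*I, 3, 3 (reconstructed from the data below)
    N = matrix(SR, [
        [ 2 + I, 1 - I,  1 + I],
        [ 1 - I, 2 + I, -1 - I],
        [-1 - I, 1 + I,  2 + I],
    ])
    N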

Let's double-check that N is normal. We can use the Sage method conjugate_transpose to compute adjoints. Rather than check entry-by-entry that NN∗ and N∗N are equal, we can subtract the two — if they are equal, their difference should be the zero matrix.
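
For example:

    # if N is normal, this difference should be the 3x3 zero matrix
    N*N.conjugate_transpose() - N.conjugate_transpose()*N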

Great, N is normal, and so the unitary diagonalization procedure should succeed.

Eigenvalues and eigenvectors.

Our first step is to carry out the diagonalization procedure (see Subsection 25.4.3).
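
As before, the characteristic polynomial is a good place to start:

    # for the matrix N above, this should factor as (x - 3*I)*(x - 3)^2
    N.charpoly('x')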

First let's analyze λ=3i.
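
For example:

    # one free variable (x3); solutions should be the multiples of (-I, I, 1)
    (3*I*identity_matrix(3) - N).rref()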

The geometric multiplicity is 1, as expected. Assigning parameter x3=t leads to eigenvector u1=(−i,i,1).
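
In Sage, something like:

    u1 = vector(SR, [-I, I, 1])
    N*u1, 3*I*u1    # the two outputs should agree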

Notice that Nu1=3iu1, as expected.

Now let's analyze λ=3.
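
For example:

    # two free variables (x2 and x3)
    (3*identity_matrix(3) - N).rref()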

Since the geometric multiplicity of this eigenvalue is 2, we have confirmed that this matrix is at least diagonalizable. Assigning parameters x2=s and x3=t leads to eigenvectors

u2=(1,1,0), u3=(i,0,1).
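
In Sage, along with a quick check that both really are eigenvectors for λ=3:

    u2 = vector(SR, [1, 1, 0])
    u3 = vector(SR, [I, 0, 1])
    N*u2 - 3*u2, N*u3 - 3*u3    # both should be the zero vector
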
The transition matrix.

First let's form a transition matrix with the eigenvectors we currently have, to check that it diagonalizes.
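
A sketch, where the name P for this provisional transition matrix is chosen here, not taken from the example:

    P = column_matrix([u1, u2, u3])
    P.inverse()*N*P    # should be diagonal with entries 3*I, 3, 3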

It worked, as expected. But does it unitarily diagonalize?
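
For example:

    P.conjugate_transpose()*P    # not the identity matrix, so P is not unitary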

No, our transition matrix is not unitary. From the theory, we expect at least that the eigenvector u1, which came from the eigenspace E3i(N), should be orthogonal to the other two, which came from the eigenspace E3(N).
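
We can check this with our inner product function:

    ip(u2, u1), ip(u3, u1)    # both should be 0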

But what about the two vectors within E3(N)?
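
Let's check:

    ip(u3, u2)    # nonzero, so u2 and u3 are not orthogonal to each other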

So we'll need to Gram-Schmidt these two. But just these two — we should not include the eigenvector from E3i(N) in the process.
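
Here is a sketch of the Gram-Schmidt step; the names v2 and v3 match the discussion below, with v2 kept equal to u2 and v3 obtained by adjusting u3.

    v2 = u2
    v3 = u3 - (ip(u3, v2)/ip(v2, v2))*v2    # remove the component of u3 along v2
    3*v3 - N*v3, ip(v2, v3)    # both should be zero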

The calculation 3v3−Nv3=0 verifies that our new third vector is still an eigenvector for λ=3, while the calculation ⟨v2,v3⟩=0 verifies that our new third vector is orthogonal to the second.

Last step: to be unitary, a matrix must have orthonormal columns. So we normalize each vector as we enter it into a new transition matrix.
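
A sketch, normalizing with respect to our inner product; the name U matches the discussion below.

    U = column_matrix([u1/sqrt(ip(u1, u1)), v2/sqrt(ip(v2, v2)), v3/sqrt(ip(v3, v3))])
    print(U.conjugate_transpose()*U)      # should be the 3x3 identity matrix
    print(U.conjugate_transpose()*N*U)    # should be diagonal with entries 3*I, 3, 3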

So our new transition matrix is unitary, as we have verified that U^*U=I, and it still diagonalizes N. And notice that in our calculation of the diagonal form matrix, we used U^* in place of U^(-1); since U is unitary, the two are equal.