Section 9.1 Pre-read
Subsection Definitions of some important matrix groups
Definition 9.1.1. General linear group: \(\GL_n(\R)\).
The group of invertible \(n \times n\) matrices with real entries, with multiplication as the group operation.
Remark 9.1.2. For students who have taken AUMAT 220.
The linear part of the terminology stems from the fact that a general linear group consists of all invertible linear (matrix) operators on the vector space \(\R^n\text{.}\) An abstract vector space \(V\) also has an associated general linear group \(\GL(V)\) of invertible linear operators, though this is a group of functions, not of matrices. (Though, as we learn in AUMAT 220, if \(V\) is finite-dimensional and we have made a fixed choice of basis for \(V\text{,}\) then to each linear operator on \(V\) we can associate a matrix.)
If \(A\) is an \(n \times n\) matrix and \(\vec{x}\) is an \(n \times 1\) column vector in \(\R^n\text{,}\) then the result of computing \(A \vec{x}\) will also be an \(n \times 1\) column vector in \(\R^n\text{.}\) If \(A\) is invertible (i.e. if \(A\) is an element in \(\GL_n(\R)\)), then the mapping \(\vec{x} \mapsto A \vec{x}\) can be reversed, which means that “multiplication by \(A\)” defines a permutation of \(\R^n\text{.}\) In other words, \(\GL_n(\R)\) is a subgroup of \(S_{\R^n}\).
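For a small concrete illustration (with the matrix chosen purely as an example), take
\begin{equation*}
A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \text{,} \qquad A^{-1} = \begin{bmatrix} 1 & -1 \\ 0 & 1 \end{bmatrix} \text{.}
\end{equation*}
Multiplication by \(A\) sends \(\begin{bmatrix} x \\ y \end{bmatrix}\) to \(\begin{bmatrix} x + y \\ y \end{bmatrix}\text{,}\) and multiplication by \(A^{-1}\) reverses this, so every vector of \(\R^2\) occurs as the output for exactly one input. This is precisely what it means for “multiplication by \(A\)” to be a permutation of \(\R^2\text{.}\)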
Remark 9.1.3.
There is no need to perform the Subgroup Test to verify that \(\GL_n(\R)\) is a subgroup of \(S_{\R^n}\) — we know from AUMAT 120 that
the product of two invertible matrices is an invertible matrix, and
the inverse of an invertible matrix is an invertible matrix (a quick check of both facts appears below this list).
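As that quick check: if \(A\) and \(B\) are invertible \(n \times n\) matrices, then
\begin{equation*}
(AB)(B^{-1}A^{-1}) = A (B B^{-1}) A^{-1} = A A^{-1} = I
\end{equation*}
(and similarly \((B^{-1}A^{-1})(AB) = I\)), so \(AB\) is invertible with inverse \(B^{-1}A^{-1}\text{;}\) and \(A^{-1}\) is itself invertible, with inverse \(A\text{,}\) since \(A^{-1} A = A A^{-1} = I\text{.}\)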
In this chapter we will study several important subgroups of \(\GL_n(\R)\text{.}\)
Definition 9.1.4. Special linear group: \(\SL_n(\R)\).
The group consisting of those invertible \(n \times n\) matrices that have determinant equal to \(1\text{.}\)
Remark 9.1.5.
The definition of this group via the determinant is an example of a more general pattern that we will encounter again in this course. From AUMAT 120 we know that the determinant of a product is the product of the determinants. This course has given us a more sophisticated way to think of this fact: the determinant is an operation-preserving map from \(\GL_n(\R)\) to the multiplicative group \(\multunits{\R}\) of nonzero real numbers. (Remember that a matrix is invertible precisely when its determinant is nonzero.) And \(\SL_n(\R)\) is defined to consist of precisely those matrices in \(\GL_n(\R)\) that this operation-preserving map sends to the identity element in \(\multunits{\R}\text{.}\)
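For example, this point of view makes it easy to see why \(\SL_n(\R)\) really is a group under matrix multiplication: if \(\det A = \det B = 1\text{,}\) then
\begin{equation*}
\det (AB) = (\det A)(\det B) = 1 \qquad \text{and} \qquad \det (A^{-1}) = \frac{1}{\det A} = 1 \text{,}
\end{equation*}
so products and inverses of determinant-\(1\) matrices again have determinant \(1\text{.}\)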
Moreover, this pattern for operation-preserving maps on groups is one instance of a more widespread phenomenon in abstract algebra. Another instance of the same phenomenon is the fact that the kernel of a linear transformation between vector spaces is always a subspace of the domain space, as we will learn (or have learned, as the case may be) in AUMAT 220.
Definition 9.1.6. Orthogonal group: \(\Or_n(\R)\).
The group consisting of those invertible \(n \times n\) matrices \(A\) that satisfy the condition
\begin{equation*}
A^T A = I
\end{equation*}
(where \(I\) is the \(n \times n\) identity matrix, as usual).
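For a concrete example (with entries chosen just for illustration), the matrix
\begin{equation*}
A = \begin{bmatrix} 3/5 & -4/5 \\ 4/5 & 3/5 \end{bmatrix}
\end{equation*}
satisfies
\begin{equation*}
A^T A = \begin{bmatrix} 3/5 & 4/5 \\ -4/5 & 3/5 \end{bmatrix} \begin{bmatrix} 3/5 & -4/5 \\ 4/5 & 3/5 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \text{,}
\end{equation*}
so \(A\) is an element of \(\Or_2(\R)\text{.}\) Notice that each column of \(A\) has length \(1\) and the two columns have dot product \(0\text{,}\) in line with Fact 9.1.7 below.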
Here are some facts about orthogonal matrices from AUMAT 220. We will not discuss their proofs in this course; it would not be too difficult to verify these on your own, or you can just take them on faith.
Fact 9.1.7.
A square matrix is orthogonal precisely when its columns form an orthonormal set. That is, as vectors in \(\R^n\text{,}\) each column has length one and pairs of columns are orthogonal (have dot product equal to \(0\)).
Fact 9.1.8.
Suppose \(A\) is an orthogonal matrix. Then multiplication by \(A\) against column vectors in \(\R^n\) preserves
dot product (that is, the value of the dot product between the vectors \(A \vec{x}\) and \(A \vec{y}\) is always equal to the value of the dot product between \(\vec{x}\) and \(\vec{y}\));
length (that is, the length of the vector \(A \vec{x}\) is always equal to the value of the length of \(\vec{x}\));
angle (that is, the angle between the vectors \(A \vec{x}\) and \(A \vec{y}\) is always equal to the angle between \(\vec{x}\) and \(\vec{y}\)); and
orthogonality (that is, vectors \(A \vec{x}\) and \(A \vec{y}\) are orthogonal whenever \(\vec{x}\) and \(\vec{y}\) are orthogonal). A short justification of the first item appears after this list.
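Here is that short justification: using the fact that the dot product can be computed as the matrix product \(\vec{x} \cdot \vec{y} = \vec{x}^T \vec{y}\) (a \(1 \times n\) row times an \(n \times 1\) column),
\begin{equation*}
(A \vec{x}) \cdot (A \vec{y}) = (A \vec{x})^T (A \vec{y}) = \vec{x}^T A^T A \vec{y} = \vec{x}^T I \vec{y} = \vec{x} \cdot \vec{y} \text{.}
\end{equation*}
The remaining items follow from this one, since length, angle, and orthogonality in \(\R^n\) are all defined in terms of the dot product.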
Remark 9.1.9.
If we think of \(\Or_n(\R)\) as a subgroup of \(S_{\R^n}\text{,}\) then Fact 9.1.8 severely restricts how elements of \(\Or_n(\R)\) can permute the vectors of \(\R^n\text{:}\) they must preserve all aspects of the geometric relationships between vectors.
Definition 9.1.10. Special orthogonal group: \(\SO_n(\R)\).
The group consisting of those orthogonal \(n \times n\) matrices that have determinant equal to \(1\text{.}\)
Note 9.1.11.
By definition, \(\SO_n(\R)\) is a subgroup of both \(\SL_n(\R)\) and \(\Or_n(\R)\) (and, of course, \(\GL_n(\R)\)). In fact, \(\SO_n(\R)\) is the intersection of \(\SL_n(\R)\) and \(\Or_n(\R)\text{,}\) and we found in Discovery 5.9 that the intersection of two subgroups is again a subgroup.
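For instance, the example matrix given just after Definition 9.1.6 has determinant \((3/5)(3/5) - (-4/5)(4/5) = 1\text{,}\) so it lies in \(\SO_2(\R)\text{,}\) while the orthogonal matrix
\begin{equation*}
\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}
\end{equation*}
has determinant \(-1\text{,}\) so it lies in \(\Or_2(\R)\) but not in \(\SO_2(\R)\text{.}\)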
Subsection Geometry in \(\R^2\)
Recall that
\begin{equation*}
\vec{e}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \text{,} \qquad \vec{e}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}
\end{equation*}
are called the standard basis vectors for \(\R^2\text{.}\) Every vector in \(\R^2\) is naturally a linear combination of these two vectors:
\begin{equation*}
\begin{bmatrix} x \\ y \end{bmatrix} = x \vec{e}_1 + y \vec{e}_2 \text{.}
\end{equation*}
In AUMAT 120, as part of our exploration of eigenvectors and diagonalization, we discovered that the collected results of matrix-times-standard-basis-vector calculations tell us about the matrix:
\begin{equation*}
A \vec{e}_1 = (\text{first column of } A) \text{,} \qquad A \vec{e}_2 = (\text{second column of } A) \text{.}
\end{equation*}
So, for a \(2 \times 2\) matrix \(A\text{,}\) if we know the results of these two calculations, we know the matrix \(A\text{.}\)
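As a concrete instance (with values chosen arbitrarily): if we are told that
\begin{equation*}
A \vec{e}_1 = \begin{bmatrix} 2 \\ 5 \end{bmatrix} \qquad \text{and} \qquad A \vec{e}_2 = \begin{bmatrix} 7 \\ 3 \end{bmatrix} \text{,}
\end{equation*}
then \(A\) must be the matrix whose columns are these two vectors:
\begin{equation*}
A = \begin{bmatrix} 2 & 7 \\ 5 & 3 \end{bmatrix} \text{.}
\end{equation*}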
Remark 9.1.12.
Again, this is a specific instance of a more abstract pattern: in AUMAT 220 we will learn (or have learned) that a linear transformation between vector spaces is completely determined by how it transforms a fixed choice of basis for the domain space.
Our exploration of the orthogonal group will rely on the matrix-times-standard-basis-vector pattern described above. Now, orthogonal matrices also preserve all aspects of the geometry of \(\R^n\) (Fact 9.1.8), so we should pay special attention to the geometric properties of the standard basis. In particular, the standard basis is an orthonormal set: each standard basis vector has length \(1\text{,}\) and pairs of standard basis vectors are orthogonal (i.e. perpendicular). What other vectors have these properties?
The head of a vector in \(\R^2\) of length \(1\) will land on the unit circle when its tail is placed at the origin. And the unit circle is best described by trigonometry: by definition, the point on the unit circle at angle \(\theta\) from the positive \(x\)-axis (counter-clockwise) has coordinates \((\cos \theta, \sin \theta)\text{.}\)
Here is a diagram giving the coordinates of four vectors around the unit circle at intervals of \(\pi/2\text{,}\) so that consecutive pairs are orthogonal.
To convince yourself that the coordinates of the other three points (besides \((\cos \theta, \sin \theta)\)) are labelled correctly, use the trig identities for \(\cos(\theta + \pi/2)\) and \(\sin(\theta + \pi/2)\text{,}\)
and the fact that oppositely directed vectors are negatives of one another.
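Carrying this suggestion out for the point one quarter-turn counter-clockwise from \((\cos \theta, \sin \theta)\text{:}\) the angle-sum identities give
\begin{equation*}
\cos(\theta + \pi/2) = -\sin \theta \qquad \text{and} \qquad \sin(\theta + \pi/2) = \cos \theta \text{,}
\end{equation*}
so that point is \((-\sin \theta, \cos \theta)\text{.}\) Negating this point and the original point \((\cos \theta, \sin \theta)\) then gives the coordinates of the two diametrically opposite points, \((\sin \theta, -\cos \theta)\) and \((-\cos \theta, -\sin \theta)\text{.}\)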