Section 23.4 Examples
Subsection 23.4.1 Instances of complex vector spaces
Bringing complex numbers into the mix lets us convert some familiar real examples into new complex ones.
Example 23.4.1. The vector space \(\C^n\).
We have already discussed this space in Subsection 23.3.1, and just as we view \(\R^n\) as the model example of the abstract concept of real vector space, we view \(\C^n\) as the model example of the abstract concept of complex vector space. This space still has the familiar zero vector \(\zerovec = (0,0,\dotsc,0)\text{,}\) and still has the familiar standard basis consisting of the vectors \(\uvec{e}_j\) that have all components equal to zero except for a single \(1\) in the \(\nth[j]\) position. Just as with \(\R^n\text{,}\) when convenient we will realize vectors in \(\C^n\) as (complex) column matrices.
Example 23.4.2. Spaces of complex matrices (\(\matrixring_{m \times n}(\C)\)).
Just as with \(\C^n\) versus \(\R^n\text{,}\) there is little new in \(\matrixring_{m \times n}(\C)\) versus \(\matrixring_{m \times n}(\R)\text{,}\) except that of course we allow complex entries in the matrices and allow multiplication by complex scalars. The space \(\matrixring_{m \times n}(\C)\) still has the zero matrix as the zero vector, and still has the familiar standard basis consisting of the matrices \(E_{ij}\) that have all entries equal to zero except for a single \(1\) in the \((i,j)\) entry.
Example 23.4.3. Spaces of complex polynomials.
Similar to the real case, we write \(\poly(\C)\) for the space of all complex polynomials, and \(\poly_n(\C)\) for the space of those polynomials of degree \(n\) or less. To help distinguish between the real and complex contexts, we will continue to use \(x\) as the indeterminate in real polynomials (e.g. \(p(x) = 2 x^2 - x + 1\)), but will use \(z\) as the indeterminate in complex polynomials (e.g. \(p(z) = \ci z^2 - (5 - \ci) z + (1 + 2 \ci)\)). Again, there is nothing really new in \(\poly(\C)\) versus \(\poly(\R)\text{,}\) except for the switch to allowing complex coefficients and to considering the indeterminate \(z\) as representing an indeterminate complex number. These spaces still have the familiar zero polynomial \(\zerovec(z) = 0\) as zero vector, and each space \(\poly_n(\C)\) still has the familiar standard basis \(\{1,z,z^2,\dotsc,z^n\}\text{.}\)
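These operations are easy to experiment with in a computer algebra system. The following minimal sketch uses the Python library SymPy (our tooling choice here, not something the text assumes; the second polynomial \(q\) is made up for illustration) to confirm that adding two complex polynomials and scaling by a complex number again produce polynomials in \(\poly_2(\C)\text{.}\)
\begin{verbatim}
from sympy import I, expand, symbols

z = symbols('z')

# The example polynomial p(z) from the text, plus a second (made-up) one.
p = I*z**2 - (5 - I)*z + (1 + 2*I)
q = (2 + I)*z + 3

# Closure under the vector space operations: both results lie in P_2(C).
print(expand(p + q))        # expect I*z**2 + (-3 + 2*I)*z + (4 + 2*I)
print(expand((1 - I)*p))    # expect (1 + I)*z**2 + (-4 + 6*I)*z + (3 + I)
\end{verbatim}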
Subsection 23.4.2 Calculations in complex vector spaces
In this subsection, we carry out some common vector space calculations in the complex context.
Example 23.4.4. Algebraic operations involving \(n\)-dimensional complex vectors.
Just as in \(\R^n\text{,}\) vector addition in \(\C^n\) is performed component-wise. Here is an example in \(\C^4\text{,}\) with vectors realized as column vectors.
\begin{equation*}
\begin{bmatrix}
1 + \ci \\
-\ci \\
3 - 2 \ci \\
6
\end{bmatrix}
+
\begin{bmatrix}
1 - \ci \\
4 \\
-3 + \ci \\
6 + 6 \ci
\end{bmatrix}
=
\begin{bmatrix}
2 \\
4 - \ci \\
- \ci \\
12 + 6 \ci
\end{bmatrix}
\end{equation*}
And just as in \(\R^n\text{,}\) scalar multiplication in \(\C^n\) is performed by multiplying each component by the scalar. Here is an example in \(\C^4\text{.}\)
\begin{equation*}
(3 + 2 \ci)
\begin{bmatrix}
1 \\
-\ci \\
3 - 2 \ci \\
1 - \ci
\end{bmatrix}
=
\begin{bmatrix}
3 + 2 \ci \\
2 - 3 \ci \\
13 \\
5 - \ci
\end{bmatrix}
\end{equation*}
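As a quick check of these two computations, here is a short sketch using SymPy (again our choice of tool; any system with exact complex arithmetic would serve equally well).
\begin{verbatim}
from sympy import I, Matrix

# The vector sum computed above, performed component by component.
u = Matrix([1 + I, -I, 3 - 2*I, 6])
v = Matrix([1 - I, 4, -3 + I, 6 + 6*I])
print(u + v)                      # expect (2, 4 - I, -I, 12 + 6*I)

# The scalar multiple computed above.
w = Matrix([1, -I, 3 - 2*I, 1 - I])
print(((3 + 2*I) * w).expand())   # expect (3 + 2*I, 2 - 3*I, 13, 5 - I)
\end{verbatim}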
Example 23.4.5. Checking for inclusion in a span.
In the space \(\matrixring_2(\C)\text{,}\) is
\begin{equation*}
\begin{bmatrix} 3 + 2 \ci \amp -1 + 5 \ci \\ -7 - 5 \ci \amp 3 + 3 \ci \end{bmatrix}
\end{equation*}
contained in the subspace
\begin{equation*}
\Span\left\{
\begin{bmatrix} 1 + \ci \amp 3 \ci \\ 0 \amp -1 \end{bmatrix},
\begin{bmatrix} 2 + \ci \amp 0 \\ -2 \amp 2 + 3 \ci \end{bmatrix},
\begin{bmatrix} 0 \amp 1 - 2 \ci \\ 5 + 5 \ci \amp -2 \end{bmatrix}
\right\}
\quad \text{?}
\end{equation*}
This question is equivalent to asking whether the matrix equation
\begin{equation*}
k_1 \begin{bmatrix} 1 + \ci \amp 3 \ci \\ 0 \amp -1 \end{bmatrix}
+ k_2 \begin{bmatrix} 2 + \ci \amp 0 \\ -2 \amp 2 + 3 \ci \end{bmatrix}
+ k_3 \begin{bmatrix} 0 \amp 1 - 2 \ci \\ 5 + 5 \ci \amp -2 \end{bmatrix}
= \begin{bmatrix} 3 + 2 \ci \amp -1 + 5 \ci \\ -7 - 5 \ci \amp 3 + 3 \ci \end{bmatrix}
\end{equation*}
has a solution in the (complex) scalars \(k_1\text{,}\) \(k_2\text{,}\) \(k_3\text{.}\) As usual, we combine the linear combination on the left into a single matrix:
\begin{equation*}
\begin{bmatrix} (1 + \ci) k_1 + (2 + \ci) k_2 + k_3 \amp 3 \ci k_1 + (1 - 2 \ci) k_3 \\ -2 k_2 + (5 + 5 \ci) k_3 \amp -k_1 + (2 + 3 \ci) k_2 - 2 k_3 \end{bmatrix}
= \begin{bmatrix} 3 + 2 \ci \amp -1 + 5 \ci \\ -7 - 5 \ci \amp 3 + 3 \ci \end{bmatrix}\text{.}
\end{equation*}
Then we turn this matrix equation into a (complex) linear system:
\begin{equation*}
\begin{sysofeqns}{rcrcrcr}
(1 + \ci) k_1 \amp + \amp (2 + \ci) k_2 \amp + \amp k_3 \amp = \amp 3 + 2 \ci \text{,} \\
3 \ci k_1 \amp + \amp \amp \amp (1 - 2 \ci) k_3 \amp = \amp -1 + 5 \ci \text{,} \\
\amp \amp -2 k_2 \amp + \amp (5 + 5 \ci) k_3 \amp = \amp -7 - 5 \ci \text{,} \\
-k_1 \amp + \amp (2 + 3 \ci) k_2 \amp - \amp 2 k_3 \amp = \amp 3 + 3 \ci \text{.}
\end{sysofeqns}
\end{equation*}
Create an augmented matrix and row reduce:
\begin{equation*}
\begin{abmatrix}{ccc|c}
1 + \ci \amp 2 + \ci \amp 1 \amp 3 + 2 \ci \\
3 \ci \amp 0 \amp 1 - 2 \ci \amp -1 + 5 \ci \\
0 \amp -2 \amp 5 + 5 \ci \amp -7 - 5 \ci \\
-1 \amp 2 + 3 \ci \amp - 2 \amp 3 + 3 \ci
\end{abmatrix}
\qquad \rowredarrow \qquad
\begin{abmatrix}{rrr|r}
1 \amp 0 \amp 0 \amp 1 \\
0 \amp 1 \amp 0 \amp 1 \\
0 \amp 0 \amp 1 \amp -1 \\
0 \amp 0 \amp 0 \amp 0
\end{abmatrix}\text{.}
\end{equation*}
The system has a solution, so the answer is yes, the given vector is in the span of the other three. In particular, from the reduced augmented matrix, we see that
\begin{equation*}
\begin{bmatrix} 3 + 2 \ci \amp -1 + 5 \ci \\ -7 - 5 \ci \amp 3 + 3 \ci \end{bmatrix}
= \begin{bmatrix} 1 + \ci \amp 3 \ci \\ 0 \amp -1 \end{bmatrix}
+ \begin{bmatrix} 2 + \ci \amp 0 \\ -2 \amp 2 + 3 \ci \end{bmatrix}
- \begin{bmatrix} 0 \amp 1 - 2 \ci \\ 5 + 5 \ci \amp -2 \end{bmatrix}\text{.}
\end{equation*}
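Row reducing a complex matrix by hand is error-prone, so it is worth knowing how to check the work by machine. A minimal sketch, assuming SymPy is available:
\begin{verbatim}
from sympy import I, Matrix

# Augmented matrix of the span-membership system from this example.
aug = Matrix([
    [1 + I,  2 + I,    1,        3 + 2*I],
    [3*I,    0,        1 - 2*I,  -1 + 5*I],
    [0,      -2,       5 + 5*I,  -7 - 5*I],
    [-1,     2 + 3*I,  -2,       3 + 3*I],
])

# rref() returns the reduced matrix and the pivot columns; the last
# column of the reduced matrix reads off k1 = 1, k2 = 1, k3 = -1.
reduced, pivots = aug.rref()
print(reduced, pivots)
\end{verbatim}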
Example 23.4.6. Testing for linear dependence/independence.
In the space \(\C^4\text{,}\) are the vectors
\begin{equation*}
\begin{bmatrix} 1 + \ci \\ 3 \ci \\ 0 \\ -1 \end{bmatrix},
\begin{bmatrix} 2 + \ci \\ 0 \\ -2 \\ 2 + 3 \ci \end{bmatrix},
\begin{bmatrix} 0 \\ 1 - 2 \ci \\ 5 + 5 \ci \\ -2 \end{bmatrix},
\begin{bmatrix} 3 + 2 \ci \\ -1 + 5 \ci \\ -7 - 5 \ci \\ 3 + 3 \ci \end{bmatrix}
\end{equation*}
linearly dependent or independent? Applying the Test for Linear Dependence/Independence (Procedure 18.3.1), we set up the vector equation
\begin{equation*}
k_1 \begin{bmatrix} 1 + \ci \\ 3 \ci \\ 0 \\ -1 \end{bmatrix}
+ k_2 \begin{bmatrix} 2 + \ci \\ 0 \\ -2 \\ 2 + 3 \ci \end{bmatrix}
+ k_3 \begin{bmatrix} 0 \\ 1 - 2 \ci \\ 5 + 5 \ci \\ -2 \end{bmatrix}
+ k_4 \begin{bmatrix} 3 + 2 \ci \\ -1 + 5 \ci \\ -7 - 5 \ci \\ 3 + 3 \ci \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}\text{.}
\end{equation*}
As usual, we combine the linear combination on the left into a single vector:
\begin{equation*}
\begin{bmatrix}
(1 + \ci) k_1 + (2 + \ci) k_2 + k_3 + (3 + 2 \ci) k_4 \\
3 \ci k_1 + (1 - 2 \ci) k_3 + (-1 + 5 \ci) k_4 \\
-2 k_2 + (5 + 5 \ci) k_3 + (-7 - 5 \ci) k_4 \\
-k_1 + (2 + 3 \ci) k_2 - 2 k_3 + (3 + 3 \ci) k_4
\end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}\text{.}
\end{equation*}
Then we turn this vector equation into a homogeneous (complex) linear system:
\begin{equation*}
\begin{sysofeqns}{rcrcrcrcr}
(1 + \ci) k_1 \amp + \amp (2 + \ci) k_2 \amp + \amp k_3 \amp + \amp (3 + 2 \ci) k_4 \amp = \amp 0 \text{,} \\
3 \ci k_1 \amp + \amp \amp \amp (1 - 2 \ci) k_3 \amp + \amp (-1 + 5 \ci) k_4 \amp = \amp 0 \text{,} \\
\amp \amp -2 k_2 \amp + \amp (5 + 5 \ci) k_3 \amp + \amp (-7 - 5 \ci) k_4 \amp = \amp 0 \text{,} \\
-k_1 \amp + \amp (2 + 3 \ci) k_2 \amp - \amp 2 k_3 \amp + \amp (3 + 3 \ci) k_4 \amp = \amp 0 \text{.}
\end{sysofeqns}
\end{equation*}
Since the system is homogeneous, we can solve by reducing the coefficient matrix:
\begin{equation*}
\begin{bmatrix}
1 + \ci \amp 2 + \ci \amp 1 \amp 3 + 2 \ci \\
3 \ci \amp 0 \amp 1 - 2 \ci \amp -1 + 5 \ci \\
0 \amp -2 \amp 5 + 5 \ci \amp -7 - 5 \ci \\
-1 \amp 2 + 3 \ci \amp - 2 \amp 3 + 3 \ci
\end{bmatrix}
\qquad \rowredarrow \qquad
\begin{bmatrix}
1 \amp 0 \amp 0 \amp 1 \\
0 \amp 1 \amp 0 \amp 1 \\
0 \amp 0 \amp 1 \amp -1 \\
0 \amp 0 \amp 0 \amp 0
\end{bmatrix}\text{.}
\end{equation*}
Since there is no leading one in the fourth column, solving the system requires a parameter, hence there are nontrivial solutions. Therefore, the set of vectors is linearly dependent. In particular, setting the parameter \(k_4 = 1\) in the reduced system forces \(k_1 = -1\text{,}\) \(k_2 = -1\text{,}\) \(k_3 = 1\text{,}\) which rearranges to a dependence relation expressing the fourth vector in terms of the first three:
\begin{equation*}
\begin{bmatrix} 3 + 2 \ci \\ -1 + 5 \ci \\ -7 - 5 \ci \\ 3 + 3 \ci \end{bmatrix}
= \begin{bmatrix} 1 + \ci \\ 3 \ci \\ 0 \\ -1 \end{bmatrix}
+ \begin{bmatrix} 2 + \ci \\ 0 \\ -2 \\ 2 + 3 \ci \end{bmatrix}
- \begin{bmatrix} 0 \\ 1 - 2 \ci \\ 5 + 5 \ci \\ -2 \end{bmatrix}\text{.}
\end{equation*}
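The same computation can be packaged as a null space calculation. Here is a sketch, again assuming SymPy: a nonzero vector in the null space of the matrix whose columns are the four given vectors encodes exactly the dependence relation found above.
\begin{verbatim}
from sympy import I, Matrix

# Columns are the four vectors from this example.
A = Matrix([
    [1 + I,  2 + I,    0,        3 + 2*I],
    [3*I,    0,        1 - 2*I,  -1 + 5*I],
    [0,      -2,       5 + 5*I,  -7 - 5*I],
    [-1,     2 + 3*I,  -2,       3 + 3*I],
])

# A nontrivial null space vector (k1, k2, k3, k4) is a dependence relation.
print(A.nullspace())   # expect one basis vector: (-1, -1, 1, 1)
\end{verbatim}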
Remark 23.4.7.
You may have noticed that Example 23.4.5 and Example 23.4.6 are essentially the same example. Their similarity can be explained by the following two observations.
- The vectors in Example 23.4.6 are the coordinate vectors of the vectors in Example 23.4.5, with respect to the standard basis of \(\matrixring_2(\C)\text{.}\)
- It is possible to rephrase the definition of linear dependence in terms of span: a set of vectors is linearly dependent precisely when one of the vectors is in the span of the others.
Example 23.4.8. Determining a basis for a null space.
Suppose we would like to determine a basis for the null space of the complex matrix
\begin{equation*}
A = \begin{bmatrix}
1 + \ci \amp -1 + 5 \ci \amp 1 - \ci \amp 2 + 4 \ci \\
-2 + \ci \amp -7 - 4 \ci \amp 2 + 2 \ci \amp -9 + 3 \ci \\
-2 - \ci \amp -1 - 8 \ci \amp -2 + \ci \amp -1 - 5 \ci \\
5 \ci \amp -15 + 10 \ci \amp 2 - 2 \ci \amp 5 + 13 \ci
\end{bmatrix}\text{.}
\end{equation*}
As usual, row reduce to get
\begin{equation*}
\begin{bmatrix}
1 \amp 2 + 3 \ci \amp 0 \amp 1 - \ci \\
0 \amp 0 \amp 1 \amp -2 + 2 \ci \\
0 \amp 0 \amp 0 \amp 0 \\
0 \amp 0 \amp 0 \amp 0
\end{bmatrix}\text{.}
\end{equation*}
You may recognize this as the reduced matrix from Example 11.3.3 (except with an extra row of zeros), where we solved the corresponding homogeneous system by setting parameters for free variables \(x_2\) and \(x_4\text{,}\) obtaining the general solution in parametric form as
\begin{align*}
x_1 \amp = (-2 - 3 \ci) s + (-1 + \ci) t \text{,} \amp
x_2 \amp = s \text{,} \amp
x_3 \amp = (2 - 2 \ci) t \text{,} \amp
x_4 \amp = t \text{.}
\end{align*}
To describe this solution space in terms of a basis, use the variables as coordinates in a vector, and then separate the parametric expressions by parameter in order to express the general solution vector as a linear combination:
\begin{equation*}
\uvec{x}
= \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}
= \begin{bmatrix}
(-2 - 3 \ci) s + (-1 + \ci) t \\
s \\
(2 - 2 \ci) t \\
t
\end{bmatrix}
= s \begin{bmatrix}
-2 - 3 \ci \\
1 \\
0 \\
0
\end{bmatrix}
+ t \begin{bmatrix}
-1 + \ci \\
0 \\
2 - 2 \ci \\
1
\end{bmatrix}\text{.}
\end{equation*}
Since every solution vector can be expressed as a linear combination of these two particular solution vectors, the null space of matrix \(A\) is
\begin{equation*}
\Span \left\{
\begin{bmatrix}
-2 - 3 \ci \\
1 \\
0 \\
0
\end{bmatrix},
\begin{bmatrix}
-1 + \ci \\
0 \\
2 - 2 \ci \\
1
\end{bmatrix}
\right\}\text{.}
\end{equation*}
The row reduction process guarantees that the spanning vectors we obtain from solving are linearly independent, so a basis for the null space of \(A\) is
\begin{equation*}
\basisfont{B}_{\mathrm{null}}
= \left\{
\begin{bmatrix}
-2 - 3 \ci \\
1 \\
0 \\
0
\end{bmatrix},
\begin{bmatrix}
-1 + \ci \\
0 \\
2 - 2 \ci \\
1
\end{bmatrix}
\right\}\text{.}
\end{equation*}
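This entire example can also be verified in one call. SymPy's nullspace() builds a basis exactly the way we did by hand, producing one basis vector per free variable of the reduced system (once more, the choice of SymPy is ours, not the text's).
\begin{verbatim}
from sympy import I, Matrix

# The matrix A from this example.
A = Matrix([
    [1 + I,   -1 + 5*I,    1 - I,    2 + 4*I],
    [-2 + I,  -7 - 4*I,    2 + 2*I,  -9 + 3*I],
    [-2 - I,  -1 - 8*I,    -2 + I,   -1 - 5*I],
    [5*I,     -15 + 10*I,  2 - 2*I,  5 + 13*I],
])

# Expect the two basis vectors found above:
# (-2 - 3*I, 1, 0, 0) and (-1 + I, 0, 2 - 2*I, 1).
for vec in A.nullspace():
    print(vec.T)
\end{verbatim}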

