Section 23.4 Examples
In this section, we carry out some common vector space calculations in the complex context.
Example 23.4.1. Algebraic operations involving \(n\)-dimensional complex vectors.
Just as in \(\R^n\text{,}\) vector addition in \(\C^n\) is performed component-wise. Here is an example in \(\C^4\text{,}\) with vectors realized as column vectors.
\begin{equation*}
\begin{bmatrix}
1 + \ci \\
-\ci \\
3 - 2 \ci \\
6
\end{bmatrix}
+
\begin{bmatrix}
1 - \ci \\
4 \\
-3 + \ci \\
6 + 6 \ci
\end{bmatrix}
=
\begin{bmatrix}
2 \\
4 - \ci \\
- \ci \\
12 + 6 \ci
\end{bmatrix}
\end{equation*}
And just as in \(\R^n\text{,}\) scalar multiplication in \(\C^n\) is performed by multiplying each component by the scalar. Here is an example in \(\C^4\text{.}\)
\begin{equation*}
(3 + 2 \ci)
\begin{bmatrix}
1 \\
-\ci \\
3 - 2 \ci \\
1 - \ci
\end{bmatrix}
=
\begin{bmatrix}
3 + 2 \ci \\
2 - 3 \ci \\
13 \\
5 - \ci
\end{bmatrix}
\end{equation*}
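These component-wise operations are easy to confirm numerically. The following is a minimal sketch (not part of the text) using NumPy, which supports complex entries directly; Python writes the imaginary unit \(\ci\) as `1j`.

```python
import numpy as np

# The two vectors in C^4 from the addition example
u = np.array([1 + 1j, -1j, 3 - 2j, 6])
v = np.array([1 - 1j, 4, -3 + 1j, 6 + 6j])

# Component-wise addition, as in the first display:
# result is (2, 4 - i, -i, 12 + 6i)
print(u + v)

# Scalar multiplication by the complex scalar 3 + 2i:
# result is (3 + 2i, 2 - 3i, 13, 5 - i)
w = np.array([1, -1j, 3 - 2j, 1 - 1j])
print((3 + 2j) * w)
```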
Example 23.4.2. Checking for inclusion in a span.
In the space \(\matrixring_2(\C)\text{,}\) is
\begin{equation*}
\begin{bmatrix} 3 + 2 \ci \amp -1 + 5 \ci \\ -7 - 5 \ci \amp 3 + 3 \ci \end{bmatrix}
\end{equation*}
contained in the subspace
\begin{equation*}
\Span\left\{
\begin{bmatrix} 1 + \ci \amp 3 \ci \\ 0 \amp -1 \end{bmatrix},
\begin{bmatrix} 2 + \ci \amp 0 \\ -2 \amp 2 + 3 \ci \end{bmatrix},
\begin{bmatrix} 0 \amp 1 - 2 \ci \\ 5 + 5 \ci \amp -2 \end{bmatrix}
\right\}
\quad \text{?}
\end{equation*}
This question is equivalent to asking whether the matrix equation
\begin{equation*}
k_1 \begin{bmatrix} 1 + \ci \amp 3 \ci \\ 0 \amp -1 \end{bmatrix}
+ k_2 \begin{bmatrix} 2 + \ci \amp 0 \\ -2 \amp 2 + 3 \ci \end{bmatrix}
+ k_3 \begin{bmatrix} 0 \amp 1 - 2 \ci \\ 5 + 5 \ci \amp -2 \end{bmatrix}
= \begin{bmatrix} 3 + 2 \ci \amp -1 + 5 \ci \\ -7 - 5 \ci \amp 3 + 3 \ci \end{bmatrix}
\end{equation*}
has a solution in the (complex) scalars \(k_1\text{,}\) \(k_2\text{,}\) \(k_3\text{.}\) As usual, we combine the linear combination on the left into a single matrix:
\begin{equation*}
\begin{bmatrix} (1 + \ci) k_1 + (2 + \ci) k_2 + k_3 \amp 3 \ci k_1 + (1 - 2 \ci) k_3 \\ -2 k_2 + (5 + 5 \ci) k_3 \amp -k_1 + (2 + 3 \ci) k_2 - 2 k_3 \end{bmatrix}
= \begin{bmatrix} 3 + 2 \ci \amp -1 + 5 \ci \\ -7 - 5 \ci \amp 3 + 3 \ci \end{bmatrix}\text{.}
\end{equation*}
Then we turn this matrix equation into a (complex) linear system:
\begin{equation*}
\left\{\begin{array}{rcrcrcr}
(1 + \ci) k_1 \amp + \amp (2 + \ci) k_2 \amp + \amp k_3 \amp = \amp 3 + 2 \ci \text{,} \\
3 \ci k_1 \amp + \amp \amp \amp (1 - 2 \ci) k_3 \amp = \amp -1 + 5 \ci \text{,} \\
\amp \amp -2 k_2 \amp + \amp (5 + 5 \ci) k_3 \amp = \amp -7 - 5 \ci \text{,} \\
-k_1 \amp + \amp (2 + 3 \ci) k_2 \amp - \amp 2 k_3 \amp = \amp 3 + 3 \ci \text{.}
\end{array}\right.
\end{equation*}
Create an augmented matrix and row reduce:
\begin{equation*}
\left[\begin{array}{ccc|c}
1 + \ci \amp 2 + \ci \amp 1 \amp 3 + 2 \ci \\
3 \ci \amp 0 \amp 1 - 2 \ci \amp -1 + 5 \ci \\
0 \amp -2 \amp 5 + 5 \ci \amp -7 - 5 \ci \\
-1 \amp 2 + 3 \ci \amp - 2 \amp 3 + 3 \ci
\end{array}\right]
\qquad \rowredarrow \qquad
\left[\begin{array}{rrr|r}
1 \amp 0 \amp 0 \amp 1 \\
0 \amp 1 \amp 0 \amp 1 \\
0 \amp 0 \amp 1 \amp -1 \\
0 \amp 0 \amp 0 \amp 0
\end{array}\right]\text{.}
\end{equation*}
The system is consistent, so the answer is yes: the given matrix is in the span of the other three. In particular, the reduced augmented matrix gives \(k_1 = 1\text{,}\) \(k_2 = 1\text{,}\) \(k_3 = -1\text{,}\) so that
\begin{equation*}
\begin{bmatrix} 3 + 2 \ci \amp -1 + 5 \ci \\ -7 - 5 \ci \amp 3 + 3 \ci \end{bmatrix}
= \begin{bmatrix} 1 + \ci \amp 3 \ci \\ 0 \amp -1 \end{bmatrix}
+ \begin{bmatrix} 2 + \ci \amp 0 \\ -2 \amp 2 + 3 \ci \end{bmatrix}
- \begin{bmatrix} 0 \amp 1 - 2 \ci \\ 5 + 5 \ci \amp -2 \end{bmatrix}\text{.}
\end{equation*}
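The row reduction above can be double-checked numerically. Here is a sketch (assuming NumPy, and flattening each \(2 \times 2\) matrix into a vector in \(\C^4\) so that the spanning matrices become the columns of a coefficient matrix) that solves the system by least squares and confirms it is consistent.

```python
import numpy as np

# Columns are the three spanning matrices, each flattened to a vector in C^4
M = np.array([
    [1 + 1j, 2 + 1j, 0],
    [3j,     0,      1 - 2j],
    [0,      -2,     5 + 5j],
    [-1,     2 + 3j, -2],
])
# The target matrix, flattened the same way
b = np.array([3 + 2j, -1 + 5j, -7 - 5j, 3 + 3j])

# Least-squares solution; if the residual is (essentially) zero,
# b is an exact linear combination of the columns, i.e. in the span
k, residual, rank, _ = np.linalg.lstsq(M, b, rcond=None)
print(k)                      # coefficients k1, k2, k3 (here 1, 1, -1)
print(np.allclose(M @ k, b))  # True: the system is consistent
```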
Example 23.4.3. Testing for linear dependence/independence.
In the space \(\C^4\text{,}\) are the vectors
\begin{equation*}
\begin{bmatrix} 1 + \ci \\ 3 \ci \\ 0 \\ -1 \end{bmatrix},
\begin{bmatrix} 2 + \ci \\ 0 \\ -2 \\ 2 + 3 \ci \end{bmatrix},
\begin{bmatrix} 0 \\ 1 - 2 \ci \\ 5 + 5 \ci \\ -2 \end{bmatrix},
\begin{bmatrix} 3 + 2 \ci \\ -1 + 5 \ci \\ -7 - 5 \ci \\ 3 + 3 \ci \end{bmatrix}
\end{equation*}
linearly dependent or independent? Applying the Test for Linear Dependence/Independence, we set up the vector equation
\begin{equation*}
k_1 \begin{bmatrix} 1 + \ci \\ 3 \ci \\ 0 \\ -1 \end{bmatrix}
+ k_2 \begin{bmatrix} 2 + \ci \\ 0 \\ -2 \\ 2 + 3 \ci \end{bmatrix}
+ k_3 \begin{bmatrix} 0 \\ 1 - 2 \ci \\ 5 + 5 \ci \\ -2 \end{bmatrix}
+ k_4 \begin{bmatrix} 3 + 2 \ci \\ -1 + 5 \ci \\ -7 - 5 \ci \\ 3 + 3 \ci \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}\text{.}
\end{equation*}
As usual, we combine the linear combination on the left into a single vector:
\begin{equation*}
\begin{bmatrix}
(1 + \ci) k_1 + (2 + \ci) k_2 + k_3 + (3 + 2 \ci) k_4 \\
3 \ci k_1 + (1 - 2 \ci) k_3 + (-1 + 5 \ci) k_4 \\
-2 k_2 + (5 + 5 \ci) k_3 + (-7 - 5 \ci) k_4 \\
-k_1 + (2 + 3 \ci) k_2 - 2 k_3 + (3 + 3 \ci) k_4
\end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}\text{.}
\end{equation*}
Then we turn this vector equation into a homogeneous (complex) linear system:
\begin{equation*}
\left\{\begin{array}{rcrcrcrcr}
(1 + \ci) k_1 \amp + \amp (2 + \ci) k_2 \amp + \amp k_3 \amp + \amp (3 + 2 \ci) k_4 \amp = \amp 0 \text{,} \\
3 \ci k_1 \amp + \amp \amp \amp (1 - 2 \ci) k_3 \amp + \amp (-1 + 5 \ci) k_4 \amp = \amp 0 \text{,} \\
\amp \amp -2 k_2 \amp + \amp (5 + 5 \ci) k_3 \amp + \amp (-7 - 5 \ci) k_4 \amp = \amp 0 \text{,} \\
-k_1 \amp + \amp (2 + 3 \ci) k_2 \amp - \amp 2 k_3 \amp + \amp (3 + 3 \ci) k_4 \amp = \amp 0 \text{.}
\end{array}\right.
\end{equation*}
Since the system is homogeneous, we can solve by reducing the coefficient matrix:
\begin{equation*}
\begin{bmatrix}
1 + \ci \amp 2 + \ci \amp 1 \amp 3 + 2 \ci \\
3 \ci \amp 0 \amp 1 - 2 \ci \amp -1 + 5 \ci \\
0 \amp -2 \amp 5 + 5 \ci \amp -7 - 5 \ci \\
-1 \amp 2 + 3 \ci \amp - 2 \amp 3 + 3 \ci
\end{bmatrix}
\qquad \rowredarrow \qquad
\left[\begin{array}{rrrr}
1 \amp 0 \amp 0 \amp 1 \\
0 \amp 1 \amp 0 \amp 1 \\
0 \amp 0 \amp 1 \amp -1 \\
0 \amp 0 \amp 0 \amp 0
\end{array}\right]\text{.}
\end{equation*}
Since there is no leading one in the fourth column, solving the system requires a parameter, hence there are nontrivial solutions. Therefore, the set of vectors is linearly dependent. In particular, the fourth column provides a dependence relation amongst the four vectors:
\begin{equation*}
\begin{bmatrix} 3 + 2 \ci \\ -1 + 5 \ci \\ -7 - 5 \ci \\ 3 + 3 \ci \end{bmatrix}
= \begin{bmatrix} 1 + \ci \\ 3 \ci \\ 0 \\ -1 \end{bmatrix}
+ \begin{bmatrix} 2 + \ci \\ 0 \\ -2 \\ 2 + 3 \ci \end{bmatrix}
- \begin{bmatrix} 0 \\ 1 - 2 \ci \\ 5 + 5 \ci \\ -2 \end{bmatrix}\text{.}
\end{equation*}
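Linear dependence can also be detected by computing the rank of the matrix whose columns are the given vectors: the set is dependent exactly when the rank is less than the number of vectors. A sketch (assuming NumPy) for the four vectors of this example:

```python
import numpy as np

# The four vectors of the example as columns of a 4x4 complex matrix
V = np.array([
    [1 + 1j, 2 + 1j, 0,      3 + 2j],
    [3j,     0,      1 - 2j, -1 + 5j],
    [0,      -2,     5 + 5j, -7 - 5j],
    [-1,     2 + 3j, -2,     3 + 3j],
])

# Rank 3 < 4 vectors, so the set is linearly dependent
print(np.linalg.matrix_rank(V))

# Verify the dependence relation v4 = v1 + v2 - v3 from the example
v1, v2, v3, v4 = V.T
print(np.allclose(v4, v1 + v2 - v3))  # True
```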
Example 23.4.5. Determining a basis for a null space.
Suppose we would like to determine a basis for the null space of the complex matrix
\begin{equation*}
A = \begin{bmatrix}
1 + \ci \amp -1 + 5 \ci \amp 1 - \ci \amp 2 + 4 \ci \\
-2 + \ci \amp -7 - 4 \ci \amp 2 + 2 \ci \amp -9 + 3 \ci \\
-2 - \ci \amp -1 - 8 \ci \amp -2 + \ci \amp -1 - 5 \ci \\
5 \ci \amp -15 + 10 \ci \amp 2 - 2 \ci \amp 5 + 13 \ci
\end{bmatrix}\text{.}
\end{equation*}
As usual, row reduce to get
\begin{equation*}
\begin{bmatrix}
1 \amp 2 + 3 \ci \amp 0 \amp 1 - \ci \\
0 \amp 0 \amp 1 \amp -2 + 2 \ci \\
0 \amp 0 \amp 0 \amp 0 \\
0 \amp 0 \amp 0 \amp 0
\end{bmatrix}\text{.}
\end{equation*}
You may recognize this as the reduced matrix from Example 11.3.3 (except with an extra row of zeros), where we solved the corresponding homogeneous system by setting parameters for free variables \(x_2\) and \(x_4\text{,}\) obtaining the general solution in parametric form as
\begin{align*}
x_1 \amp = (-2 - 3 \ci) s + (-1 + \ci) t \text{,} \amp
x_2 \amp = s \text{,} \amp
x_3 \amp = (2 - 2 \ci) t \text{,} \amp
x_4 \amp = t \text{.}
\end{align*}
To describe this solution space in terms of a basis, use the variables as coordinates in a vector, and then separate the parametric expressions by parameter in order to express the general solution vector as a linear combination:
\begin{equation*}
\uvec{x}
= \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}
= \begin{bmatrix}
(-2 - 3 \ci) s + (-1 + \ci) t \\
s \\
(2 - 2 \ci) t \\
t
\end{bmatrix}
= s \begin{bmatrix}
-2 - 3 \ci \\
1 \\
0 \\
0
\end{bmatrix}
+ t \begin{bmatrix}
-1 + \ci \\
0 \\
2 - 2 \ci \\
1
\end{bmatrix}\text{.}
\end{equation*}
Since every solution vector can be expressed as a linear combination of these two particular solution vectors, the null space of matrix \(A\) is
\begin{equation*}
\Span \left\{
\begin{bmatrix}
-2 - 3 \ci \\
1 \\
0 \\
0
\end{bmatrix},
\begin{bmatrix}
-1 + \ci \\
0 \\
2 - 2 \ci \\
1
\end{bmatrix}
\right\}\text{.}
\end{equation*}
The row reduction process guarantees that the spanning vectors we obtain from solving are linearly independent, so a basis for the null space of \(A\) is
\begin{equation*}
\basisfont{B}_{\mathrm{null}}
= \left\{
\begin{bmatrix}
-2 - 3 \ci \\
1 \\
0 \\
0
\end{bmatrix},
\begin{bmatrix}
-1 + \ci \\
0 \\
2 - 2 \ci \\
1
\end{bmatrix}
\right\}\text{.}
\end{equation*}
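The basis computed above can be verified numerically: each basis vector should be sent to the zero vector by \(A\text{,}\) and by rank-nullity the rank of \(A\) should equal \(4 - 2 = 2\text{.}\) A sketch assuming NumPy:

```python
import numpy as np

# The complex matrix A from the example
A = np.array([
    [1 + 1j,  -1 + 5j,   1 - 1j,  2 + 4j],
    [-2 + 1j, -7 - 4j,   2 + 2j,  -9 + 3j],
    [-2 - 1j, -1 - 8j,   -2 + 1j, -1 - 5j],
    [5j,      -15 + 10j, 2 - 2j,  5 + 13j],
])

# The two basis vectors read off from the parametric solution
b1 = np.array([-2 - 3j, 1, 0, 0])
b2 = np.array([-1 + 1j, 0, 2 - 2j, 1])

# Each basis vector lies in the null space: A b = 0
print(np.allclose(A @ b1, 0), np.allclose(A @ b2, 0))  # True True

# rank(A) = 2, so the null space is 2-dimensional (rank-nullity)
print(np.linalg.matrix_rank(A))  # 2
```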