The dimensions (in order) are \(3 \times 3\text{,}\) \(5 \times 3\text{,}\) \(3 \times 4\text{,}\) and \(4 \times 4\text{.}\) Only the first and last matrices are square.
The first matrix from Exercise 3 (call it \(A\)) is square, and has main diagonal \(a_{11} = -5\text{,}\) \(a_{22} = -4\text{,}\) and \(a_{33} = 2\text{:}\)
The last matrix from Exercise 3 (call it \(D\)) is square, and has main diagonal \(d_{11} = 2\text{,}\) \(d_{22} = 9\text{,}\) \(d_{33} = -1\text{,}\) and \(d_{44} = 7\text{:}\)
As usual, let \(i\) represent the row index and \(j\) represent the column index of each entry in a matrix. Create a \(6 \times 6\) matrix that has the following properties:
All entries with \(i \gt j\) are zero, and all other entries are nonzero.
Consider the relationship between this matrix multiplication task, the final matrix multiplication task in Exercise 29, and the three matrix products you have already computed in this exercise.
Notice. Here we have multiplied two matrices, neither of which is a zero matrix, yet the result is a zero matrix. This confirms that for matrices, \(A B = \zerovec\) does NOT imply that at least one of \(A\) or \(B\) must be zero.
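For example (with matrices unrelated to this exercise), one can check directly that
\begin{equation*}
\begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix}
\begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix}
=
\begin{bmatrix} 0 \amp 0 \\ 0 \amp 0 \end{bmatrix}\text{,}
\end{equation*}
even though neither factor is a zero matrix.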
The first factor in the matrix product is \(2 \times 1\) and the second factor is \(1 \times 3\text{,}\) so the product result should be a \(2 \times 3\) matrix:
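In general, a product of this shape works out to
\begin{equation*}
\begin{bmatrix} a \\ b \end{bmatrix}
\begin{bmatrix} x \amp y \amp z \end{bmatrix}
=
\begin{bmatrix} a x \amp a y \amp a z \\ b x \amp b y \amp b z \end{bmatrix}\text{,}
\end{equation*}
so that each row of the result is a scalar multiple of the \(1 \times 3\) factor.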
You should notice a simple pattern in the result due to the "form" of these matrices. Will the same pattern hold for \(2 \times 2\) matrices of the same form? For \(4 \times 4\) matrices? For \(n \times n\) matrices?
Undefined. We cannot take powers of a matrix that is not square, as there would be a mismatch between the number of columns of the first factor and the number of rows of the second factor.
Similar to Exercise 37, you should notice a simple pattern in the result due to the "form" of this matrix. Will the same pattern hold for a \(2 \times 2\) matrix of the same form? For a \(4 \times 4\) matrix? For an \(n \times n\) matrix?
Undefined. We computed that initial matrix product in Exercise 44 and the result had dimensions \(2 \times 2\text{,}\) which is not compatible with addition to a \(2 \times 3\) matrix.
We can reduce the number of matrix products we need to compute by factoring the common \(X\) out of the first two terms, but we must be careful to factor it in the correct "direction":
\begin{equation*}
X^2 + A X - B^2 = (X + A) X - B^2 \text{.}
\end{equation*}
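Factoring \(X\) out on the left instead would produce a different expression, since matrix multiplication is not commutative:
\begin{equation*}
X (X + A) - B^2 = X^2 + X A - B^2 \text{,}
\end{equation*}
and in general \(X A \neq A X\text{.}\)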
In each case, determine the dimensions of the product matrix \(A B\text{,}\) or state why the product matrix does not exist, based on the stated dimensions of the individual factor matrices \(A\) and \(B\text{.}\)
Interpreting matrix equality as a system of equations.
In each case, use the definition of equal matrices to determine the variable values that make the matrix equality true. In any equality where \(\zerovec\) appears, assume it represents a zero matrix of appropriate dimensions.
\(\displaystyle
\begin{bmatrix}
b + 3 c \amp c + 3 b - 2 a \\
a + 4 c \amp a - 4 b - 3 c
\end{bmatrix}
=
\begin{abmatrix}{rr}
-5 \amp 3 \\
-9 \amp 0
\end{abmatrix}\)
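Comparing corresponding entries, the definition of equal matrices makes this matrix equality equivalent to the system of equations
\begin{align*}
b + 3 c \amp = -5 \text{,} \amp
c + 3 b - 2 a \amp = 3 \text{,} \\
a + 4 c \amp = -9 \text{,} \amp
a - 4 b - 3 c \amp = 0 \text{.}
\end{align*}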
This system is consistent with one unique solution \(k_1 = -8\text{,}\) \(k_2 = 3\text{.}\) You may test that this is the correct solution by recomputing the matrix algebra on the left-hand side of the original matrix equality with those values in place of \(k_1\) and \(k_2\text{.}\)
Careful. It is tempting to attempt to solve the system of equations directly, without using an augmented matrix, as we clearly have \(c_2 = -5\) from the first equation, and then we can use the third equation to solve \(c_1 = 0\text{.}\) But if you substitute those two values into the left-hand side of the original matrix equality and recompute the matrix algebra, you will find that the matrix equality is not true for these values. The reason for this is that the original matrix equality leads to a system of three equations, and we cannot ignore any one equation in that system.
In the context of the original matrix equality, the \(\zerovec\) symbol on the right-hand side should represent the \(2 \times 2\) zero matrix, and so our simplified matrix equality is now
This system is consistent (as all homogeneous systems are) but the solution requires a parameter, so in fact there are an infinite number of combinations of scalar values for \(k_1,k_2,k_3,k_4\) that make the original matrix equality true. In particular, if we assign parameter \(k_4 = t\text{,}\) the general solution is
\begin{align*}
k_1 \amp = -4 t \text{,} \amp
k_2 \amp = -7 t \text{,} \amp
k_3 \amp = 3 t \text{,} \amp
k_4 \amp = t \text{.}
\end{align*}
Careful. We may have been misled by the zero matrix on the right-hand side of the original matrix equality, and decided that the solution must be to set each of \(k_1,k_2,k_3,k_4\) to \(0\text{.}\) While that is one possible solution to the matrix equation (which we would call the trivial solution), by re-interpreting the matrix equation as a system of (ordinary) equations, we were able to arrive at the complete collection of all possible solutions to the matrix equation. So while it is possible to make the matrix equality true by setting each of \(k_1,k_2,k_3,k_4\) to \(0\text{,}\) it is also possible, for example, by setting
In the context of the original matrix equality, the \(\zerovec\) symbol on the right-hand side should represent the \(3 \times 2\) zero matrix, and so our simplified matrix equality is now
This system is consistent (as all homogeneous systems are) with one unique solution, the trivial one: \(c_1 = 0\text{,}\) \(c_2 = 0\text{,}\) and \(c_3 = 0\text{.}\)
Careful. We may have been misled by the zero matrix on the right-hand side of the original matrix equality, and prematurely jumped to the conclusion that the only solution must be to set each of \(c_1,c_2,c_3\) to \(0\) (the trivial solution). While that turned out to be the case this time, it is important that we went through the full process of re-interpreting the matrix equation as a system of (ordinary) equations so that we could confirm that the trivial solution was indeed the only solution.
\begin{gather*}
5 \utrans{X} + A = \zerovec \\
5 \utrans{X} = - A \\
\utrans{X} = - \tfrac{1}{5} A \text{.}
\end{gather*}
The final algebraic step is to "undo" the transpose that has been applied to \(X\text{,}\) but by Rule 5.a of Proposition 4.5.1, transpose reverses itself:
\begin{gather*}
\utrans{X} = - \tfrac{1}{5} A \\
\utrans{(\utrans{X})} = \utrans{(- \tfrac{1}{5} A)} \\
X = - \tfrac{1}{5} \utrans{A} \text{,}
\end{gather*}
In both cases, the multiplication result matches the column of constants from the answer to Exercise 72, which verifies that each of the proposed solutions is an actual solution.
Each of the first two results matches the column of constants from the answer to Exercise 73, which verifies that these two proposed solutions are both actual solutions. However, the third result does not match that column of constants, and so the third proposed solution is not actually a solution.
This result matches the (homogeneous) column of constants from the answer to Exercise 74, which verifies that the proposed solution is an actual (and in this case, nontrivial) solution.
In each case, express the general solution to the system \(A \uvec{x} = \uvec{b}\) in vector form. (See Subsubsection 4.4.4.2.) Then use your vector form of the solution to determine the specific solution obtained by setting all parameters to \(0\text{,}\) as well as all possible specific solutions obtained by setting a single parameter to \(1\) while setting all other parameters to \(0\text{.}\)
Using \(x,y\) as the variables, variable \(y\) is free and we assign it a parameter: \(y = t\text{.}\) Interpreting the first row in the reduced matrix as an equation, we can isolate
\begin{equation*}
x = 4 - 2 t \text{.}
\end{equation*}
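In vector form, this general solution can be expressed as
\begin{equation*}
\uvec{x}
= \begin{bmatrix} x \\ y \end{bmatrix}
= \begin{bmatrix} 4 - 2 t \\ t \end{bmatrix}
= \begin{bmatrix} 4 \\ 0 \end{bmatrix}
+ t \begin{abmatrix}{r} -2 \\ 1 \end{abmatrix}\text{.}
\end{equation*}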
This matrix is already reduced. Using \(x,y\) as the variables, variable \(y\) is free and we assign it a parameter: \(y = t\text{.}\) Interpreting the row in the augmented matrix as an equation, we can isolate
\begin{equation*}
x = 7 + 8 t \text{.}
\end{equation*}
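In vector form, this general solution can be expressed as
\begin{equation*}
\uvec{x}
= \begin{bmatrix} x \\ y \end{bmatrix}
= \begin{bmatrix} 7 + 8 t \\ t \end{bmatrix}
= \begin{bmatrix} 7 \\ 0 \end{bmatrix}
+ t \begin{bmatrix} 8 \\ 1 \end{bmatrix}\text{.}
\end{equation*}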
Using \(x,y,z\) as the variables, variable \(z\) is free and we assign it a parameter: \(z = t\text{.}\) Interpreting the nonzero rows of the reduced matrix as equations, we can isolate
\begin{align*}
x \amp = 4 t \text{,} \amp y \amp = 2 - 4 t \text{.}
\end{align*}
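In vector form, this general solution can be expressed as
\begin{equation*}
\uvec{x}
= \begin{bmatrix} x \\ y \\ z \end{bmatrix}
= \begin{bmatrix} 4 t \\ 2 - 4 t \\ t \end{bmatrix}
= \begin{bmatrix} 0 \\ 2 \\ 0 \end{bmatrix}
+ t \begin{abmatrix}{r} 4 \\ -4 \\ 1 \end{abmatrix}\text{.}
\end{equation*}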
This is a homogeneous system with a coefficient matrix that is already reduced. Using \(x,y,z\) as the variables, variables \(y\) and \(z\) are free and we assign them parameters: \(y = s\text{,}\) \(z = t\text{.}\) Interpreting the row in \(A\) as coefficients in a homogeneous equation, we can isolate
\begin{equation*}
x = - 3 s + 4 t \text{.}
\end{equation*}
\begin{equation*}
\uvec{x}
= \begin{bmatrix} x \\ y \\ z \end{bmatrix}
= \begin{bmatrix} -3 s + 4 t \\ s \\ t \end{bmatrix}
= s \begin{abmatrix}{r} -3 \\ 1 \\ 0 \end{abmatrix}
+ t \begin{bmatrix} 4 \\ 0 \\ 1 \end{bmatrix}\text{.}
\end{equation*}
For each matrix operation, write out the relationship between the entries of the "input" matrices and the entries of the resulting "output" matrix.
For \(C = A + B\text{,}\) where \(A\) and \(B\) are both of size \(m \times n\text{,}\) express the general entry \(c_{ij}\) of the \(m \times n\) sum matrix \(C\) as a formula in the entries of \(A\) and \(B\text{.}\)
For \(C = A - B\text{,}\) where \(A\) and \(B\) are both of size \(m \times n\text{,}\) express the general entry \(c_{ij}\) of the \(m \times n\) difference matrix \(C\) as a formula in the entries of \(A\) and \(B\text{.}\)
For \(C = -A\text{,}\) where \(A\) is of size \(m \times n\text{,}\) express the general entry \(c_{ij}\) of the \(m \times n\) negative matrix \(C\) as a formula in the entries of \(A\text{.}\)
For \(C = k A\text{,}\) where \(A\) is of size \(m \times n\) and \(k\) is a scalar, express the general entry \(c_{ij}\) of the \(m \times n\) scaled matrix \(C\) as a formula in the entries of \(A\text{.}\)
For \(C = \utrans{A}\text{,}\) where \(A\) is of size \(m \times n\text{,}\) express the general entry \(c_{ij}\) of the \(n \times m\) transposed matrix \(C\) as a formula in the entries of \(A\text{.}\)
For \(C = A B\text{,}\) where \(A\) is an \(\ell \times m\) matrix and \(B\) is an \(m \times n\) matrix, express the general entry \(c_{ij}\) of the \(\ell \times n\) product matrix \(C\) as a formula in the entries of \(A\) and \(B\text{.}\)
For \(C = A^2\text{,}\) where \(A\) is of square size \(n \times n\text{,}\) express the general entry \(c_{ij}\) of the \(n \times n\) power matrix \(C\) as a formula in the entries of \(A\text{.}\)
In Exercise 38 we found that by multiplying two matrices whose entries follow a particular pattern, we can effectively "isolate" the addition part of the multiply-then-add procedure of matrix multiplication. Or, put another way, this pattern reveals that there is a way to "recast" addition of scalars (numbers) as a form of multiplication of \(2 \times 2\) matrices.
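One common realization of such a pattern (offered here as an illustration; the exact matrices may differ from those of Exercise 38) is
\begin{equation*}
\begin{bmatrix} 1 \amp a \\ 0 \amp 1 \end{bmatrix}
\begin{bmatrix} 1 \amp b \\ 0 \amp 1 \end{bmatrix}
=
\begin{bmatrix} 1 \amp a + b \\ 0 \amp 1 \end{bmatrix}\text{,}
\end{equation*}
where the sum \(a + b\) of two scalars appears as an entry of the product matrix.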
Suppose \(\uvec{x}_0\) is a column vector that represents a specific solution to the homogeneous system \(A \uvec{x} = \zerovec\text{,}\) and let \(k\) represent an arbitrary scalar value. Verify that the scaled vector \(\uvec{y} = k \uvec{x}_0\) also represents a specific solution to that same homogeneous system.
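A sketch of this verification, using the compatibility of matrix multiplication with scalar multiplication:
\begin{equation*}
A \uvec{y} = A (k \uvec{x}_0) = k (A \uvec{x}_0) = k \zerovec = \zerovec \text{.}
\end{equation*}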
Suppose \(\uvec{x}_1\) and \(\uvec{x}_2\) are column vectors of the same dimensions, each of which represents a specific solution to the homogeneous system \(A \uvec{x} = \zerovec\text{.}\) Verify that the sum vector \(\uvec{y} = \uvec{x}_1 + \uvec{x}_2\) also represents a specific solution to that same homogeneous system.
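A sketch of this verification, using the distributivity of matrix multiplication over addition:
\begin{equation*}
A \uvec{y} = A (\uvec{x}_1 + \uvec{x}_2) = A \uvec{x}_1 + A \uvec{x}_2 = \zerovec + \zerovec = \zerovec \text{.}
\end{equation*}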
Suppose \(\uvec{x}_1, \uvec{x}_2, \dotsc, \uvec{x}_m\) are all column vectors of the same dimensions, each of which represents a specific solution to the homogeneous system \(A \uvec{x} = \zerovec\text{,}\) and let each of \(k_1, k_2, \dotsc, k_m\) represent an arbitrary scalar value. Verify that the linear combination vector