Consider the relationship between this matrix multiplication task, the final matrix multiplication task in Exercise 28, and the three matrix products you have already computed in this exercise.
Notice: Here we have multiplied two matrices, neither of which is a zero matrix, yet the result is a zero matrix. This confirms that for matrices, a product equal to zero does NOT imply that at least one of the factors must be zero.
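The matrices from the exercise are not reproduced here, but the phenomenon is easy to illustrate with a hypothetical pair of nonzero matrices of our own choosing:

```python
import numpy as np

# Hypothetical example (not the exercise's matrices): two nonzero
# 2x2 matrices whose product is nevertheless the zero matrix.
A = np.array([[1, 1],
              [1, 1]])
B = np.array([[1, -1],
              [-1, 1]])

product = A @ B
print(product)
# [[0 0]
#  [0 0]]
```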
Undefined: we cannot take powers of a matrix that is not square, as there will be a mismatch between the number of columns of the first factor and the number of rows of the second factor.
We can reduce the number of matrix products we need to compute by factoring the common factor out of the first two terms, but we must be careful to factor it in the correct “direction”:
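Since the exercise's matrices are elided above, here is a hypothetical sketch of why the “direction” matters: a common left factor must stay on the left, because matrix multiplication is not commutative.

```python
import numpy as np

# Hypothetical matrices chosen purely for illustration.
A = np.array([[1, 2],
              [0, 1]])
B = np.array([[0, 1],
              [1, 0]])
C = np.array([[1, 0],
              [0, 0]])

# Correct: A is a common LEFT factor of A @ B and A @ C.
assert (A @ B + A @ C == A @ (B + C)).all()

# Factoring in the wrong "direction" gives a different matrix in general:
print(A @ (B + C))  # [[3 1], [1 0]]
print((B + C) @ A)  # [[1 3], [1 2]]
```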
In each case, determine the dimensions of the product matrix, or state why the product matrix does not exist, based on the stated dimensions of the individual factor matrices.
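The stated dimensions are elided above, but the general rule can be sketched as a small helper: an m-by-n matrix times an n-by-p matrix yields an m-by-p matrix, and the product is undefined when the inner dimensions disagree.

```python
def product_shape(shape1, shape2):
    """Dimensions of a matrix product, or None when undefined.

    shape1 and shape2 are (rows, columns) pairs; the product exists
    only when the column count of the first factor matches the row
    count of the second.
    """
    (m, n), (q, p) = shape1, shape2
    return (m, p) if n == q else None

print(product_shape((3, 4), (4, 2)))  # (3, 2)
print(product_shape((3, 4), (3, 2)))  # None: inner dimensions 4 and 3 differ
```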
Interpreting matrix equality as a system of equations.
In each case, use the definition of equal matrices to determine the variable values that make the matrix equality true. In any equality where a zero symbol appears, assume it represents a zero matrix of appropriate dimensions.
First compute the left-hand side to a single column matrix:
.
Our simplified matrix equality is now
.
Comparing corresponding entries on each side of the matrix equality leads to a system of equations
.
Turn this system into an augmented matrix and row reduce:
.
This system is consistent with one unique solution. You may test that this is the correct solution by recomputing the matrix algebra on the left-hand side of the original matrix equality with those values substituted in for the variables.
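With the exercise's entries elided, a hypothetical system with one unique solution illustrates the suggested check of substituting the solution back in:

```python
import numpy as np

# Hypothetical stand-in system with one unique solution:
#   x + 2y = 5
#   3x - y = 1
A = np.array([[1.0, 2.0],
              [3.0, -1.0]])
b = np.array([5.0, 1.0])

solution = np.linalg.solve(A, b)
print(solution)  # [1. 2.]

# Recompute the left-hand side with the solution in place, as suggested:
assert np.allclose(A @ solution, b)
```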
First compute the left-hand side to a single column matrix:
.
Our simplified matrix equality is now
.
Comparing corresponding entries on each side of the matrix equality leads to a system of equations
.
Turn this system into an augmented matrix and row reduce:
.
This system is inconsistent, so there is no combination of variable values that makes the original matrix equality true.
Careful: It is tempting to attempt to solve the system of equations directly, without using an augmented matrix, since the first equation immediately determines one of the variables, and then the third equation can be used to solve for the other. But if you substitute those two values into the left-hand side of the original matrix equality and recompute the matrix algebra, you will find that the matrix equality is not true for these values. The reason for this is that the original matrix equality leads to a system of three equations, and we cannot ignore any one equation in that system.
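A hypothetical three-equation system in two unknowns (the exercise's own is elided above) shows how the shortcut goes wrong: values satisfying two of the equations can still fail the third.

```python
import numpy as np

# Hypothetical system standing in for the exercise's:
#   x     = 2     (first equation pins down x)
#   x + y = 10    (this one is easy to overlook)
#   x - y = 1     (with x = 2, this gives y = 1)
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, -1.0]])
b = np.array([2.0, 10.0, 1.0])

# The tempting shortcut uses only the first and third equations:
shortcut = np.array([2.0, 1.0])

# Substituting into ALL three equations exposes the inconsistency:
print(A @ shortcut - b)  # [ 0. -7.  0.]: the second equation fails
```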
First compute the left-hand side to a single matrix:
.
In the context of the original matrix equality, the zero symbol on the right-hand side represents the zero matrix, and so our simplified matrix equality is now
.
Comparing corresponding entries on each side of the matrix equality leads to a homogeneous system of equations
.
Since the system is homogeneous, we row reduce only the coefficient matrix:
.
This system is consistent (as all homogeneous systems are), but the solution requires a parameter, so in fact there are infinitely many combinations of scalar values that make the original matrix equality true. In particular, if we assign a parameter to the free variable, the general solution is
,,,.
Careful: We may have been misled by the zero matrix on the right-hand side of the original matrix equality, and decided that the solution must be to set each variable to zero. While that is one possible solution to the matrix equation (which we would call the trivial solution), by re-interpreting the matrix equation as a system of (ordinary) equations, we were able to arrive at the complete collection of all possible solutions to the matrix equation. So while it is possible to make the matrix equality true by setting each variable to zero, it is also possible, for example, by setting
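A hypothetical homogeneous system (the exercise's is elided above) makes the point concrete: when a variable is free, every choice of parameter value yields a solution, and the trivial solution is only the particular choice of parameter zero.

```python
import numpy as np

# Hypothetical homogeneous system standing in for the exercise's:
# a rank-1 coefficient matrix, so one variable is free.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

# Every scalar multiple of (-2, 1) is a solution; t is the parameter.
v = np.array([-2.0, 1.0])

for t in (0.0, 1.0, -3.5):
    assert np.allclose(A @ (t * v), 0)

# t = 0 gives the trivial solution, but it is only one member of an
# infinite family of solutions.
```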
First compute the left-hand side to a single matrix:
.
In the context of the original matrix equality, the zero symbol on the right-hand side represents the zero matrix, and so our simplified matrix equality is now
.
Comparing corresponding entries on each side of the matrix equality leads to a homogeneous system of equations
.
Since the system is homogeneous, we row reduce only the coefficient matrix:
.
This system is consistent (as all homogeneous systems are) with one unique solution, the trivial one.
Careful: We may have been misled by the zero matrix on the right-hand side of the original matrix equality, and prematurely jumped to the conclusion that the only solution must be to set each variable to zero (the trivial solution). While that turned out to be the case this time, it is important that we went through the full process of re-interpreting the matrix equation as a system of (ordinary) equations so that we could confirm that the trivial solution was indeed the only solution.
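A hypothetical full-rank example (not the exercise's matrix) sketches why the trivial solution can end up being the only one: row reduction leaves no free variables.

```python
import numpy as np

# Hypothetical stand-in: a square coefficient matrix of full rank,
# so the homogeneous system has only the trivial solution.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

print(np.linalg.matrix_rank(A))  # 2: one pivot for each variable
print(np.linalg.solve(A, np.zeros(2)))  # [0. 0.], the trivial solution
```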
As in the answer to Exercise 71, we have coefficient matrix
.
The two potential solutions can be expressed as column vectors
,.
Multiply each of these by the coefficient matrix:
,.
In both cases, the multiplication result matches the column of constants from the answer to Exercise 71, which verifies that each of the proposed solutions is an actual solution.
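With the exercise's matrices elided, the verification procedure can be sketched with a hypothetical system: multiply each proposed solution by the coefficient matrix and compare with the column of constants.

```python
import numpy as np

# Hypothetical stand-in for the exercise's system.
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
b = np.array([3.0, 1.0])

proposed = np.array([2.0, 1.0])
print(np.allclose(A @ proposed, b))  # True: an actual solution

rejected = np.array([1.0, 1.0])
print(np.allclose(A @ rejected, b))  # False: not a solution
```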
As in the answer to Exercise 72, we have coefficient matrix
.
The three potential solutions can be expressed as column vectors
,,.
Multiply each of these by the coefficient matrix in turn:
.
Each of the first two results matches the column of constants from the answer to Exercise 72, which verifies that these two proposed solutions are both actual solutions. However, the third result does not match that column of constants, and so the third proposed solution is not actually a solution.
As in the answer to Exercise 73, we have coefficient matrix
.
The potential solution can be expressed as a column vector
.
Multiply these together:
.
This result matches the (homogeneous) column of constants from the answer to Exercise 73, which verifies that the proposed solution is an actual (and in this case, nontrivial) solution.
In each case, express the general solution to the system in vector form. (See Subsubsection 4.4.4.2.) Then use your vector form of the solution to determine the specific solution obtained by setting all parameters to zero, as well as all possible specific solutions obtained by setting a single parameter to one while setting all other parameters to zero.
One variable is free and we assign it a parameter. Interpreting the first row in the reduced matrix as an equation, we can isolate
This matrix is already reduced. One variable is free and we assign it a parameter. Interpreting the row in the augmented matrix as an equation, we can isolate
One variable is free and we assign it a parameter. Interpreting the nonzero rows of the reduced matrix as equations, we can isolate
This is a homogeneous system with a coefficient matrix that is already reduced. Two of the variables are free and we assign them parameters. Interpreting the row as coefficients in a homogeneous equation, we can isolate
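The vector-form procedure can be sketched with a hypothetical reduced system (the exercises' matrices are elided above): the general solution is a fixed vector plus a parameter times a direction vector.

```python
import numpy as np

# Hypothetical reduced system:
#   x1      + 2*x3 = 4
#        x2 -   x3 = 1     (x3 is free; assign it the parameter t)
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])
b = np.array([4.0, 1.0])

# Vector form of the general solution: x = p + t * d.
p = np.array([4.0, 1.0, 0.0])   # the specific solution with t = 0
d = np.array([-2.0, 1.0, 1.0])  # direction vector attached to t

# Setting t = 0 and t = 1 each produce a specific solution:
for t in (0.0, 1.0):
    assert np.allclose(A @ (p + t * d), b)
```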
In Exercise 37 we found that by multiplying two matrices whose entries follow a particular pattern, we can effectively “isolate” the addition part of the multiply-then-add procedure of matrix multiplication. Or, put another way, this pattern reveals that there is a way to “recast” addition of scalars (numbers) as a form of multiplication of matrices.
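The exact matrices from Exercise 37 are not shown above, so as a hypothetical illustration, here is one well-known pattern with this property: 2-by-2 matrices with ones on the diagonal and the scalar in the upper-right entry multiply by adding those entries.

```python
import numpy as np

def embed(a):
    """Recast the scalar a as a 2x2 matrix of the assumed pattern."""
    return np.array([[1.0, a],
                     [0.0, 1.0]])

# Multiplying the embedded matrices adds the embedded scalars:
result = embed(3.0) @ embed(4.0)
print(result)
# [[1. 7.]
#  [0. 1.]]   -- identical to embed(3.0 + 4.0)
```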
Practise verifying matrix algebra rules. Use the provided proofs of Rule 1.b and Rule 2.c from Proposition 4.5.1 as templates/guides to carrying out your own verifications. You may also find your answers to Exercises 4.6.87–4.6.93 helpful. (Recall that a zero symbol here represents a zero matrix of appropriate dimensions, not the scalar zero.)
Suppose we have a column vector that represents a specific solution to a homogeneous system, and an arbitrary scalar value. Verify that the scaled vector also represents a specific solution to that same homogeneous system.
Suppose we have two column vectors of the same dimensions, each of which represents a specific solution to the same homogeneous system. Verify that the sum of these vectors also represents a specific solution to that system.
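A sketch of both verifications at once, writing A for the coefficient matrix, x and y for the given solutions, and k for the scalar (these symbol names are our own, since the exercise's are elided above); each step uses a rule from Proposition 4.5.1:

```latex
A(k\mathbf{x}) = k(A\mathbf{x}) = k\mathbf{0} = \mathbf{0},
\qquad
A(\mathbf{x} + \mathbf{y}) = A\mathbf{x} + A\mathbf{y} = \mathbf{0} + \mathbf{0} = \mathbf{0}.
```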
Suppose we have a collection of column vectors of the same dimensions, each of which represents a specific solution to the same homogeneous system, along with a corresponding collection of arbitrary scalar values. Verify that the linear combination vector