Section 4.4 Examples
Subsection 4.4.1 Basic matrix operations
Here are some basic examples of matrix addition, subtraction, and scalar multiplication. For subtraction, watch out for double negatives!
Example 4.4.1. Matrix addition.
Example 4.4.2. Matrix subtraction.
Example 4.4.3. Scalar multiplication of a matrix.
Subsection 4.4.2 Matrix multiplication
Example 4.4.4. A detailed multiplication example.
Let’s compute a matrix product AB for a pair of matrices A and B.
Notice that the sizes of A and B are compatible for multiplication in the order AB, and that the result will have as many rows as A and as many columns as B. First, let’s multiply A onto the columns of B separately.
Combining these two computations, we get the full product AB.
With some practice at matrix multiplication, you should be able to compute a product directly, without doing separate computations for each column of the second matrix.
In this matrix multiplication example, notice that it does not make sense to even consider the possibility that AB = BA, because the sizes of B and A are not compatible for multiplication in the order BA, and so BA is undefined!
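If you like to check this kind of computation numerically, here is a minimal NumPy sketch of the column-by-column view of matrix multiplication. The matrices below are illustrative ones chosen for the demonstration, not the matrices from the example above.

    import numpy as np

    # Illustrative matrices, chosen for demonstration only.
    A = np.array([[1, 2],
                  [3, 4],
                  [5, 6]])        # size 3x2
    B = np.array([[1, 0],
                  [2, -1]])       # size 2x2

    # Multiply A onto each column of B separately ...
    col1 = A @ B[:, 0]
    col2 = A @ B[:, 1]

    # ... then combine the columns to get the full product AB.
    AB = np.column_stack([col1, col2])
    assert np.array_equal(AB, A @ B)
    print(AB)   # 3x2 result: rows of A by columns of B

    # In the other order the sizes are not compatible:
    # B @ A raises a ValueError, so "BA" is undefined here.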
Example 4.4.5. Matrix powers.
Since powers of matrices only make sense for square matrices, powers are undefined for the non-square matrix in the previous matrix multiplication example. But we can compute powers of the square matrix from that example.
To compute the cube of a square matrix, we can multiply the matrix by its square in either order,
and the result is the same.
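Here is a small NumPy sketch of the same point, using an illustrative square matrix (not one from the examples above): the cube can be computed by multiplying the matrix and its square in either order.

    import numpy as np

    # An illustrative square matrix (powers only make sense for square matrices).
    A = np.array([[1, 2],
                  [3, 4]])

    A2 = A @ A                     # the square of A
    cube_left = A2 @ A             # square times matrix
    cube_right = A @ A2            # matrix times square
    assert np.array_equal(cube_left, cube_right)   # same result either way

    # NumPy also provides matrix powers directly:
    assert np.array_equal(np.linalg.matrix_power(A, 3), cube_left)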
Subsection 4.4.3 Combining operations
Example 4.4.6. Computing matrix formulas involving a combination of operations.
Keep in mind that operations inside brackets should be performed first, and as usual multiplication (both matrix and scalar) should be performed before addition (unless there are brackets to tell us otherwise).
Hopefully you’re not surprised that we got the same final result from both formulas. From our familiar rules of algebra, we expect to be able to multiply inside the brackets in the first expression, and then rearrange the order of the two multiplications involved. However, we need to be careful, because our “familiar” rules of algebra come from operations with numbers, while matrix algebra involves operations with matrices: addition, subtraction, and two different kinds of multiplication, scalar and matrix. We should not blindly expect all of our “familiar” rules of algebra to apply to matrix operations. We’ve already seen that the matrix version of the familiar rule ab = ba is not true for matrix multiplication! In Subsection 4.5.1, we list the rules of algebra that are valid for matrix operations (which is most of our familiar rules from the algebra of numbers), and for some of the rules, in that same subsection we verify that they are indeed valid for matrices.
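A quick numerical experiment can make this warning concrete. The sketch below, with randomly chosen illustrative matrices, checks two familiar rules that do carry over to matrix algebra and one (commutativity of multiplication) that does not.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.integers(-5, 6, size=(2, 2))
    B = rng.integers(-5, 6, size=(2, 2))
    C = rng.integers(-5, 6, size=(2, 2))
    k = 3

    # Familiar rules that DO carry over to matrix algebra:
    assert np.array_equal(k * (A @ C), (k * A) @ C)    # scalars slide past matrix products
    assert np.array_equal((A + B) @ C, A @ C + B @ C)  # distributivity

    # A familiar rule that does NOT carry over: ab = ba.
    print(np.array_equal(A @ B, B @ A))   # typically False: matrix multiplication is not commutative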
Subsection 4.4.4 Linear systems as matrix equations
Subsubsection 4.4.4.1 A first example
Example 4.4.7. A system as a matrix equation.
Let’s again consider the system from Task b of Discovery 4.5. To solve, we row reduce the associated augmented matrix to RREF as usual.
One of the variables is free, so we assign it a parameter. Then we can solve to obtain the general solution in parametric form,
Let’s check a couple of particular solutions against the matrix equation Ax = b that represents the system. Recall that for this system, x is the column vector that contains the variables of the system. The particular solutions associated to two sample parameter values are
So the solution to the linear system we got by row reducing did indeed give us a vector solution to the matrix equation Ax = b. Let’s similarly check another particular solution, as in Task f of Discovery 4.5:
Again, our system solution gives us a solution to the matrix equation.
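This kind of check is easy to automate. Below is a minimal NumPy sketch with an illustrative system and general solution (hypothetical numbers, not the system from Discovery 4.5): every parameter value should produce a vector satisfying Ax = b.

    import numpy as np

    # An illustrative system A x = b and its general solution.
    A = np.array([[1, 2, 1],
                  [0, 1, -1]])
    b = np.array([4, 1])

    x0 = np.array([2, 1, 0])    # particular solution at parameter value 0
    v = np.array([-3, 1, 1])    # contribution of the free variable

    # Every parameter value gives a vector solution of the matrix equation:
    for t in (0, 1, -2):
        x = x0 + t * v
        assert np.array_equal(A @ x, b)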
Subsubsection 4.4.4.2 Expressing system solutions in vector form
We may use matrices and matrix algebra to express the solutions to linear systems as column vectors. In particular, we can expand a solution involving parameters into a linear combination of column vectors. Expressing solutions this way allows us to see the effect of each parameter on the system.
Let’s re-examine the systems in the examples from Section 2.4 as matrix equations, and express their solutions in vector form.
Example 4.4.8. Solutions in vector form: one unique solution.
We solved this system in Example 2.4.1 and determined that it had one unique solution. In vector form, we write this solution as
Example 4.4.9. Solutions in vector form: an infinite number of solutions.
We solved this system in Example 2.4.2, and determined that it had an infinite number of solutions. We expressed the general solution to the system using parametric equations
In vector form, we expand this solution as
Notice how the solution is the sum of a constant part
and a variable part
Further notice how the constant part is a particular solution to the system: it is the “initial” particular solution associated to the parameter value zero.
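In code, this decomposition can be made explicit: the general solution is a function of the parameter, and the parameter value zero returns exactly the constant part. The numbers below are illustrative only.

    import numpy as np

    # Illustrative constant and variable parts of a general solution.
    constant_part = np.array([3, -1, 0])
    variable_part = np.array([2, 1, 1])

    def general_solution(t):
        # Vector form: constant part plus t times the variable part.
        return constant_part + t * variable_part

    # Parameter value zero gives exactly the constant part,
    # i.e. the "initial" particular solution.
    assert np.array_equal(general_solution(0), constant_part)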
Example 4.4.10. Solutions in vector form: a homogeneous system.
where 0 is the zero column vector. We solved this system in Example 2.4.4, and determined that it had an infinite number of solutions. We expressed the general solution to the system using parametric equations
In vector form, we expand this solution as
This time, the solution is a sum of two variable parts,
since there are two parameters. And there is no constant part to the general solution, because if we set both parameters to zero we obtain the trivial solution x = 0. A homogeneous system will always work out this way. (So it would be more accurate to say that the general solution to the system from Discovery 2.4 has trivial constant part, instead of saying it has no constant part.)
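The following NumPy sketch illustrates the same pattern on a small hypothetical homogeneous system (illustrative numbers): every linear combination of the variable parts is a solution, and setting both parameters to zero yields the trivial solution.

    import numpy as np

    # An illustrative homogeneous system A x = 0 with two parameters s and t.
    A = np.array([[1, 0, 2, 1],
                  [0, 1, 1, -1]])
    v1 = np.array([-2, -1, 1, 0])   # variable part for parameter s
    v2 = np.array([-1, 1, 0, 1])    # variable part for parameter t

    # Every combination of the variable parts solves A x = 0 ...
    for s, t in [(1, 0), (0, 1), (2, -3)]:
        assert np.array_equal(A @ (s * v1 + t * v2), np.zeros(2))

    # ... and both parameters zero gives the trivial solution x = 0.
    assert np.array_equal(0 * v1 + 0 * v2, np.zeros(4))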
Example 4.4.11. Solutions in vector form: patterns for homogeneous and nonhomogeneous systems with the same coefficient matrix.
We solved this homogeneous system and found an infinite number of solutions, with general solution expressed parametrically as
In vector form, we express this as
This homogeneous system has the same coefficient matrix as in Example 4.4.9 above, so it is not surprising that their general solutions are related. In particular, notice that both systems have the same variable part, but that the nonhomogeneous system from Example 4.4.9 has a nontrivial constant part.
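This relationship is easy to test numerically. In the illustrative sketch below (a hypothetical system and solutions, not those from the examples above), adding any multiple of a homogeneous solution to a particular solution produces another solution of the nonhomogeneous system.

    import numpy as np

    # Illustrative nonhomogeneous system and solutions.
    A = np.array([[1, 0, 2],
                  [0, 1, -1]])
    b = np.array([3, 4])
    xp = np.array([3, 4, 0])     # a particular solution: A @ xp equals b
    vh = np.array([-2, 1, 1])    # a homogeneous solution: A @ vh equals 0

    # Adding any multiple of the homogeneous solution to the particular
    # solution yields another solution of the nonhomogeneous system:
    for t in (0, 1, 5):
        assert np.array_equal(A @ (xp + t * vh), b)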
Subsection 4.4.5 Transpose
Example 4.4.12. Computing transposes.
Let’s compute some transposes.
The first matrix is not square, so when we turn its rows into columns to compute its transpose, we end up with a result whose dimensions are swapped. The other two matrices are square, so each of their transposes ends up being the same size as the original matrix. But also, the entries in these square matrices were chosen to emphasize some patterns in the transposes of square matrices.
In interchanging rows and columns, notice how entries to the upper right of the main diagonal move to the “mirror image” positions in the lower left of the main diagonal, and vice versa. So for square matrices, we might think of the transpose as “reflecting” entries in the main diagonal, while entries right on the main diagonal stay in place. Finally, we might consider this same “reflecting-in-the-diagonal” view of the transpose for the last matrix, except it has equal entries in corresponding “mirror image” positions on either side of the diagonal, and so its transpose is equal to the matrix itself.
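Finally, here is a short NumPy sketch of these transpose patterns, with illustrative matrices: a non-square matrix swaps its dimensions, a square matrix reflects entries in its main diagonal, and a matrix with equal mirror-image entries equals its own transpose.

    import numpy as np

    # A non-square matrix: transposing swaps its dimensions.
    A = np.array([[1, 2, 3],
                  [4, 5, 6]])     # size 2x3
    assert A.T.shape == (3, 2)

    # A square matrix: the transpose reflects entries in the main diagonal.
    B = np.array([[1, 2],
                  [3, 4]])
    assert np.array_equal(B.T, np.array([[1, 3],
                                         [2, 4]]))

    # Equal entries in mirror-image positions on either side of the
    # diagonal: such a matrix is equal to its own transpose.
    C = np.array([[1, 7],
                  [7, 5]])
    assert np.array_equal(C.T, C)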