Section 4.4 Examples
Subsection 4.4.1 Basic matrix operations
Here are some basic examples of matrix addition, subtraction, and scalar multiplication. For subtraction, watch out for double negatives!
Example 4.4.1. Matrix addition.
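For instance, with small \(2 \times 2\) matrices chosen just for illustration, addition is carried out entry by entry:
\begin{equation*} \left[\begin{array}{rr} 1 \amp 2 \\ 3 \amp 4 \end{array}\right] + \left[\begin{array}{rr} 5 \amp -1 \\ 0 \amp 2 \end{array}\right] = \left[\begin{array}{rr} 1 + 5 \amp 2 + (-1) \\ 3 + 0 \amp 4 + 2 \end{array}\right] = \left[\begin{array}{rr} 6 \amp 1 \\ 3 \amp 6 \end{array}\right]. \end{equation*}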
Example 4.4.2. Matrix subtraction.
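For instance, with the same illustrative matrices, subtraction is also carried out entry by entry, and subtracting a negative entry creates a double negative:
\begin{equation*} \left[\begin{array}{rr} 1 \amp 2 \\ 3 \amp 4 \end{array}\right] - \left[\begin{array}{rr} 5 \amp -1 \\ 0 \amp 2 \end{array}\right] = \left[\begin{array}{rr} 1 - 5 \amp 2 - (-1) \\ 3 - 0 \amp 4 - 2 \end{array}\right] = \left[\begin{array}{rr} -4 \amp 3 \\ 3 \amp 2 \end{array}\right]. \end{equation*}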
Example 4.4.3. Scalar multiplication of a matrix.
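For instance, multiplying a matrix by a scalar multiplies every entry by that scalar:
\begin{equation*} 3 \left[\begin{array}{rr} 1 \amp -2 \\ 0 \amp 4 \end{array}\right] = \left[\begin{array}{rr} 3 \cdot 1 \amp 3(-2) \\ 3 \cdot 0 \amp 3 \cdot 4 \end{array}\right] = \left[\begin{array}{rr} 3 \amp -6 \\ 0 \amp 12 \end{array}\right]. \end{equation*}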
Subsection 4.4.2 Matrix multiplication
Example 4.4.4. A detailed multiplication example.
Let's compute the matrix product \(AB\text{,}\) for
Notice that the sizes of \(A\) (\(3\times 2\)) and \(B\) (\(2\times 2\)) are compatible for multiplication in the order \(AB\text{,}\) and that the result will be size \(3\times 2\text{.}\) First let's multiply \(A\) onto the columns of \(B\text{.}\)
Combining these two computations, we get
With some practice at matrix multiplication, you should be able to compute a product \(A B\) directly without doing separate computations for each column of the second matrix.
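In general, if \(A\) has \(n\) columns (so that \(B\) must have \(n\) rows), the \((i,j)\) entry of \(A B\) is obtained by moving across the \(i\)th row of \(A\) and down the \(j\)th column of \(B\text{:}\)
\begin{equation*} (A B)_{i j} = a_{i 1} b_{1 j} + a_{i 2} b_{2 j} + \dotsb + a_{i n} b_{n j}. \end{equation*}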
In this matrix multiplication example, notice that it does not make sense to even consider the possibility that \(B A = A B\) because the sizes of \(B\) and \(A\) are not compatible for multiplication in the order \(B A\text{,}\) and so \(B A\) is undefined!
Example 4.4.5. Matrix powers.
Since powers of matrices only work for square matrices, the power \(A^2\) is undefined for the \(3\times 2\) matrix \(A\) in the previous matrix multiplication example. But we can compute \(B^2\) for the \(2\times 2\) matrix \(B\) from that example.
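\begin{align*} B^2 = B B \amp= \left[\begin{array}{rr} 1 \amp -2 \\ -3 \amp 4 \end{array}\right] \left[\begin{array}{rr} 1 \amp -2 \\ -3 \amp 4 \end{array}\right]\\ \amp= \left[\begin{array}{rr} 1\cdot 1 + (-2)(-3) \amp 1(-2) + (-2)\cdot 4 \\ (-3)\cdot 1 + 4(-3) \amp -3(-2) + 4\cdot 4 \end{array}\right]\\ \amp= \left[\begin{array}{rr} 7 \amp -10 \\ -15 \amp 22 \end{array}\right]. \end{align*}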
To compute \(B^3\text{,}\) we can compute either of
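\begin{align*} B^3 = B B B = (B B) B \amp= B^2 B\\ \amp= \left[\begin{array}{rr} 7 \amp -10 \\ -15 \amp 22 \end{array}\right] \left[\begin{array}{rr} 1 \amp -2 \\ -3 \amp 4 \end{array}\right]\\ \amp= \left[\begin{array}{rr} 7\cdot 1 + (-10)(-3) \amp 7(-2) + (-10)\cdot 4 \\ (-15)\cdot 1 + 22(-3) \amp -15(-2) + 22\cdot 4 \end{array}\right]\\ \amp= \left[\begin{array}{rr} 37 \amp -54 \\ -81 \amp 118 \end{array}\right], \end{align*}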
or
\begin{align*} B^3 = B B B = B (B B) \amp= BB^2\\ \amp= \left[\begin{array}{rr} 1 \amp -2 \\ -3 \amp 4 \end{array}\right] \left[\begin{array}{rr} 7 \amp -10 \\ -15 \amp 22 \end{array}\right]\\ \amp= \left[\begin{array}{rr} 1\cdot 7 + (-2)(-15) \amp 1(-10) + (-2)\cdot 22 \\ (-3)\cdot 7 + 4(-15) \amp -3(-10) + 4\cdot 22 \end{array}\right]\\ \amp= \left[\begin{array}{rr} 37 \amp -54 \\ -81 \amp 118 \end{array}\right], \end{align*}and the result is the same.
Subsection 4.4.3 Combining operations
Example 4.4.6. Computing matrix formulas involving a combination of operations.
Let's compute both \(A (B + k C)\) and \(A B + k (A C)\text{,}\) for
Keep in mind that operations inside brackets should be performed first, and as usual multiplication (both matrix and scalar) should be performed before addition (unless there are brackets to tell us otherwise).
Hopefully you're not surprised that we got the same final result for both the formulas \(A (B + k C)\) and \(A B + k (A C)\text{.}\) From our familiar rules of algebra, we expect to be able to multiply \(A\) inside the brackets in the first expression, and then rearrange the order of multiplication by \(A\) and by \(k\text{.}\) However, we need to be careful — our “familiar” rules of algebra come from operations with numbers, and matrix algebra involves operations with matrices: addition, subtraction, and two different kinds of multiplication, scalar and matrix. We should not blindly expect all of our “familiar” rules of algebra to apply to matrix operations. We've already seen that the matrix version of the familiar rule \(b a = a b\) is not true for matrix multiplication! In Subsection 4.5.1, we list the rules of algebra that are valid for matrix operations (which include most of our familiar rules from the algebra of numbers), and in that same subsection we verify that some of those rules are indeed valid for matrices.
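In symbols, the steps we expect to be able to take are
\begin{align*} A (B + k C) \amp= A B + A (k C)\\ \amp= A B + k (A C), \end{align*}
using a distributive rule in the first step and reordering scalar multiplication and matrix multiplication in the second.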
Subsection 4.4.4 Linear systems as matrix equations
Subsubsection 4.4.4.1 A first example
Example 4.4.7. A system as a matrix equation.
Let's again consider the system from Task b of Discovery 4.5. To solve, we row reduce the associated augmented matrix to RREF as usual.
Variable \(x_3\) is free, so assign a parameter \(x_3 = t\text{.}\) Then we can solve to obtain the general solution in parametric form,
Let's check a couple of particular solutions against the matrix equation \(A\uvec{x}=\uvec{b}\) that represents the system. Recall that for this system, \(\uvec{x}\) is the \(3\times 1\) column vector that contains the variables \(x_1,x_2,x_3\text{.}\) The particular solutions associated to parameter values \(t=0\) and \(t=3\) are
and
\begin{align*} t = 3 \amp\colon \amp x_1 \amp= 2, \amp x_2 \amp= 1, \amp x_3 \amp= 3. \end{align*}Let's collect the \(t=0\) solution values into the vector \(\uvec{x}\) and check \(A\uvec{x}\) versus \(\uvec{b}\text{:}\)
So the solution to the linear system we got by row reducing did indeed give us a vector solution \(\uvec{x}\) to the matrix equation \(A \uvec{x} = \uvec{b}\text{.}\) Let's similarly check the \(t=3\) solution, as in Task f of Discovery 4.5:
Again, our system solution gives us a solution to the matrix equation.
Subsubsection 4.4.4.2 Expressing system solutions in vector form
We may use matrices and matrix algebra to express the solutions to linear systems as column vectors. In particular, we can expand solutions involving parameters into a linear combination of column vectors. Expressing solutions this way allows us to see the effect of each parameter on the general solution.
Let's re-examine the systems in the examples from Section 2.4 as matrix equations, and express their solutions in vector form.
Example 4.4.8. Solutions in vector form: one unique solution.
The system from Discovery 2.1 can be expressed in the form \(A\uvec{x} = \uvec{b}\) for
We solved this system in Example 2.4.1 and determined that it had one unique solution, \(x = 5\text{,}\) \(y = 2\text{,}\) and \(z = 3\text{.}\) In vector form, we write this solution as
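\begin{equation*} \uvec{x} = \left[\begin{array}{r} 5 \\ 2 \\ 3 \end{array}\right]. \end{equation*}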
Example 4.4.9. Solutions in vector form: an infinite number of solutions.
The system from Discovery 2.2 can be expressed in the form \(A \uvec{x} = \uvec{b}\) for
We solved this system in Example 2.4.2, and determined that it had an infinite number of solutions. We expressed the general solution to the system using parametric equations
In vector form, we expand this solution as
Notice how the solution is the sum of a constant part
and a variable part
Further notice how the constant part is a particular solution to the system — it is the “initial” particular solution associated to the parameter value \(t=0\text{.}\)
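In general, when the general solution to a system involves a single parameter \(t\text{,}\) its vector form has the shape
\begin{equation*} \uvec{x} = \uvec{x}_0 + t \uvec{v}, \end{equation*}
where the constant part \(\uvec{x}_0\) is the particular solution corresponding to \(t = 0\) and the variable part \(t \uvec{v}\) records the effect of the parameter.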
Example 4.4.10. Solutions in vector form: a homogeneous system.
The system from Discovery 2.4 is homogeneous, so it can be expressed in the form \(A\uvec{x} = \zerovec\) for
where \(\zerovec\) is the \(3 \times 1\) zero column vector. We solved this system in Example 2.4.4, and determined that it had an infinite number of solutions. We expressed the general solution to the system using parametric equations
In vector form, we expand this solution as
This time, the solution is a sum of two variable parts,
since there are two parameters. And there is no constant part to the general solution, because if we set both parameters to zero we obtain the trivial solution \(\uvec{x} = \zerovec\text{.}\) A homogeneous system will always work out this way. (So it would be more accurate to say that the general solution to the system from Discovery 2.4 has trivial constant part, instead of saying it has no constant part.)
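That is, for a homogeneous system whose general solution involves two parameters, say \(s\) and \(t\text{,}\) the vector form has the shape
\begin{equation*} \uvec{x} = s \uvec{v}_1 + t \uvec{v}_2, \end{equation*}
where each variable part records the effect of one parameter and the constant part is the zero vector.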
Example 4.4.11. Solutions in vector form: patterns in homogeneous/nonhomogeneous systems.
In Example 2.4.5, we solved a homogeneous system \(A\uvec{x} = \zerovec\) with
and found an infinite number of solutions, with general solution expressed parametrically as
In vector form, we express this as
This homogeneous system has the same coefficient matrix as in Example 4.4.9 above, so it is not surprising that their general solutions are related. In particular, notice that both systems have the same variable part, but that the nonhomogeneous system from Example 4.4.9 has a non-trivial constant part.
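This pattern holds in general: if \(\uvec{x}_0\) is one particular solution to a nonhomogeneous system \(A \uvec{x} = \uvec{b}\text{,}\) then every solution to that system has the form
\begin{equation*} \uvec{x} = \uvec{x}_0 + \uvec{x}_h, \end{equation*}
where \(\uvec{x}_h\) is a solution to the corresponding homogeneous system \(A \uvec{x} = \zerovec\text{.}\)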
Subsection 4.4.5 Transpose
Example 4.4.12. Computing transposes.
Let's compute some transposes.
The matrix \(A\) is size \(2 \times 3\text{,}\) so when we turn rows into columns to compute \(\utrans{A}\text{,}\) we end up with a \(3\times 2\) result. Matrices \(B\) and \(C\) are square, so each of their transposes ends up being the same size as the original matrix. But also, the numbers for the entries in \(B\) and \(C\) were chosen to emphasize some patterns in the transposes of square matrices. In interchanging rows and columns in \(B\text{,}\) notice how entries to the upper right of the main diagonal move to the “mirror image” position in the lower left of the main diagonal, and vice versa. So for square matrices, we might think of the transpose as “reflecting” entries in the main diagonal, while entries right on the main diagonal end up staying in place. Finally, we might consider this same “reflecting-in-the-diagonal” view of the transpose for \(C\text{,}\) except \(C\) has the same entries in corresponding “mirror image” positions on either side of the diagonal, and so we end up with \(\utrans{C} = C\text{.}\)
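To illustrate these patterns with some small sample matrices (not the ones from this example): for a \(2 \times 3\) matrix
\begin{equation*} M = \left[\begin{array}{rrr} 1 \amp 2 \amp 3 \\ 4 \amp 5 \amp 6 \end{array}\right], \qquad \utrans{M} = \left[\begin{array}{rr} 1 \amp 4 \\ 2 \amp 5 \\ 3 \amp 6 \end{array}\right], \end{equation*}
and for a square matrix with matching entries in mirror-image positions across the main diagonal, such as
\begin{equation*} S = \left[\begin{array}{rrr} 1 \amp 2 \amp 3 \\ 2 \amp 4 \amp 5 \\ 3 \amp 5 \amp 6 \end{array}\right], \end{equation*}
we get \(\utrans{S} = S\text{.}\)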