
Section 4.4 Examples

Subsection 4.4.1 Basic matrix operations

Here are some basic examples of matrix addition, subtraction, and scalar multiplication. For subtraction, watch out for double negatives!

Example 4.4.1. Matrix addition.

\[
\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}
+
\begin{bmatrix} 0 & -1 & 1 \\ -2 & 11 & 0 \end{bmatrix}
=
\begin{bmatrix} 1+0 & 2+(-1) & 3+1 \\ 4+(-2) & 5+11 & 6+0 \end{bmatrix}
=
\begin{bmatrix} 1 & 1 & 4 \\ 2 & 16 & 6 \end{bmatrix}
\]

Example 4.4.2. Matrix subtraction.

\[
\begin{bmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \end{bmatrix}
-
\begin{bmatrix} 0 & -1 & 1 \\ -2 & -11 & 1 \end{bmatrix}
=
\begin{bmatrix} 1-0 & 2-(-1) & 3-1 \\ 0-(-2) & 4-(-11) & 5-1 \end{bmatrix}
=
\begin{bmatrix} 1 & 3 & 2 \\ 2 & 15 & 4 \end{bmatrix}
\]

Example 4.4.3. Scalar multiplication of a matrix.

\[
(-5)
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}
=
\begin{bmatrix} -5 & -10 \\ -15 & -20 \end{bmatrix}
\]
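These hand computations are easy to check by machine. Here is a short Python sketch (assuming the NumPy library, which this text does not otherwise use) that reproduces the three examples above; entry-by-entry addition, subtraction, and scalar multiplication are exactly what NumPy's +, -, and * do for arrays.

import numpy as np

# Example 4.4.1: matrix addition.
A = np.array([[1, 2, 3], [4, 5, 6]])
B = np.array([[0, -1, 1], [-2, 11, 0]])
print(A + B)        # [[ 1  1  4], [ 2 16  6]]

# Example 4.4.2: matrix subtraction (note the double negatives).
C = np.array([[1, 2, 3], [0, 4, 5]])
D = np.array([[0, -1, 1], [-2, -11, 1]])
print(C - D)        # [[ 1  3  2], [ 2 15  4]]

# Example 4.4.3: scalar multiplication.
E = np.array([[1, 2], [3, 4]])
print(-5 * E)       # [[ -5 -10], [-15 -20]]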

Subsection 4.4.2 Matrix multiplication

Example 4.4.4. A detailed multiplication example.

Let’s compute the matrix product AB, for
\[
A = \begin{bmatrix} 3 & -2 \\ 1 & 0 \\ -4 & 5 \end{bmatrix},
\qquad
B = \begin{bmatrix} 1 & -2 \\ -3 & 4 \end{bmatrix}.
\]
Notice that the sizes of A (3×2) and B (2×2) are compatible for multiplication in the order AB, and that the result will be size 3×2. First let’s multiply A onto the columns of B.
\[
\begin{bmatrix} 3 & -2 \\ 1 & 0 \\ -4 & 5 \end{bmatrix}
\begin{bmatrix} 1 \\ -3 \end{bmatrix}
=
\begin{bmatrix} 3(1) + (-2)(-3) \\ 1(1) + 0(-3) \\ (-4)(1) + 5(-3) \end{bmatrix}
=
\begin{bmatrix} 9 \\ 1 \\ -19 \end{bmatrix}
\qquad
\begin{bmatrix} 3 & -2 \\ 1 & 0 \\ -4 & 5 \end{bmatrix}
\begin{bmatrix} -2 \\ 4 \end{bmatrix}
=
\begin{bmatrix} 3(-2) + (-2)(4) \\ 1(-2) + 0(4) \\ (-4)(-2) + 5(4) \end{bmatrix}
=
\begin{bmatrix} -14 \\ -2 \\ 28 \end{bmatrix}
\]
Combining these two computations, we get
\[
AB =
\begin{bmatrix} 3 & -2 \\ 1 & 0 \\ -4 & 5 \end{bmatrix}
\begin{bmatrix} 1 & -2 \\ -3 & 4 \end{bmatrix}
=
\begin{bmatrix} 9 & -14 \\ 1 & -2 \\ -19 & 28 \end{bmatrix}.
\]
With some practice at matrix multiplication, you should be able to compute a product AB directly without doing separate computations for each column of the second matrix.
In this matrix multiplication example, notice that it does not make sense to even consider the possibility that BA=AB because the sizes of B and A are not compatible for multiplication in the order BA, and so BA is undefined!
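If you would like to verify this product by machine, here is a short Python sketch (assuming the NumPy library; the text itself does not rely on any software). It also confirms that the product in the opposite order is rejected because the sizes are incompatible.

import numpy as np

A = np.array([[3, -2], [1, 0], [-4, 5]])    # size 3x2
B = np.array([[1, -2], [-3, 4]])            # size 2x2

print(A @ B)    # the 3x2 product AB: [[ 9 -14], [ 1 -2], [-19 28]]

# BA is undefined: the inner sizes (2 and 3) do not match,
# so NumPy raises an error instead of producing a matrix.
try:
    B @ A
except ValueError as err:
    print("BA is undefined:", err)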

Example 4.4.5. Matrix powers.

Since powers of matrices only work for square matrices, the power A^2 is undefined for the 3×2 matrix A in the previous matrix multiplication example. But we can compute B^2 for the 2×2 matrix B from that example.
\[
B^2 = BB =
\begin{bmatrix} 1 & -2 \\ -3 & 4 \end{bmatrix}
\begin{bmatrix} 1 & -2 \\ -3 & 4 \end{bmatrix}
=
\begin{bmatrix} 1(1) + (-2)(-3) & 1(-2) + (-2)(4) \\ (-3)(1) + 4(-3) & (-3)(-2) + 4(4) \end{bmatrix}
=
\begin{bmatrix} 7 & -10 \\ -15 & 22 \end{bmatrix}
\]
To compute B^3, we can compute either of
\[
B^3 = BBB = (BB)B = B^2 B =
\begin{bmatrix} 7 & -10 \\ -15 & 22 \end{bmatrix}
\begin{bmatrix} 1 & -2 \\ -3 & 4 \end{bmatrix}
=
\begin{bmatrix} 7(1) + (-10)(-3) & 7(-2) + (-10)(4) \\ (-15)(1) + 22(-3) & (-15)(-2) + 22(4) \end{bmatrix}
=
\begin{bmatrix} 37 & -54 \\ -81 & 118 \end{bmatrix},
\]
\[
B^3 = BBB = B(BB) = B B^2 =
\begin{bmatrix} 1 & -2 \\ -3 & 4 \end{bmatrix}
\begin{bmatrix} 7 & -10 \\ -15 & 22 \end{bmatrix}
=
\begin{bmatrix} 1(7) + (-2)(-15) & 1(-10) + (-2)(22) \\ (-3)(7) + 4(-15) & (-3)(-10) + 4(22) \end{bmatrix}
=
\begin{bmatrix} 37 & -54 \\ -81 & 118 \end{bmatrix},
\]
and the result is the same.
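A machine check of these powers, as a Python sketch assuming the NumPy library (np.linalg.matrix_power is just repeated matrix multiplication):

import numpy as np

B = np.array([[1, -2], [-3, 4]])

B2 = B @ B
B3 = np.linalg.matrix_power(B, 3)           # same as B @ B @ B

print(B2)                                   # [[  7 -10], [-15  22]]
print(B3)                                   # [[ 37 -54], [-81 118]]
print(np.array_equal(B2 @ B, B @ B2))       # True: (BB)B = B(BB)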

Subsection 4.4.3 Combining operations

Example 4.4.6. Computing matrix formulas involving a combination of operations.

Let’s compute both A(B+kC) and AB+k(AC), for
\[
A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix},
\quad
B = \begin{bmatrix} 0 & 2 \\ -1 & 1 \end{bmatrix},
\quad
C = \begin{bmatrix} 5 & 5 \\ 2 & 2 \end{bmatrix},
\quad
k = 3.
\]
Keep in mind that operations inside brackets should be performed first, and as usual multiplication (both matrix and scalar) should be performed before addition (unless there are brackets to tell us otherwise).
\[
A(B + kC) =
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}
\left(
\begin{bmatrix} 0 & 2 \\ -1 & 1 \end{bmatrix}
+ 3 \begin{bmatrix} 5 & 5 \\ 2 & 2 \end{bmatrix}
\right)
=
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}
\left(
\begin{bmatrix} 0 & 2 \\ -1 & 1 \end{bmatrix}
+
\begin{bmatrix} 15 & 15 \\ 6 & 6 \end{bmatrix}
\right)
=
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}
\begin{bmatrix} 15 & 17 \\ 5 & 7 \end{bmatrix}
=
\begin{bmatrix} 25 & 31 \\ 65 & 79 \end{bmatrix}
\]
\[
AB + k(AC) =
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}
\begin{bmatrix} 0 & 2 \\ -1 & 1 \end{bmatrix}
+ 3 \left(
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}
\begin{bmatrix} 5 & 5 \\ 2 & 2 \end{bmatrix}
\right)
=
\begin{bmatrix} -2 & 4 \\ -4 & 10 \end{bmatrix}
+ 3 \begin{bmatrix} 9 & 9 \\ 23 & 23 \end{bmatrix}
=
\begin{bmatrix} -2 & 4 \\ -4 & 10 \end{bmatrix}
+
\begin{bmatrix} 27 & 27 \\ 69 & 69 \end{bmatrix}
=
\begin{bmatrix} 25 & 31 \\ 65 & 79 \end{bmatrix}
\]
Hopefully you’re not surprised that we got the same final result for both the formulas A(B+kC) and AB+k(AC). From our familiar rules of algebra, we expect to be able to multiply A inside the brackets in the first expression, and then rearrange the order of multiplication by A and by k. However, we need to be careful: our “familiar” rules of algebra come from operations with numbers, and matrix algebra involves operations with matrices: addition, subtraction, and two different kinds of multiplication, scalar and matrix. We should not blindly expect all of our “familiar” rules of algebra to apply to matrix operations. We’ve already seen that the matrix version of the familiar rule ba=ab is not true for matrix multiplication! In Subsection 4.5.1, we list the rules of algebra that are valid for matrix operations (which include most of our familiar rules from the algebra of numbers), and in that same subsection we verify that some of those rules are indeed valid for matrices.
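To see this identity numerically, here is a short Python sketch (assuming the NumPy library) that computes both sides for the matrices of this example; the check is specific to these matrices, not a proof of the general rule.

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 2], [-1, 1]])
C = np.array([[5, 5], [2, 2]])
k = 3

left = A @ (B + k * C)               # A(B + kC)
right = A @ B + k * (A @ C)          # AB + k(AC)

print(left)                          # [[25 31], [65 79]]
print(np.array_equal(left, right))   # True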

Subsection 4.4.4 Linear systems as matrix equations

Subsubsection 4.4.4.1 A first example

Example 4.4.7. A system as a matrix equation.
Let’s again consider the system from Task b of Discovery 4.5. To solve, we row reduce the associated augmented matrix to RREF as usual.
\[
\left[\begin{array}{ccc|c} 1 & 3 & -1 & 4 \\ 2 & 7 & -2 & 9 \end{array}\right]
\xrightarrow{\text{row reduce}}
\left[\begin{array}{ccc|c} 1 & 0 & -1 & 1 \\ 0 & 1 & 0 & 1 \end{array}\right]
\]
Variable x_3 is free, so assign a parameter x_3 = t. Then we can solve to obtain the general solution in parametric form,
\[
x_1 = 1 + t, \quad x_2 = 1, \quad x_3 = t.
\]
Let’s check a couple of particular solutions against the matrix equation Ax=b that represents the system. Recall that for this system, x is the 3×1 column vector that contains the variables x_1, x_2, x_3. The particular solutions associated to parameter values t=0 and t=-3 are
\[
t = 0\colon \quad x_1 = 1, \quad x_2 = 1, \quad x_3 = 0;
\]
and
\[
t = -3\colon \quad x_1 = -2, \quad x_2 = 1, \quad x_3 = -3.
\]
Let’s collect the t=0 solution values into the vector x and check Ax versus b:
\[
\text{LHS} = Ax =
\begin{bmatrix} 1 & 3 & -1 \\ 2 & 7 & -2 \end{bmatrix}
\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}
=
\begin{bmatrix} 1 + 3 + 0 \\ 2 + 7 + 0 \end{bmatrix}
=
\begin{bmatrix} 4 \\ 9 \end{bmatrix}
= b = \text{RHS}.
\]
So the solution to the linear system that we obtained by row reducing did indeed give us a vector solution x to the matrix equation Ax=b. Let’s similarly check the t=-3 solution, as in Task f of Discovery 4.5:
\[
\text{LHS} = Ax =
\begin{bmatrix} 1 & 3 & -1 \\ 2 & 7 & -2 \end{bmatrix}
\begin{bmatrix} -2 \\ 1 \\ -3 \end{bmatrix}
=
\begin{bmatrix} -2 + 3 + 3 \\ -4 + 7 + 6 \end{bmatrix}
=
\begin{bmatrix} 4 \\ 9 \end{bmatrix}
= b = \text{RHS}.
\]
Again, our system solution gives us a solution to the matrix equation.
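In fact, every value of the parameter t produces a solution of Ax=b. Here is a short Python sketch (assuming the NumPy library) that checks several parameter values at once.

import numpy as np

A = np.array([[1, 3, -1], [2, 7, -2]])
b = np.array([4, 9])

# General solution: x1 = 1 + t, x2 = 1, x3 = t.
for t in (0, -3, 10):
    x = np.array([1 + t, 1, t])
    print(t, A @ x, np.array_equal(A @ x, b))   # A x is always [4 9]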

Subsubsection 4.4.4.2 Expressing system solutions in vector form

We may use matrices and matrix algebra to express the solutions to systems as column vectors. In particular, we can expand solutions involving parameters into a linear combination of column vectors. Expressing solutions this way allows us to see the effect of each parameter on the solutions of the system.
Let’s re-examine the systems in the examples from Section 2.4 as matrix equations, and express their solutions in vector form.
Example 4.4.8. Solutions in vector form: one unique solution.
The system from Discovery 2.1 can be expressed in the form Ax=b for
\[
A = \begin{bmatrix} 2 & 0 & -2 \\ 1 & -1 & 0 \\ -4 & 2 & 3 \end{bmatrix},
\quad
x = \begin{bmatrix} x \\ y \\ z \end{bmatrix},
\quad
b = \begin{bmatrix} 4 \\ 3 \\ -7 \end{bmatrix}.
\]
We solved this system in Example 2.4.1 and determined that it had one unique solution, x=5, y=2, and z=3. In vector form, we write this solution as
\[
x = \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 5 \\ 2 \\ 3 \end{bmatrix}.
\]
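For a square coefficient matrix with a unique solution like this one, a machine can recover the solution vector directly. Here is a Python sketch assuming the NumPy library (np.linalg.solve performs the elimination for us), using the matrices as written above.

import numpy as np

A = np.array([[2, 0, -2], [1, -1, 0], [-4, 2, 3]])
b = np.array([4, 3, -7])

x = np.linalg.solve(A, b)   # the unique solution of Ax = b
print(x)                    # approximately [5. 2. 3.]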
Example 4.4.9. Solutions in vector form: an infinite number of solutions.
The system from Discovery 2.2 can be expressed in the form Ax=b for
\[
A = \begin{bmatrix} 3 & 6 & -5 \\ 2 & 4 & -3 \\ -3 & -6 & 6 \end{bmatrix},
\quad
x = \begin{bmatrix} x \\ y \\ z \end{bmatrix},
\quad
b = \begin{bmatrix} -9 \\ -5 \\ 12 \end{bmatrix}.
\]
We solved this system in Example 2.4.2, and determined that it had an infinite number of solutions. We expressed the general solution to the system using parametric equations
\[
x = 2 - 2t, \quad y = t, \quad z = 3.
\]
In vector form, we expand this solution as
\[
x = \begin{bmatrix} x \\ y \\ z \end{bmatrix}
= \begin{bmatrix} 2 - 2t \\ t \\ 3 \end{bmatrix}
= \begin{bmatrix} 2 + (-2)t \\ 0 + 1t \\ 3 + 0t \end{bmatrix}
= \begin{bmatrix} 2 \\ 0 \\ 3 \end{bmatrix} + \begin{bmatrix} -2t \\ 1t \\ 0t \end{bmatrix}
= \begin{bmatrix} 2 \\ 0 \\ 3 \end{bmatrix} + t \begin{bmatrix} -2 \\ 1 \\ 0 \end{bmatrix}.
\]
Notice how the solution is the sum of a constant part
\[
\begin{bmatrix} 2 \\ 0 \\ 3 \end{bmatrix}
\]
and a variable part
\[
t \begin{bmatrix} -2 \\ 1 \\ 0 \end{bmatrix}.
\]
Further notice how the constant part is a particular solution to the system — it is the “initial” particular solution associated to the parameter value t=0.
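A machine check of this vector form, as a Python sketch assuming the NumPy library: for any value of t, the constant part plus t times the variable part satisfies Ax=b.

import numpy as np

A = np.array([[3, 6, -5], [2, 4, -3], [-3, -6, 6]])
b = np.array([-9, -5, 12])

x0 = np.array([2, 0, 3])    # constant part (the t = 0 particular solution)
v = np.array([-2, 1, 0])    # variable part direction

for t in (0, 1, -4):
    x = x0 + t * v
    print(t, np.array_equal(A @ x, b))   # True for every value of t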
Example 4.4.10. Solutions in vector form: a homogeneous system.
The system from Discovery 2.4 is homogeneous, so it can be expressed in the form Ax=0 for
\[
A = \begin{bmatrix} 3 & 6 & 8 & 13 \\ 1 & 2 & 2 & 3 \\ 2 & 4 & 5 & 8 \end{bmatrix},
\quad
x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix},
\]
where 0 is the 3×1 zero column vector. We solved this system in Example 2.4.4, and determined that it had an infinite number of solutions. We expressed the general solution to the system using parametric equations
\[
x_1 = -2s + t, \quad x_2 = s, \quad x_3 = -2t, \quad x_4 = t.
\]
In vector form, we expand this solution as
\[
x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}
= \begin{bmatrix} -2s + t \\ s \\ -2t \\ t \end{bmatrix}
= \begin{bmatrix} -2s + 1t \\ 1s + 0t \\ 0s + (-2)t \\ 0s + 1t \end{bmatrix}
= \begin{bmatrix} -2s \\ s \\ 0s \\ 0s \end{bmatrix} + \begin{bmatrix} t \\ 0t \\ -2t \\ t \end{bmatrix}
= s \begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix} + t \begin{bmatrix} 1 \\ 0 \\ -2 \\ 1 \end{bmatrix}.
\]
This time, the solution is a sum of two variable parts,
\[
s \begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix}
\quad\text{and}\quad
t \begin{bmatrix} 1 \\ 0 \\ -2 \\ 1 \end{bmatrix},
\]
since there are two parameters. And there is no constant part to the general solution, because if we set both parameters to zero we obtain the trivial solution x=0. A homogeneous system will always work out this way. (So it would be more accurate to say that the general solution to the system from Discovery 2.4 has trivial constant part, instead of saying it has no constant part.)
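Here is the corresponding Python sketch (assuming the NumPy library) for the homogeneous case: every choice of the parameters s and t gives a vector that A sends to the zero vector.

import numpy as np

A = np.array([[3, 6, 8, 13], [1, 2, 2, 3], [2, 4, 5, 8]])

v1 = np.array([-2, 1, 0, 0])    # the s-direction
v2 = np.array([1, 0, -2, 1])    # the t-direction

for s, t in [(0, 0), (1, 0), (0, 1), (2, -3)]:
    x = s * v1 + t * v2
    print(s, t, A @ x)          # always the zero vector [0 0 0]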
Example 4.4.11. Solutions in vector form: patterns for homogeneous and nonhomogeneous systems with the same coefficient matrix.
In Example 2.4.5, we solved a homogeneous system Ax=0 with
\[
A = \begin{bmatrix} 3 & 6 & -5 \\ 2 & 4 & -3 \\ -3 & -6 & 6 \end{bmatrix},
\quad
x = \begin{bmatrix} x \\ y \\ z \end{bmatrix},
\]
and found an infinite number of solutions, with general solution expressed parametrically as
\[
x = -2t, \quad y = t, \quad z = 0.
\]
In vector form, we express this as
\[
x = \begin{bmatrix} x \\ y \\ z \end{bmatrix}
= \begin{bmatrix} -2t \\ t \\ 0 \end{bmatrix}
= \begin{bmatrix} -2t \\ 1t \\ 0t \end{bmatrix}
= t \begin{bmatrix} -2 \\ 1 \\ 0 \end{bmatrix}.
\]
This homogeneous system has the same coefficient matrix as in Example 4.4.9 above, so it is not surprising that their general solutions are related. In particular, notice that both systems have the same variable part, but that the nonhomogeneous system from Example 4.4.9 has a nontrivial constant part.

Subsection 4.4.5 Transpose

Example 4.4.12. Computing transposes.

Let’s compute some transposes.
\[
A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}
\qquad
B = \begin{bmatrix} 1 & 2 & 3 \\ 5 & 0 & 4 \\ 6 & 7 & 1 \end{bmatrix}
\qquad
C = \begin{bmatrix} 0 & 1 & 2 \\ 1 & 0 & 3 \\ 2 & 3 & 0 \end{bmatrix}
\]
\[
A^T = \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix}
\qquad
B^T = \begin{bmatrix} 1 & 5 & 6 \\ 2 & 0 & 7 \\ 3 & 4 & 1 \end{bmatrix}
\qquad
C^T = \begin{bmatrix} 0 & 1 & 2 \\ 1 & 0 & 3 \\ 2 & 3 & 0 \end{bmatrix}
\]
The matrix A is size 2×3, so when we turn rows into columns to compute A^T, we end up with a 3×2 result. Matrices B and C are square, so each of their transposes ends up being the same size as the original matrix. But also, the numbers for the entries in B and C were chosen to emphasize some patterns in the transposes of square matrices. In interchanging rows and columns in B, notice how entries to the upper right of the main diagonal move to the “mirror image” position in the lower left of the main diagonal, and vice versa. So for square matrices, we might think of the transpose as “reflecting” entries across the main diagonal, while entries right on the main diagonal stay in place. Finally, we might take this same “reflecting-in-the-diagonal” view of the transpose of C, except C has equal entries in corresponding “mirror image” positions on either side of the diagonal, and so we end up with C^T = C.
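A quick machine check of these transposes, as a Python sketch assuming the NumPy library (the .T attribute interchanges rows and columns):

import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])
B = np.array([[1, 2, 3], [5, 0, 4], [6, 7, 1]])
C = np.array([[0, 1, 2], [1, 0, 3], [2, 3, 0]])

print(A.T)                      # the 3x2 transpose of A
print(B.T)                      # rows and columns of B interchanged
print(np.array_equal(C.T, C))   # True: C is symmetric, so C^T = C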
The matrix A is size 2×3, so when we turn rows into columns to compute AT, we end up with a 3×2 result. Matrices B and C are square, so each of their transposes end up being the same size as the original matrix. But also, the numbers for the entries in B and C were chosen to emphasize some patterns in the transposes of square matrices. In interchanging rows and columns in B, notice how entries to the upper right of the main diagonal move to the “mirror image” position in the lower left of the main diagonal, and vice versa. So for square matrices, we might think of the transpose as “reflecting” entries in the main diagonal, while entries right on the main diagonal end up staying in place. Finally, we might consider this same “reflecting-in-the-diagonal” view of the transpose for C, except C has the same entries in corresponding “mirror image” entries on either side of the diagonal, and so we end up with CT=C.