Assume that each matrix below represents the augmented matrix of a linear system of equations in the variables \(x_1,x_2,\dotsc,x_n\) (where \(n\) is the number of variable columns in the matrix).
The boxed entries could be converted into leading ones through row operations. Therefore, both variables are constrained and there are no free variables. (This implies the underlying system has one unique solution.)
The boxed entries could be converted into leading ones through row operations. These two variables are constrained by the respective equations in which they are the leading variables, which leaves \(x_2\) as a free variable. (This implies the underlying system has an infinite number of solutions.)
Variable \(x_2\) already has a leading one in its column, and the other two boxed entries could be converted into leading ones through row operations. These three variables are constrained by the respective equations in which they are the leading variables, which leaves \(x_1\) and \(x_3\) as free variables. (This implies the underlying system has an infinite number of solutions.)
The leftmost column that is not all zeros is the first column, so the immediate goal here should be to obtain a leading one in the first column (Step 1.a of Procedure 2.3.2). Any one of the operations \(\frac{1}{6} R_1\text{,}\) \(\frac{1}{3} R_2\text{,}\) \(\frac{1}{2} R_3\text{,}\) or \(R_2 - R_3\) may be employed to do so. However, it is usually preferable (though not necessary) to avoid introducing fractions in the matrix, so \(R_2 - R_3\) may be the best choice:
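The arithmetic behind each option can be checked directly. The first-column entries \(6\), \(3\), and \(2\) below are read off from the scaling operations listed above (the full rows are not reproduced here):

```python
from fractions import Fraction

# First-column entries read off from the scaling options above:
# 6 in row 1, 3 in row 2, 2 in row 3.
assert Fraction(1, 6) * 6 == 1   # (1/6) R_1 produces a leading one
assert Fraction(1, 3) * 3 == 1   # (1/3) R_2 produces a leading one
assert Fraction(1, 2) * 2 == 1   # (1/2) R_3 produces a leading one
assert 3 - 2 == 1                # R_2 - R_3 does too, with no fractions
```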
The leftmost column that is not all zeros is the first column, but we already have a leading one in the first column, so the immediate goal should be to move it to the first row (Step 1.b of Procedure 2.3.2). Therefore, our choice of operation should be \(R_1 \leftrightarrow R_3\text{:}\)
The leftmost column that is not all zeros is the first column, but we already have a leading one in the first column, and that leading one is already in the first row. So the immediate goal should be to use that leading one to eliminate the other nonzero entries in the first column (Step 1.c of Procedure 2.3.2). Focusing on the \(5\) in the first column, combining it with \(-5\) times the leading one will cancel to zero. So our choice of operation should be \(R_3 - 5 R_1\text{:}\)
Note. It is acceptable (and advisable!) to combine multiple operations in one calculation step when carrying out an “elimination” step in Procedure 2.3.2 (in this case, Step 1.c). That is, normally we would simultaneously perform our chosen operation \(R_3 - 5 R_1\) as well as the operation \(R_4 + 3 R_1\) in order to eliminate both of the other nonzero entries in the first column.
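In code, the combined elimination might be sketched as follows. The matrix below is hypothetical: only its first column \((1, 0, 5, -3)\) matches the situation described above, and the remaining entries are made up for illustration.

```python
def row_combine(M, i, j, c):
    """Return a copy of M with row i replaced by R_i + c * R_j."""
    N = [row[:] for row in M]
    N[i] = [a + c * b for a, b in zip(M[i], M[j])]
    return N

# Hypothetical matrix; only the first column matches the text.
A = [[ 1, 2, -1, 3],
     [ 0, 4,  2, 1],
     [ 5, 0,  7, 2],
     [-3, 1,  0, 6]]

B = row_combine(A, 2, 0, -5)  # R_3 - 5 R_1  (rows are 0-indexed here)
B = row_combine(B, 3, 0,  3)  # R_4 + 3 R_1
assert B[2][0] == 0 and B[3][0] == 0  # both first-column entries eliminated
```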
This matrix already exhibits all the goals of Step 1 of Procedure 2.3.2, so we move on to Step 2. Ignoring the first row completely, the leftmost column that is not all zeros is the second column, and so the immediate goal should be to obtain a leading one in that column (Step 2.a of Procedure 2.3.2). We can do so using either operation \(\frac{1}{3} R_2\) or operation \(-\frac{1}{6} R_3\text{.}\) Since \(\frac{1}{3}\) is a simpler fraction and creates the leading one in the second row instead of the third, we’ll choose that option:
Warning. It may be tempting to attempt to avoid fractions altogether by using the \(-2\) above that leading \(3\) to obtain the leading one. But notice what happens if we instead choose operation \(R_2 + R_1\text{:}\)
Our goal was to obtain a leading one in the second column. While we have succeeded in obtaining a one in the second column, it is not a leading one, because a row operation must be carried through the entire row, including any “leading” zeros. The operation \(R_2 + R_1\) would reverse progress already made in reducing this matrix: in particular, it reverses the progress of Step 1.c of Procedure 2.3.2 by “un-eliminating” an entry in the first column below that leftmost leading one. It is precisely for this reason that Step 2.a of Procedure 2.3.2 advises not to use the first row to attempt to obtain a second leading one.
This matrix already exhibits all the goals of Step 1 of Procedure 2.3.2, so we move on to Step 2. Ignoring the first row completely, the leftmost column that is not all zeros is the third column, and we may obtain a leading one in that column (Step 2.a of Procedure 2.3.2) with either operation \(-\frac{1}{5} R_2\) or \(-\frac{1}{2} R_3\text{.}\) We choose the latter operation as \(\frac{1}{2}\) is the simpler fraction:
This matrix already exhibits all the goals of Step 1 of Procedure 2.3.2, so we move on to Step 2. Ignoring the first row completely, the leftmost column that is not all zeros is the second column. We already have a leading one in that column but it’s in the third row, so we move it up to the second row (Step 2.b of Procedure 2.3.2) with operation \(R_2 \leftrightarrow R_3\text{:}\)
This matrix already exhibits all the goals of Step 1 of Procedure 2.3.2, so we move on to Step 2. Ignoring the first row completely, the leftmost column that is not all zeros is the third column. We already have a leading one in that column and it is already in the second row, so we move on to using that leading one to eliminate the other entries in its column (Step 2.c of Procedure 2.3.2). Working from top-to-bottom in that column, we have a \(-3\) entry in the first row that can be eliminated by combining it with \(+3\) times the leading one in the second row:
Note. It is acceptable (and advisable!) to combine multiple operations in one calculation step when carrying out an “elimination” step in Procedure 2.3.2 (in this case, Step 2.c). That is, normally we would simultaneously perform our chosen operation \(R_1 + 3 R_2\) as well as the operation \(R_4 - 2 R_2\) in order to eliminate both of the other nonzero entries in the third column.
This matrix already exhibits all the goals of both Step 1 and Step 2 of Procedure 2.3.2, so we move on to Step 3. Ignoring the first two rows completely, the leftmost column that is not all zeros is the fourth column, so our goal is to obtain a leading one in that column (Step 3.a of Procedure 2.3.2). We could do this by introducing fractions (using either \(-\frac{1}{7} R_3\) or \(\frac{1}{2} R_4\)), but another option avoiding fractions is \(R_3 + 4 R_4\text{:}\)
Warning. Step 3.a of Procedure 2.3.2 advises us to avoid attempting to use the first two rows to obtain the next leading one, for good reason. In this matrix, we should not try to use the \(1\) in the first row, fourth column, either to obtain a leading one in the third or fourth row or to eliminate the leading entries in those rows.
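The fraction-free option can be verified arithmetically. The fourth-column entries \(-7\) (row three) and \(2\) (row four) are read off from the scaling options \(-\frac{1}{7} R_3\) and \(\frac{1}{2} R_4\) mentioned above:

```python
# Fourth-column entries read off from the scaling options:
# -7 in row 3 and 2 in row 4.
assert -7 + 4 * 2 == 1   # R_3 + 4 R_4 yields a leading one, no fractions
```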
This matrix is already quite simplified. In fact, it is only one operation away from RREF, as all that remains is to use the leading one in the fifth column to eliminate the entry above it:
The matrices below are all already in RREF. For each, determine whether the associated system of equations is consistent. If it is, carry out the following additional tasks.
Write out that system in variables of your choosing.
If the system has an infinite number of solutions, use your expression of the general solution to determine at least three different specific solutions.
Note. If the matrix does not have a vertical line separating the last column, interpret the matrix to be the coefficient matrix of a homogeneous system.
The system is consistent but there are no free variables, hence there is one unique solution. The associated system of equations is identical to that solution; using variables \(x_1,x_2,x_3,x_4\) we have
The system is consistent but there are no free variables, hence there is one unique solution. The associated system of equations is identical to that solution if we ignore the irrelevant \(0 = 0\) equation; using variables \(x_1,x_2,x_3\) we have
There is no leading one in the third column of the matrix, hence variable \(z\) is free. Assigning parameter \(z = t\) and isolating the two constrained variables in the equations above yields general solution
\begin{equation*}
\begin{array}{rcrcr}
x \amp = \amp -8 \amp + \amp 4 t \text{,} \\
y \amp = \amp 7 \amp - \amp 2 t \text{,} \\
z \amp = \amp \amp \amp t \text{.}
\end{array}
\end{equation*}
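As a quick check, rearranging the first two lines of this solution recovers the constraint equations \(x - 4z = -8\) and \(y + 2z = 7\text{,}\) and substituting a few parameter values produces specific solutions:

```python
specific = []
for t in [0, 1, 2]:
    x, y, z = -8 + 4*t, 7 - 2*t, t
    assert x - 4*z == -8   # first constraint equation
    assert y + 2*z == 7    # second constraint equation
    specific.append((x, y, z))
# specific is [(-8, 7, 0), (-4, 5, 1), (0, 3, 2)]
```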
There is no leading one in the second column of the matrix, hence variable \(y\) is free. Assigning parameter \(y = t\) and isolating the two constrained variables in the equations above yields general solution
\begin{equation*}
\begin{array}{rcc}
x \amp = \amp -1 - 8 t \text{,} \\
y \amp = \amp t \text{,} \\
z \amp = \amp 3 \text{.}
\end{array}
\end{equation*}
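Again, rearranging the solution recovers the constraint equations \(x + 8y = -1\) and \(z = 3\text{,}\) and a few parameter values give specific solutions:

```python
specific = []
for t in [0, 1, -1]:
    x, y, z = -1 - 8*t, t, 3
    assert x + 8*y == -1   # constraint from the first row
    assert z == 3          # constraint from the last row
    specific.append((x, y, z))
# specific is [(-1, 0, 3), (-9, 1, 3), (7, -1, 3)]
```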
The third and fourth columns in the matrix do not contain leading ones, hence variables \(x_3\) and \(x_4\) are free. Assigning parameters \(x_3 = s\) and \(x_4 = t\) and isolating the three constrained variables in the equations above leads to general solution
\begin{equation*}
\begin{array}{rcrcr}
x_1 \amp = \amp -7 s \amp + \amp 6 t \text{,} \\
x_2 \amp = \amp -9 s \amp + \amp t \text{,} \\
x_3 \amp = \amp s \text{,} \\
x_4 \amp = \amp \amp \amp t \text{,} \\
x_5 \amp = \amp 0 \text{.}
\end{array}
\end{equation*}
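Rearranging this two-parameter solution recovers the homogeneous constraint equations \(x_1 + 7x_3 - 6x_4 = 0\text{,}\) \(x_2 + 9x_3 - x_4 = 0\text{,}\) and \(x_5 = 0\text{,}\) which can be checked over a grid of parameter values:

```python
for s in [0, 1, -2]:
    for t in [0, 3, -1]:
        x1, x2, x3, x4, x5 = -7*s + 6*t, -9*s + t, s, t, 0
        assert x1 + 7*x3 - 6*x4 == 0   # first constraint
        assert x2 + 9*x3 - x4 == 0     # second constraint
        assert x5 == 0                 # third constraint
```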
The system is consistent and has one free variable, \(x_3\text{.}\) Remember that the matrix above is only the coefficient matrix of a homogeneous system; the corresponding augmented matrix is
Determine all possible matrices that have two rows, two columns, and are in RREF. (In your example matrices, each entry will fall into one of three categories: it must be one, it must be zero, or it could be any value. Write an asterisk \(\ast\) for entries that could be any value.)
Without writing all the possibilities down, can you determine how many different possible forms of RREF matrix with four rows and four columns there are?
Based on your answers so far, can you come up with a pattern for the number of different possible forms of RREF matrix with \(n\) rows and \(n\) columns? How does the pattern change for matrices with more rows than columns? Or for matrices with fewer rows than columns?
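One way to explore the pattern in code: if we take for granted that each RREF form is completely determined by its set of pivot columns (every other entry in a pivot row becomes a \(\ast\)), then counting forms reduces to counting admissible pivot-column subsets. This is an exploratory sketch under that assumption, not the pencil-and-paper argument the task asks for:

```python
from math import comb

def rref_form_count(rows, cols):
    # Assumes each RREF form is determined by its set of pivot columns;
    # a matrix with the given number of rows carries at most that many pivots.
    return sum(comb(cols, k) for k in range(min(rows, cols) + 1))

assert rref_form_count(2, 2) == 4    # matches the two-row, two-column case
assert rref_form_count(4, 4) == 16
```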
32. Existence of a unique solution for two variables.
(a)
Suppose \(a, b, c, d\) are constant values so that \(a d - b c\) is not equal to zero. Use Proposition 2.5.6 to demonstrate that every system that has coefficient matrix
\begin{equation*}
\begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix}
\end{equation*}
has one unique solution.
Now consider the geometric interpretation of Task a. Suppose we have the system of equations
\begin{equation*}
\begin{sysofeqns}{rcrcr}
a x \amp + \amp b y \amp = \amp k_1 \\
c x \amp + \amp d y \amp = \amp k_2
\end{sysofeqns}\text{.}
\end{equation*}
The two cases of Task b.ii and Task b.iii can be summarized into one case as “not the case of a single unique solution.” In each of those two cases, what does the relationship between the slopes of the two lines imply about the value of \(a d - b c\text{?}\)
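The slope relationship can be checked with a small numeric example (the coefficients below are hypothetical): when \(b\) and \(d\) are nonzero, equal slopes \(-a/b = -c/d\) rearranges to \(a d - b c = 0\text{.}\)

```python
from fractions import Fraction

# Hypothetical coefficients for two parallel lines:
# 2x + 3y = k1 and 4x + 6y = k2.
a, b, c, d = 2, 3, 4, 6
assert Fraction(-a, b) == Fraction(-c, d)   # equal slopes
assert a * d - b * c == 0                   # so the determinant vanishes
```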
\begin{align*}
\amp \begin{bmatrix} a \amp b \\ 0 \amp c \end{bmatrix} \amp
\amp \begin{bmatrix} a \amp b \amp c \\ 0 \amp d \amp e \\ 0 \amp 0 \amp f \end{bmatrix} \amp
\amp \begin{bmatrix} a \amp b \amp c \amp d \\ 0 \amp e \amp f \amp g \\ 0 \amp 0 \amp h \amp i \\ 0 \amp 0 \amp 0 \amp j \end{bmatrix}
\end{align*}
Using Proposition 2.5.6 for inspiration, determine a simple condition or set of conditions by which one can predict, without performing any row operations, whether a homogeneous system of equations with an upper triangular coefficient matrix will have one unique solution or will have an infinite number of solutions.
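One candidate condition can be stress-tested by brute force: over every \(3 \times 3\) upper triangular matrix with entries in \(\{-1, 0, 1\}\text{,}\) the homogeneous system has one unique solution exactly when no diagonal entry is zero. The rank computation below is a from-scratch Gaussian elimination over the rationals, used only for verification; this is an exploratory check, not the requested argument.

```python
import itertools
from fractions import Fraction

def rank(M):
    # Gaussian elimination over the rationals.
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        A[r] = [x / A[r][c] for x in A[r]]
        for i in range(rows):
            if i != r and A[i][c] != 0:
                A[i] = [x - A[i][c] * y for x, y in zip(A[i], A[r])]
        r += 1
    return r

# Enumerate all 3x3 upper triangular matrices with entries in {-1, 0, 1}.
n = 3
upper_positions = [(i, j) for i in range(n) for j in range(i, n)]
for values in itertools.product([-1, 0, 1], repeat=len(upper_positions)):
    U = [[0] * n for _ in range(n)]
    for (i, j), v in zip(upper_positions, values):
        U[i][j] = v
    diag_nonzero = all(U[i][i] != 0 for i in range(n))
    # Unique solution of Ux = 0 <=> full rank <=> no zero on the diagonal.
    assert (rank(U) == n) == diag_nonzero
```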