Section 1.3 Concepts

Before we work to realize this goal, let's make sure we understand it.

Subsection 1.3.1 System solutions

Question 1.3.2.

What is a solution, and how do we verify solutions?

For the system consisting of the two lines in the \(xy\)-plane from Discovery 1.1 and Discovery 1.2,

\begin{equation*} \left\{\begin{array}{rcrcr} 2x \amp + \amp y \amp = \amp 3, \\ x \amp + \amp y \amp = \amp 1, \end{array}\right. \end{equation*}

the combination \(x=2\text{,}\) \(y=-1\) is a solution because both equations will be satisfied simultaneously with these values. We verify this by proper “LHS vs RHS” procedure:

\begin{align*} \text{First equation:} \quad \text{LHS} \amp = 2x + y = 2(2) + (-1) = 3 = \text{RHS},\\ \text{Second equation:} \quad \text{LHS} \amp = x + y = 2 + (-1) = 1 = \text{RHS}. \end{align*}

Since LHS=RHS in both equations when \(x=2\) and \(y=-1\text{,}\) we have a valid solution to the system. However, the combination \(x=1\text{,}\) \(y=1\) is not a solution to the system, because at least one of the equations will not be satisfied by these values. Again, we can verify this by proper “LHS vs RHS” procedure:

\begin{align*} \text{First equation:} \quad \text{LHS} \amp = 2x + y = 2(1) + 1 = 3 = \text{RHS},\\ \text{Second equation:} \quad \text{LHS} \amp = x + y = 1 + 1 = 2 \neq \text{RHS}. \end{align*}

While the first equation is satisfied, the second is not, and so this combination of variable values is not a valid solution.
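
If we want a computer to carry out the same substitution check, a short Python sketch along the following lines will do; the helper name satisfies_system and the hard-coded equations are purely illustrative and are not part of the “LHS vs RHS” procedure described above.

# Check whether a candidate pair (x, y) satisfies both equations of the system
# 2x + y = 3 and x + y = 1 by comparing left-hand and right-hand sides.
def satisfies_system(x, y):
    first = (2 * x + y == 3)   # LHS vs RHS for the first equation
    second = (x + y == 1)      # LHS vs RHS for the second equation
    return first and second

print(satisfies_system(2, -1))  # True: a valid solution of the system
print(satisfies_system(1, 1))   # False: the second equation fails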

Remark 1.3.3.

In the example above and in Discovery guide 1.1 we have seen that systems of linear equations have geometric interpretations: intersecting lines in the \(xy\)-plane, or intersecting planes in \(xyz\)-space. We can make a similar geometric interpretation for systems with more than \(3\) variables by imagining “hyperplanes” intersecting in higher-dimensional spaces, but unfortunately our three-dimensional brains cannot actually picture such a thing.

Question 1.3.4.

How many solutions can a system have?

We have seen in Discovery guide 1.1 that there are different possibilities for the number of solutions a particular system can have.

one unique solution
This is demonstrated by the system formed by the two lines from Discovery 1.1 and Discovery 1.2, as the two lines in these activities only intersected in a single point.
no solutions
This is demonstrated by the two lines in Discovery 1.3, as these two lines were parallel and did not intersect.
an infinite number of solutions
This is demonstrated by the system in Discovery 1.5, as any chosen value of \(z\) leads to a new solution by then solving for \(y\) and \(x\) in turn, and there are infinitely many different choices for the starting value \(z\text{.}\)

Question 1.3.5.

Are the possibilities considered above the only possibilities? Could there be a system that has exactly seven different solutions, say?

We will prove in Chapter 4 (Theorem 4.5.5) that for every system there are in fact only these three possibilities as encountered in Discovery guide 1.1.

In Discovery 1.4, you were asked to imagine the geometric configuration of three planes (each represented algebraically by a linear equation in three variables) to realize each of the three possibilities described above. Hopefully you can also imagine how it would be geometrically impossible for three planes to intersect in exactly seven points, no more and no fewer.

Question 1.3.6.

When a system has an infinite number of solutions, how can we express all possible solutions in a compact way? (We certainly cannot list all possible solutions.)

We can use one or more parameters to represent the choices that must be made to get to one particular solution, and then use formulas in those parameters to express the patterns of similarity between the different solutions. For example, in Task f of Discovery 1.5, there did not seem to be any restriction on what values the variable \(z\) could take and still be part of a solution to the system. So \(z\) was set to be an unspecified parameter \(t\text{,}\) and then \(y\) and \(x\) could be solved for in terms of this parameter. Choosing different values of \(t\) (such as \(t=2\) or \(t=-10\text{,}\) as in the previous parts of the referenced discovery activity) leads to different particular solutions for the system. The infinity of possible solutions to this system is now represented entirely by the infinity of choices available for the starting value of the parameter \(t\text{.}\)
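
As a concrete illustration of this parametric pattern (using a small made-up system, not the actual system from Discovery 1.5), consider

\begin{equation*} \left\{\begin{array}{rcrcrcr} x \amp + \amp y \amp + \amp z \amp = \amp 3, \\ \amp \amp y \amp - \amp z \amp = \amp 1. \end{array}\right. \end{equation*}

Setting \(z = t\) and then solving for \(y\) and \(x\) in turn yields

\begin{equation*} x = 2 - 2t, \qquad y = 1 + t, \qquad z = t, \end{equation*}

and every choice of the parameter \(t\) (for example, \(t = 0\) gives \(x=2\text{,}\) \(y=1\text{,}\) \(z=0\)) produces one particular solution of the system.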

Remark 1.3.7.

It may seem silly to trade one variable letter \(z\) for another letter \(t\text{.}\) But these letters represent different kinds of “unknown” quantities. Letter \(z\) represents a variable in an equation whose value we would like to determine, whereas letter \(t\) represents a parameter whose value we are free to choose. Remember that mathematical notation is a tool for communicating ideas: the letter \(t\) is a traditional choice for a parameter in mathematics, and so we switch from letter \(z\) to letter \(t\) to indicate to the reader (whether that is oneself or someone reading our work) this change in perspective from variable to parameter.

Subsection 1.3.2 Determining solutions

The first of two core ideas behind how we should go about determining the solutions of a system of equations is contained in Discovery 1.6. The left-hand side of a linear equation looks like a jumble of numbers and letters, but remember that it is just a formula for computing a single number, and that the result of this computation is proposed to always be equal to the number on the right-hand side of the equation. So if we algebraically manipulate or combine the left-hand sides of equations in the system, as long as we perform the same manipulation or combination of the corresponding right-hand sides of those equations, then the same variable values that solve the old system should solve the new system, and vice versa.
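
For example, consider again the system of the two lines from Discovery 1.1 and Discovery 1.2. If we add \(-1\) times the second equation to the first (performing this combination on both the left- and right-hand sides), the system becomes

\begin{equation*} \left\{\begin{array}{rcrcr} x \amp \amp \amp = \amp 2, \\ x \amp + \amp y \amp = \amp 1, \end{array}\right. \end{equation*}

and the solution \(x=2\text{,}\) \(y=-1\) of the original system still satisfies both new equations. Conversely, adding the second equation back onto the first recovers the original system, so any solution of the new system solves the old one as well.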

We need to be a little bit careful with the kinds of manipulations and combinations we allow ourselves so that our manipulations are nondestructive. For example, if we multiplied both left- and right-hand sides of an equation by \(0\text{,}\) we would lose all information the original equation contained, since we would be left with just \(0=0\text{.}\) In this case, new and old equations would not have the same solutions. This is why we restrict ourselves to the elementary row operations described in Section 1.2: to ensure our manipulations are always nondestructive.

The second core idea behind solving systems of equations is contained in Discovery 1.7. We should choose sequences of manipulations that will result in a simplified system for which it is easier to determine the solutions. Discovery 1.7 lays out a specific sequence of operations to do this; in the next discovery guide and corresponding chapter we will explore a systematic strategy for performing such simplification.
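
Continuing the example above (one possible sequence of manipulations, not the specific sequence laid out in Discovery 1.7): having produced the equation \(x = 2\text{,}\) we can add \(-1\) times it to the second equation to obtain the simplified system

\begin{equation*} \left\{\begin{array}{rcrcr} x \amp \amp \amp = \amp 2, \\ \amp \amp y \amp = \amp -1, \end{array}\right. \end{equation*}

from which the solution \(x = 2\text{,}\) \(y = -1\) can be read off directly.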

Finally, Discovery 1.7 contains another important idea: all of the crucial information in a system of equations is contained in its coefficients on the variables and the constant on the right-hand side of each equation. We can get rid of the clutter of all the variable letters by turning a system of equations into an augmented matrix. We can then perform manipulations of the equations in the system by just performing the corresponding operations on the coefficients in the matrix. You should keep in mind the structure of an augmented matrix: each row represents an equation, and each column (except the last) represents a variable. See the examples below on how row operations correspond to the algebra of equations.
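
For instance, the system of the two lines from Discovery 1.1 and Discovery 1.2 becomes the augmented matrix

\begin{equation*} \left\{\begin{array}{rcrcr} 2x \amp + \amp y \amp = \amp 3, \\ x \amp + \amp y \amp = \amp 1, \end{array}\right. \qquad\longrightarrow\qquad \left[\begin{array}{rr|r} 2 \amp 1 \amp 3 \\ 1 \amp 1 \amp 1 \end{array}\right], \end{equation*}

where each row records one equation, the first two columns record the coefficients on \(x\) and \(y\text{,}\) and the last column records the right-hand-side constants. The operation “add \(-1\) times the second equation to the first” from the earlier example then corresponds to adding \(-1\) times the second row to the first row of this matrix.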