A sequence of numbers is called a solution to a system of equations if it is a solution to every equation in the system. A system may have no solution at all, or it may have a unique solution, or it may have an infinite family of solutions.
For instance, a system such as $x + y = 2$, $x + y = 3$ has no solution, because the sum of two numbers cannot equal 2 and 3 simultaneously. A system that has no solution is called inconsistent; a system with at least one solution is called consistent.
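As a quick check of this claim, the short sketch below asks a computer algebra system for the solutions of that pair of equations and gets the empty set back. The use of sympy is my own choice; the text itself names no software.

```python
from sympy import symbols, linsolve

x, y = symbols("x y")

# The inconsistent pair discussed above: x + y cannot be 2 and 3 at once.
solutions = linsolve([x + y - 2, x + y - 3], x, y)
print(solutions)  # EmptySet -> the system is inconsistent
```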
Show that, for arbitrary values of $s$ and $t$, the given formulas define a solution. Simply substitute these values for each of the variables in each equation. Because both equations are satisfied, this is a solution for all choices of $s$ and $t$. The quantities $s$ and $t$ in this example are called parameters, and the set of solutions, described in this way, is said to be given in parametric form and is called the general solution to the system. It turns out that the solutions to every system of equations (if there are solutions) can be given in parametric form; that is, the variables $x_1$, $x_2$, $\ldots$ are given in terms of new independent variables $s$, $t$, etc.
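The formulas from the original example are not reproduced here, so the sketch below uses a hypothetical two-equation system and a parametric family of its solutions to show what "substitute and check" looks like in practice (sympy assumed).

```python
from sympy import symbols, simplify

s, t = symbols("s t")

# Hypothetical system: x1 - x2 + x3 = 1 and 2*x1 - 2*x2 + 2*x3 = 2,
# with the parametric family x1 = 1 + s - t, x2 = s, x3 = t.
x1, x2, x3 = 1 + s - t, s, t

# Both equations reduce to 0 = 0 for every choice of s and t.
print(simplify(x1 - x2 + x3 - 1))        # 0
print(simplify(2*x1 - 2*x2 + 2*x3 - 2))  # 0
```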
When only two variables are involved, the solutions to systems of linear equations can be described geometrically, because the graph of an equation $ax + by = c$ is a straight line if $a$ and $b$ are not both zero. Moreover, a point with coordinates $x_0$ and $y_0$ lies on the line if and only if $ax_0 + by_0 = c$; that is, when $(x_0, y_0)$ is a solution to the equation. Hence the solutions to a system of linear equations correspond to the points that lie on all the lines in question. In particular, if the system consists of just one equation, there must be infinitely many solutions because there are infinitely many points on a line.
If the system has two equations, there are three possibilities for the corresponding straight lines: the lines may intersect at a single point (a unique solution), they may be parallel and distinct (no solution), or they may coincide (infinitely many solutions). However, this graphical method has its limitations: when more than three variables are involved, no physical image of the graphs (called hyperplanes) is possible.
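The intersecting case and the parallel case can be seen numerically in the sketch below; numpy is assumed, and the two small systems are made up for illustration.

```python
import numpy as np

# Intersecting lines: x + y = 2 and x - y = 0 meet at a single point.
A = np.array([[1.0, 1.0], [1.0, -1.0]])
b = np.array([2.0, 0.0])
print(np.linalg.solve(A, b))  # [1. 1.] -> unique solution

# Parallel lines: x + y = 2 and x + y = 3 never meet.
A2 = np.array([[1.0, 1.0], [1.0, 1.0]])
b2 = np.array([2.0, 3.0])
try:
    np.linalg.solve(A2, b2)
except np.linalg.LinAlgError as err:
    print("no unique solution:", err)  # singular coefficient matrix
```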
Before describing the method, we introduce a concept that simplifies the computations involved: the augmented matrix of the system. Each row of this matrix consists of the coefficients of the variables, in order, from the corresponding equation, together with the constant term. For clarity, the constants are separated by a vertical line. The augmented matrix is just a different way of describing the system of equations.
The array consisting of the coefficients of the variables alone (with the column of constants omitted) is called the coefficient matrix of the system. The algebraic method for solving systems of linear equations is described as follows. Two systems are said to be equivalent if they have the same set of solutions. A system is solved by writing a series of systems, one after the other, each equivalent to the previous system. Each of these systems has the same set of solutions as the original one; the aim is to end up with a system that is easy to solve. Each system in the series is obtained from the preceding system by a simple manipulation chosen so that it does not change the set of solutions.
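As a concrete illustration, the coefficient matrix and the augmented matrix can be built as below. The small system is made up for this purpose, and numpy is assumed.

```python
import numpy as np

# Hypothetical system:
#    x + 2y -  z = 4
#   2x -  y +  z = 1
A = np.array([[1, 2, -1],
              [2, -1, 1]])        # coefficient matrix
b = np.array([4, 1])              # column of constants
aug = np.column_stack([A, b])     # augmented matrix [A | b]
print(aug)
```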
As an illustration, we solve a small system in this manner. At each stage, the corresponding augmented matrix is displayed. Beginning with the original system, we obtain an equivalent system at each stage, for instance by multiplying the second equation by a suitable nonzero constant.
Now this system is easy to solve! And because it is equivalent to the original system, it provides the solution to that system. Observe that, at each stage, a certain operation is performed on the system (and thus on the augmented matrix) to produce an equivalent system. The following operations, called elementary operations, can routinely be performed on systems of linear equations to produce equivalent systems:

I. Interchange two equations.

II. Multiply one equation by a nonzero number.

III. Add a multiple of one equation to a different equation.
Suppose that a sequence of elementary operations is performed on a system of linear equations. Then the resulting system has the same set of solutions as the original, so the two systems are equivalent.
Elementary operations performed on a system of equations produce corresponding manipulations of the rows of the augmented matrix. Thus, multiplying a row of a matrix by a number $k$ means multiplying every entry of the row by $k$. Adding one row to another row means adding each entry of that row to the corresponding entry of the other row.
Subtracting two rows is done similarly. Note that we regard two rows as equal when corresponding entries are the same.
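A minimal sketch of the three elementary row operations acting on an augmented matrix follows; numpy is assumed, and the matrix is the hypothetical one used in the earlier snippet, extended by a third row.

```python
import numpy as np

M = np.array([[1.0, 2.0, -1.0, 4.0],
              [2.0, -1.0, 1.0, 1.0],
              [0.0, 3.0, 2.0, 5.0]])

# I.  Interchange two rows.
M[[0, 1]] = M[[1, 0]]

# II. Multiply one row by a nonzero number.
M[2] = 3.0 * M[2]

# III. Add a multiple of one row to a different row.
M[1] = M[1] - 2.0 * M[0]

print(M)
```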
In hand calculations (and in computer programs) we manipulate the rows of the augmented matrix rather than the equations. For this reason we restate these elementary operations for matrices. In the case of three equations in three variables, the goal is to produce a matrix of the form
\[
\left[\begin{array}{ccc|c} 1 & 0 & 0 & * \\ 0 & 1 & 0 & * \\ 0 & 0 & 1 & * \end{array}\right].
\]
This does not always happen, as we will see in the next section. Here is an example in which it does happen. To create a 1 in the upper left corner we could multiply row 1 through by the reciprocal of its leading entry.
However, the 1 can be obtained without introducing fractions by subtracting row 2 from row 1. Next, subtract a suitable multiple of row 1 from row 2, and then subtract a suitable multiple of row 1 from row 3, to create zeros below the leading 1. This completes the work on column 1. We now use the 1 in the second position of the second row to clean up the second column, by subtracting a multiple of row 2 from row 1 and then adding a multiple of row 2 to row 3.
For convenience, both row operations are done in one step. Note that the last two manipulations did not affect the first column (the second row has a zero there), so our previous effort there has not been undermined.
Finally we clean up the third column. Begin by multiplying row 3 by a suitable nonzero constant to create a leading 1 there. Now subtract a multiple of row 3 from row 1, and then add a multiple of row 3 to row 2. The corresponding equations then give the unique solution directly. The matrix obtained in this example has a special form, which we now describe. A matrix is said to be in row-echelon form (and will be called a row-echelon matrix) if it satisfies the following three conditions:

1. All zero rows (consisting entirely of zeros) are at the bottom.

2. The first nonzero entry from the left in each nonzero row is a 1, called the leading 1 for that row.

3. Each leading 1 is to the right of all leading 1s in the rows above it.
A row-echelon matrix is said to be in reduced row-echelon form (and will be called a reduced row-echelon matrix) if, in addition, it satisfies the following condition:

4. Each leading 1 is the only nonzero entry in its column.

Entries above and to the right of the leading 1s are arbitrary, but all entries below and to the left of them are zero. Hence, a matrix in row-echelon form is in reduced form if, in addition, the entries directly above each leading 1 are all zero. Note that a matrix in row-echelon form can, with a few more row operations, be carried to reduced form (use row operations to create zeros above each leading 1 in succession, beginning from the right).
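As a sanity check, a computer algebra system can produce the reduced row-echelon form directly. The sketch below (sympy assumed; the augmented matrix is hypothetical) returns both the reduced matrix and the indices of the columns containing the leading 1s.

```python
from sympy import Matrix

# Hypothetical augmented matrix for a 3 x 3 system.
aug = Matrix([[1, 2, -1, 4],
              [2, -1, 1, 1],
              [0, 3, 2, 5]])

rref_matrix, pivot_columns = aug.rref()
print(rref_matrix)     # reduced row-echelon form
print(pivot_columns)   # columns containing the leading 1s
```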
In fact we can give a step-by-step procedure for actually finding a row-echelon matrix. Observe that while there are many sequences of row operations that will bring a matrix to row-echelon form, the one we use is systematic and is easy to program on a computer. Note that the algorithm deals with matrices in general, possibly with columns of zeros.

Step 1. If the matrix consists entirely of zeros, stop: it is already in row-echelon form.

Step 2. Otherwise, find the first column from the left containing a nonzero entry (call it $a$), and move the row containing that entry to the top position.
Step 3. Now multiply the new top row by $1/a$ to create a leading 1.

Step 4. By subtracting multiples of that row from rows below it, make each entry below the leading 1 zero.

This completes the first row, and all further row operations are carried out on the remaining rows.

Step 5. Repeat steps 1 through 4 on the matrix consisting of the remaining rows.

The process stops when either no rows remain at Step 5 or the remaining rows consist entirely of zeros.
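A minimal sketch of this procedure in code is given below; the function name, the tolerance, and the example matrix are my own choices, and numpy is assumed.

```python
import numpy as np

def row_echelon(matrix, tol=1e-12):
    """Carry a matrix to row-echelon form by the steps described above."""
    M = np.array(matrix, dtype=float)
    rows, cols = M.shape
    top = 0                                   # first row still being worked on
    for col in range(cols):
        # Step 2: find a nonzero entry in this column at or below 'top'.
        pivot = next((r for r in range(top, rows) if abs(M[r, col]) > tol), None)
        if pivot is None:
            continue                          # nothing but zeros below 'top'
        M[[top, pivot]] = M[[pivot, top]]     # move that row to the top position
        M[top] = M[top] / M[top, col]         # Step 3: create a leading 1
        for r in range(top + 1, rows):        # Step 4: zeros below the leading 1
            M[r] = M[r] - M[r, col] * M[top]
        top += 1                              # Step 5: repeat on the remaining rows
        if top == rows:
            break
    return M

print(row_echelon([[2, 4, -2, 2],
                   [4, 9, -3, 8],
                   [-2, -3, 7, 10]]))
```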
Observe that the gaussian algorithm is recursive: when the first leading 1 has been obtained, the procedure is repeated on the remaining rows of the matrix. This makes the algorithm easy to use on a computer. Note that the worked example above did not follow the algorithm to the letter; the reason for this is that doing so avoids fractions. However, the general pattern is clear: create the leading 1s from left to right, using each of them in turn to create zeros below it. Here is one more example: subtract a multiple of row 1 from row 2, and subtract a multiple of row 1 from row 3. The resulting row-echelon matrix corresponds to a system that is equivalent to the original one.
In other words, the two systems have the same solutions. But this last system clearly has no solution: its final equation has every coefficient equal to zero and a nonzero constant term, and no numbers can satisfy such an equation. Hence the original system has no solution. To solve a linear system, the augmented matrix is carried to reduced row-echelon form, and the variables corresponding to the leading 1s are called leading variables.
Because the matrix is in reduced form, each leading variable occurs in exactly one equation, so that equation can be solved to give a formula for the leading variable in terms of the nonleading variables. The nonleading variables are then assigned arbitrary values, called parameters. Every choice of these parameters leads to a solution to the system, and every solution arises in this way.
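For instance, the sketch below (sympy assumed; the underdetermined system is hypothetical) shows such a general solution, in which one variable is left as a parameter.

```python
from sympy import symbols, linsolve

x, y, z = symbols("x y z")

# Hypothetical system with more variables than independent equations:
#   x + y + z = 6
#       y - z = 0
solution = linsolve([x + y + z - 6, y - z], x, y, z)
print(solution)  # {(6 - 2*z, z, z)} -> z acts as the parameter
```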
This procedure works in general, and has come to be called gaussian elimination. There is a variant of this procedure, wherein the augmented matrix is carried only to row-echelon form. The nonleading variables are assigned as parameters as before. Then the last equation (corresponding to the row-echelon form) is used to solve for the last leading variable in terms of the parameters.
This last leading variable is then substituted into all the preceding equations. Then the second-last equation yields the second-last leading variable, which is also substituted back. The process continues to give the general solution. This procedure is called back-substitution, and it can be shown to be numerically more efficient, so it is important when solving very large systems.
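A minimal back-substitution sketch follows. It assumes the input is a square system already carried to row-echelon form with leading 1s on the diagonal; the function name is my own, and numpy is assumed.

```python
import numpy as np

def back_substitute(aug):
    """Solve a square system from a row-echelon augmented matrix [R | b]."""
    aug = np.array(aug, dtype=float)
    n = aug.shape[0]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # Leading variable i in terms of the variables already found.
        x[i] = aug[i, -1] - aug[i, i + 1:n] @ x[i + 1:]
    return x

# Row-echelon matrix produced by the earlier row_echelon example.
print(back_substitute([[1, 2, -1, 1],
                       [0, 1, 1, 4],
                       [0, 0, 1, 2]]))  # [-1.  2.  2.]
```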
It can be proven that the reduced row-echelon form of a matrix $A$ is uniquely determined by $A$. That is, no matter which series of row operations is used to carry $A$ to a reduced row-echelon matrix, the result will always be the same matrix. By contrast, this is not true for row-echelon matrices: different series of row operations can carry the same matrix to different row-echelon matrices. Indeed, a matrix can be carried by one row operation to one row-echelon matrix, and then by another row operation to the reduced row-echelon matrix. However, it is true that the number of leading 1s must be the same in each of these row-echelon matrices (this will be proved later).
Hence, this number depends only on $A$ and not on the way in which $A$ is carried to row-echelon form; it is called the rank of $A$. For example, if the reduction of $A$ to row-echelon form produces a matrix with two leading 1s, then rank $A = 2$. Suppose that rank $A = r$, where $A$ is a matrix with $m$ rows and $n$ columns. Then $r \leq m$ because the leading 1s lie in different rows, and $r \leq n$ because the leading 1s lie in different columns.
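A rank computation along these lines can be checked mechanically. In the sketch below (numpy assumed; the matrix is hypothetical) the reported rank equals the number of leading 1s the gaussian algorithm would produce.

```python
import numpy as np

# Hypothetical 3 x 4 matrix whose third row is the sum of the first two,
# so only two leading 1s survive row reduction.
A = np.array([[1, 2, -1, 4],
              [2, -1, 1, 1],
              [3, 1, 0, 5]])
print(np.linalg.matrix_rank(A))  # 2
```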
Moreover, the rank has a useful application to equations. Recall that a system of linear equations is called consistent if it has at least one solution. Suppose a system of $m$ equations in $n$ variables is consistent, and that the rank of the augmented matrix is $r$. The fact that the rank of the augmented matrix is $r$ means there are exactly $r$ leading variables, and hence exactly $n - r$ nonleading variables. These nonleading variables are all assigned as parameters in the gaussian algorithm, so the set of solutions involves exactly $n - r$ parameters.
Hence if $r < n$, there is at least one parameter, and so there are infinitely many solutions. If $r = n$, there are no parameters and so there is a unique solution.
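The contrast can be seen concretely in the sketch below (sympy assumed; both small systems are made up): the first has rank smaller than the number of variables and returns a parametric family, while the second has full rank and returns a single point.

```python
from sympy import symbols, linsolve

x, y, z = symbols("x y z")

# r < n: two independent equations in three variables -> one parameter.
print(linsolve([x + y + z - 6, x - y - 1], x, y, z))
# {(7/2 - z/2, 5/2 - z/2, z)}

# r = n: three independent equations in three variables -> unique solution.
print(linsolve([x + y - 3, y + z - 5, x + z - 4], x, y, z))
# {(1, 2, 3)}
```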