For square matrices we define a useful quantity called the determinant. We define the determinant of a \(1 \times 1\) matrix as the value of its only entry. For a \(2 \times 2\) matrix we define
\begin{equation*}
\det \left( \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix} \right) \overset{\text{ def } }{=} ad-bc\text{.}
\end{equation*}
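For instance, with an arbitrary choice of entries,
\begin{equation*}
\det \left( \begin{bmatrix} 2 \amp 3 \\ 1 \amp 4 \end{bmatrix} \right) = 2 \cdot 4 - 3 \cdot 1 = 5\text{.}
\end{equation*}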
Before defining the determinant for larger matrices, let us note the meaning of the determinant. An \(n \times n\) matrix gives a mapping of the \(n\)-dimensional Euclidean space \({\mathbb{R}}^n\) to itself. In particular, a \(2 \times 2\) matrix \(A\) is a mapping of the plane to itself. The determinant of \(A\) is the factor by which the area of objects changes. If we take the unit square (square of side 1) in the plane, then \(A\) takes the square to a parallelogram of area \(\lvert\det(A)\rvert\text{.}\) The sign of \(\det(A)\) denotes a change of orientation (negative if the axes get flipped). For example, let
\begin{equation*}
A = \begin{bmatrix} 1 \amp 1 \\ -1 \amp 1 \end{bmatrix}\text{.}
\end{equation*}
Then \(\det(A) = 1+1 = 2\text{.}\) Let us see where \(A\) sends the unit square with vertices \((0,0)\text{,}\) \((1,0)\text{,}\) \((0,1)\text{,}\) and \((1,1)\text{.}\) The point \((0,0)\) gets sent to \((0,0)\text{.}\)
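The remaining vertices are found by matrix-vector multiplication (a routine check):
\begin{equation*}
\begin{bmatrix} 1 \amp 1 \\ -1 \amp 1 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \qquad \begin{bmatrix} 1 \amp 1 \\ -1 \amp 1 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad \begin{bmatrix} 1 \amp 1 \\ -1 \amp 1 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 2 \\ 0 \end{bmatrix}\text{.}
\end{equation*}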
The image of the square is another square with vertices \((0,0)\text{,}\) \((1,-1)\text{,}\) \((1,1)\text{,}\) and \((2,0)\text{.}\) The image square has a side of length \(\sqrt{2}\) and is therefore of area 2. See Figure 6.6.1.
In general the image of a square is going to be a parallelogram. In high school geometry, you may have seen a formula for computing the area of a parallelogram with vertices \((0,0)\text{,}\) \((a,c)\text{,}\) \((b,d)\text{,}\) and \((a+b,c+d)\text{.}\) The area is
\begin{equation*}
\left\lvert \, \det \left( \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix} \right) \, \right\rvert = \lvert a d - b c \rvert\text{.}
\end{equation*}
The vertical lines above mean absolute value. The matrix \(\left[ \begin{matrix}a \amp b \\ c \amp d \end{matrix} \right]\) carries the unit square to the given parallelogram.
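As a quick check against the example above, the image parallelogram there had vertices \((0,0)\text{,}\) \((1,-1)\text{,}\) \((1,1)\text{,}\) and \((2,0)\text{,}\) so \(a = 1\text{,}\) \(c = -1\text{,}\) \(b = 1\text{,}\) \(d = 1\text{,}\) and
\begin{equation*}
\left\lvert \, \det \left( \begin{bmatrix} 1 \amp 1 \\ -1 \amp 1 \end{bmatrix} \right) \, \right\rvert = \lvert 1 \cdot 1 - 1 \cdot (-1) \rvert = 2\text{,}
\end{equation*}
which agrees with the area found above.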
There are a number of ways to define the determinant of an \(n \times n\) matrix. Let us use the so-called cofactor expansion. We define \(A_{ij}\) as the matrix \(A\) with the \(i^{\text{ th } }\) row and the \(j^{\text{ th } }\) column deleted. The determinant is then defined by expanding along the first row,
\begin{equation*}
\det(A) \overset{\text{ def } }{=} \sum_{j=1}^n {(-1)}^{1+j} a_{1j} \det(A_{1j})\text{,}
\end{equation*}
where the determinants of the smaller \((n-1) \times (n-1)\) matrices \(A_{1j}\) are computed in the same way, all the way down to the \(2 \times 2\) case defined above.
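To illustrate, consider a \(3 \times 3\) matrix with entries chosen only for illustration. Expanding along the first row,
\begin{equation*}
\det \left( \begin{bmatrix} 1 \amp 2 \amp 3 \\ 4 \amp 0 \amp 5 \\ 6 \amp 7 \amp 9 \end{bmatrix} \right) = 1 \det \left( \begin{bmatrix} 0 \amp 5 \\ 7 \amp 9 \end{bmatrix} \right) - 2 \det \left( \begin{bmatrix} 4 \amp 5 \\ 6 \amp 9 \end{bmatrix} \right) + 3 \det \left( \begin{bmatrix} 4 \amp 0 \\ 6 \amp 7 \end{bmatrix} \right) = 1(-35) - 2(6) + 3(28) = 37\text{.}
\end{equation*}
Note, for instance, that \(A_{11} = \left[ \begin{matrix}0 \amp 5 \\ 7 \amp 9 \end{matrix} \right]\) is obtained by deleting the first row and the first column.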
It is sometimes useful to expand along a row other than the first, for example, along a row that contains zeros. Notice that when expanding along the second row we start with a negative sign, since \({(-1)}^{2+1} = -1\text{.}\)
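Continuing with the same illustrative matrix, expanding along the second row is more convenient, as the middle term vanishes, and it produces the same value:
\begin{equation*}
\det \left( \begin{bmatrix} 1 \amp 2 \amp 3 \\ 4 \amp 0 \amp 5 \\ 6 \amp 7 \amp 9 \end{bmatrix} \right) = - 4 \det \left( \begin{bmatrix} 2 \amp 3 \\ 7 \amp 9 \end{bmatrix} \right) + 0 \det \left( \begin{bmatrix} 1 \amp 3 \\ 6 \amp 9 \end{bmatrix} \right) - 5 \det \left( \begin{bmatrix} 1 \amp 2 \\ 6 \amp 7 \end{bmatrix} \right) = -4(-3) + 0 - 5(-5) = 37\text{.}
\end{equation*}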
In computing the determinant, we alternately add and subtract the determinants of the submatrices \(A_{ij}\) multiplied by \(a_{ij}\) for a fixed \(i\) and all \(j\text{.}\) The numbers \({(-1)}^{i+j}\det(A_{ij})\) are called cofactors of the matrix. And that is why this method of computing the determinant is called the cofactor expansion.
Similarly, we do not need to expand along a row; we can expand along a column instead, and the result is the same number no matter which row or column we use. For any \(j\text{,}\)
\begin{equation*}
\det(A) = \sum_{i=1}^n {(-1)}^{i+j} a_{ij} \det(A_{ij})\text{.}
\end{equation*}
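For the same illustrative matrix as before, expanding along the first column (\(j = 1\)) again yields the same value:
\begin{equation*}
\det \left( \begin{bmatrix} 1 \amp 2 \amp 3 \\ 4 \amp 0 \amp 5 \\ 6 \amp 7 \amp 9 \end{bmatrix} \right) = 1 \det \left( \begin{bmatrix} 0 \amp 5 \\ 7 \amp 9 \end{bmatrix} \right) - 4 \det \left( \begin{bmatrix} 2 \amp 3 \\ 7 \amp 9 \end{bmatrix} \right) + 6 \det \left( \begin{bmatrix} 2 \amp 3 \\ 0 \amp 5 \end{bmatrix} \right) = 1(-35) - 4(-3) + 6(10) = 37\text{.}
\end{equation*}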
The determinant of a triangular matrix is very simple to compute. Consider, for instance, a lower triangular matrix whose first row is \(\left[ \begin{matrix}1 \amp 0 \amp 0 \end{matrix} \right]\text{.}\) If we expand along the first row, we find that the determinant is 1 times the determinant of the lower triangular matrix \(\left[ \begin{matrix}5 \amp 0 \\ 8 \amp 9 \end{matrix} \right]\text{.}\) So the determinant is just the product of the diagonal entries: \(1 \cdot 5 \cdot 9 = 45\text{.}\)
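Written out, with \(*\) standing for below-diagonal entries whose values do not matter (the terms containing them are multiplied by the zeros in the first row):
\begin{equation*}
\det \left( \begin{bmatrix} 1 \amp 0 \amp 0 \\ * \amp 5 \amp 0 \\ * \amp 8 \amp 9 \end{bmatrix} \right) = 1 \det \left( \begin{bmatrix} 5 \amp 0 \\ 8 \amp 9 \end{bmatrix} \right) = 1 ( 5 \cdot 9 - 0 \cdot 8 ) = 45\text{.}
\end{equation*}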
The determinant tells you how geometric objects scale. If \(B\) doubles the sizes of geometric objects and \(A\) triples them, then \(AB\) (which applies \(B\) to an object and then applies \(A\) to the result) should make sizes go up by a factor of \(6\text{.}\) This is true in general:
\begin{equation*}
\det(AB) = \det(A)\det(B)\text{.}
\end{equation*}
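As a quick numerical check, take the matrix \(A\) from the area example above and, say, a diagonal matrix \(B\) chosen only for the check:
\begin{equation*}
AB = \begin{bmatrix} 1 \amp 1 \\ -1 \amp 1 \end{bmatrix} \begin{bmatrix} 2 \amp 0 \\ 0 \amp 3 \end{bmatrix} = \begin{bmatrix} 2 \amp 3 \\ -2 \amp 3 \end{bmatrix}, \qquad \det(AB) = 6 + 6 = 12 = 2 \cdot 6 = \det(A)\det(B)\text{.}
\end{equation*}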
This property is one of the most useful, and it is often employed to actually compute determinants. A particularly interesting consequence concerns the existence of inverses. Take \(A\) and \(B\) to be inverses of each other, that is, \(AB=I\text{.}\) Then
\begin{equation*}
\det(A)\det(B) = \det(AB) = \det(I) = 1\text{.}
\end{equation*}
Neither \(\det(A)\) nor \(\det(B)\) can be zero. This fact is an extremely useful property of the determinant, and one which is used often in this book:
Theorem 6.6.3.
An \(n \times n\) matrix \(A\) is invertible if and only if \(\det (A) \not= 0\text{.}\)
So we know the determinant of \(A^{-1}\) without computing \(A^{-1}\) itself: since \(\det(A)\det(A^{-1}) = 1\text{,}\) we have \(\det(A^{-1}) = \frac{1}{\det(A)}\text{.}\)
Let us return to the formula for the inverse of a \(2 \times 2\) matrix:
\begin{equation*}
\begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix}^{-1} = \frac{1}{ad-bc} \begin{bmatrix} d \amp -b \\ -c \amp a \end{bmatrix}\text{.}
\end{equation*}
Notice the determinant of the matrix \(\left[ \begin{matrix}a\amp b\\c\amp d \end{matrix} \right]\) in the denominator of the fraction. The formula only works if the determinant is nonzero; otherwise we would be dividing by zero.
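For instance, applying the formula to the matrix \(A\) from the area example above, whose determinant is \(2\text{,}\)
\begin{equation*}
\begin{bmatrix} 1 \amp 1 \\ -1 \amp 1 \end{bmatrix}^{-1} = \frac{1}{2} \begin{bmatrix} 1 \amp -1 \\ 1 \amp 1 \end{bmatrix}\text{,}
\end{equation*}
and indeed \(\det\left(A^{-1}\right) = \frac{1}{4} + \frac{1}{4} = \frac{1}{2} = \frac{1}{\det(A)}\text{,}\) as claimed above.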
A common notation for the determinant is a pair of vertical lines:
\begin{equation*}
\begin{vmatrix} a \amp b \\ c \amp d \end{vmatrix} = \det \left( \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix} \right)\text{.}
\end{equation*}
Personally, I find this notation confusing as vertical lines usually mean a positive quantity, while determinants can be negative. Also think about how to write the absolute value of a determinant. This notation is not used in this book.
Exercises
1.
Compute the determinant of the following matrices:
Let \(A = LU\text{.}\) Compute \(\det(A)\) in a simple way, without computing \(A\) itself. Hint: First read off \(\det(L)\) and \(\det(U)\text{.}\)
5.
Consider the linear mapping from \({\mathbb R}^2\) to \({\mathbb R}^2\) given by the matrix \(A = \left[ \begin{matrix}1 \amp x \\ 2 \amp 1 \end{matrix} \right]\) for some number \(x\text{.}\) You wish to choose \(x\) so that \(A\) doubles the area of every geometric figure. What are the possibilities for \(x\text{?}\) (There are two answers.)
6.
Suppose \(A\) and \(S\) are \(n \times n\) matrices, and \(S\) is invertible. Suppose that \(\det(A) = 3\text{.}\) Compute \(\det(S^{-1}AS)\) and \(\det(SAS^{-1})\text{.}\) Justify your answer using the theorems in this section.
7.
Let \(A\) be an \(n \times n\) matrix such that \(\det(A)=1\text{.}\) Compute \(\det(x A)\) given a number \(x\text{.}\) Hint: First try computing \(\det(xI)\text{,}\) then note that \(xA = (xI)A\text{.}\)
8.
Compute the determinant of the following matrices: