
MATHEMATICAL EQUATIONS
Note that the last equation gives the value of x_n. This may be substituted in the next-to-last equation to obtain x_{n-1}, and so on.
If the y's are literal as shown above, the process will yield the inverse transformation

X = A^{-1} Y

where A^{-1} is the inverse of the original matrix. If the y's are numerical, labor is saved by combining the values on the right side of each equation at each step of the algorithm.
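The back-substitution step described above can be sketched as follows; this is a minimal illustration for an upper-triangular system U x = y (no pivoting or singularity checks), not a production routine:

```python
# Back-substitution on an upper-triangular system U x = y: the last
# equation gives x_n directly, which is substituted into the
# next-to-last equation, and so on up the system.

def back_substitute(U, y):
    """Solve U x = y where U is upper triangular (list of row lists)."""
    n = len(U)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # subtract the already-known terms, then divide by the diagonal
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x

# Example: 2x_1 + x_2 = 4, 3x_2 = 6  →  x = [1.0, 2.0]
print(back_substitute([[2.0, 1.0], [0.0, 3.0]], [4.0, 6.0]))
```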
Since the determinant of a triangular matrix is equal to the product of the elements on the main diagonal,

|A| = a_11 a_22 a_33 ··· a_nn

where a_kk is the quantity the kth equation is divided by in the above Gauss algorithm. This is useful for evaluating determinants of large order, since it requires only of the order of n^3 operations.
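This determinant evaluation can be sketched directly: triangularize by elimination, track sign changes from row swaps, and multiply the pivots. This is an illustrative sketch assuming a square matrix given as a list of rows, with partial pivoting added for numerical stability:

```python
# Determinant by triangularization: reduce A to upper-triangular form
# with Gaussian elimination and multiply the diagonal pivots.  This
# costs on the order of n^3 operations, versus n! terms for cofactor
# expansion.  Each row swap flips the sign of the determinant.

def det_by_elimination(A):
    n = len(A)
    A = [row[:] for row in A]          # work on a copy
    det = 1.0
    for k in range(n):
        # partial pivoting: bring the largest |a_ik| to the diagonal
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if A[p][k] == 0.0:
            return 0.0                 # singular matrix
        if p != k:
            A[k], A[p] = A[p], A[k]
            det = -det                 # a row swap changes the sign
        det *= A[k][k]                 # accumulate the pivot a_kk
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
    return det

print(det_by_elimination([[2.0, 1.0], [1.0, 3.0]]))   # 2*3 - 1*1 = 5
```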
A matrix A may be viewed as consisting of column or row vectors. The largest number of linearly independent column vectors (which is the same as the largest number of linearly independent row vectors) is called the rank of the matrix, ρ(A). A set of vectors v_i is linearly independent if

Σ_i a_i v_i = 0

implies that a_i = 0 for i = 1, 2, ··· .
The rank is equal to the order of the largest nonvanishing determinant among the submatrices obtained by deleting rows and columns of the original matrix. Consider the coefficient matrix A and the augmented matrix B = [A Y] obtained by adjoining the column Y to A.
The equations AX = Y have a solution if and only if

ρ(A) = ρ(B)

in which case the equations are said to be consistent.
If ρ(A) < n = m, that is, |A| = 0, and if the equations are consistent, then the Gauss algorithm will terminate before n steps. That is, the coefficients of all the x_k's will be zero in the remaining n − ρ(A) equations. Therefore, among the x_k's there will be certain ones, n − ρ(A) in number, which may be assigned arbitrary values. Similarly, if m ≠ n, the Gauss algorithm will yield an equivalent set of ρ(A) equations that has the same solution as the original set. Again, n − ρ(A) (possibly zero) of the x_k's may be assigned arbitrary values.
It is not necessary to know beforehand whether the equations are inconsistent. If they are inconsistent, the algorithm will yield an “equation” in which the coefficients of the x_k's on the left side are zero but there is a nonzero combination of the y_k's on the right side. Since the right side of the “equation” may contain accumulated round-off errors, an analysis of the error propagation in the Gauss algorithm may be necessary to determine whether a small right side is caused by inconsistent equations or by round-off errors.
Eigenvectors and Eigenvalues
An eigenvector of the square matrix A of order n is a nonzero vector X such that

AX = λX
The scalar λ is called an eigenvalue of A, and X is called an eigenvector corresponding to, or associated with, λ. The eigenvalues may be determined from the characteristic equation

|A − λI| = 0

The corresponding eigenvectors X may then be found by solving

(A − λI)X = 0
The solution may be obtained by the Gauss algorithm. If the eigenvalues λ_i are distinct, an explicit solution may be obtained by taking a nontrivial row of cofactors from A − λ_iI. This is possible since the rank of A − λ_iI is n − 1 and, therefore, there exists a nonvanishing subdeterminant of order n − 1.
Note that eigenvectors are determined only to within
a multiplicative constant.
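As a numerical illustration (using NumPy, which solves the problem by QR iteration rather than by the characteristic polynomial), the defining relation AX = λX is easy to verify, and the scale-indeterminacy of eigenvectors is visible in the library's choice of unit-length columns:

```python
# Eigenvalues and eigenvectors of a small symmetric matrix, verifying
# the defining relation A x = lambda x for each eigenpair.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)   # columns are eigenvectors

for k in range(len(eigenvalues)):
    lam = eigenvalues[k]
    x = eigenvectors[:, k]
    # A x equals lambda x; x is determined only up to a scalar multiple,
    # and numpy happens to normalize each column to unit length
    assert np.allclose(A @ x, lam * x)

print(sorted(eigenvalues.real))   # the eigenvalues are 1 and 3
```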
Further Definitions and Properties
The matrix whose elements are a_{ji}* is called the conjugate transpose of A = [a_{ij}]; it is denoted by A†. The conjugate transpose of a product is

(AB)† = B†A†

If A = A† (A = −A†), A is said to be Hermitian (skew-Hermitian).
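The product rule for the conjugate transpose can be checked numerically; a small sketch using NumPy (assumed here), where the conjugate transpose of a matrix M is written M.conj().T:

```python
# Numerical check of the identity (AB)^dagger = B^dagger A^dagger
# for complex matrices.
import numpy as np

A = np.array([[1 + 2j, 3j],
              [2 + 0j, 1 - 1j]])
B = np.array([[0 + 0j, 1 - 1j],
              [2 + 1j, 1 + 0j]])

lhs = (A @ B).conj().T            # (AB)^dagger
rhs = B.conj().T @ A.conj().T     # B^dagger A^dagger

print(np.allclose(lhs, rhs))      # True
```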