
2.4 Linear Independence and Basis
the relation $\alpha_1 x_1 + \alpha_2 x_2 + \cdots + \alpha_r x_r = 0$ implies $\alpha_1 = \alpha_2 = \cdots = \alpha_r = 0$.
The maximum number of linearly independent vectors is called the dimensionality of the vector space; it is perhaps not surprising that for $\mathbb{R}^n$ this number turns out to be $n$. Given $n$ linearly independent vectors in $\mathbb{R}^n$, any other vector can be obtained as a linear combination of these $n$ vectors. For this reason, $n$ linearly independent vectors of $\mathbb{R}^n$ are called a basis of $\mathbb{R}^n$. Clearly, a basis is not uniquely determined, since any group of $n$ linearly independent vectors represents a basis. However, particularly convenient choices are given by sets of normalized and mutually orthogonal (and thus independent) vectors; that is, we select an "orthonormal basis". In the case of $\mathbb{R}^2$, orthonormal bases are for instance $(1,0), (0,1)$, and also $(1/\sqrt{2}, 1/\sqrt{2})$, $(-1/\sqrt{2}, 1/\sqrt{2})$. In fact, the latter can be obtained from the former by a rotation, as shown in Fig. 2.2.
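As a quick numerical check (a Python/NumPy sketch, not part of the text), one can verify that the rotated pair is indeed orthonormal; the minus sign in the second vector is our reconstruction of the 45-degree rotation of the canonical basis:

```python
import numpy as np

# Rotated basis of R^2: (1,0), (0,1) rotated by 45 degrees.
v1 = np.array([1/np.sqrt(2), 1/np.sqrt(2)])
v2 = np.array([-1/np.sqrt(2), 1/np.sqrt(2)])

# Unit length and mutual orthogonality, up to floating-point tolerance.
print(np.isclose(v1 @ v1, 1.0))  # True
print(np.isclose(v2 @ v2, 1.0))  # True
print(np.isclose(v1 @ v2, 0.0))  # True
```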
The orthonormal basis $e_1, e_2, \ldots, e_n$, where $e_k = (0, 0, \ldots, 1, 0, \ldots, 0)$, that is, all components are zero except the $k$th component, which equals one, is called the canonical basis of $\mathbb{R}^n$. Note that it is very simple to obtain the coefficients in the linear combination of a vector of $\mathbb{R}^n$ in terms of the canonical basis: these coefficients are simply the components of the vector (see Exercise 3 below). The choice of a particular basis is mainly dictated either by computational convenience or by ease of interpretation. Given two vectors $x, y$ in $\mathbb{R}^n$, it is always possible to generate from $x$ a vector that is orthogonal to $y$. This goes as follows: we first define the vector $y' = y/\|y\|$ and the scalar $t = \langle y', x \rangle$, with which we form $x' = x - y' t$. The computed $x'$ is thus orthogonal to $y$. Indeed, using the properties of the inner product, $\langle y', x' \rangle = \langle y', x - y' t \rangle = \langle y', x \rangle - t \langle y', y' \rangle = t - t = 0$.
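The two-step construction above (normalize $y$, then subtract from $x$ its component along $y'$) can be sketched numerically; the function name `orthogonalize_against` and the sample vectors are ours, not the text's:

```python
import numpy as np

def orthogonalize_against(x, y):
    """Return x' = x - <y', x> y', which is orthogonal to y, where y' = y / ||y||."""
    y_unit = y / np.linalg.norm(y)   # y' = y / ||y||
    t = np.dot(y_unit, x)            # t = <y', x>
    return x - t * y_unit            # x' = x - y' t

x = np.array([3.0, 1.0, 2.0])
y = np.array([1.0, 1.0, 0.0])
x_prime = orthogonalize_against(x, y)
print(np.isclose(np.dot(y, x_prime), 0.0))  # True
```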
Determining an orthogonal basis of a given space is a major task. In $\mathbb{R}^2$ this is easy: given any vector $a = (a_1, a_2)$, the vector $b = (-a_2, a_1)$ (or $c = -b = (a_2, -a_1)$) is orthogonal to $a$; therefore the vectors $a, b$ readily define an orthogonal basis. In $\mathbb{R}^n$ the process is far less trivial. A stable way to proceed is to take $n$ linearly independent vectors $u_1, \ldots, u_n$ of $\mathbb{R}^n$, and then orthonormalize them in a sequential manner. More precisely, we first normalize $u_1$ to get $v_1$; we take $u_2$, orthogonalize it against $v_1$, and then normalize it to get $v_2$. We then continue with $u_3$, orthogonalize it against $v_1$ and $v_2$, and get $v_3$ after normalization, and so on. This iterative procedure is the famous Gram-Schmidt process.
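The sequential procedure just described can be sketched as follows (a minimal Python/NumPy illustration; we orthogonalize against one $v_j$ at a time, the so-called modified variant, which is the numerically stabler way to organize the same computation):

```python
import numpy as np

def gram_schmidt(U):
    """Orthonormalize the columns u_1, ..., u_n of U sequentially.

    Returns V whose columns v_1, ..., v_n are orthonormal; assumes the
    columns of U are linearly independent.
    """
    U = np.array(U, dtype=float)
    n = U.shape[1]
    V = np.zeros_like(U)
    for k in range(n):
        v = U[:, k].copy()
        # Orthogonalize u_k against the already computed v_1, ..., v_{k-1}.
        for j in range(k):
            v -= np.dot(V[:, j], v) * V[:, j]
        V[:, k] = v / np.linalg.norm(v)  # normalize to get v_k
    return V

U = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
V = gram_schmidt(U)
print(np.allclose(V.T @ V, np.eye(3)))  # True: columns are orthonormal
```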
Exercises and Problems
1. Given the two vectors $a = (1, -2)$ and $b = (3, 1)$: (i) verify that $a$ and $b$ are linearly independent. (ii) Compute a vector orthogonal to $a$. (iii) If possible, determine a scalar $k$ such that $c = ka$ and $a$ are linearly independent.
(i) In $\mathbb{R}^2$ vectors are either multiples of each other or they are independent. Since $b$ is not a multiple of $a$, we have that $a$ and $b$ are linearly independent. (ii) The vector $d = (2, 1)$ is orthogonal to $a$; indeed, $\langle a, d \rangle = (1)(2) + (-2)(1) = 0$. (iii) From the answer to (i), it follows that there is no such $c$: since $c = ka$ is a multiple of $a$, the vectors $c$ and $a$ are always linearly dependent.
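Parts (i) and (ii) of this exercise can be checked numerically (a sketch, assuming the signs $a = (1, -2)$, $d = (2, 1)$ reconstructed from the inner-product computation above):

```python
import numpy as np

a = np.array([1.0, -2.0])
b = np.array([3.0, 1.0])
d = np.array([2.0, 1.0])

# (i) a and b are independent iff the 2x2 matrix [a b] has nonzero determinant.
print(np.linalg.det(np.column_stack([a, b])) != 0)  # True
# (ii) d is orthogonal to a.
print(np.dot(a, d) == 0.0)  # True
```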
2. Obtain an orthonormal set from the two linearly independent vectors: $a = (2, 3)$ and $b = (1, 1)$.