\[
\begin{bmatrix} t_1 & t_2 & t_3 \end{bmatrix}
=
\begin{bmatrix} u_1 & u_2 & u_3 \end{bmatrix}
\begin{bmatrix} \; ? \; \end{bmatrix}
\]
Recall from Section 4.4.1 the definition of the dot product (yielding a scalar), and
from Section 4.3.4 the definition of multiplication of a vector by a scalar (yielding
a vector). We can use these to reverse-engineer the needed matrix, which is the tensor
product of two vectors and is denoted v ⊗ w, and so we have
\[
\mathbf{t} = (\mathbf{u} \cdot \mathbf{v})\,\mathbf{w}
=
\begin{bmatrix} u_1 & u_2 & u_3 \end{bmatrix}
\begin{bmatrix}
v_1 w_1 & v_1 w_2 & v_1 w_3 \\
v_2 w_1 & v_2 w_2 & v_2 w_3 \\
v_3 w_1 & v_3 w_2 & v_3 w_3
\end{bmatrix}
\]
If you multiply this out, you’ll see that the operations are, indeed, the same as those
specified in Equation 4.8. This also reveals the nature of this operation; it transforms
the vector u into one that is parallel to w:
\[
\mathbf{t} =
\begin{bmatrix}
(u_1 v_1 + u_2 v_2 + u_3 v_3)\,w_1 &
(u_1 v_1 + u_2 v_2 + u_3 v_3)\,w_2 &
(u_1 v_1 + u_2 v_2 + u_3 v_3)\,w_3
\end{bmatrix}
\]
For fixed vectors v and w, this operation is a linear transformation of u: it maps
vectors to vectors and preserves linear combinations. Its usefulness will be seen in
Section 4.7. It is also important to note that the order of the vectors matters: in
general, v ⊗ w ≠ w ⊗ v; rather, (w ⊗ v)^T = v ⊗ w.
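
To make the mechanics concrete, here is a minimal C++ sketch (not from the text; the Vec3 and Mat3 aliases are ad hoc helpers introduced only for this illustration) that builds the tensor product as the 3 × 3 matrix of entries v_i w_j and checks that multiplying the row vector u by it gives the same result as (u · v)w:

```cpp
#include <array>
#include <cstdio>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<Vec3, 3>;

// Dot product u . v.
double dot(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Tensor (outer) product: the matrix with entries (v ⊗ w)_ij = v_i * w_j.
Mat3 tensor(const Vec3& v, const Vec3& w) {
    Mat3 m{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            m[i][j] = v[i] * w[j];
    return m;
}

// Row vector times matrix: t_j = sum over i of u_i * m[i][j].
Vec3 mulRowVector(const Vec3& u, const Mat3& m) {
    Vec3 t{};
    for (int j = 0; j < 3; ++j)
        for (int i = 0; i < 3; ++i)
            t[j] += u[i] * m[i][j];
    return t;
}

int main() {
    const Vec3 u{1, 2, 3}, v{4, 5, 6}, w{7, 8, 9};

    const Vec3 t = mulRowVector(u, tensor(v, w)); // u (v ⊗ w)
    const double s = dot(u, v);                   // u . v

    // Both lines should print the same vector, a multiple of w.
    std::printf("u (v ⊗ w) = (%g, %g, %g)\n", t[0], t[1], t[2]);
    std::printf("(u . v) w = (%g, %g, %g)\n", s * w[0], s * w[1], s * w[2]);
}
```

With these sample values both lines print (224, 256, 288), a multiple of w, as the derivation above predicts; transposing tensor(w, v) would likewise reproduce tensor(v, w).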
4.4.4 The “Perp” Operator and the “Perp” Dot Product
The perp dot product is a surprisingly useful, but perhaps underused, operation on
vectors. In this section, we describe the perp operator and its properties and then go
on to show how this can be used to define the perp dot operation and describe its
properties.
The Perp Operator
We made use of the ⊥ (pronounced “perp”) operator earlier, without much in the
way of explanation. If we have a vector v, then v^⊥ is a vector perpendicular to it (see
Figure 4.3). Of course, in 2D there are actually two perpendicular vectors (of the same
length), one at 90° clockwise and one at 90° counterclockwise. However, since we
have adopted a right-handed convention, it makes sense to choose the perpendicular
vector 90° counterclockwise, as shown in the figure.
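
As a concrete illustration (a minimal sketch, not from the text, with Vec2 as a hypothetical helper type), the 90° counterclockwise choice means v^⊥ = (−v_y, v_x):

```cpp
#include <cstdio>

struct Vec2 { double x, y; };

// The perp operator: rotate v by 90 degrees counterclockwise,
// mapping (x, y) to (-y, x). The result has the same length as v.
Vec2 perp(const Vec2& v) {
    return { -v.y, v.x };
}

int main() {
    const Vec2 v{3, 4};
    const Vec2 p = perp(v);

    // Perpendicularity check: v . v_perp should be exactly 0.
    std::printf("v_perp = (%g, %g), v . v_perp = %g\n",
                p.x, p.y, v.x * p.x + v.y * p.y);
}
```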
Perpendicular vectors arise frequently in 2D geometry algorithms, and so it
makes sense to adopt this convenient notation. In terms of vectors, the operation
is intuitive and rather obvious. But what about the matrix representation? The vector