20-100 Mechatronic Systems, Sensors, and Actuators
target, with a portion of the returned energy falling on the detector. The lateral position of the spot as
seen by the detector provides a quantitative measure of the unknown angle φ, permitting range
determination by the Law of Sines.
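The Law of Sines computation above can be sketched as follows. This is a minimal illustration, not the text's implementation: it assumes the two vantage points are separated by a known baseline b, and that the angles between the baseline and each line of sight to the target are measured.

```python
import math

def triangulation_range(baseline, alpha, beta):
    """Range from vantage point P1 to the target via the Law of Sines.

    baseline -- offset b between the two vantage points (any length unit)
    alpha    -- angle at P1 between baseline and line of sight (radians)
    beta     -- angle at P2 between baseline and line of sight (radians)
    """
    # The triangle's angles sum to pi, so the angle at the target is:
    gamma = math.pi - alpha - beta
    # Law of Sines: range / sin(beta) = baseline / sin(gamma)
    return baseline * math.sin(beta) / math.sin(gamma)
```

With a 0.1 m baseline and both measured angles at 60 degrees, the triangle is equilateral and the computed range equals the baseline, 0.1 m.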
The performance characteristics of triangulation systems are to some extent dependent on whether
the system is active or passive. Passive triangulation systems using conventional video cameras require
special ambient lighting conditions that must be artificially provided if the environment is too dark.
Furthermore, these systems suffer from a correspondence problem resulting from the difficulty in match-
ing points viewed by one image sensor with those viewed by the other. On the other hand, active
triangulation techniques employing only a single detector do not require special ambient lighting, nor
do they suffer from the correspondence problem. Active systems, however, can encounter instances of
no recorded strike because of specular reflectance or surface absorption of the light.
Limiting factors common to all triangulation sensors include reduced accuracy with increasing range,
angular measurement errors, and a missing parts (also known as shadowing) problem. Missing parts refers
to the scenario where particular portions of a scene can be observed by only one viewing location (P1 or
P2). This situation arises because of the offset distance between P1 and P2, causing partial occlusion of the
target (i.e., a point of interest is seen in one view but otherwise occluded or not present in the other). The
design of triangulation systems must include a tradeoff analysis of the offset: as this baseline measurement
increases, the range accuracy increases, but problems due to directional occlusion worsen.
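The baseline tradeoff can be made quantitative with the standard first-order error model for a parallel-camera geometry, under which range uncertainty grows with the square of range and shrinks in proportion to the baseline. The formula and parameter names below are a common textbook approximation assumed for illustration, not taken from this section:

```python
def range_uncertainty(Z, focal_px, baseline, disparity_err_px=0.5):
    """First-order stereo range error for parallel cameras.

    From Z = f * b / d, a small disparity error dd propagates as
    dZ ~= (Z**2 / (f * b)) * dd, so doubling the baseline b halves
    the range error at a given range Z.
    """
    return (Z ** 2) / (focal_px * baseline) * disparity_err_px
```

For example, at 10 m range with a 500-pixel focal length and half-pixel disparity noise, moving from a 0.2 m to a 0.4 m baseline cuts the predicted range error from 0.5 m to 0.25 m, at the cost of a larger occluded (missing parts) region.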
Stereo Disparity
The first of the triangulation schemes to be discussed, stereo disparity (also called stereo vision, binocular
vision, and stereopsis) is a passive ranging technique modeled after the biological counterpart. When a
three-dimensional object is viewed from two locations on a plane normal to the direction of vision, the
image as observed from one position is shifted laterally when viewed from the other. This displacement
of the image, known as disparity, is inversely proportional to the distance to the object. Humans sub-
consciously verge their eyes to bring objects of interest into rough registration (Burt et al., 1992). Hold
up a finger a few inches away from your face while focusing on a distant object and you can simultaneously
observe two displaced images in the near field. In refocusing on the finger, your eyes actually turn inward
slightly to where their respective optical axes converge at the finger instead of infinity.
Most implementations use a pair of identical video cameras (or a single camera with the ability to
move laterally) to generate the two disparity images required for stereoscopic ranging. The cameras are
typically aimed straight ahead viewing approximately the same scene, but (in simplistic cases anyway)
do not possess the capability to verge their center of vision on an observed point, as can human eyes.
This limitation makes placement of the cameras somewhat critical because stereo ranging can take place
only in the region where the fields of view overlap. In practice, analysis is performed over a selected range
of disparities along the Z axis on either side of a perpendicular plane of zero disparity called the horopter
(Figure 20.77). The selected image region in conjunction with this disparity range defines a three-dimen-
sional volume known as the stereo observation window (Burt et al., 1993).
More recently, strong interest has developed within the research community in dynamically
reconfigurable camera orientation (Figure 20.78), often termed active vision in the literature (Aloimonos
et al., 1987; Swain & Stricker, 1991; Wavering et al., 1993). The widespread acceptance of this terminology
is perhaps somewhat unfortunate in view of potential confusion with stereoscopic systems employing
an active illumination source (see section 4.1.3). Verging stereo, another term in use, is perhaps a more
appropriate choice. Mechanical verging is defined as the process of rotating one or both cameras about
the vertical axis in order to achieve zero disparity at some selected point in the scene (Burt et al., 1992).
There are four basic steps involved in the stereo ranging process (Poggio, 1984):
• A point in the image of one camera must be identified (Figure 20.79, left).
• The same point must be located in the image of the other camera (Figure 20.79, right).
• The lateral positions of both points must be measured with respect to a common reference.
• Range Z is then calculated from the disparity in the lateral measurements.
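The last two steps can be sketched for the common parallel-camera case. The function below is an illustrative assumption about the geometry (rectified cameras, lateral positions measured in pixels from a common image reference), not code from the text:

```python
def stereo_range(x_left_px, x_right_px, focal_px, baseline):
    """Steps 3 and 4 of the stereo ranging process.

    Lateral positions of the matched point in each image are measured
    against a common reference; their difference is the disparity d.
    For rectified parallel cameras, range follows as Z = f * b / d,
    showing disparity inversely proportional to distance.
    """
    disparity = x_left_px - x_right_px      # step 3: lateral displacement
    if disparity <= 0:
        raise ValueError("matched point must yield positive disparity")
    return focal_px * baseline / disparity  # step 4: range from disparity
```

For instance, a point seen at x = 320 pixels in the left image and x = 300 pixels in the right, with a 500-pixel focal length and 0.1 m baseline, gives a disparity of 20 pixels and a range of 2.5 m; halving the disparity would double the computed range.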