
They used a 3D laser scanner to capture range images
and associated color images of the back of the hand. The
users were instructed to place their palm flat against a
wall with uniform color and to remove any rings. For
each of the 132 subjects, four images were captured in
two recording sessions held one week apart. An
additional session was performed a few months later
with 86 of the original subjects and 89 new subjects.
The authors used the color images to perform
segmentation of the hand from the background.
A combination of skin-color detection and edge
detection was used. The resulting hand segmentation
is used to extract the hand silhouette, from which the
boundaries of the index, middle, and ring fingers are
detected. Then, for each detected finger, a mask is
constructed and an associated range image, normalized
with respect to pose, is created.
For each valid pixel of the finger mask in the output
image, a
▶ surface curvature estimate is computed
from the corresponding range data. The principal
curvatures are estimated first by locally fitting a
bicubic Monge patch to the range data, to cope with
the noise in the data. However, the number of pixels in the
neighborhood of each point that are used to fit the
patch has to be carefully selected, otherwise fine detail
on the surface may be lost. The principal curvatures are
subsequently used to compute a shape index, which is
a single measure of curvature.
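One common formulation of such a shape index maps the two principal curvatures to a single value; the sketch below uses the widely cited [0, 1] variant, which is an assumption and not necessarily the exact formulation the authors used:

```python
import numpy as np

def shape_index(k1, k2):
    """Map principal curvatures (k1 >= k2) to a single curvature measure.

    Uses the common [0, 1] formulation; umbilic flat points
    (k1 == k2 == 0) map to 0.5. Which end denotes caps vs. cups
    depends on the sign convention chosen for the surface normal.
    """
    k1 = np.asarray(k1, dtype=float)
    k2 = np.asarray(k2, dtype=float)
    # arctan2 handles the umbilic case k1 == k2 without dividing by zero
    return 0.5 - (1.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)
```

Applied per valid pixel of the finger mask, this turns each normalized range image into a shape-index image suitable for matching.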
The similarity between two finger surfaces may be
computed by estimating the normalized correlation
coefficient among the associated shape index images.
The average of the similarity scores over the index,
middle, and ring fingers yielded the best results when
used for classification.
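This matching step can be sketched as follows (a minimal illustration with hypothetical helper names; only pixels valid in both shape-index images are assumed to contribute):

```python
import numpy as np

def ncc(a, b):
    """Normalized correlation coefficient between two shape-index images.

    Pixels that are NaN (outside the finger mask) in either image
    are excluded from the computation.
    """
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    valid = ~(np.isnan(a) | np.isnan(b))
    a, b = a[valid], b[valid]
    a = a - a.mean()
    b = b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

def finger_similarity(probe_fingers, gallery_fingers):
    """Average the per-finger scores over the three matched fingers."""
    return float(np.mean([ncc(p, g)
                          for p, g in zip(probe_fingers, gallery_fingers)]))
```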
Recognition experiments demonstrated a 95%
accuracy, falling to 85% when probe and gallery
images are recorded more than one week apart.
This performance was similar to that reported by a
2D face recognition experiment. The authors coped
with this decline in performance due to time
lapse by matching multiple probe images against
multiple gallery images of the same subject. Similarly,
the equal error rate obtained in verification experiments
is about 9% when a single probe image is matched
against a single gallery image and falls to 5.5% when
multiple probe and gallery images are matched.
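The entry does not state how the multiple pairwise scores are combined; one plausible sketch is to score every probe-gallery pair and fuse with a simple rule such as the maximum or the mean (both rules here are assumptions):

```python
import numpy as np

def fused_similarity(pair_scores, rule="max"):
    """Fuse similarities from multiple probe x multiple gallery images.

    pair_scores[i][j] is the similarity of probe image i against
    gallery image j. The fusion rule (max vs. mean) is an assumption;
    the entry does not specify which rule the authors used.
    """
    s = np.asarray(pair_scores, dtype=float)
    return float(s.max()) if rule == "max" else float(s.mean())
```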
The above results validated the assumption that 3D
finger geometry offers discriminatory information and
may provide an alternative to 2D hand geometry
recognition. However, it remains unclear how such an
approach would fare against a 2D hand-geometry-based
system, given the high cost of 3D sensors.
"The main advantage of a biometric system based
on 3D finger geometry is its ability to work in an
unobtrusive (contact-free) manner [2]." The authors
propose a biometric authentication scenario where the
user freely places his hand in front of his face with the
back of the hand visible to the 3D sensor. Although
the palm should be open with the fingers extended,
small finger bending and moderate rotation of the
hand plane with respect to the camera are allowed, as
is the wearing of rings.
The acquisition of range images and quasi-
synchronous color images is achieved using a real-
time 3D sensor based on the structured-light approach.
Thus, the data are noisier and contain more artifacts
compared with those obtained with high-end laser
scanners. Using this setup, the authors acquired
several images of 73 subjects in two recording sessions.
For each subject, images depicting several variations in
the geometry of the hand were captured. These included
bending of the fingers, rotation of the hand, and the
presence or absence of rings (see Fig. 1).
The proposed algorithm starts by segmenting the
hand from the face and torso using thresholding and
subsequently from the arm using an iterative clustering
technique. Then, the approximate center of the palm
and the orientation of the hand are detected from the
hand segmentation mask. These are used to locate the
fingers: homocentric circular arcs of increasing radius
are drawn around the center of the palm, excluding
the lower part of the circle that corresponds to the
wrist. The intersection of these arcs with the hand mask
gives rise to candidate finger segments, which are then
clustered to form finger bounding polygons. This approach
avoids using the hand silhouette, which is usually noisy
and may contain discontinuities, e.g., in the presence of
rings. The initial polygon delineating each finger is
refined by exploiting the associated color image edges.
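The arc-sampling step can be sketched as follows. This is a simplified illustration, not the authors' implementation: the wrist exclusion is approximated by sampling only the upper half-plane, and the sampling density is an arbitrary choice.

```python
import numpy as np

def arc_finger_segments(mask, center, radii, n_samples=720):
    """Sample homocentric arcs around the palm center and return, per
    radius, the angular runs where the arc lies on the hand mask.
    Each run is a candidate finger segment.
    """
    cy, cx = center
    H, W = mask.shape
    segments = []
    for r in radii:
        theta = np.linspace(0.0, np.pi, n_samples)  # upper half only
        ys = np.round(cy - r * np.sin(theta)).astype(int)
        xs = np.round(cx + r * np.cos(theta)).astype(int)
        inside = (ys >= 0) & (ys < H) & (xs >= 0) & (xs < W)
        on = np.zeros(n_samples, dtype=bool)
        on[inside] = mask[ys[inside], xs[inside]] > 0
        # group consecutive on-mask samples into angular runs
        runs, start = [], None
        for i, v in enumerate(on):
            if v and start is None:
                start = i
            elif not v and start is not None:
                runs.append((theta[start], theta[i - 1]))
                start = None
        if start is not None:
            runs.append((theta[start], theta[-1]))
        segments.append((r, runs))
    return segments
```

Runs found at successive radii that overlap in angle would then be clustered into per-finger bounding polygons, as described above.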
Then, for each finger two signature functions are
defined, parameterized by the 3D distance from the
finger tip computed along the ridge of each finger and
measuring cross-sectional features. Computing features
along cross-sections offers quasi-invariance to
bending. The first function corresponds to the width of
the finger in 3D, while the second corresponds to the
mean curvature of the curve that is defined by the 3D
points corresponding to the cross-section at the specific