
the lists of ranked class labels provided by the classifiers, and it classifies an input pattern by its overall class rank, which is computed for each class by summing the rank values the classifiers assigned to that class for the pattern. The class with the maximum overall rank is the winner. Rank-level fusion rules are suitable for problems with many classes, where the correct class often appears near the top of the ranked lists, although not always at the very top.
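As a minimal sketch of the rule described above (the rank values are invented for illustration; a higher value means the class sits nearer the top of that classifier's list):

```python
# Hypothetical example: three classifiers each rank four classes for one
# input pattern. ranks[i][c] is the rank value classifier i assigns to
# class c (higher value = nearer the top of that classifier's list).
ranks = [
    [3, 2, 1, 0],  # classifier 1 puts class 0 at the top
    [2, 3, 1, 0],  # classifier 2 puts class 1 at the top
    [3, 1, 2, 0],  # classifier 3 puts class 0 at the top
]

# Overall class rank: sum the rank values the classifiers assigned to
# each class.
overall = [sum(r[c] for r in ranks) for c in range(4)]

# The class with the maximum overall rank is the winner.
winner = max(range(4), key=overall.__getitem__)
```

Note that class 0 wins here even though one classifier ranked class 1 first, which is exactly the situation rank-level fusion is meant to handle.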
Measurement-Level Fusion Rules
Examples of fixed rules that combine continuous classifier outputs are the simple mean (average), the maximum, the minimum, the median, and the product of the classifier outputs [1]. Linear combiners (i.e., the average and its trainable version, the weighted average) are used in popular ensemble learning algorithms such as Bagging [8], the Random Subspace Method [10], and AdaBoost [1, 9], and represent the baseline, first-choice combiner in many applications. Continuous classifier outputs can also be regarded as a new feature space (an intermediate feature space [1]); the combination can then be performed by another classifier that takes the classifier outputs as input and outputs a class label. However, this approach usually demands very large data sets so that this additional classifier can be trained effectively. The Decision Templates method is an interesting example of a trainable rule for combining continuous classifier outputs [1]. The idea behind the decision templates combiner is to store the most typical classifier outputs (called the decision template) for each class, and then compare them, using some similarity measure, with the classifier outputs obtained for the input pattern (called the decision profile of the input pattern).
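The fixed rules and the decision templates combiner can both be illustrated with a small sketch. The decision profile, the two class templates, and the squared-Euclidean similarity below are all invented for illustration; in practice the templates would be the mean decision profiles over each class's training patterns, and other similarity measures can be used:

```python
import math

# Hypothetical decision profile: profile[i][c] is the continuous output
# (e.g. an estimated posterior probability) of classifier i for class c.
profile = [
    [0.6, 0.2, 0.1, 0.1],
    [0.5, 0.3, 0.1, 0.1],
    [0.2, 0.5, 0.2, 0.1],
]
n_classes = 4

def column(c):
    """Outputs of all classifiers for class c."""
    return [row[c] for row in profile]

# Fixed fusion rules combine, class by class, the outputs of all
# classifiers; each rule yields one fused score per class.
mean_rule = [sum(column(c)) / len(profile) for c in range(n_classes)]
max_rule = [max(column(c)) for c in range(n_classes)]
prod_rule = [math.prod(column(c)) for c in range(n_classes)]
winner = max(range(n_classes), key=mean_rule.__getitem__)  # average rule

# Decision Templates (trainable): store one template per class and assign
# the input to the class whose template is most similar to its decision
# profile. These two templates are made up for the example.
templates = {
    0: [[0.7, 0.1, 0.1, 0.1], [0.6, 0.2, 0.1, 0.1], [0.3, 0.4, 0.2, 0.1]],
    1: [[0.2, 0.6, 0.1, 0.1], [0.2, 0.5, 0.2, 0.1], [0.1, 0.7, 0.1, 0.1]],
}

def squared_distance(a, b):
    """Squared Euclidean distance between two decision profiles."""
    return sum((x - y) ** 2 for ra, rb in zip(a, b) for x, y in zip(ra, rb))

dt_class = min(templates, key=lambda c: squared_distance(templates[c], profile))
```

Here both the average rule and the decision templates combiner assign the pattern to class 0, but with noisier classifier outputs the two rules can disagree, since the templates also exploit class-specific information learned from training data.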
Selection of Multiple Classifiers
In classifier selection, the role of the combiner is to select the classifier (or the subset of classifiers) to be used for classifying the input pattern, under the assumption that different classifiers (or subsets of classifiers) have different domains of competence. Dynamic classifier selection rules have been proposed that estimate the accuracy of each classifier in a local region surrounding the pattern to be classified, and select the classifier that exhibits the maximum accuracy [14, 15]. As dynamic selection may be too computationally demanding and may require large data sets for estimating the local classifier accuracy, some static selection rules have also been proposed, in which the regions of competence of each classifier are estimated before the operational phase of the MCS [1].
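A minimal sketch of dynamic selection by local accuracy follows; all data and names are hypothetical (in practice the local region is usually found with a k-nearest-neighbour search on a held-out validation set):

```python
def local_accuracy_selection(x, X_val, y_val, val_preds, k=3):
    """Select the classifier with the best accuracy near pattern x.

    val_preds[i][j] is the prediction of classifier i on validation
    pattern j; the classifier most accurate on the k validation patterns
    nearest to x is selected.
    """
    # Squared Euclidean distance from x to every validation pattern.
    dist = [sum((a - b) ** 2 for a, b in zip(v, x)) for v in X_val]
    nearest = sorted(range(len(X_val)), key=dist.__getitem__)[:k]
    # Local accuracy of each classifier in that region.
    acc = [sum(preds[j] == y_val[j] for j in nearest) / k
           for preds in val_preds]
    return max(range(len(val_preds)), key=acc.__getitem__)

# Toy usage: classifier 0 is competent near the origin, classifier 1 is
# competent far from it.
X_val = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
y_val = [0, 0, 0, 1, 1, 1]
val_preds = [
    [0, 0, 0, 0, 0, 0],  # classifier 0: right near the origin, wrong far away
    [1, 1, 1, 1, 1, 1],  # classifier 1: the opposite
]
chosen = local_accuracy_selection((0.05, 0.05), X_val, y_val, val_preds)
```

The sketch also makes the cost mentioned above concrete: every test pattern triggers a distance computation over the whole validation set, which is why static selection rules precompute the regions of competence instead.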
Classifier selection has not attracted as much attention as classifier fusion, probably due to the practical difficulty of identifying domains of competence of the classifiers that make an effective selection possible.
Related Entries
▶ Ensemble Learning
▶ Fusion, Decision-Level
▶ Fusion, Rank-Level
▶ Fusion, Score-Level
▶ Multi-Algorithm Systems
▶ Multiple Experts
References
1. Kuncheva, L.I.: Combining Pattern Classifiers: Methods and Algorithms. Wiley, NY (2004)
2. Dietterich, T.G.: Ensemble methods in machine learning. In: Multiple Classifier Systems, Springer-Verlag, LNCS 1857, 1–15 (2000)
3. Fumera, G., Roli, F.: A theoretical and experimental analysis of linear combiners for multiple classifier systems. IEEE Trans. Pattern Anal. Mach. Intell. 27(6), 942–956 (2005)
4. Roli, F., Giacinto, G.: Design of multiple classifier systems. In: Bunke, H., Kandel, A. (eds.) Hybrid Methods in Pattern Recognition. World Scientific Publishing (2002)
5. Ho, T.K.: Complexity of classification problems and comparative advantages of combined classifiers. Springer-Verlag, LNCS 1857, 97–106 (2000)
6. Jacobs, R., Jordan, M., Nowlan, S., Hinton, G.: Adaptive mixtures of local experts. Neural Comput. 3, 79–87 (1991)
7. Kuncheva, L.I., Whitaker, C.J.: Measures of diversity in classifier ensembles. Mach. Learn. 51, 181–207 (2003)
8. Breiman, L.: Bagging predictors. Mach. Learn. 24, 123–140 (1996)
9. Freund, Y., Schapire, R.E.: A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci. 55(1), 119–139 (1997)
10. Ho, T.K.: The random subspace method for constructing decision forests. IEEE Trans. Pattern Anal. Mach. Intell. 20, 832–844 (1998)
11. Dietterich, T.G., Bakiri, G.: Solving multiclass learning problems via error-correcting output codes. J. Artif. Intell. Res. 2, 263–286 (1995)
12. Roli, F., Raudys, S., Marcialis, G.L.: An experimental comparison of fixed and trained fusion rules for crisp classifiers outputs. In: Proceedings of the Third International Workshop on Multiple Classifier Systems (MCS 2002), Cagliari, Italy, June 2002. LNCS 2364, 232–241 (2002)