There are two classes of methods in this category:
Somehow get each classifier to output a confidence score.
For example, an SVM can derive it from the distance to the separating
hyperplane, naive Bayes from the posterior probability of the winning
class, and a decision tree from the purity of the leaf an instance
falls into.
There is ongoing work
(including a poster at this conference) that attempts to do this, but the
success of these methods is limited.
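As a minimal sketch of the scores just described, assuming scikit-learn (the text names no library, and the datasets, hyperparameters, and split below are all illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)
X_train, y_train, X_pool = X[:50], y[:50], X[50:]

# SVM: confidence from distance to the separating hyperplane.
svm = LinearSVC(C=1.0).fit(X_train, y_train)
svm_score = np.abs(svm.decision_function(X_pool))  # small = uncertain

# Naive Bayes: posterior probability of the winning class.
nb = GaussianNB().fit(X_train, y_train)
nb_score = nb.predict_proba(X_pool).max(axis=1)  # near 0.5 = uncertain

# Decision tree: purity of the leaf the instance falls into
# (predict_proba returns the class fractions of that leaf).
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
tree_score = tree.predict_proba(X_pool).max(axis=1)

# The least confident pool instance under each score:
print(int(np.argmin(svm_score)), int(np.argmin(nb_score)), int(np.argmin(tree_score)))
```

In each case the instance with the smallest score is the one the learner would query next.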
Create a committee of classifiers, ideally by sampling from the version
space defined by the limited training data.
In the ideal setting, when all of them are consistent classifiers,
they will agree on predictions for instances outside the confusion
region and disagree on instances that fall within it.
By getting labels for those instances,
you get to narrow the version space.
Now we will see how to create such a committee for different
classifiers.