| Class | Description |
|---|---|
| MDD | Modified Diverse Density algorithm, with collective assumption. More information about DD: Oded Maron (1998). |
| MIBoost | MI AdaBoost method; considers the geometric mean of the posteriors of the instances inside a bag (arithmetic mean of the log-posteriors), and the expectation for a bag is taken inside the loss function. For more information about AdaBoost, see: Yoav Freund, Robert E. Schapire. |
| MIDD | Re-implementation of the Diverse Density algorithm, with a changed testing procedure. Oded Maron (1998). |
| MIEMDD | The EMDD model builds heavily upon Dietterich's Diverse Density (DD) algorithm. It is a general framework for MI learning that converts the MI problem to a single-instance setting using EM. |
| MILR | Uses either the standard or the collective multi-instance assumption, but within linear regression. |
| MINND | Multiple-Instance Nearest Neighbour with Distribution learner. It uses gradient descent to find the weight for each dimension of each exemplar, starting from 1.0. |
| MIOptimalBall | This classifier tries to find a suitable ball in the multiple-instance space, with a certain data point in the instance space as a ball center. |
| MIRI | MIRI (Multi Instance Rule Inducer): multi-instance classifier that utilizes partial MITI trees with a single positive leaf to learn and represent rules. |
| MISMO | Implements John Platt's sequential minimal optimization algorithm for training a support vector classifier. This implementation globally replaces all missing values and transforms nominal attributes into binary ones. A usage sketch follows this table. |
| MISVM | Implements Stuart Andrews' mi_SVM (maximum pattern margin formulation of MIL). |
| MITI | MITI (Multi Instance Tree Inducer): multi-instance classification based on a decision tree learned using Blockeel et al.'s algorithm. |
| MIWrapper | A simple wrapper method for applying standard propositional learners to multi-instance data. For more information see: E. Frank, X. Xu (2003). |
| QuickDDIterative | Modified, faster, iterative version of the basic Diverse Density algorithm. |
| SimpleMI | Reduces MI data into mono-instance data. |
| TLC | Implements the basic two-level classification method for multi-instance data, without attribute selection. For more information see: Nils Weidmann, Eibe Frank, Bernhard Pfahringer: A two-level learning method for generalized multi-instance problems. |
| TLD | Two-Level Distribution approach; changes the starting value of the search algorithm, supplements the cut-off modification, and checks for missing values. For more information see: Xin Xu (2003). |
| TLDSimple | A simpler version of TLD: mu is random, but sigma^2 is fixed and estimated from the data. For more information see: Xin Xu (2003). |