public class ClassifierAttributeEval extends ASEvaluation implements AttributeEvaluator, OptionHandler
Evaluates the worth of an attribute by using a user-specified classifier. Valid options are:
-L Evaluate an attribute by measuring the impact of leaving it out from the full set instead of considering its worth in isolation
-execution-slots <integer> Number of attributes to evaluate in parallel. Default = 1 (i.e. no parallelism)
-B <base learner> Class name of the base learner to use for accuracy estimation. Place any classifier options LAST on the command line following a "--", e.g.: -B weka.classifiers.bayes.NaiveBayes ... -- -K (default: weka.classifiers.rules.ZeroR)
-F <num> Number of cross-validation folds to use for estimating accuracy. (default = 5)
-R <seed> Seed for cross-validation accuracy estimation. (default = 1)
-T <num> Threshold by which to execute another cross-validation (standard deviation, expressed as a percentage of the mean). (default = 0.01, i.e. 1%)
-E <acc | rmse | mae | f-meas | auc | auprc> Performance evaluation measure to use for selecting attributes. (Default = accuracy for discrete class and rmse for numeric class)
-IRclass <label | index> Optional class value (label or 1-based index) to use in conjunction with IR statistics (f-meas, auc or auprc). Omitting this option will use the class-weighted average.
Options specific to scheme weka.classifiers.rules.ZeroR:
-output-debug-info If set, classifier is run in debug mode and may output additional info to the console
-do-not-check-capabilities If set, classifier capabilities are not checked before classifier is built (use with caution).
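For orientation, a minimal usage sketch follows. It assumes a local ARFF file named iris.arff and substitutes NaiveBayes for the default ZeroR base learner; both are illustrative choices, not requirements.

```java
import weka.attributeSelection.AttributeSelection;
import weka.attributeSelection.ClassifierAttributeEval;
import weka.attributeSelection.Ranker;
import weka.classifiers.bayes.NaiveBayes;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class ClassifierAttributeEvalDemo {
  public static void main(String[] args) throws Exception {
    // Load a dataset and mark the last attribute as the class (file name is illustrative).
    Instances data = DataSource.read("iris.arff");
    data.setClassIndex(data.numAttributes() - 1);

    // Score each attribute with a NaiveBayes base learner and 5-fold cross-validation.
    ClassifierAttributeEval eval = new ClassifierAttributeEval();
    eval.setClassifier(new NaiveBayes());
    eval.setFolds(5);

    // Rank the attributes by their individual merit.
    AttributeSelection selector = new AttributeSelection();
    selector.setEvaluator(eval);
    selector.setSearch(new Ranker());
    selector.SelectAttributes(data);
    System.out.println(selector.toResultsString());
  }
}
```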
| Constructor and Description |
|---|
| ClassifierAttributeEval() Constructor. |
| Modifier and Type | Method and Description |
|---|---|
| void | buildEvaluator(Instances data) Initializes a ClassifierAttribute attribute evaluator. |
| java.lang.String | classifierTipText() Returns the tip text for this property. |
| double | evaluateAttribute(int attribute) Evaluates an individual attribute by measuring the amount of information gained about the class given the attribute. |
| java.lang.String | evaluationMeasureTipText() Returns the tip text for this property. |
| java.lang.String | foldsTipText() Returns the tip text for this property. |
| Capabilities | getCapabilities() Returns the capabilities of this evaluator. |
| Classifier | getClassifier() Get the classifier used as the base learner. |
| SelectedTag | getEvaluationMeasure() Gets the currently set performance evaluation measure used for selecting attributes for the decision table. |
| int | getFolds() Get the number of folds used for accuracy estimation. |
| java.lang.String | getIRClassValue() Get the class value (label or index) to use with IR metric evaluation of subsets. |
| boolean | getLeaveOneAttributeOut() Get whether to evaluate the merit of an attribute based on the impact of leaving it out from the full set instead of considering its worth in isolation. |
| int | getNumToEvaluateInParallel() Get the number of attributes to evaluate in parallel. |
| java.lang.String[] | getOptions() Returns the current setup. |
| java.lang.String | getRevision() Returns the revision string. |
| int | getSeed() Get the random number seed used for cross validation. |
| double | getThreshold() Get the value of the threshold. |
| java.lang.String | globalInfo() Returns a string describing this attribute evaluator. |
| java.lang.String | IRClassValueTipText() Returns the tip text for this property. |
| java.lang.String | leaveOneAttributeOutTipText() Tip text for this property. |
| java.util.Enumeration<Option> | listOptions() Returns an enumeration describing the available options. |
| static void | main(java.lang.String[] args) Main method for executing this class. |
| java.lang.String | numToEvaluateInParallelTipText() Tip text for this property. |
| java.lang.String | seedTipText() Returns the tip text for this property. |
| void | setClassifier(Classifier newClassifier) Set the classifier to use for accuracy estimation. |
| void | setEvaluationMeasure(SelectedTag newMethod) Sets the performance evaluation measure to use for selecting attributes for the decision table. |
| void | setFolds(int f) Set the number of folds to use for accuracy estimation. |
| void | setIRClassValue(java.lang.String val) Set the class value (label or index) to use with IR metric evaluation of subsets. |
| void | setLeaveOneAttributeOut(boolean l) Set whether to evaluate the merit of an attribute based on the impact of leaving it out from the full set instead of considering its worth in isolation. |
| void | setNumToEvaluateInParallel(int n) Set the number of attributes to evaluate in parallel. |
| void | setOptions(java.lang.String[] options) Parses a given list of options. |
| void | setSeed(int s) Set the seed to use for cross validation. |
| void | setThreshold(double t) Set the value of the threshold for repeating cross validation. |
| java.lang.String | thresholdTipText() Returns the tip text for this property. |
| java.lang.String | toString() Return a description of the evaluator. |
Methods inherited from class ASEvaluation:
clean, doNotCheckCapabilitiesTipText, forName, getDoNotCheckCapabilities, makeCopies, makeCopy, postExecution, postProcess, preExecution, run, runEvaluator, setDoNotCheckCapabilities

Methods inherited from class java.lang.Object:
equals, getClass, hashCode, notify, notifyAll, wait, wait, wait

public java.lang.String globalInfo()
Returns a string describing this attribute evaluator.
public java.util.Enumeration<Option> listOptions()
Returns an enumeration describing the available options.
Specified by: listOptions in interface OptionHandler
Overrides: listOptions in class ASEvaluation

public void setOptions(java.lang.String[] options) throws java.lang.Exception
Parses a given list of options. Valid options are:
-L Evaluate an attribute by measuring the impact of leaving it out from the full set instead of considering its worth in isolation
-execution-slots <integer> Number of attributes to evaluate in parallel. Default = 1 (i.e. no parallelism)
-B <base learner> Class name of the base learner to use for accuracy estimation. Place any classifier options LAST on the command line following a "--", e.g.: -B weka.classifiers.bayes.NaiveBayes ... -- -K (default: weka.classifiers.rules.ZeroR)
-F <num> Number of cross-validation folds to use for estimating accuracy. (default = 5)
-R <seed> Seed for cross-validation accuracy estimation. (default = 1)
-T <num> Threshold by which to execute another cross-validation (standard deviation, expressed as a percentage of the mean). (default = 0.01, i.e. 1%)
-E <acc | rmse | mae | f-meas | auc | auprc> Performance evaluation measure to use for selecting attributes. (Default = accuracy for discrete class and rmse for numeric class)
-IRclass <label | index> Optional class value (label or 1-based index) to use in conjunction with IR statistics (f-meas, auc or auprc). Omitting this option will use the class-weighted average.
Options specific to scheme weka.classifiers.rules.ZeroR:
-output-debug-info If set, classifier is run in debug mode and may output additional info to the console
-do-not-check-capabilities If set, classifier capabilities are not checked before classifier is built (use with caution).
Specified by: setOptions in interface OptionHandler
Overrides: setOptions in class ASEvaluation
Parameters: options - the list of options as an array of strings
Throws: java.lang.Exception - if an option is not supported

public java.lang.String[] getOptions()
Returns the current setup.
Specified by: getOptions in interface OptionHandler
Overrides: getOptions in class ASEvaluation
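A rough sketch of configuring the evaluator through setOptions; the J48 base learner, fold count, AUC measure, and the trailing "-- -C 0.25" classifier options are illustrative values, not defaults.

```java
import weka.attributeSelection.ClassifierAttributeEval;
import weka.core.Utils;

public class SetOptionsSketch {
  public static void main(String[] args) throws Exception {
    // Build the evaluator from an option string equivalent to the command line above;
    // everything after "--" is handed to the base learner (here, J48's -C option).
    ClassifierAttributeEval eval = new ClassifierAttributeEval();
    eval.setOptions(Utils.splitOptions(
        "-B weka.classifiers.trees.J48 -F 10 -E auc -- -C 0.25"));

    // getOptions() reports the resulting setup, echoed here as a single string.
    System.out.println(Utils.joinOptions(eval.getOptions()));
  }
}
```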
public java.lang.String leaveOneAttributeOutTipText()
Tip text for this property.

public void setLeaveOneAttributeOut(boolean l)
Parameters: l - true if each attribute should be evaluated by measuring the impact of leaving it out from the full set

public boolean getLeaveOneAttributeOut()
public java.lang.String numToEvaluateInParallelTipText()
public void setNumToEvaluateInParallel(int n)
Parameters: n - the number of attributes to evaluate in parallel

public int getNumToEvaluateInParallel()
public void setIRClassValue(java.lang.String val)
Parameters: val - the class label or 1-based index of the class label to use when evaluating subsets with an IR metric

public java.lang.String getIRClassValue()
public java.lang.String IRClassValueTipText()
public java.lang.String evaluationMeasureTipText()
public SelectedTag getEvaluationMeasure()
public void setEvaluationMeasure(SelectedTag newMethod)
Parameters: newMethod - the new performance evaluation metric to use

public java.lang.String thresholdTipText()
public void setThreshold(double t)
Parameters: t - the value of the threshold

public double getThreshold()
public java.lang.String foldsTipText()
public void setFolds(int f)
Parameters: f - the number of folds

public int getFolds()
public java.lang.String seedTipText()
public void setSeed(int s)
Parameters: s - the seed

public int getSeed()
public java.lang.String classifierTipText()
public void setClassifier(Classifier newClassifier)
Parameters: newClassifier - the Classifier to use.

public Classifier getClassifier()
Get the classifier used as the base learner.
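A configuration sketch for the base-learner properties; J48 and the specific fold, seed, and threshold values are illustrative assumptions.

```java
import weka.attributeSelection.ClassifierAttributeEval;
import weka.classifiers.trees.J48;

public class BaseLearnerSketch {
  public static void main(String[] args) {
    // Replace the ZeroR default with a J48 base learner and adjust the
    // cross-validation settings used for the merit estimate.
    ClassifierAttributeEval eval = new ClassifierAttributeEval();
    J48 tree = new J48();
    tree.setConfidenceFactor(0.25f);  // J48's own pruning option, set via its bean property
    eval.setClassifier(tree);
    eval.setFolds(10);
    eval.setSeed(42);
    eval.setThreshold(0.01);

    System.out.println(eval.getClassifier().getClass().getName());  // verify the setting
  }
}
```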
public Capabilities getCapabilities()
Specified by: getCapabilities in interface CapabilitiesHandler
Overrides: getCapabilities in class ASEvaluation
Returns: the capabilities of this evaluator

public void buildEvaluator(Instances data) throws java.lang.Exception
Initializes a ClassifierAttribute attribute evaluator.
Specified by: buildEvaluator in class ASEvaluation
Parameters: data - set of instances serving as training data
Throws: java.lang.Exception - if the evaluator has not been generated successfully

public double evaluateAttribute(int attribute) throws java.lang.Exception
Specified by: evaluateAttribute in interface AttributeEvaluator
Parameters: attribute - the index of the attribute to be evaluated
Throws: java.lang.Exception - if the attribute could not be evaluated
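A sketch of the low-level buildEvaluator/evaluateAttribute cycle; the ARFF file name is an illustrative assumption.

```java
import weka.attributeSelection.ClassifierAttributeEval;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class PerAttributeMeritSketch {
  public static void main(String[] args) throws Exception {
    // Load data (file name is illustrative) and set the class attribute.
    Instances data = DataSource.read("iris.arff");
    data.setClassIndex(data.numAttributes() - 1);

    // Build the evaluator once, then query the merit of each non-class attribute.
    ClassifierAttributeEval eval = new ClassifierAttributeEval();
    eval.buildEvaluator(data);
    for (int i = 0; i < data.numAttributes(); i++) {
      if (i == data.classIndex()) {
        continue;  // the class attribute itself is not scored
      }
      System.out.printf("%-20s %.4f%n", data.attribute(i).name(), eval.evaluateAttribute(i));
    }
  }
}
```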
public java.lang.String toString()
Return a description of the evaluator.
Overrides: toString in class java.lang.Object

public java.lang.String getRevision()
Returns the revision string.
Specified by: getRevision in interface RevisionHandler
Overrides: getRevision in class ASEvaluation

public static void main(java.lang.String[] args)
Main method for executing this class.
Parameters: args - the options
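As a rough sketch of calling main programmatically: it assumes the generic Weka attribute-selection option -i names the training ARFF file and that the scheme options listed above are passed through unchanged; check your Weka version's usage output before relying on the exact flags.

```java
public class MainSketch {
  public static void main(String[] args) {
    // Mirrors a command-line run such as:
    //   java weka.attributeSelection.ClassifierAttributeEval -i iris.arff -F 10 -E auc
    // The -i flag and the file name are assumptions about the generic option handling.
    weka.attributeSelection.ClassifierAttributeEval.main(
        new String[]{"-i", "iris.arff", "-F", "10", "-E", "auc"});
  }
}
```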