Author: M. Anthony <m.anthony{[at]}lse.ac.uk> and J. Ratsaby <ratsaby{[at]}ariel.ac.il>
Category: Classification
Date: 2025-03-26
Depends: weka (>=3.8.4)
Description: The Large-Width (LW) algorithm is a lazy-learning method for data classification. It aims to maintain a large sample width (a notion similar to sample margin). Given a sample of labeled and unlabeled examples, it iteratively picks the next unlabeled example and classifies it while keeping a large distance between each labeled example and its nearest unlike-prototype (a prototype is either a labeled example or an unlabeled example that has already been classified). In this iterative process, the algorithm gives higher priority to unlabeled points whose classification decision 'interferes' less with the labeled sample. Unlike other large-margin learning algorithms such as SVM kernel methods, which require a kernel function that is an inner product in some higher-dimensional space, the LW algorithm can use any distance function; the distance need satisfy nothing beyond non-negativity (not even the triangle inequality). This makes LW easy to apply to classification problems on general distance spaces. In the current WEKA implementation the distance function is Euclidean, so the data must be numeric. The package has been tested with the WEKA Explorer and Experimenter, and also with remote engines. A minimal usage sketch follows this metadata.
License: Free to copy and modify. Please cite the paper "M. Anthony, J. Ratsaby. Large-width machine learning algorithm, Progress in Artificial Intelligence, Vol. 9, pp. 275-285, 2020."
Maintainer: J. Ratsaby <ratsaby{[at]}ariel.ac.il>
OSArch: aarch64,x86_64,amd64
OSName: Windows,Mac,Linux
PackageURL: https://github.com/ratsaby/LwWekaPackage/raw/refs/heads/main/LW-ver-1-0.zip
URL: https://github.com/ratsaby/LwWekaPackage/blob/main/README.md
Version: 1.0
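
The sketch below shows how a classifier from this package might be run from the standard WEKA Java API, on a numeric dataset, using 10-fold cross-validation. It is a minimal sketch, not taken from the package itself: the classifier class name "weka.classifiers.lazy.LW" and the dataset file "iris.arff" are assumptions for illustration; the actual class name is given in the package's README.

```java
import weka.classifiers.AbstractClassifier;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

import java.util.Random;

public class LwExample {
    public static void main(String[] args) throws Exception {
        // Load a dataset; the current implementation uses Euclidean distance,
        // so the attributes must be numeric.
        Instances data = new DataSource("iris.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);

        // Look up the LW classifier by name so this sketch compiles without
        // the package on the build classpath. The class name below is an
        // assumption; consult the package's README for the real one.
        Classifier lw = AbstractClassifier.forName("weka.classifiers.lazy.LW",
                new String[0]);

        // Evaluate with 10-fold cross-validation, as with any WEKA classifier.
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(lw, data, 10, new Random(1));
        System.out.println(eval.toSummaryString());
    }
}
```

The same classifier can also be selected interactively in the WEKA Explorer or Experimenter once the package is installed through the package manager.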