Supervised versus Unsupervised Binary-Learning by Feedforward Neural Networks

Author: Nathalie Japkowicz

Abstract: Binary classification is typically achieved by supervised learning methods. Nevertheless, it can also be achieved using unsupervised schemes. This paper describes a connectionist unsupervised approach to binary classification and compares its performance to that of its supervised counterpart. The approach consists of training an autoassociator to reconstruct the positive class of a domain at the output layer. After training, the autoassociator is used for classification, relying on the idea that if the network generalizes to a novel instance, then that instance must be positive, whereas if generalization fails, the instance must be negative. When tested on three real-world domains, the autoassociator proved more accurate at classification than its supervised counterpart, MLP, on two of these domains and as accurate on the third. The paper seeks to generalize these results and concludes that, in addition to learning a concept in the absence of negative examples, 1) autoassociation is more efficient than MLP in multi-modal domains, and 2) it is more accurate than MLP in multi-modal domains for which the negative class creates a particularly strong need for specialization or the positive class creates a particularly weak need for specialization. In multi-modal domains for which the positive class creates a particularly strong need for specialization, on the other hand, MLP is more accurate than autoassociation.
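To make the scheme concrete, the sketch below implements the reconstruction-based classification idea described in the abstract: an autoassociator is trained on positive examples only, and a novel instance is labeled positive when its reconstruction error is low. This is a minimal illustration, not the paper's implementation; the one-hidden-layer architecture, learning rate, thresholding rule (mean plus two standard deviations of training errors), and synthetic data are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch of one-class classification with an autoassociator,
# assuming a d -> h -> d sigmoid network trained by plain gradient
# descent on squared reconstruction error. Hyperparameters and data
# are illustrative, not taken from the paper.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class Autoassociator:
    """Network trained to reproduce its input at the output layer."""

    def __init__(self, d, h, lr=0.5):
        self.W1 = rng.normal(0, 0.1, (d, h))
        self.b1 = np.zeros(h)
        self.W2 = rng.normal(0, 0.1, (h, d))
        self.b2 = np.zeros(d)
        self.lr = lr

    def forward(self, x):
        self.a1 = sigmoid(x @ self.W1 + self.b1)      # hidden activations
        self.out = sigmoid(self.a1 @ self.W2 + self.b2)
        return self.out

    def train_step(self, x):
        out = self.forward(x)
        err = out - x                                 # gradient of squared loss (up to a constant)
        d_out = err * out * (1 - out)                 # back through the output sigmoid
        d_hid = (d_out @ self.W2.T) * self.a1 * (1 - self.a1)
        self.W2 -= self.lr * np.outer(self.a1, d_out)
        self.b2 -= self.lr * d_out
        self.W1 -= self.lr * np.outer(x, d_hid)
        self.b1 -= self.lr * d_hid

    def reconstruction_error(self, x):
        return np.sum((self.forward(x) - x) ** 2)

# Train on positive examples only (here: synthetic points clustered near 0.8).
d, h = 8, 3
net = Autoassociator(d, h)
positives = np.clip(rng.normal(0.8, 0.05, (200, d)), 0, 1)
for epoch in range(200):
    for x in positives:
        net.train_step(x)

# Derive the acceptance threshold from the training-set reconstruction errors.
errors = np.array([net.reconstruction_error(x) for x in positives])
threshold = errors.mean() + 2 * errors.std()

def classify(x):
    """Positive if the network generalizes (low error), negative otherwise."""
    return net.reconstruction_error(x) <= threshold

print(classify(np.clip(rng.normal(0.8, 0.05, d), 0, 1)))  # likely True (in-class)
print(classify(rng.uniform(0.0, 0.3, d)))                 # likely False (out-of-class)
```

Because only positive examples are needed for training, the scheme applies directly to the setting the abstract emphasizes: learning a concept in the absence of negative examples, with negatives detected at test time as reconstruction failures.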