Nonlinear Autoencoding is not Equivalent to PCA

Authors: Nathalie Japkowicz, Stephen J. Hanson and Mark A. Gluck (Rutgers University)

Abstract: A common misconception within the Neural Network community is that, even with nonlinearities in their hidden layer, autoassociators trained with backpropagation are equivalent to linear methods such as Principal Component Analysis (PCA). The purpose of this paper is to demonstrate that nonlinear autoassociators actually behave differently from linear methods and that they can outperform these methods when used for latent extraction, projection, and classification. While linear autoassociators emulate PCA and thus exhibit a flat or unimodal reconstruction error surface, autoassociators with nonlinearities in their hidden layer learn domains by building reconstruction error surfaces that, depending on the task, contain multiple local valleys. This particular interpolation bias allows nonlinear autoassociators to represent appropriate classifications of nonlinear multi-modal domains, in contrast to linear autoassociators, which are inappropriate for such tasks. In fact, autoassociators with hidden-unit nonlinearities can be shown to perform nonlinear classification and nonlinear recognition.
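
The following is an illustrative sketch only, not code from the paper: it trains a single-hidden-unit autoassociator with a sigmoid hidden layer by backpropagation on a toy bimodal 2-D domain and compares its reconstruction error surface with that of a one-component PCA reconstruction. All data, architecture choices, and hyperparameters below are assumptions made for illustration; the paper's experiments are not reproduced here.

```python
# Sketch (assumptions only): flat PCA reconstruction error vs. the multi-valley
# reconstruction error of a nonlinear autoassociator on a bimodal toy domain.
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated modes on the x-axis: a simple multi-modal domain.
modes = np.array([[-2.0, 0.0], [2.0, 0.0]])
X = np.vstack([m + 0.2 * rng.standard_normal((200, 2)) for m in modes])
mu = X.mean(axis=0)
Xc = X - mu

# --- PCA with one component (via SVD) ---------------------------------------
pc1 = np.linalg.svd(Xc, full_matrices=False)[2][0:1]      # (1, 2) direction

def pca_error(P):
    """Mean squared reconstruction error of points P under 1-component PCA."""
    Pc = P - mu
    return np.mean(np.sum((Pc - (Pc @ pc1.T) @ pc1) ** 2, axis=1))

# --- Autoassociator: 2 inputs -> 1 sigmoid hidden unit -> 2 linear outputs ---
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = 0.5 * rng.standard_normal((2, 1)); b1 = np.zeros(1)
W2 = 0.5 * rng.standard_normal((1, 2)); b2 = np.zeros(2)
lr, n = 0.1, len(Xc)
for _ in range(20000):                      # plain batch gradient descent
    H = sigmoid(Xc @ W1 + b1)               # hidden activations, shape (n, 1)
    E = (H @ W2 + b2) - Xc                  # reconstruction residual
    dH = (E @ W2.T) * H * (1.0 - H)         # backprop through the sigmoid
    W2 -= lr * (H.T @ E) / n;  b2 -= lr * E.mean(axis=0)
    W1 -= lr * (Xc.T @ dH) / n; b1 -= lr * dH.mean(axis=0)

def ae_error(P):
    """Mean squared reconstruction error of points P under the autoassociator."""
    Pc = P - mu
    return np.mean(np.sum((sigmoid(Pc @ W1 + b1) @ W2 + b2 - Pc) ** 2, axis=1))

# Probe the error surface: held-out points at the training modes versus
# "novel" points lying between the modes, near the first principal direction.
# PCA's error stays low across both (flat along the principal subspace); the
# nonlinear autoassociator typically keeps low error only near the modes,
# i.e. its reconstruction error surface has valleys at the modes.
mode_pts = np.vstack([m + 0.2 * rng.standard_normal((200, 2)) for m in modes])
novel_pts = np.column_stack([rng.uniform(-1.0, 1.0, 200),
                             0.2 * rng.standard_normal(200)])
print(f"PCA error  modes: {pca_error(mode_pts):.3f}  between: {pca_error(novel_pts):.3f}")
print(f"AE  error  modes: {ae_error(mode_pts):.3f}  between: {ae_error(novel_pts):.3f}")
```

Under these assumptions, both models compress the 2-D input through a one-dimensional code, so the comparison isolates the effect of the hidden-layer nonlinearity: PCA reconstructs any point on the principal direction equally well, whereas the nonlinear autoassociator tends to reconstruct well only near the training modes, which is the multi-valley behavior the abstract describes.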