Compression Net-Free Autoencoders

Authors: Mohamad Hassoun and Agus Sudjianto (Wayne State University)

Abstract: Autoassociative networks typically consist of "compression" and "decompression" parts. The former reduces the dimensionality of the training patterns, while the latter attempts to reconstruct the training patterns with minimum error. During the training phase, the reconstruction error is propagated from the output layer through the hidden layers to adjust the connection weights. As an alternative, the training of the compression and decompression parts may be decoupled. This paper develops such a training approach for the decompression part (typically a one-hidden-layer net), for which the input pattern (compressed state vector) is unknown and must be learned in addition to the synaptic weights. When the relationships among the components of the training patterns are linear, stochastic gradient descent minimization of the reconstruction error, in both the compressed state vector space and the weight space, leads to the formulation of an Alternating Hebbian Algorithm (AHA) which realizes Principal Component Analysis (PCA). The extension of the AHA learning rule to capture nonlinear relationships among the components of the training patterns may be realized by backpropagation learning. If needed, the compression part of the net can be trained (via backpropagation) using the learned compressed state vectors as output targets. The proposed methodology is illustrated using both simulated and real-world data.
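
To make the alternating scheme concrete, the following minimal NumPy sketch illustrates the linear case described above: a decompression net x_hat = W z whose code vectors z and weights W are both updated by stochastic gradient descent on the squared reconstruction error. This is our illustration under assumed names (W, Z, eta_w, eta_z) and learning rates, not the authors' implementation of AHA.

import numpy as np

rng = np.random.default_rng(0)

n, m, k = 200, 10, 3                     # patterns, input dim, code dim
X = rng.standard_normal((n, m)) @ rng.standard_normal((m, m))  # toy data

W = 0.01 * rng.standard_normal((m, k))   # decompression (decoder) weights
Z = 0.01 * rng.standard_normal((n, k))   # learned code vector per pattern
eta_w, eta_z = 0.01, 0.01

for epoch in range(200):
    for i in rng.permutation(n):
        x, z = X[i], Z[i]
        e = x - W @ z                    # reconstruction error for pattern i
        Z[i] = z + eta_z * (W.T @ e)     # gradient step in code space
        W += eta_w * np.outer(e, z)      # Hebbian-like outer-product step in weight space

The weight update is Hebbian in form (an outer product of the error and the code activity), which is where the "Alternating Hebbian" name fits; in this linear setting, the column span of W should converge to the leading principal subspace of the data (up to rotation), consistent with the PCA property claimed in the abstract.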