Authors: Andreas Weingessel, Horst Bischof and Kurt Hornik (Vienna University of Technology)
Abstract: Autoassociative neural networks compress input data by projecting it onto a lower-dimensional subspace. Information orthogonal to this subspace is lost in the projection, so reconstructing the original data incurs an error. The original data can be recovered completely from the lower-dimensional projection together with the reconstruction error; this error is stored in quantized form so that good compression of the original data is retained. We present a network combination in which the autoassociative NN and the quantization of its reconstruction error are combined in one common framework, and we show that this method yields a lower error than the sequential application of an autoassociator and a quantizer.
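The compression scheme the abstract describes can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: it replaces the autoassociative NN with its linear special case (projection onto the top-k principal components) and uses a simple uniform quantizer for the reconstruction error, to show how projection plus quantized error allows near-exact recovery of the data.

```python
import numpy as np

# Hypothetical sketch: linear "autoassociator" (PCA projection) plus a
# uniformly quantized reconstruction error. Not the paper's combined
# training framework, which learns both parts jointly.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))      # toy data: 200 samples, 8 dimensions

k = 3                              # dimension of the projection subspace
mean = X.mean(axis=0)
Xc = X - mean
# The principal subspace via SVD; W spans the k-dimensional subspace.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt[:k].T                       # shape (8, k)

code = Xc @ W                      # lower-dimensional projection
recon = code @ W.T                 # reconstruction from the projection
err = Xc - recon                   # information orthogonal to the subspace

# Store the error in quantized form; a coarser step compresses more
# but recovers the data less exactly.
step = 0.05
q_err = np.round(err / step).astype(np.int16)

# Recovery: projection plus dequantized error approximates the original
# data to within step/2 per coordinate.
X_rec = recon + q_err * step + mean
max_dev = np.abs(X_rec - X).max()
```

The sequential approach the abstract compares against would fix the projection first and quantize the resulting residual afterwards, as above; the paper's point is that optimizing both jointly yields a lower overall error.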