Introduction to autoencoders

30th January 2019


What is an Autoencoder?

Autoencoders are neural networks used for unsupervised learning. They try to match their output to the input they are fed during the training phase.

The autoencoder neural network is designed to uncover hidden patterns in data.

How does an Autoencoder work?

Autoencoders are neural networks capable of learning an efficient representation of the input data, called codings, without any supervised training.

These codings typically have a lower dimensionality than the input data, which makes autoencoders useful for dimensionality reduction.

Autoencoders work by learning to copy their inputs to their outputs. They take some input data, pass it through hidden layers, and generate an output, aiming for that output to be identical to the input.

Strictly speaking, autoencoders are not a pure unsupervised learning algorithm; they are a self-supervised learning algorithm, because the training targets are generated from the inputs themselves.
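
To make this concrete, here is a minimal sketch in Python using Keras (the data and layer sizes are toy values chosen purely for illustration). The key point is the last line: the training targets are the inputs themselves.

    import numpy as np
    from tensorflow import keras

    # Toy dataset: 1,000 samples with 8 features each.
    X = np.random.rand(1000, 8)

    # A small autoencoder: the output layer is the same size as the input.
    autoencoder = keras.Sequential([
        keras.Input(shape=(8,)),
        keras.layers.Dense(4, activation="relu"),  # hidden layer
        keras.layers.Dense(8),                     # reconstruction
    ])
    autoencoder.compile(optimizer="adam", loss="mse")

    # Self-supervised training: the inputs are also the targets.
    autoencoder.fit(X, X, epochs=10, verbose=0)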

Where can Autoencoder be used?

  • Feature detection: autoencoders act as powerful feature detectors, and they can be used for unsupervised pre-training of deep neural networks.
  • Data generation: autoencoders are capable of randomly generating new data that looks very similar to the training data.
  • Recommendation systems: autoencoders can be used to create powerful recommendation systems.

Efficient Data Representations

Which of the sequences below is easier to memorise?

  • 40, 27, 25, 36, 81, 57, 10, 73, 19, 68
  • 50, 25, 76, 38, 19, 58, 29, 88, 44, 22, 11, 34, 17, 52, 26, 13, 40, 20

At first glance, the first, shorter sequence seems easier to memorise. However, the second sequence follows a simple rule: each even number is followed by its half, and each odd number by its triple plus one (this is known as the hailstone sequence). Once the pattern is spotted, the sequence becomes much easier to memorise.
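
The rule is easy to express in code. Here is a small Python sketch (the function name is mine, for illustration) that reproduces the second sequence above:

    def hailstone(n, length):
        """Return `length` terms of the hailstone sequence starting at n:
        each even number is halved, each odd number is tripled plus one."""
        seq = []
        for _ in range(length):
            seq.append(n)
            n = n // 2 if n % 2 == 0 else 3 * n + 1
        return seq

    print(hailstone(50, 18))
    # [50, 25, 76, 38, 19, 58, 29, 88, 44, 22, 11, 34, 17, 52, 26, 13, 40, 20]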

A long sequence is hard to memorise, so finding a pattern always makes the task easier. Similarly, autoencoders try to discover and exploit patterns in the data. They look at the inputs, convert them to an efficient internal representation, and then output something similar to the inputs.

An autoencoder is composed of two parts:

  • An encoder that converts the input to an internal representation.
  • A decoder that converts the internal representation to the outputs.

Autoencoder Architecture

An autoencoder has the same architecture as a multi-layer perceptron, with the difference that the number of neurons in the input layer has to be equal to the number of neurons in the output layer.

In a typical architecture of this kind, there are two hidden layers:

  • An encoder layer, consisting of three neurons.
  • A decoder layer, consisting of three neurons.

The outputs of the output layer are also known as reconstructions, because the autoencoder tries to reconstruct the inputs. The autoencoder's cost function includes a term called the reconstruction loss, which penalises the model when the reconstructed output differs from the input.

When the internal representation has a lower dimensionality than the input data, the autoencoder is called undercomplete. An undercomplete autoencoder cannot simply copy its inputs to its outputs; it must find a way to reconstruct a close match to the inputs from a compressed representation. This forces the autoencoder to learn the important features of the data and drop the unimportant ones.
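
Below is a hedged sketch of such an undercomplete autoencoder in Keras (the dimensions are again illustrative: 8-dimensional inputs compressed to 3-dimensional codings). It also shows the encoder/decoder split described earlier, and uses mean squared error as the reconstruction loss.

    import numpy as np
    from tensorflow import keras

    X = np.random.rand(1000, 8)  # toy data: 8 features per sample

    # Encoder: compresses each input to a 3-dimensional coding.
    encoder = keras.Sequential([
        keras.Input(shape=(8,)),
        keras.layers.Dense(3, activation="relu"),
    ])

    # Decoder: reconstructs the 8-dimensional input from the coding.
    decoder = keras.Sequential([
        keras.Input(shape=(3,)),
        keras.layers.Dense(8),
    ])

    autoencoder = keras.Sequential([encoder, decoder])

    # MSE reconstruction loss: penalises outputs that differ from the inputs.
    autoencoder.compile(optimizer="adam", loss="mse")
    autoencoder.fit(X, X, epochs=20, verbose=0)

    # The encoder alone now maps data to the low-dimensional codings.
    codings = encoder.predict(X)
    print(codings.shape)  # (1000, 3)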

 

Read also: Introduction to generative adversarial networks


Manish Prasad

An experienced data scientist with a passion for working on new challenges.
