The field of artificial neural networks is often called simply neural networks, or multi-layer perceptrons, after perhaps the most useful type of neural network.
A perceptron is a single-neuron model that was a precursor to larger neural networks. Networks built from such neurons can tackle difficult computational tasks, like the predictive modeling tasks we see in machine learning.
The building blocks of neural networks are artificial neurons: simple computational units that take weighted input signals and produce an output signal using an activation function.
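As a minimal sketch of that idea (not any particular library's API), a single artificial neuron can be written as a weighted sum passed through an activation function; a sigmoid is assumed here for illustration:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the input signals plus a bias term
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Sigmoid activation squashes the sum into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-activation))

# Two inputs, two weights, one bias (values chosen arbitrarily)
output = neuron([0.5, -1.0], [0.8, 0.2], bias=0.1)
```

Other activation functions (tanh, ReLU, and so on) would simply replace the final line.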
How Does a Multi-Layer Perceptron Learn?
The power of neural networks comes from their ability to learn the representation in your training data and how to best relate it to the output variable that you want to predict. The predictive capability of neural networks comes from the hierarchical or multi-layered structure of the networks.
Networks of Neurons
Neurons are arranged into networks of neurons. A row of neurons is called a layer and one network can have multiple layers. The architecture of the neurons in the network is often called the network topology.
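The layered topology described above can be sketched as a forward pass, where a layer is just a row of neurons and the output of one layer feeds the next. This is an illustrative sketch with made-up weights, not a reference implementation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights, biases):
    # Each row of weights belongs to one neuron; a layer is a row of neurons
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Topology: 2 inputs -> hidden layer of 3 neurons -> 1 output neuron
hidden = layer_forward([0.5, -1.0],
                       [[0.1, 0.2], [0.3, -0.4], [0.5, 0.6]],
                       [0.0, 0.1, -0.1])
result = layer_forward(hidden, [[0.7, -0.2, 0.4]], [0.05])
```

Stacking more calls to `layer_forward` adds more layers, which is exactly how the topology deepens.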
Input or Visible Layers
The layer that takes input from the dataset is called the visible layer because it is the exposed part of the network. Often a neural network is drawn with a visible layer with one neuron per input value or column in your dataset.
Hidden Layers
Layers after the input layer are called hidden layers because they are not directly exposed to the input. Given increases in computing power and efficient libraries, very deep neural networks can be constructed. Deep learning can refer to having many hidden layers in your neural network.
Output Layer
The final layer is called the output layer, and it is responsible for outputting a value or vector of values in the format required for the problem. The choice of activation function in the output layer is strongly constrained by the type of problem you are modeling.
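To make that constraint concrete, here is a hedged sketch of the conventional pairings: sigmoid for binary classification, softmax for multiclass classification, and a linear (identity) output for regression. The function names are illustrative, not from any specific library:

```python
import math

def sigmoid(value):
    # Binary classification: a single probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-value))

def softmax(values):
    # Multiclass classification: probabilities over mutually exclusive classes
    exps = [math.exp(v - max(values)) for v in values]  # shift for stability
    total = sum(exps)
    return [e / total for e in exps]

def linear(value):
    # Regression: pass the raw value through unchanged
    return value
```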