History of Neural Networks

Samhith Vasikarla
Jan 12, 2022

Before going into the history of neural networks, let us first discuss how a biological neuron actually works.

Biological neuron

A neuron receives information through its dendrites. That information is processed in the cell body (around the nucleus) and then passed along the axon to the axon terminals, where it becomes input to the connected neurons.

In biology, a neuron can have both thick and thin dendrites. The information arriving through the thick dendrites matters more to that neuron than the information arriving through the thin dendrites.

Now let us see how we mimic this neuron.

We treat each dendrite as one input. These inputs are combined by a weighted summation function, and the result is passed to a function that decides whether the artificial neuron should fire or not. This deciding function is called the activation function.

Artificial neuron
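To make this concrete, here is a minimal sketch of such an artificial neuron in Python. The step activation and the specific inputs, weights, bias, and threshold are illustrative choices for this sketch, not part of any standard model.

```python
# A minimal sketch of an artificial neuron: a weighted sum of inputs
# followed by an activation function. All numbers below are made up
# purely for illustration.

def step_activation(z, threshold=0.0):
    """Fire (output 1) only if the summed signal crosses the threshold."""
    return 1 if z > threshold else 0

def artificial_neuron(inputs, weights, bias=0.0):
    # Weighted summation: a "thick dendrite" corresponds to a larger weight.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # The activation function decides whether the neuron fires.
    return step_activation(z)

# Example: three dendrites, the first one "thicker" (larger weight).
inputs = [1.0, 0.5, 0.2]
weights = [0.9, 0.3, 0.1]
print(artificial_neuron(inputs, weights, bias=-0.5))  # prints 1 or 0
```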

Biology also shows that neurons do not exist in isolation: they form networks of neurons connected to each other, where the output of one neuron can be the input to another.

So computer scientists wanted to mimic this network of neurons as well. In 1986, Geoffrey Hinton and his collaborators came up with the backpropagation method, with which they could train a neural network of depth 2.

2 depth neural network
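As a rough illustration of what such training looks like, here is a minimal backpropagation sketch in NumPy (not the original 1986 formulation) for a depth-2 network, i.e. one hidden layer plus an output layer, fit to the XOR problem. The layer sizes, learning rate, and iteration count are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases for the two layers.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass through the two layers.
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # network output

    # Backward pass: propagate the error gradient layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should get close to [[0], [1], [1], [0]]
```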

However, when the same approach was used to train deeper neural networks (depth ≥ 3), training failed, largely because the gradients become vanishingly small as they are propagated back through many layers. Yet for better algorithms we want large, deep neural networks.

In 2006, Hinton came up with a research paper on how to train deep neural networks. Then, in the 2012 ImageNet competition (a large dataset for recognizing objects in images), deep neural networks performed far better than the other machine learning models. Deep learning took off from there, giving us applications like voice assistants, self-driving cars, image recognition systems, and so on.

References:
https://sciencefest.indiana.edu/build-a-neuron-with-playdough/
http://neuralnetworksanddeeplearning.com/chap1.html
