The history of neural networks dates back to 1943, when McCulloch and Pitts modeled neurons as binary devices with fixed thresholds that could implement simple Boolean logic functions.
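As a minimal sketch of the idea, a McCulloch-Pitts unit sums its binary inputs and fires only when the sum reaches a fixed threshold; choosing the threshold selects the Boolean function. The specific threshold values below are illustrative assumptions, not taken from the article.

```python
def mcp_neuron(inputs, threshold):
    """Binary threshold unit: fire (1) if the input sum meets the threshold."""
    return 1 if sum(inputs) >= threshold else 0

# Boolean AND: both inputs must be active, so the threshold is 2.
def AND(x1, x2):
    return mcp_neuron([x1, x2], threshold=2)

# Boolean OR: a single active input suffices, so the threshold is 1.
def OR(x1, x2):
    return mcp_neuron([x1, x2], threshold=1)
```

With fixed thresholds and no adjustable weights, such units compute logic but do not learn; learning rules came later with the Perceptron and ADALINE.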
In 1958, Rosenblatt developed the Perceptron, a three-layer system whose middle "Association Layer" could learn to associate a given input with a random output unit.
Widrow and Hoff of Stanford University developed ADALINE (ADAptive LInear Element) in 1960, an analog electronic device based on the Least-Mean-Squares (LMS) learning rule.
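The LMS (Widrow-Hoff) rule adjusts each weight in proportion to the error between the target and the unit's linear output. A minimal sketch follows; the learning rate, epoch count, and training data are illustrative assumptions, not details from ADALINE itself.

```python
def lms_train(samples, targets, lr=0.1, epochs=50):
    """Fit a linear unit y = w.x + b with the LMS (Widrow-Hoff) rule."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            y = sum(wi * xi for wi, xi in zip(w, x)) + b  # linear output
            err = t - y                                   # target minus output
            # Nudge weights and bias along the error gradient.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Example: learn the mapping y = 2*x + 1 from a few sample points.
w, b = lms_train([[0.0], [1.0], [2.0], [3.0]], [1.0, 3.0, 5.0, 7.0])
```

After training, `w[0]` and `b` should be close to 2 and 1, the slope and intercept of the sampled line.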
The growing popularity of neural networks led to many other developments, such as the principle of heterostasis by A. Henry Klopf in 1972, the backpropagation learning method by Paul Werbos in 1974, and the Cognitron, a stepwise-trained multilayered neural network for interpreting handwritten characters, developed by Kunihiko Fukushima in 1975.
Since then, significant advancements have been accomplished in the field of neural networks.
Purpose of using neural networks:
The following advantageous features of neural networks have made their use widespread:
- Adaptive learning: The capacity to learn how to perform tasks based on the data provided.
- Self-organizing ability: An ANN can organize or represent the information it receives during training by itself.
- Real-time and parallel operation: Multiple ANN computations can be carried out in parallel.
- Fault tolerance through redundant information coding: Partial destruction of a network leads to a corresponding degradation of performance; however, some network capabilities may be retained even after major damage.