
Biological Parallels

With the definition from artificial intelligence covered, it is finally time to explain the biological analogy. This background is not essential for understanding neural networks, but it explains the terminology used and puts current research into perspective.

The original inspiration was drawn from the visual cortex, located toward the rear of the brain. Rosenblatt's perceptron in fact modeled the connections all the way from the retina to the brain. The simulation was an oversimplification of the neurobiological knowledge of the time, and it has since proven even more inaccurate.

MLPs have been significantly enhanced over the years by mathematically convenient techniques (for instance, sigmoid activation functions). Although some claim that such aspects are "biologically plausible," these are clearly hacks to get perceptrons to work at all. Today, all that's left of biology in MLPs is a metaphor, a crude approximation, but more prominently, the trace of shattered dreams and illusions.

Perceptrons and MLPs no longer form the bulk of neural network research for this reason. They are classed as old connectionism, and despite the success and popularity of their applications, they no longer generate the hope and enthusiasm that they once did.


The main parallel with biology takes place at a cellular level. The processing units of perceptrons and MLPs can be compared to individual neurons, from which the inspiration for the model was drawn (see Figure 19.3).

Figure 19.3. A single biological neuron with a soma, a dendritic tree, and an axon.


A neuron is a single cell in the brain. It consists of a main body (the soma), whose main function is to fire a signal once it has been stimulated enough. Many dendrites lead into the body, forming a tree-like structure. The role of the dendrites is to collect surrounding information: the stimuli of preceding neurons. The dendrites transmit two kinds of signals: inhibitory ones (which prevent the body from being stimulated) and excitatory ones (which contribute to stimulating the body). Finally, there is usually one axon, which projects out of the main body.

When the body fires, it releases an electrochemical impulse down the axon. There are indications that many other things happen when a neuron fires (for instance, the release of gases), but some of these remain to be investigated, and others perhaps even to be discovered.

Conceptually, there are parallels to be drawn with processing units: the inputs can be compared to the dendrites, the soma corresponds to the actual processing unit, and the axon is analogous to the output.
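This analogy can be made concrete with a minimal sketch of a single processing unit. The names, weights, and threshold below are illustrative assumptions, not taken from the text; positive weights play the role of excitatory synapses and negative weights the role of inhibitory ones.

```python
def step(activation, threshold=0.0):
    """Fire (1) only when stimulation exceeds the threshold, like the soma."""
    return 1 if activation > threshold else 0

def processing_unit(inputs, weights, bias):
    """Collect stimuli (dendrites), sum them (soma), emit a signal (axon)."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return step(activation)

# Two excitatory inputs and one weakly inhibitory input: the unit fires.
print(processing_unit([1, 1, 1], [0.6, 0.6, -0.4], bias=-0.5))  # 1
```

If the inhibitory weight were made strong enough (say -1.5), the same inputs would fail to stimulate the body and the unit would output 0.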

The perceptron model is extremely simplified. Known properties of neurons are not modeled: the body releases regular spikes of activity, and neurons are also susceptible to being triggered by gases. Some of these ideas are modeled in other kinds of neural networks, but not in perceptrons. There are undoubtedly many other unknown properties of neurons that perceptrons fail to capture as well.


The complexity in a biological brain arises from the combined power of individual neurons working in parallel. These are connected by synapses, linking the axon of one neuron to the dendrites of the next. The underlying functioning of a synapse is extremely complex, but conceptually, this mechanism essentially transmits the electrochemical impulses between neurons. This is one way neurons communicate, as shown in Figure 19.4.

Figure 19.4. A set of interconnected neurons, similar to those found in biological brains.


With a bit of imagination, this brain structure can be compared to an MLP. The processing units are reminiscent of interconnected neurons. However, few of the abilities of biological brains can be seen in perceptrons. The reason, once again, is oversimplification. Scientific knowledge about the brain has improved, but this increased knowledge is not reflected in such old connectionist models.
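The comparison can be sketched as a small feed-forward pass: layers of units connected by weights, analogous to neurons linked by synapses, using the mathematically convenient sigmoid activation mentioned earlier. The weights below are arbitrary illustrative values, not a trained network.

```python
import math

def sigmoid(x):
    """A smooth, mathematically convenient stand-in for neural firing."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """Each row of weights feeds one unit; every input reaches every unit."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

def mlp(inputs, hidden_w, hidden_b, out_w, out_b):
    """Feed-forward only: signals flow strictly from one layer to the next."""
    return layer(layer(inputs, hidden_w, hidden_b), out_w, out_b)

hidden_w = [[0.5, -0.3], [0.8, 0.2]]   # 2 inputs -> 2 hidden units
out_w = [[1.0, -1.0]]                  # 2 hidden units -> 1 output unit
y = mlp([1.0, 0.0], hidden_w, [0.0, 0.0], out_w, [0.0])
print(y)  # a single value in (0, 1)
```

Note what the sketch lacks: no recurrent connections, no spikes, no spatial layout, which is precisely the oversimplification described above.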

The main limitations lie in the feed-forward restriction. Arbitrary connections and sparse networks can be used, but the lack of efficient automated methods to establish these exotic topologies seriously reduces their appeal. Finally, perceptrons have no spatial structure; they remain virtual weights stored in memory. An organization in 3D space would allow the propagation of gases to be simulated, for example.


The important aspect of MLPs does not lie in their biological inspiration; that is merely good for marketing, because neural networks undeniably have a certain aura associated with them! The important property of MLPs lies in their mathematical foundations, which have been thoroughly researched and proven over the years. This understanding has been emphasized in this chapter; as AI engineers, we should make the effort to see beyond the biological metaphors.
