Temporal Intelligence

There Is a Time for Everything

Abstract
The Failure of GOFAI
  Fifty Years of Failure
  Symbolic Delusion
  The Redefinition Game
The Real AI Problem
  The Power of Simplicity
  Emergent Common Sense
  Multiple Integrated Subnetworks
Temporal Intelligence
  The Importance of Timing
  The Operational Closure of the Nervous System
  Universality
  The Anticipatory Brain
Experimental Network

Abstract: This is an ongoing project to emulate biological intelligence in a computer. The chosen approach is based on the notion that animal intelligence is essentially a discrete signal-processing phenomenon. The experimental setup used is a spiking or pulsed neural network that learns to play chess through trial and error. The network starts out as a tabula rasa, i.e., it has no prior knowledge of chess or anything else. It bases its actions solely on the discrete temporal patterns of sensory and proprioceptive signals.

The Failure of GOFAI

Fifty Years of Failure

Most approaches to AI, especially the various knowledge representation schemes advanced by the GOFAI (good old-fashioned AI) community over the last fifty years, are missing the point about intelligence. Neither symbolic representation nor the current crop of artificial neural networks (ANNs) has much to do with intelligence. The only intelligence we know is animal and human intelligence. Biological intelligence is what we should be trying to emulate in our machines. Yet it seems as if the GOFAI community has made it its mission to ignore every significant advance in neurobiology and psychology that has occurred over the last one hundred years. Even their ANNs bear little resemblance to biological neurons. Needless to say, they have failed to deliver on their original goal, which was to create a machine with the intelligence of a human being.

By 1982, when GOFAI's failure to deliver on its promises could no longer be denied, Dr. Marvin Minsky of MIT, one of the leading luminaries of GOFAI from its very beginning, was saying that "The AI problem is one of the hardest science has ever undertaken." This has been the working assumption in GOFAI circles ever since. We are now regularly warned by the artificial intelligentsia that progress in AI will be a slow, incremental process and that it will most likely be another twenty to forty years before true human-level AI becomes a reality. Is there any reason, at this late date, to take any pronouncement from the GOFAI crowd at face value? Fifty years of failure is not what most of us would call a good track record.

Symbolic Delusion

Certainly, solving the AI problem is hard if one has no clue as to what the problem is in the first place. If the assumption is that one must understand human cognition in order to develop human-level AI, then, of course, the problem is extremely hard. This is because the interconnectedness of human cognition is so astronomically complex as to be intractable to formal approaches. This realization immediately makes the use of symbolic knowledge representation approaches to creating human-like common sense in a machine look rather silly. Some AI researchers (e.g., Doug Lenat) are interested only in knowledge, the sort of knowledge that can be expressed linguistically. Their goal is to represent this knowledge in a machine using symbols and symbol manipulation algorithms. The idea that intelligence is based on symbol manipulation is totally bankrupt, in my opinion. It is symbol manipulation that requires intelligence, not the other way around.

The Redefinition Game

Amazingly, GOFAI proponents are trying to make a comeback. Not too long ago, MIT Technology Review published an article titled "AI Reboots" in which the author argues that "the focus of artificial intelligence today is no longer on psychology but on goals shared by the rest of computer science: the development of systems to augment human abilities." This raises the question: when was GOFAI ever focused on psychology? Indeed, what do game theory, expert systems, the LISP programming language, inference engines and knowledge engineering have to do with psychology? The truth is that GOFAI scientists, having failed to deliver human-level intelligence and knowing all too well that they have no chance of ever doing so, are now trying to salvage what is left of their lost glory by pasting the AI label on every computer program that suits their agenda. This way they can claim successes (and secure more funding) even though none of what they are doing has anything to do with intelligence. Science by redefinition is not progress; it is more like a con game. Intelligence, artificial or otherwise, is what psychology and neuroscience define it to be, period.

The Real AI Problem

The Power of Simplicity

The goal of the sensible AI researcher is not to develop a theory of cognition, but to discover the fundamental principles that give rise to intelligent behavior. In other words, we must try to understand and replicate the basic neural mechanisms that will allow general intelligence to gradually emerge on its own. To do so, we must take a bottom-up approach and obtain as many clues as possible from neurobiology and from behavioral research in classical and operant conditioning. Above all, we must come to understand that fundamental principles are simple by virtue of being fundamental and that generality and complexity stem from simplicity. Those who doubt the power of simplicity should examine the work of Stephen Wolfram and Edward Fredkin.
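To make the point concrete, here is a minimal sketch, in C++ like the Animal program itself, of Wolfram's elementary cellular automaton Rule 110: a three-cell update rule, expressible in a single line, that is known to generate endlessly intricate patterns and is even computationally universal. The grid size and seed below are arbitrary choices of mine.

    // Minimal sketch: Wolfram's elementary cellular automaton Rule 110.
    // A one-line update rule generates highly complex, non-repeating patterns.
    #include <bitset>
    #include <iostream>

    int main() {
        const int W = 64, STEPS = 32;
        std::bitset<W> cells;
        cells[W - 1] = 1;                       // seed: single live cell at the right edge
        for (int t = 0; t < STEPS; ++t) {
            for (int i = 0; i < W; ++i) std::cout << (cells[i] ? '#' : '.');
            std::cout << '\n';
            std::bitset<W> next;
            for (int i = 0; i < W; ++i) {
                int l = (i > 0) ? cells[i - 1] : 0;
                int c = cells[i];
                int r = (i < W - 1) ? cells[i + 1] : 0;
                int idx = (l << 2) | (c << 1) | r;  // 3-bit neighborhood code
                next[i] = (110 >> idx) & 1;         // Rule 110 lookup table
            }
            cells = next;
        }
    }

Run it and watch a single seed cell unfold into a structure no one could have predicted from the rule alone. That is the power of simplicity in thirty lines.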

Emergent Common Sense

AI researchers should focus their efforts on figuring out how the brain builds its knowledge, i.e., how the evolution of sensory stimuli conspires to induce brain connectivity. They should refrain from trying to come up with representational ways to store knowledge about the world (e.g., fruits, plants, animals) in a computer. There is a lot more to knowledge than the classification of namable objects and their relationships. A huge amount of knowledge cannot be formalized with symbols: manual dexterity, recognizing a subtle fragrance, a face or a musical tune, finding one's way around an unfamiliar neighborhood. This is the sort of emergent, common-sense knowledge that can only be acquired through direct sensory interaction with the environment.

Multiple Integrated Subnetworks

Even though a lot is known about the detailed operation of many types of neurons and the architecture of various cell assemblies, there is no overarching theory to explain the brain's operation. Neurobiologists know that the brain processes signals but they cannot explain how signal processing gives rise to intelligent behavior. What does the brain really do? This is the question that I will attempt to answer in these pages. I will introduce several general principles that an intelligent system designer can use to build a machine that uses sensors and effectors to learn from its environment and coordinate the selection of its actions. The principles are designed to be simple, scalable and applicable to any learning task. Why not a single principle of intelligence? Because one of the lessons that neurobiology has taught us is that an intelligent system is not a homogeneous block with a single architecture, but a tightly integrated collection of signal-processing subnetworks, each with its own function and corresponding architecture.
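As a rough illustration of this architectural claim, here is a hypothetical C++ sketch, with names of my own invention rather than Animal's, of an intelligent system organized as a collection of subnetworks that differ in function and architecture but share a single signal interface:

    // Hypothetical sketch: an intelligent system as a set of integrated
    // signal-processing subnetworks sharing one spike "language".
    #include <memory>
    #include <vector>

    struct Spike { double time; int pathway; };   // a bare temporal marker

    class Subnetwork {                            // each subnetwork has its own
    public:                                       // function and architecture,
        virtual ~Subnetwork() = default;          // but a common signal interface
        virtual void process(const std::vector<Spike>& in,
                             std::vector<Spike>& out) = 0;
    };

    class System {
        std::vector<std::unique_ptr<Subnetwork>> parts;  // perception, memory,
    public:                                              // motivation, motor...
        void add(std::unique_ptr<Subnetwork> p) { parts.push_back(std::move(p)); }
        void step(std::vector<Spike>& bus) {
            std::vector<Spike> out;
            for (auto& p : parts) p->process(bus, out);  // every part sees the bus
            bus.swap(out);                               // outputs become next inputs
        }
    };

The point of the sketch is the shape of the design, not the details: heterogeneous parts, one common currency of discrete signals.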

Temporal Intelligence

The Importance of Timing

One of the underlying premises of my research is that animal intelligence is essentially a discrete signal processing phenomenon. The biological evidence is clear on this issue: neurons generate and transmit discrete spikes or signals. One often hears the phrase 'spatio-temporal intelligence' bandied about. The problem with characterizing intelligence as being anything other than temporal is that it overlooks the significance of one of the most important discoveries of neurobiology in the last century: all sensory phenomena are converted into discrete spikes prior to being processed by the brain. A spike is just a temporal marker that indicates that something just happened. There is nothing spatial about a neural spike.
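In code, this claim has a very concrete consequence: a spike can be represented as a bare timestamp with no payload whatsoever. A minimal sketch (the representation is mine, not necessarily Animal's):

    // A spike is only a temporal marker: "something just happened, now."
    // It carries no data about what happened or where; a receiving neuron
    // can read nothing from it but its time of arrival.
    struct Spike {
        double time;   // when the event occurred: the only information a spike has
    };

    // All sensory modalities reduce to streams of such markers. What makes a
    // stream "visual" or "auditory" is the pathway it travels, not the spike.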

The temporal nature of intelligence has been known for quite some time. In 1949, Donald Hebb proposed a temporal learning rule for neurons and cell assemblies that has exerted a strong influence on theories of neural learning. Psychologists have developed an entire science of operant behavior based on the timing of stimuli and responses. However, it was not until the latter part of the twentieth century, with the groundbreaking work of people like Terrence Sejnowski and Henry Markram, that neurobiologists began to appreciate the extent to which the brain's operation is dependent on the precise timing of neural spikes.
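The best-known formal expression of this discovery is spike-timing-dependent plasticity (STDP), in which the sign of a synaptic change depends on whether the presynaptic spike precedes or follows the postsynaptic one. Here is a sketch of the standard exponential STDP rule; the constants are illustrative, and this is not Animal's learning rule:

    // Sketch of classic spike-timing-dependent plasticity (STDP):
    // if the presynaptic spike precedes the postsynaptic one, strengthen
    // the synapse; if it follows, weaken it. Constants are illustrative.
    #include <cmath>

    double stdp(double t_pre, double t_post, double w) {
        const double A_plus = 0.01, A_minus = 0.012;  // learning rates
        const double tau = 20.0;                      // time constant, in ms
        double dt = t_post - t_pre;
        if (dt > 0) w += A_plus  * std::exp(-dt / tau);   // pre before post: potentiation
        else        w -= A_minus * std::exp( dt / tau);   // post before pre: depression
        return w;
    }

Note that everything in the rule hinges on the sign and magnitude of a time difference; nothing else about the two spikes matters.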

The Operational Closure of the Nervous System

True intelligence is domain-independent. That is to say, it makes no assumption about either the origin (sensors) or the destination (effectors) of signals. Psychologists have a name for this: they call it the operational closure of the nervous system. It means that the brain operates on its own states, not on some imagined model of the external world. There is nothing in a neural spike that identifies its origin from the point of view of a receiving neuron. Nervous stimuli are unspecific, i.e., non-symbolic. In other words, whether auditory, visual, tactile or olfactory, all stimuli produce the same kind of signals. The only things that distinguish one spike from another are its time of arrival and the path that it takes. The path itself has nothing to do with spatial location. It is part of a classification method used by the nervous system to separate signals into distinct but unspecified categories. Thus spatiality and other modalities have to do only with the sensor and/or effector side of an intelligent system, i.e., with their physical types, distribution, position and so on, not with the way the network itself processes signals.
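Here is a sketch of what this non-symbolic character looks like in code: spikes from every modality share a single type, and the network sorts them into categories by pathway alone, never learning what any pathway "means". The representation is an assumption of mine:

    // Signals are unspecific: an auditory spike and a visual spike are the
    // same kind of object. Only arrival time and pathway differ, and the
    // pathway is an opaque category label, not a spatial coordinate or symbol.
    #include <map>
    #include <vector>

    struct Spike { double time; int pathway; };

    // Classification by pathway: distinct but unspecified categories.
    std::map<int, std::vector<double>> classify(const std::vector<Spike>& spikes) {
        std::map<int, std::vector<double>> streams;
        for (const auto& s : spikes) streams[s.pathway].push_back(s.time);
        return streams;   // the network never learns *what* a pathway carries
    }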

There is a common misconception among GOFAI proponents that the brain somehow creates a model of the external world and uses this model to navigate and calculate outcomes. This could not be further from the truth. Consider that, in order to create a model of the world, the brain would have to first see the world. The problem is that it cannot see the world unless it first creates a model of it. This creates an infinite regress. The truth is that the brain only creates temporal correlations among sensory signals. That is all! What the brain sees is not the world but its own states. I realize that the illusion of seeing a world "out there" is very powerful but it is false. The world we see is not "out there" but "in there." All we can do is infer that there is a world "out there." We never see it.

Universality

Temporal intelligence is based on a single unifying concept: the relative arrival times of discrete signals. Its power is in its simplicity. At the heart of this approach is the claim that all knowledge, regardless of type, consists of patterns of discrete signals, a pattern being defined as a set of temporal relationships. Although the number of possible patterns is unlimited, all patterns can be expressed in terms of only two fundamental relationships: signals can be either concurrent or sequential. This observation, if true, will have universal consequences with regard to the organization of the nervous system. Indeed, one of the most striking things about the brain is the uniformity of its cell assemblies. The brain uses similar neurons to process signals originating from vastly different sensory organs. In light of this universality, it is not surprising that human beings are so adept at making analogies involving seemingly disparate concepts. Concepts are easy to compare if they are all expressed in a common temporal "language." 
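The two fundamental relationships reduce to a single comparison in code. A minimal sketch, assuming a coincidence window epsilon whose value the theory itself does not fix here:

    // The two fundamental temporal relationships: two signals are either
    // concurrent (arriving within some small window) or sequential.
    #include <cmath>

    enum class Relation { Concurrent, Sequential };

    // epsilon is an assumed coincidence window, in whatever time unit the
    // network uses; its value is an illustrative choice, not part of the theory.
    Relation relate(double t_a, double t_b, double epsilon = 1.0) {
        return (std::fabs(t_a - t_b) <= epsilon) ? Relation::Concurrent
                                                 : Relation::Sequential;
    }

Every pattern, however elaborate, is claimed to be a composition of these two relations, which is why a single comparison can serve as the atom of the whole scheme.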

The Anticipatory Brain

Without a doubt, the most important attribute of biological intelligence is the ability to anticipate or predict the evolution of various sensory phenomena. Without this ability, no intelligent system could survive in its environment, nor could it have goals or motivation. It goes without saying that an animal's anticipatory mechanism is based on its ability to learn the temporal order of sensory events. The mechanism necessarily uses probabilistic principles. As an example, the probability that a given action will be followed by either pain or pleasure determines whether or not the action is taken. Hence every action is given a value depending on visceral associations learned from experience. This is also part of the mechanism of action selection and attention. Behavioral psychologists have known this for a long time: classical and operant conditioning work only because animals are able to anticipate the order of arrival of specific stimuli. Both appetitive and aversive behaviors are predicated on the ability to predict the outcomes of events.
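Here is a sketch of this valuation idea: each action carries a running value built up from experienced pleasure and pain, and selection favors the highest value. The running-average update rule and learning rate are illustrative assumptions of mine, not a claim about the brain's exact mechanism:

    // Sketch: every action accumulates a value from experienced outcomes.
    // Pleasure raises the value, pain lowers it; the value then biases
    // action selection.
    #include <vector>

    struct Action {
        double value = 0.0;          // learned visceral association
        void learn(double outcome) { // outcome > 0: pleasure, < 0: pain
            const double rate = 0.1; // illustrative learning rate
            value += rate * (outcome - value);   // running-average update
        }
    };

    // Pick the action with the highest learned value.
    int select(const std::vector<Action>& actions) {
        int best = 0;
        for (int i = 1; i < (int)actions.size(); ++i)
            if (actions[i].value > actions[best].value) best = i;
        return best;
    }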

The act of anticipating is performed by some sort of memory retrieval mechanism. This mechanism works by comparing incoming sensory patterns with previously learned sequences. Note that the brain continually receives incomplete or otherwise corrupted information from the senses. For example, objects in the visual field are often partially occluded by other objects, and speech is frequently interspersed with noise or spoken with an unfamiliar accent. In order to make sense of it all, the brain must use its anticipatory mechanism to fill in the gaps. Without this ability, we would have a hard time recognizing and understanding a large percentage of the things that we see, hear or feel. The temporal approach to artificial intelligence is ideally suited to the goal of emulating anticipation in a machine.
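A minimal sketch of gap-filling by sequence retrieval: match a partial input (gaps marked with -1) against stored sequences and complete it from the best match. The scoring scheme is an illustrative assumption of mine:

    // Sketch: anticipation as retrieval. Compare a partial input against
    // learned sequences and fill the gaps from the best-matching memory.
    #include <cstddef>
    #include <vector>

    using Seq = std::vector<int>;   // a learned sequence of pathway ids

    Seq complete(const Seq& partial, const std::vector<Seq>& memory) {
        const Seq* best = nullptr;
        int bestScore = -1;
        for (const auto& m : memory) {
            if (m.size() != partial.size()) continue;
            int score = 0;
            for (std::size_t i = 0; i < m.size(); ++i)
                if (partial[i] != -1 && partial[i] == m[i]) ++score;
            if (score > bestScore) { bestScore = score; best = &m; }
        }
        return best ? *best : partial;   // fill in the gaps, or give up
    }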

Experimental Network

In these pages, I will introduce a model of sensory perception, memory, anticipation, concept formation and motor learning driven entirely by the timing of signals. I define signal processing as follows (a sketch in code follows the definition):

1. The guiding or steering of sensory signals through various pathways according to their temporal relationships.
2. The controlled generation of non-sensory signals for anticipation, action selection, motor activation, attention, motivation and other purposes.
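Here is the promised sketch, combining the two halves of the definition in a few lines of C++. The routing and successor tables stand in for learned temporal relationships; their form is an assumption of mine, not Animal's actual data structures:

    // Sketch of the two-part definition of signal processing:
    // (1) sensory spikes are steered along pathways according to learned
    //     temporal relationships, and
    // (2) the network generates non-sensory spikes of its own (here, a
    //     simple anticipation of the expected successor).
    #include <map>
    #include <vector>

    struct Spike { double time; int pathway; bool sensory; };

    struct Network {
        std::map<int, int> route;    // learned steering: source -> target pathway
        std::map<int, int> follows;  // learned order: pathway -> expected successor

        // Process one incoming spike, emitting guided and generated spikes.
        std::vector<Spike> step(const Spike& in) {
            std::vector<Spike> out;
            auto r = route.find(in.pathway);        // (1) guide/steer the signal
            if (r != route.end())
                out.push_back({in.time, r->second, in.sensory});
            auto f = follows.find(in.pathway);      // (2) generate a non-sensory,
            if (f != follows.end())                 //     anticipatory spike
                out.push_back({in.time + 1.0, f->second, false});
            return out;
        }
    };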

The experimental setup that I use for my project is a chess learning program called Animal. Chess provides a causal environment that is complex enough to be challenging, yet one that can be simulated on a computer at little expense in time and money. A neural network that can learn to play chess from scratch through trial and error would certainly be proof of intelligence. Still, I would rather conduct my research using a multi-legged, spider-like robot. A real-world robot with many degrees of freedom and a full complement of sensors (visual, auditory, tactile, etc.) is an ideal platform with which to demonstrate true general intelligence. Learning to navigate through a changing environment while coordinating multiple legs in real time is a very complex problem that cannot be solved using a deterministic and/or symbolic approach. Unfortunately, such a project is beyond what I can afford at this time.

Animal is written in C++ for the MS Windows® operating system. Feel free to download the zipped executable and play with it. Eventually I will make the entire source code available for downloading. I suggest you read Animal's specifications before reading the theoretical pages. The section that describes the spiking network is especially important. Note that this is an ongoing research project, the ultimate goal of which is to build a general, scalable, adaptive, intelligent machine. I would appreciate sensible suggestions and comments from readers.

Next: Animal

Microsoft® and Windows® are registered trademarks of Microsoft Corporation.

 

©2002-2006 Louis Savain

Copy and distribute freely

Revised: 8/19/2005