Artificial Intelligence From the Bible
I always had a love of science even though I was never trained as a scientist. I obtained my high school diploma (around 1970) by passing the GED test while in the U.S. Army. After I got out, I earned a two-year associate degree in electronics technology from the New York City Community College. That is the extent of my formal education. In retrospect, this was a blessing because I was not fully indoctrinated into the mainstream scientific world view. I had a different way of looking at things. I have always been attracted to fundamental issues. I believed that no one could have a true understanding of any complex subject unless one first understood its fundamentals. Scientists, for their part, seemed mostly interested in writing math equations to predict observable phenomena. Theirs is a world of mathematical models used in making predictions based on trends. I grew deeply dissatisfied with that approach because it merely scratched the surface. In my opinion, math does not offer an explanation as to why things are the way they are. For example, neither the equations of Newton nor those of Einstein explain why things fall, a million physicists claiming otherwise notwithstanding.
Deep down, I knew that underneath all the surface complexity of the universe lay a fundamentally simple reality where things are simple by virtue of being fundamental. Nature has no choice but to use a few basic principles at every level of abstraction. Physics has its universal laws and elementary particles. Living organisms are based on DNA. I thought, why should intelligence be any different? Intelligence, too, must have its fundamental principles and elementary processors.
Even before the advent of the microcomputer in the seventies, I had often thought about the possibility of building a truly intelligent machine. It was not until around 1980, however, when I bought my first microcomputer (a Radio Shack TRS-80) that I began to think about the AI problem in earnest. If intelligent robots were to become a reality, computers would have to serve as their brains. I bought a few technical books and began to learn assembly language programming. Initially, I loved it, but I never became what one would call a great programmer. I never developed an abiding passion for it. I tend to lose interest in a subject as soon as I understand its underlying principles. I am always ready to move on to the next mystery at the earliest opportunity. Life is short and, besides, I am a slow learner. I suffer from a mental handicap whereby parts of my short-term memory suddenly fall asleep without warning. I often find myself losing awareness of part of what is going on around me even though I may be fully aware of other things.
In those days, I had absolutely no clue as to the nature of intelligence. In fact, like many others at the time, I naively thought that it had something to do with formal logic. I was bitten by the Boolean bug, so to speak. This was unfortunate, but what had originally attracted me to logic as a possible basis for intelligence was its inherent duality. After all, Boolean logic is based on true and false states. I was convinced that all of reality, including intelligence, was based on a yin-yang type of duality. I decided that any solution to the problem of intelligence would have to involve some sort of complementarity. I was wrong about the logical basis of intelligence, but, as it turned out, I was right about the yin-yang duality part.
Like everyone else with an interest in AI, I was fascinated by the optimistic pronouncements of early AI researchers like Herbert Simon, Allen Newell, John McCarthy, Marvin Minsky and many others. I shared their enthusiasm and I was thrilled by their early successes, especially in the field of computer board games like chess and checkers. I even wrote an Othello® (Reversi) program for the VIC-20 and the Commodore 64, early popular microcomputers manufactured by the long-defunct Commodore International Corporation. Life was fun in those days and I had used my understanding of programming to land a job in the burgeoning microcomputer industry.
The good times did not last long. Subsequently, I went on to reject most of what GOFAI (good old fashioned AI) experts had to say (and continue to say) on the subject of intelligence. Most of it, I felt, was just utter nonsense and a complete waste of my time. I knew right out of the gate that the symbolic approach espoused by AI scientists had no chance whatsoever of succeeding. One could use it to build a few simple toy systems but that was about it. There was no way it could lead to human-level or even animal-level intelligence. Contrary to what the experts were espousing at the time, intelligence is not the result of symbol manipulation. The exact opposite is true: it is symbol manipulation that requires intelligence.
It was obvious to me from the very beginning that the brain had to operate on a handful of simple principles. Why else would it have such vast yet uniform collections of cells? Yet, some of the leaders in the AI community, seemingly out of ignorance but more likely out of a need to retain and secure funding for their projects while maintaining their leadership status in the community, went out of their way to discourage those who thought that intelligence was an emergent phenomenon. One of their favorite attack lines was "there is no such thing as a free lunch." This annoyed me to no end because I held the exact opposite view: solving the intelligence problem was precisely a search for free lunches. It dawned on me that I was searching for simplicity while the AI community was in love with complexity. I thought, why become proficient in a discipline that was so hopelessly headed in the wrong direction? There had to be a better way.
Over the years, I developed a love-hate relationship with computers. I loved their fundamental simplicity but, at the same time, I was frustrated by their brittleness and awkwardness. My frustration increased over the years. I watched helplessly as the computer science community made a complete mess of software engineering. They turned it into a veritable tower of Babel with hundreds of programming languages to choose from. Various camps sprang up touting specific methods and languages while disparaging others.
I always felt that the linguistic/algorithmic approach to software construction was the wrong way to go about it. It made programming hard to learn, tedious and prone to errors. I knew that what was needed was a parallel approach whereby simple synchronous objects communicated with one another via discrete signals. In nature, all objects are parallel and synchronous. Why should software objects be any different? Even though computers are sequential machines, they are so fast that parallelism can be easily simulated. In fact, many applications, such as neural networks, video games and modeling software, did exactly that. Furthermore, why should anyone have to learn English or any other language just to program a computer?
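To make the idea of simulated parallelism concrete, here is a minimal Python sketch (my own illustration, not a description of any actual system) of synchronous objects exchanging discrete signals on a sequential machine: every object reacts to the signals emitted during the previous tick, and all newly emitted signals are delivered together on the next tick.

```python
class SyncObject:
    """An elementary synchronous object: relays a discrete signal to its targets."""
    def __init__(self, name, targets=None):
        self.name = name
        self.targets = targets or []

    def react(self, source):
        # Emission is deferred: the returned signals are delivered on the NEXT tick.
        return [(target, self.name) for target in self.targets]

def run(initial_signals, ticks):
    """Advance the whole network one global tick at a time (simulated parallelism)."""
    pending = list(initial_signals)    # (receiver, source) pairs due this tick
    trace = []
    for _ in range(ticks):
        emitted = []
        for receiver, source in pending:
            trace.append((receiver.name, source))
            emitted.extend(receiver.react(source))
        pending = emitted              # all updates applied at once: synchronous
    return trace

# Three objects snapped together in a chain: A -> B -> C.
c = SyncObject("C")
b = SyncObject("B", targets=[c])
a = SyncObject("A", targets=[b])
print(run([(a, "input")], ticks=3))    # [('A', 'input'), ('B', 'A'), ('C', 'B')]
```

The signal takes one tick per hop, so timing falls out of the simulation for free; the objects never call one another directly, which is what makes them plug-compatible.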
I understood early in my programming career that software could be easily constructed by using a graphical user interface to connect elementary synchronous objects together in order to form more complex, plug-compatible components. Components should snap together automatically. Just drag and drop. This approach, I knew, would open software development to the masses and would improve reliability and productivity by at least an order of magnitude. But as important as the adoption of a correct software engineering methodology was, I decided that I had neither the time nor the energy to spend on it (*). It would have to take a back seat to my interest in AI. AI was my real passion (along with physics), but only because I did not yet understand it. I was fascinated by the robust and complex way in which humans and animals interacted with their environments. It was awe-inspiring, but I saw no reason why it could not be duplicated in a machine. I just did not know how. Even though software engineering was a mess and I longed to do something about it, I figured it was good enough for my AI experiments. Little did I suspect at the time that my quest to understand intelligence would last more than two dozen long years.
I was disappointed with GOFAI and software engineering but I did not give up. All along, I was driven by a mysterious force that kept me searching. I discovered that the only group of people who had a clue about the true nature of intelligence were the psychologists. Psychologists, in general, are not given to wild speculations. Theirs is a rather disciplined field based on solid evidence, plausible hypotheses and controlled experiments. I felt that their understanding of human and animal behavior was second to none. With the advent of the microcomputer, psychologists had finally found an affordable way to test and model some of their ideas. Soon afterwards, the internet opened the way to instantly accessible online repositories of published papers. The result was that ideas began to cross-pollinate at a rapid pace giving birth to brilliant insights.
Psychologists knew exactly what they were searching for and how to go about it. They wanted to understand the mechanism of human and animal intelligence by reverse-engineering, not the brain, but observable behavior. They have discovered many important principles (Pavlovian and operant conditioning, causal learning, short and long term memory, etc...) and created interesting models along the way. Most of these models could be converted into software and used to simulate various observable behaviors. They heartily embraced the rapidly growing field of neuroscience and began constructing biologically plausible models. Above all, psychologists, unlike most AI scientists, seem to have understood the critical importance of timing to perception, behavior and to intelligence in general. Theirs was a world of observable events, of stimuli and responses. I was impressed.
Meanwhile, the glamourous new field of AI had fallen in love with Alan Turing, language understanding, problem solving algorithms, fuzzy logic, expert systems and the like. There was no doubt in my mind which side I had to be on. I decided that I would conduct neural network experiments and compare my results with known psychological findings. I made slow but steady progress over the years. I figured out a number of things, things which turned out to be crucial later in my research. I felt strongly and correctly that the neural network approach, combined with findings in psychology and neuroscience, was the only one that had any chance of leading to human-level intelligence.
I soon found out, however, that the artificial neural network (ANN) crowd were almost as clueless as the symbol manipulation crowd. They completely overlooked the importance of timing to perceptual learning and motor behavior. Instead, they concentrated on static pattern recognition, which was really just statistical techniques masquerading as AI. This is the sort of thing that happens when a group of experts isolates itself in its own little world, forgets its original calling, and ignores progress in other disciplines. It is a form of intellectual incest. ANN researchers seemed to have had a genuine aversion to what was happening in neuroscience and psychology. They just did not seem to care. They had their little toys and that is all that mattered to them. They made almost no headway toward creating a truly intelligent machine. But they had one good thing going in that many among them had resolutely abandoned the symbolic approach in favor of what had become known as emergent intelligence.
Neuroscientists, for their part, were mired in an ocean of biological complexity. They could not see the forest for the trees. This is not to say that neuroscientists are clueless. Far from it. They, like the psychologists, are acutely aware of the importance of timing (see the work of people like Henry Markram, Terrence Sejnowski and others). They understand a lot about the detailed biological workings of individual neurons and they have a more or less good grasp of the function of some of the cell assemblies such as the cerebellum and the visual cortex. They just cannot seem to come up with a coherent picture of the function of the brain as a whole. They have theories and speculations but they do not really understand the principles involved. Going through the existing literature is painful and frustrating, kind of like searching for a needle in a haystack. I knew that I had to invent my own neural hypotheses.
Some time around 1992, while continuing to earn a living as a computer programmer, I decided to use my spare time to develop an experimental computer program which I called Animal. I needed an adequate test bed for my hypotheses. Not being able to afford a robot, I had concluded that the game of chess was a complex enough environment with which to conduct my neural network experiments. Any neural program, I thought, that could learn to be a competent chess player starting from scratch, would certainly constitute proof of intelligence. By this time, I had already rejected the mainstream ANN approach as completely wrongheaded. Unlike most ANNs, my neural network used discrete signals and consisted of multiple integrated subnetworks, each with its unique type of neurons and unique principle of operation. In addition, the artificial neurons in my model were not analog summation devices a la ANN, but simple discrete temporal signal processors.
Most AI researchers are interested only in representing knowledge in a computer, the sort of knowledge that can be expressed symbolically. In my research, I adopted the exact opposite approach. I wanted to understand how the brain builds its knowledge, i.e., how the evolution of sensory stimuli induces brain connectivity. I believed that an intelligent system had to be able to detect and process changes in its environment and in itself. I inferred that sensors were merely change detectors. Sensory signals, regardless of their origin, are all alike. A signal is just a temporal marker indicating that an event just happened. The only two things that differentiate one signal from another are its path and time of arrival. A receiving neuron has no way of distinguishing between auditory and visual signals. So I understood the anonymous nature of signals, or as psychologists would put it, the "operational closure of the brain." It was clear to me that an intelligent system is a discrete signal processing mechanism and that timing is an essential part of its operation. I eventually incorporated all of these ideas into the design of Animal. Initial results were very promising but, by 1996, I had run into a brick wall. I had no concrete idea how perception, attention and memory worked and I could not think of an effective neural mechanism that would generate goal-driven behavior. I was stuck in a miserable rut which lasted six long years.
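The claim that sensors are merely change detectors can be sketched in a few lines. This is my own minimal illustration of the idea (the onset/offset naming is an assumption for clarity, not terminology from the text): a sensor watching a raw value stream stays silent while nothing changes, and emits an anonymous discrete signal, identified only by its time of occurrence, at each transition.

```python
def change_events(samples):
    """Turn a raw sample stream into discrete temporal markers.

    The sensor fires only on change; a steady input, high or low,
    produces no signal at all. Each event carries nothing but a time
    and which kind of transition occurred.
    """
    events = []
    prev = samples[0]
    for t, value in enumerate(samples[1:], start=1):
        if value != prev:
            kind = "onset" if value else "offset"
            events.append((t, kind))
        prev = value
    return events

# A stimulus that appears at t=2 and disappears at t=5:
print(change_events([0, 0, 1, 1, 1, 0, 0]))   # [(2, 'onset'), (5, 'offset')]
```

Note that the output says nothing about whether the input was light, sound or touch; downstream, the only identity a signal has is the path it travels and when it arrives.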
I was at a point where I had become totally obsessed with understanding intelligence. I thought about it almost all the time. I thought about it before going to sleep and when I woke up. I would even think about it in my sleep. It was consuming my life and I was desperate for a breakthrough. But it was not until around the beginning of December of 2002 (almost twenty three years after I began my quest), while pondering the question of why humans are capable of retaining only seven items in short-term memory, that I got the idea of taking a closer look at some passages in the Bible. At the time, I was unemployed due to the internet recession and so I took the opportunity to spend more time on my research. Being a Christian, I had long suspected that the Bible contained major scientific secrets couched in metaphors. In fact, I had previously found a strong correlation between my own understanding of fundamental physics (my other passion) and various passages in the books of Ezekiel, Isaiah and Revelation. I remember asking myself: "If physics, why not also the brain? Why not intelligence?"
It was a tantalizing thought. I began to look for biblical parallels. Prior to this, I had observed a direct analogy between sensory signal processing and the biblical prophecy concerning the prophet Elijah's arrival just before the Messiah "to prepare a straight path for the Lord". I had many times wondered why preparing a path just before the arrival of the Messiah was so important. As with so many other things in the Bible, there had to be symbolic meaning to it. I knew that the path taken by a signal was just as important as its time of arrival. The path of a signal is the signal's identity. I also knew that signals in an incoming sensory stream (e.g., from a retinal ganglion cell) had to be separated and channeled into specific paths. This is the reason that one million nerve fibers coming from the eye ultimately synapse onto approximately four hundred million cells in the input layer of the human visual cortex, a four hundred to one ratio.
Discrete signal separation is not rocket science. In fact it is rather simple. Every signal is routed to a specific path according to its temporal correlation with a preceding signal in a parallel stream. Thus signal separation necessitates a predecessor (prophet) and a successor (Messiah). If the Messiah is not preceded by the prophet, the path is not taken. In my mind, the analogy was direct and inescapable. But that was not nearly enough to solve the intelligence puzzle. The rest of the puzzle, I knew, had to be in there somewhere. But where? I kept searching.
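The routing rule described above can be sketched directly. In this hedged illustration (the two-tick pairing window, the event layout and the stream names are my own assumptions, not details from the text), a successor signal is admitted to its path only if a predecessor signal arrived on the paired parallel stream just before it; otherwise the path is not taken.

```python
WINDOW = 2  # assumed maximum delay between predecessor and successor, in ticks

def separate(events, pairs):
    """Route successor signals by temporal correlation with a predecessor.

    events: list of (time, stream) tuples.
    pairs:  maps a successor stream to the parallel stream that must precede it.
    Returns only the successor events whose predecessor fired within WINDOW ticks.
    """
    last_seen = {}                       # stream -> most recent firing time
    routed = []
    for t, stream in sorted(events):
        pred = pairs.get(stream)
        if pred is not None:
            t_pred = last_seen.get(pred)
            # The path is taken only when the predecessor "prepared" it in time.
            if t_pred is not None and 0 < t - t_pred <= WINDOW:
                routed.append((t, stream))
        last_seen[stream] = t
    return routed

# "S" at t=2 closely follows "P" at t=1 and is routed; "S" at t=7 is not.
events = [(1, "P"), (2, "S"), (7, "S")]
print(separate(events, {"S": "P"}))      # [(2, 'S')]
```

The point of the sketch is that the mechanism needs nothing but arrival times and paths: no signal content is ever inspected.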
What finally made the connection for me was the enigmatic number seven. Indeed, why only seven items in short-term memory? Having read the Bible, I knew that the number seven figures prominently in biblical symbology, e.g., the seven days of creation (Genesis), the seven spirits or eyes of the Lord (Zechariah and Revelation), the seven lampstands (Revelation), the seven churches of Asia (Revelation), etc... Could the Bible possibly have anything profound to say about intelligence and the brain? I was willing to wager anything that it did. To anyone else, even to most Christians, this would seem like a long shot, one of those crazy ideas we sometimes get when all else has failed. Any scientist would dismiss it as pure crackpottery. But I had my convictions and I was never one to stick to the beaten path. I never did fit into any particular mold. All my life, I made it a point to closely examine anything rejected by the mainstream. Besides, I had become disenchanted with so many aspects of mainstream science that being viewed as a crackpot somehow felt like a badge of honor.
At the time, I did understand a few things about the brain's principles of organization. As I wrote earlier, I knew that the timing of signals was critical and I had a good understanding of several neural cell assemblies. I correctly understood that intelligence is a signal processing phenomenon and I knew that signals had to be discrete (not analog). I understood the role of retinal ganglion cells (RGC) and I had a passable understanding of the function of the sensory cortex, the basal ganglia and the cerebellum. I knew that the hippocampus was involved with both short and long-term memory and that the amygdala was the seat of emotions and motivation.
As I mentioned previously, I was convinced that a sort of yin-yang duality permeated it all. All of my best ideas had come from applying this duality to various problems. As an example, I deduced that the sensory layer had to be the inverse of the motor layer and vice versa. The two are complementary opposites. Just as a sensory stimulus had a beginning and an end, every motor action also had to have a beginning and an end. This was obvious enough. I subsequently theorized (wrongly, as it turned out) that there had to be separate (originating from different areas of the motor cortex) motor commands for starting and terminating an action. It never occurred to me that the two commands had to be intimately related so as to form a yin-yang pair (I should have known better, being that I am a staunch believer in duality). So I ended up wasting several years working under a false assumption. The important thing, however, is that I had a model of sorts percolating in my brain. It was neither complete nor correct but it was a good beginning.
So around the time of Christmas 2002, I began to reexamine the message of the seven churches in the book of Revelation. I had one question in mind: does it have anything at all to do with the brain? I will never forget what happened next. As I read the text, I was suddenly engulfed by an overwhelming realization. At first, I could not think straight due to the overpowering excitement that welled up in me. I remember pacing around the room and repeating over and over with tears in my eyes: "Oh my God, this is it!" My heart was racing and I was trembling with emotion.
What happened was that I had immediately noticed a striking parallel between some of the symbolic churches described in the text and the cell assemblies in my own model of the brain. The meaning of many of the metaphors escaped me and I was not sure of the exact details of the principles, but the analogical correspondence was unmistakable. I saw, for example, that "the sharp sword with two edges" represents the corrective mechanism responsible for motor coordination, that "walking" signifies sending motor commands and that "go back and do the first things" has to do with signal feedback. The flip side is that I completely misinterpreted the meaning of "fornication" and "eating meat sacrificed to idols". As I continued to study the text, it became increasingly clear to me what every church symbolized. Ephesus and Smyrna together formed the sensory cortex; Pergamum was the motor cortex; Thyatira stood for the amygdala; Sardis symbolized the hippocampal system; Philadelphia represented the basal ganglia; and Laodicea was the cerebellum.
I cannot remember having ever been so exhilarated before in my life. It was my first major breakthrough. But soon afterwards, as I began to delve deeper into the more exact meaning of the metaphors, I realized that my search was not yet over. Excitement and optimism quickly gave way to perplexity. I was still far from success. I was going to need several more breakthroughs.
Knowing that the message of the seven churches is a symbolic description of brain organization is certainly a big step forward. But decoding the meaning of the individual metaphors turned out not to be as easy as I originally surmised. The reason is that it requires an understanding of not only the brain but also of what the metaphors are intended to represent. I was facing a chicken and egg dilemma: on the one hand, I could not understand the metaphors unless I first understood the brain and on the other, I could not understand the brain unless I first understood the metaphors. It seemed almost hopeless.
I had come a long way and I was not about to give up. At that point, my confidence in my eventual success was unshakable. There was no doubt in my mind that I had found the real thing, the mother lode, so to speak. I felt it deep down in my soul. It was only a matter of time before I figured it all out. I just did not know when. Besides, the situation was not altogether hopeless because I had a Rosetta stone of sorts, in the form of my limited grasp of some of the mechanisms of the brain and of part of the biblical symbolism. Sure, that was not enough to fully open the door but it allowed me to gain a foothold.
Based on my initial understanding of the brain, I devised a series of plausible interpretations for the metaphors. Using Animal as an experimental framework, I wrote corresponding algorithms and began testing my hypotheses one by one. As could be expected, most of my initial interpretations were wrong because they were based on incomplete understanding. There were those bothersome loose ends that did not fit properly. It was one discouraging dead end after another. Several times, ideas that I had previously rejected proved to be correct after revision.
For some reason, during this period, it took a considerable time for my thoughts and ideas to coalesce into a consistent model. I especially had a hard time figuring out the correct meaning of the Church of Sardis. It was as if a dark and evil cloud hung over me, bogging me down, suppressing my normal thought processes. And all along, I was pestered by the realities of life. The need to survive conflicted with my irrepressible urge to continue searching. I was miserable but I could discern a silver lining. Slowly but surely, one thing led to another and my understanding grew. As always, I persevered. What follows are some of the milestones I reached in the last few months.
Sunday, April 18, 2004
The single lampstand described by the prophet Zechariah (Zech. 4:2) in 518 B.C., more than six hundred years before John received his own vision on the isle of Patmos, is none other than the seven spirits of God mentioned in the message to the Church of Sardis (see Rev. 3:1, 4:5 and 5:6). They are the seven eyes and the seven lamps of Zechariah's lampstand. Joshua the high priest (Zech. 3:8) represents a motor cell in Pergamum (Rev. 2:17).
Tuesday, May 3, 2004
Several items of crucial importance: The two olive trees of Zechariah (Zech. 4:11) and Revelation (Rev. 11:4) seem essential to anticipatory behavior. The temple of God being measured and the court outside the temple being given to the nations for forty-two months have to do with thinking and reasoning (the two witnesses) and automatic behavior (Laodicea or the nations).
So now, here I stand. I still do not understand the true operation of the church of Sardis and how it relates to the book of Zechariah. I know it has to do with short and long term memory and with anticipation. I know that Sardis sends its outputs to Pergamum and Thyatira. I am almost there. I know it. I can feel the sweet odor of success. It is just a matter of time.
He who has an ear, let him hear what the Spirit says to the churches.
©2004-2006 Louis Savain
Copy and distribute freely