WEAPON was my first try at fiction. I'd always been interested in the possibility of Artificial Intelligence, so that's what I wanted to write about. Solo is a robot designed to be one thing: a weapon. During tests in Costa Rica, it becomes obvious that Solo has a mind of his own. And emotions. And no desire to ever be "human." (Marvin Minsky of MIT, one of the founders of the field of artificial intelligence, once told me Solo was his favorite robot, and that I explained AI better than he did. He's wrong, of course, but it was a terrific compliment!) In 1995, WEAPON was made into the movie "SOLO," starring Mario Van Peebles. The best thing I can say about the movie is that it's nothing like the book.
Five years ago [this was written in 1988] it became obvious to me that the fantasy of talking to an intelligent machine might actually happen. People involved in the artificial intelligence (AI) field believe it's possible and are trying right now to build machine beings.
The philosophical problems of AI turn out to be more difficult to solve than building the appropriate hardware (disregarding size). AI philosophy struggles with the problem of how to present the world to a computer and how the computer (if mobile) would learn to move about and act in that world. If the world is highly constrained, as a mathematical or logical one is, the problem is relatively simple. It is much easier to program a computer to solve problems involving logic, math, or knowledge than it is to program a computer to construct an arch with a set of children's building blocks. Defining every step--what an arch is, which pieces to use, and where to put them--leads to a set of fixed solutions and a lack of flexibility in changing circumstances. (How many different colors, sizes, shapes, and textures of blocks are there, anyway? Are two blocks leaning together an arch?)
Hans Moravec, in Robotics, describes a robot, Uranus, built at Carnegie-Mellon University. Uranus is designed to solve such problems as "Roll down the hallway, find the third doorway, go inside and get a cup." The instructions Uranus received, via a computer program, were:
Wake up Door-Recognizer with instructions
   On Finding-Door Add 1 to Door-Number
Set Door-Number to 0
While Door-Number < 3 Wall-Follow
If Door-Open THEN Go-Through-Opening
Set Cup-Location to result of Look-for-Cup
Travel to Cup-Location
Pickup-Cup at Cup-Location
Travel to Door-Location
If Door-Open THEN Go-Through-Opening
Travel to Start-Location
This is high-level computer talk for, "Go down the hall to the third door. Go inside that room without breaking down the door and bring the cup back here."
Moravec describes a problem that occurs when Uranus trundles down the hall counting doors. The second door has been completely covered with gaudy posters, making it unrecognizable to the robot. Uranus rolls past the third door thinking it's the second, stopping at the fourth door. When Uranus opens the fourth door, it's the entrance to a stairwell, mortal danger to Uranus. Fortunately, there's a concurrent program running within Uranus called Detect-Cliff. The program is always calculating the likelihood of encountering a drop-off based on feedback from various sensors. A companion program, Deal-with-Cliff, is also running continuously, but with low priority. When Detect-Cliff is activated, Deal-with-Cliff takes over as the highest priority and Uranus backs away from the edge of the world.
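This scheme of a low-priority program waiting to seize control when its moment comes can be sketched in a few lines of Python. The behavior names, priorities, and sensor readings below are my own illustration, not the actual Uranus software:

```python
# A minimal sketch of priority-based behavior arbitration, in the spirit of
# Moravec's Detect-Cliff / Deal-with-Cliff pair. Behavior names, priorities,
# and the sensor threshold are illustrative, not taken from Uranus.

def follow_wall(sensors):
    return "rolling along the wall"

def deal_with_cliff(sensors):
    return "backing away from the edge"

# Each behavior: (name, priority, trigger condition, action).
# Deal-with-Cliff idles until the cliff detector fires, then outranks everything.
BEHAVIORS = [
    ("Deal-with-Cliff", 10, lambda s: s["drop_ahead"] > 0.5, deal_with_cliff),
    ("Wall-Follow",      1, lambda s: True,                  follow_wall),
]

def arbitrate(sensors):
    """Run every trigger each cycle; the highest-priority active behavior wins."""
    active = [(pri, name, act) for name, pri, trig, act in BEHAVIORS if trig(sensors)]
    pri, name, act = max(active)
    return name, act(sensors)

# Hallway is safe: Wall-Follow wins.
print(arbitrate({"drop_ahead": 0.0}))   # ('Wall-Follow', 'rolling along the wall')
# Stairwell detected: Deal-with-Cliff takes over.
print(arbitrate({"drop_ahead": 0.9}))   # ('Deal-with-Cliff', 'backing away from the edge')
```

Nothing in the sketch "decides" to be afraid of stairs; the fear-like switch simply falls out of running every monitor on every cycle and letting priority settle the conflict.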
"Now," says Moravec, "there's a curious thing about this sequence of actions. A person seeing them, not knowing about the internal mechanisms of the robot, might offer this interpretation: 'First the robot was determined to go through the door, but then it noticed the stairs and became so frightened and preoccupied it forgot all about what it had been doing.' Knowing what we do about the programming of the robot, we might be tempted to scold this poor person for using such sloppy anthropomorphic concepts as determination, fear, preoccupation and forgetfulness in describing the actions of a machine. We could do so, but it would be wrong."
"I think the robot would come by the emotions and foibles described as honestly as any living animal. An octopus in pursuit of a meal can be diverted by hints of danger in just the way Uranus was. An octopus also happens to have a nervous system that evolved entirely independently of our own vertebrate version. Yet most of us feel no qualms about ascribing concepts like passion, pleasure, fear, and pain to the actions of the animal. We have in the behavior of the vertebrate, the mollusk, and the robot a case of convergent evolution. The needs of the mobile way of life have conspired in all three instances to create an entity that has modes of operation for different circumstances, and that changes quickly from mode to mode on the basis of uncertain and noisy data prone to misinterpretation."
The reluctance of humans to believe that machines could ever possess a Self or exhibit emotions arises from the illusion that we understand how our own minds work. Actually, we only know how to use our minds (at least we think we do). An analogy might be that we all know how to use televisions, but very few of us know how televisions work.
The question of how intelligence can emerge from nonintelligence can be answered with an example: ourselves. Many scientists believe that our minds are constructed of many little parts, each mindless by itself. In his book Society of Mind, Marvin Minsky, co-founder of MIT's Artificial Intelligence Lab, says:
"I'll call 'Society of Mind' this scheme in which each mind is made of many smaller processes. These we'll call agents. Each mental agent by itself can only do some simple thing that needs no mind or thought at all. Yet when we join enough of them we can explain the strangest mysteries of mind."
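Minsky's point can be toyed with on any computer. In this Python sketch, which borrows the Find, Get, and Put agent names from Minsky's Builder example but uses a block-stacking world of my own simplification, each agent is a mindless one-step function, yet together they build a tower:

```python
# A toy illustration of Minsky's "agents": each function below does one simple
# thing that needs no mind or thought at all, but joined together they
# accomplish something none of them could alone. The agent names follow
# Minsky's Builder example; the world itself is a made-up simplification.

world = {"table": ["block-A", "block-B", "block-C"], "tower": [], "hand": None}

def find(world):            # FIND: just reports the first free block, if any
    return world["table"][0] if world["table"] else None

def get(world, block):      # GET: just moves one block into the hand
    world["table"].remove(block)
    world["hand"] = block

def put(world):             # PUT: just drops whatever the hand holds onto the tower
    world["tower"].append(world["hand"])
    world["hand"] = None

def builder(world):         # BUILDER: an "agency" that only sequences the others
    while (block := find(world)) is not None:
        get(world, block)
        put(world)

builder(world)
print(world["tower"])   # ['block-A', 'block-B', 'block-C']
```

No single function knows what a tower is, and Builder itself never touches a block; the competence lives only in the arrangement.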
One of the mysteries is the concept of Self (Self is always capitalized in these discussions). Minsky says, "The ordinary views are wrong that hold that Selves are magic, self-indulgent luxuries that enable our minds to break the bonds of natural cause and law. Instead, those Selves are practical necessities." In a similar way, Minsky believes that emotions are necessary organizational agencies that help guide us along complicated paths to goals:
"No long-term project can be carried out without some defense against competing interests, and this is likely to produce what we call emotional reactions to the conflicts that come about among our most insistent goals. The question is not whether intelligent machines can have any emotions, but whether machines can be intelligent without any emotions. ... It is probably no accident that the term 'machinelike' has come to have two opposite connotations. One means completely unconcerned, unfeeling, and emotionless, devoid of any interest. The other means being implacably committed to some single cause. Thus each suggests not only inhumanity, but also stupidity. Too much commitment leads to doing only one single thing; too little concern produces aimless wandering."
Emotions establish priorities, form purpose, and communicate our desires to others. Emotions would have to be a part of any truly intelligent machine. Indeed, in the appropriate machine, emotions will probably arise as a natural consequence of thinking and trying to solve problems.
A new kind of computer, called a parallel processor, is now being built; it comprises thousands (now) or millions (soon) of individual, interconnected smaller computers, each capable of being an agent of specialized interest or ability. One of these, the Connection Machine, built by Thinking Machines Corporation of Cambridge, Massachusetts, contains over 64,000 individual, interconnected small computers. Parallel processing computers were inspired, in part, by the goal of creating intelligence in a machine, and electronic miniaturization has made building them possible. (Our present single-processor personal and mainframe computers are structured the way they are because of the expense of building the original vacuum-tube computers like the UNIVAC. Miniaturization has made possible desktop personal computers with hundreds of times the computing power of the million-dollar, room-sized UNIVAC.) Many people, including planners at the Defense Department, believe that with large parallel processors (large in capacity, not size) containing enough individual interconnected co-processors assigned as various Minsky agents, truly complicated, human-like thinking can occur.
When the thousands of specialized areas of our brains learn to work together, becoming able to solve a variety of changing problems, we call the result Common Sense. Our mind has learned through an often painful childhood how to coordinate its agencies to achieve different goals. The current goal in artificial intelligence is to construct machines similarly arranged which would learn to coordinate learned and supplied agencies and develop their own form of common sense.
Having few of our biological imperatives, machine beings would most certainly be different from human beings. Although a machine being would share, in a general way, many of our own goals, like food acquisition, shelter, reproduction, defense, health maintenance, and so on, it would view the specifics of these problems very differently. In Weapon, one of Solo's goals is to produce electricity for his own consumption; the villagers grow beans.
A recent New York Times article (August 16, 1988) discusses an artificial neural network built by Terrence J. Sejnowski of Johns Hopkins. His program, known as NetTalk, consists of about 300 neurons arranged in three layers, connected by 18,000 adjustable synapses. "At first these volume controls [of the synapses] are set at random and NetTalk is a structureless, homogenized tabula rasa. Provided with a list of words, it babbles incomprehensibly. But some of its guesses are better than others, and they are reinforced by adjusting the strengths of the synapses according to a set of learning rules."
"After a half day of training, the pronunciations become clearer and clearer until NetTalk can recognize some 1,000 words. In a week, it can learn 20,000."
Although the program is not provided with specific rules for how different letters are pronounced in different circumstances (like the "c" in "carrot" and "certify" or the "p" in "put" and "phone"), as it evolves, "it acts as though it knows the rules. They become implicitly coded in the network of connections, though [Dr. Sejnowski] had no idea where the rules were located, or what they looked like."
"Using mathematical analysis, he is beginning to uncover this hidden knowledge. 'It turned out to be very sensible,' he said. 'The vowels are represented differently from the consonants. Things that sound similar are clustered together.'"
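The learning rule the article describes, guess, compare, and nudge the synapse strengths, can be shown in miniature. The Python sketch below is not NetTalk; it is a single artificial neuron trained the same way on a made-up toy encoding of letters, with data and weights of my own invention:

```python
import random

# NetTalk's learning rule in miniature: start with random "synapse" weights,
# guess, and nudge each weight toward the correct answer. Here one neuron
# learns to label toy 5-bit letter patterns as vowel (1) or consonant (0).
# The encoding and examples are arbitrary illustrations, not Sejnowski's data.

random.seed(0)

DATA = [([1, 0, 0, 1, 0], 1), ([0, 1, 1, 0, 0], 0),
        ([1, 1, 0, 0, 1], 1), ([0, 0, 1, 1, 0], 0)]

weights = [random.uniform(-1, 1) for _ in range(5)]   # random tabula rasa
bias = 0.0

def guess(pattern):
    total = bias + sum(w * x for w, x in zip(weights, pattern))
    return 1 if total > 0 else 0

# Training: each wrong guess adjusts every synapse in proportion to its
# part in the error (the classic perceptron rule).
for epoch in range(20):
    for pattern, target in DATA:
        error = target - guess(pattern)
        for i, x in enumerate(pattern):
            weights[i] += 0.1 * error * x
        bias += 0.1 * error

print(all(guess(p) == t for p, t in DATA))   # True: every pattern now correct
```

As with NetTalk, no rule for "vowelness" was ever written down; whatever the neuron knows is smeared across the five weights it adjusted for itself.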
One of the major tenets among those who challenge the possibility of artificial intelligence is that computers can only do what they are programmed to do. In the linear, step-by-step programming style of traditional artificial intelligence, this is true. But with the advent of artificial neural networks, the machines are learning their own way, like their biological counterparts.
At present, the development of machines that think is well funded (The Pentagon bought the first Connection Machine) and is accelerating. Compact new parallel processors containing a million miniaturized co-processors combined with new theories like Minsky's on the nature of thinking will one day make possible an encounter with a machine that will claim that it is an "I" and which will exhibit what we call emotions. Further, it will not be artificially intelligent. It will be a different kind of thinking being. It will be more alien than any biological extraterrestrial.
Weapon is a forecast of that encounter. Almost all the money spent on artificial intelligence research is supplied by the government, especially the Defense Department through the Defense Advanced Research Projects Agency, or DARPA. The goal is a general-purpose, mobile robot that can use human tools (everything from crowbars to F-16 fighters) and perform combat missions. It is hoped that through proper "education" a machine being capable of performing these missions would also not be a threat to its builders. The development of a Solo-type machine is an ongoing project at DARPA, and because an actual Solo could be a very effective weapon, neither I nor anyone else can say that they have not already accomplished their goal.
For those readers interested in further information on the subject of artificial intelligence, I include this short bibliography:
Braitenberg, Valentino. Vehicles. MIT Press. 1984
An essay in which Braitenberg demonstrates how very simple machines can evolve to solve difficult tasks and exhibit emotions.
Delbruck, Max. Mind from Matter? Blackwell Scientific Publications. 1986
A famous physicist's theory on the phenomenon.
Eccles, Sir John, Editor. Mind & Brain. Paragon House. 1982
Good collection of essays on the subject.
Gardner, Howard. The Mind's New Science. Basic Books. 1985
A history of the cognitive revolution.
Hofstadter, Douglas, Editor. The Mind's I. Basic Books. 1981
Collection of essays about Self and Soul.
Hofstadter, Douglas. Gödel, Escher, Bach. Basic Books. 1979
Hard-to-read but excellent book on AI as it relates to self-referential systems.
Hofstadter, Douglas. Metamagical Themas. Basic Books. 1985
"Questing for the Essence of Mind and Pattern"
Minsky, Marvin, Editor. Robotics. Anchor Press/Doubleday. 1985
Good general-reader source of the latest theories about robotics and thinking machines.
Minsky, Marvin. The Society of Mind. Simon and Schuster. 1986
A detailed exploration of human intelligence. Minsky is co-founder of the Artificial Intelligence Lab at MIT.
Poundstone, William. The Recursive Universe. Morrow. 1985
Among other interesting things, this book goes into automata, or how very simple systems are capable of very complex actions.
Prigogine, Ilya. Order Out of Chaos. Bantam. 1984
How things like atoms got together in the first place. Very technical.
Shurkin, Joel. Engines of the Mind. Norton. 1984
Excellent general history of the computer.
High Springs, Florida