by Robert Mason
WEAPON was my first try at fiction. I'd always been interested in the possibility of artificial intelligence, so that's what I wanted to write about. Solo is a robot designed to be one thing: a weapon. During tests in Costa Rica, it becomes obvious that Solo has a mind of his own. And emotions. And no desire to ever be "human." (Marvin Minsky of MIT, one of the founders of the field of artificial intelligence, once told me Solo was his favorite robot, and that I explained AI better than he did. He's wrong, of course, but it was a terrific compliment!) In 1995, WEAPON was made into the movie "SOLO," starring Mario Van Peebles. The best thing I can say about the movie is that it's nothing like the book.
(from the book) |
Five years ago [this was written in 1988] it became obvious to me that the fantasy of talking to an intelligent machine might actually happen. People involved in the artificial intelligence (AI) field believe it's possible and are trying right now to build machine beings.
The philosophical problems of AI turn out to be more difficult to solve than building the appropriate hardware (disregarding size). AI philosophy struggles with how to present the world to a computer, and how the computer (if mobile) would learn to move about and act in that world. If the world is highly constrained, as a mathematical or logical one is, the problem is relatively simple. It is much easier to program a computer to solve problems involving logic, math, or knowledge than to program one to construct an arch from a set of children's building blocks. Defining every step--what an arch is, which pieces to use, and where to put them--leads to a set of fixed solutions and a lack of flexibility in changing circumstances. (How many different colors, sizes, shapes, and textures of blocks are there, anyway? Are two blocks leaning together an arch?) Hans Moravec, in Robotics, describes a robot, Uranus, built at Carnegie-Mellon University. Uranus is designed to solve such problems as "Roll down the hallway, find the third doorway, go inside and get a cup," instructions it receives via a computer program. Moravec describes a problem that occurs when Uranus trundles down the hall counting doors. The second door has been completely covered with gaudy posters, making it unrecognizable to the robot. Uranus rolls past the third door thinking it's the second, stopping at the fourth door. When Uranus opens the fourth door, it's the entrance to a stairwell--mortal danger to Uranus. Fortunately, there's a concurrent program running within Uranus called Detect-Cliff. The program continuously calculates the likelihood of encountering a drop-off based on feedback from various sensors. A companion program, Deal-with-Cliff, is also running continuously, but with low priority. When Detect-Cliff is activated, Deal-with-Cliff takes over as the highest priority and Uranus backs away from the edge of the world.
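The priority scheme Moravec describes can be sketched in a few lines of Python. This is only an illustration, not Uranus's actual software: the behavior names echo Moravec's, but the sensor format and trigger threshold are invented for the example. Each behavior has a priority and a trigger; the arbiter always runs the highest-priority behavior whose trigger fires, so a cliff warning preempts hallway navigation.

```python
# Toy behavior arbitration in the spirit of Detect-Cliff / Deal-with-Cliff.
# The sensor dictionary and threshold are invented for illustration.

def detect_cliff(sensors):
    # Likelihood of a drop-off: the floor ahead is much farther away
    # than it should be (hypothetical "floor_distance" reading, in meters).
    return sensors["floor_distance"] > 0.5

def follow_hallway(sensors):
    # The default behavior is always willing to run.
    return True

BEHAVIORS = [
    # (priority, trigger, action) -- a higher priority preempts a lower one.
    (10, detect_cliff,   "back away from edge"),        # Deal-with-Cliff
    ( 1, follow_hallway, "roll forward, count doors"),  # normal navigation
]

def arbitrate(sensors):
    """Return the action of the highest-priority behavior whose trigger fires."""
    for priority, trigger, action in sorted(BEHAVIORS, reverse=True):
        if trigger(sensors):
            return action

print(arbitrate({"floor_distance": 0.1}))  # hallway is safe: keep counting doors
print(arbitrate({"floor_distance": 2.0}))  # stairwell ahead: back away
```

Because Detect-Cliff is evaluated on every cycle regardless of what else the robot is doing, the dangerous fourth door triggers retreat even though the door-counting program has no idea anything is wrong.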
Moravec continues: "I think the robot would come by the emotions and foibles described as honestly as any living animal. An octopus in pursuit of a meal can be diverted by hints of danger in just the way Uranus was. An octopus also happens to have a nervous system that evolved entirely independently of our own vertebrate version. Yet most of us feel no qualms about ascribing concepts like passion, pleasure, fear, and pain to the actions of the animal. We have in the behavior of the vertebrate, the mollusk, and the robot a case of convergent evolution. The needs of the mobile way of life have conspired in all three instances to create an entity that has modes of operation for different circumstances, and that changes quickly from mode to mode on the basis of uncertain and noisy data prone to misinterpretation."

The question of how intelligence can emerge from nonintelligence can be answered with an example: ourselves. Many scientists believe that our minds are constructed of many little parts, each mindless by itself--an idea Marvin Minsky, co-founder of MIT's Artificial Intelligence Lab, develops in his book Society of Mind. One of the mysteries is the concept of Self (Self is always capitalized in these discussions). Minsky says, "The ordinary views are wrong that hold that Selves are magic, self-indulgent luxuries that enable our minds to break the bonds of natural cause and law. Instead, those Selves are practical necessities." In a similar way, Minsky believes that emotions are necessary organizational agencies that help guide us along complicated paths to goals: emotions establish priorities, form purpose, and communicate our desires to others. Emotions would have to be a part of any truly intelligent machine. Indeed, in the appropriate machine, emotions will probably arise as a natural consequence of thinking and trying to solve problems.
A new kind of computer, called a parallel processor, is now being built: it comprises thousands (now) or millions (soon) of individual, interconnected smaller computers, each capable of acting as an agent of specialized interest or ability. One of these, the Connection Machine, built by Thinking Machines Corporation of Cambridge, Massachusetts, contains over 64,000 individual, interconnected small computers. Parallel processing computers were inspired, in part, by the goal of creating intelligence in a machine, and electronic miniaturization has made building them possible. (Our present single-processor personal and mainframe computers are structured the way they are because of the expense of building the original vacuum-tube computers like the UNIVAC. Miniaturization has made possible desktop personal computers with hundreds of times the computing power of the million-dollar, room-sized UNIVAC.) Many people, including planners at the Defense Department, believe that a large parallel processor (large in capacity, not size) containing enough individual, interconnected co-processors, assigned as the various agents Minsky describes, could support truly complicated, human-like thinking. When the thousands of specialized areas of our brains learn to work together, becoming able to solve a variety of changing problems, we call the result common sense. Our mind has learned, through an often painful childhood, how to coordinate its agencies to achieve different goals. The current goal in artificial intelligence is to construct similarly arranged machines that would learn to coordinate learned and supplied agencies and develop their own form of common sense. Having few of our biological imperatives, machine beings would most certainly be different from human beings. Although a machine being would share, in a general way, many of our goals--food acquisition, shelter, reproduction, defense, health maintenance, and so on--it would view the specifics of these problems very differently.
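Minsky's idea of a mind built from mindless parts can be caricatured in code. In this toy sketch (every agent name and the state format are invented, and real agents would be far richer), each agent knows only one concern and reports how urgent it is; the "society's" choice of goal emerges from the competition, with no central part that understands everything.

```python
# A toy "society of mind": each agent is mindless by itself -- it just
# scores the urgency of its one concern -- and behavior emerges from
# whichever agent reports the greatest urgency. All names are invented.

def low_battery(state):
    # An "appetite" agent: the emptier the battery, the louder it shouts.
    return 1.0 - state["charge"]

def stalled(state):
    # A "frustration" agent: fires only when no progress is being made.
    return 1.0 if state["progress"] == 0 else 0.0

def curious(state):
    # A weak default drive: explore when nothing else is urgent.
    return 0.2

SOCIETY = {
    "recharge": low_battery,
    "replan":   stalled,
    "explore":  curious,
}

def decide(state):
    """The goal whose agent reports the greatest urgency wins."""
    return max(SOCIETY, key=lambda goal: SOCIETY[goal](state))

print(decide({"charge": 0.9, "progress": 3}))  # nothing urgent: explore
print(decide({"charge": 0.1, "progress": 3}))  # battery low: recharge
print(decide({"charge": 0.9, "progress": 0}))  # stuck: replan
```

The point of the sketch is that "priorities" and "purpose" here are not programmed in as a master plan; they fall out of the arrangement of simple parts, which is roughly Minsky's claim about emotions.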
In Weapon, one of Solo's goals is to produce electricity for his own consumption; the villagers grow beans. A recent New York Times article (August 16, 1988) discusses an artificial neural network built by Terrence J. Sejnowski of Johns Hopkins. His program, known as NetTalk, consists of about 300 neurons arranged in three layers, connected by 18,000 adjustable synapses. "At first these volume controls [of the synapses] are set at random and NetTalk is a structureless, homogenized tabula rasa. Provided with a list of words, it babbles incomprehensibly. But some of its guesses are better than others, and they are reinforced by adjusting the strengths of the synapses according to a set of learning rules." "After a half day of training, the pronunciations become clearer and clearer until NetTalk can recognize some 1,000 words. In a week, it can learn 20,000." Although the program is not provided with specific rules for how different letters are pronounced in different circumstances (like the "c" in "carrot" and "certify," or the "p" in "put" and "phone"), as it evolves, "it acts as though it knows the rules. They become implicitly coded in the network of connections, though [Dr. Sejnowski] had no idea where the rules were located, or what they looked like." "Using mathematical analysis, he is beginning to uncover this hidden knowledge. 'It turned out to be very sensible,' he said. 'The vowels are represented differently from the consonants. Things that sound similar are clustered together.'" One of the major tenets among those who challenge the possibility of artificial intelligence is that computers can only do what they are programmed to do. In the linear, step-by-step programming style of traditional artificial intelligence attempts, this is true. But with the advent of artificial neural networks, the machines are learning their own way--like their biological counterparts.
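The flavor of NetTalk's learning can be shown at miniature scale. The real NetTalk maps letters-in-context to phonemes with three layers of neurons; this toy version (its structure, training data, and learning rate are all invented for illustration) learns just one pronunciation rule--"c" is soft before e, i, or y, and hard otherwise--without ever being told the rule. It starts with random synapse strengths, babbles, and adjusts its weights from examples, perceptron-style, until the rule is implicitly coded in the connections.

```python
# A toy NetTalk-flavored learner: a single layer of adjustable "synapses"
# learns when "c" is soft (as in "certify") vs. hard (as in "carrot")
# purely from examples. Everything here is invented for illustration.

import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def features(next_letter):
    """One-hot encoding of the letter that follows the 'c'."""
    return [1.0 if ch == next_letter else 0.0 for ch in ALPHABET]

random.seed(0)
weights = [random.uniform(-0.1, 0.1) for _ in ALPHABET]  # random synapses

def predict(next_letter):
    """1 = soft 'c', 0 = hard 'c'."""
    total = sum(w * x for w, x in zip(weights, features(next_letter)))
    return 1 if total > 0 else 0

# Training examples: (letter after the 'c', 1 if that 'c' is soft).
examples = [("a", 0), ("e", 1), ("i", 1), ("o", 0), ("u", 0), ("y", 1),
            ("l", 0), ("r", 0)]

for _ in range(20):                         # a few passes over the examples
    for letter, target in examples:
        error = target - predict(letter)    # reinforce better guesses
        for i, x in enumerate(features(letter)):
            weights[i] += 0.1 * error * x   # adjust the synapse strengths

print([predict(l) for l in "aeoy"])  # prints [0, 1, 0, 1]: hard, soft, hard, soft
```

As with NetTalk, nothing in the code states the rule; after training it simply sits in the pattern of weights, which is what "implicitly coded in the network of connections" means.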
At present, the development of machines that think is well funded (the Pentagon bought the first Connection Machine) and accelerating. Compact new parallel processors containing a million miniaturized co-processors, combined with new theories like Minsky's on the nature of thinking, will one day make possible an encounter with a machine that claims it is an "I" and exhibits what we call emotions. Further, it will not be artificially intelligent. It will be a different kind of thinking being, more alien than any biological extraterrestrial. Weapon is a forecast of that encounter. Almost all the money spent on artificial intelligence research is supplied by the government, especially the Defense Department through the Defense Advanced Research Projects Agency, or DARPA. The goal is a general-purpose, mobile robot that can use human tools (everything from crowbars to F-16 fighters) and perform combat missions. It is hoped that through proper "education," a machine being capable of performing these missions would also not be a threat to its builders. The development of a Solo-type machine is an ongoing project at DARPA, and because an actual Solo could be a very effective weapon, neither I nor anyone else can say that they have not already accomplished their goal.