Cerebrum Article

Robots: Re-evolving Minds at 10⁷ Times Nature’s Speed

Back in the 1950s, when the room-sized Univac was the computer Americans knew best, many thought computers and robots would share the same bright future, changing our world and working in ways scarcely imaginable. That is exactly what computers have done, but what happened to the robots?

Hans Moravec, an inventor at Carnegie Mellon University’s Robotics Institute, argues that the robot builders vastly underestimated the computing capacity needed for even the simplest brain—biological or robot. It took the computer revolution of the 1980s and 1990s to deliver the power needed to match the brain of a guppy. At this rate, compact computing power to equal the human brain could be available by 2050. How will that Artificial Intelligence be packed into a robot brain—and with what result?

Published: April 1, 2001

As computers pervade everyday life, worming their way into our gadgets, homes, clothes, even bodies, a nagging question arises: If computing automates most of our information needs, will it leave untouched our even vaster (and sometimes nastier) array of essential physical tasks? Will housecleaning, say, remain in weary human hands? Garbage pick-up? Construction? Industrial security patrolling? Transport?

Once upon a time, the obvious answer (at least in science fiction) was that robots would do the dirty work. In home, university, and industrial laboratories, would-be inventors of robots tinkered with that challenge for most of the 20th century. But, while mechanical bodies adequate for manual work can be built, artificial brains for truly autonomous servants have been frustratingly out of reach—despite the arrival of powerful computers.

The first electronic computers in the 1950s did the work of thousands of clerks. When those superhuman behemoths were programmed to reason, however, they could barely match the capacity of one person and could perform only razor-narrow tasks. Computers programmed to control robot eyes and arms took pathetic hours to find and grasp—unreliably—a few wooden blocks. The situation stayed pretty much that way for decades; robots even became passé in science fiction.

In the 1990s that began to change. It turns out that robots, to become complex entities acting in the world, must have something more like a brain than can come from any computer we know today. The challenge of creating even the simplest autonomous robot servant enormously heightens our respect for what evolution has achieved in humans. Only the revolution in computing power of the past decade has enabled us to think seriously about how to build a robot’s brain. The answer may be evolution, but faster than nature—about 10⁷ times faster.

A Trick of Perspective

Robot tasks that would have been wildly impossible in the 1970s and 1980s began to work experimentally in the 1990s. Robots can now map and navigate unfamiliar office suites, and robot vehicles drive themselves, mostly unaided, across entire countries. Vision systems locate textured objects and track and analyze faces in real time. Personal computers recognize text and speech. Why suddenly now?

The short answer is that, starting about 1990, after decades of hovering near 1 MIPS (million instructions—or calculations—per second), the computer power available to develop robots shot through 10, 100, and now 1,000 MIPS. In 1960, computers were a new and mysterious factor in the Cold War. Even outlandish possibilities like Artificial Intelligence were deemed to justify significant investment. Artificial Intelligence programs ran on that era’s supercomputers, similar to those used for physical simulations by weapons physicists and meteorologists. By the 1970s, however, the promise of Artificial Intelligence had faded. The effort limped for a decade on old hardware. Weapons labs, by contrast, upgraded repeatedly to new supercomputers. In the 1980s, university departmental computers gave way to smaller project computers, then to individual workstations and personal computers. Prices fell at each transition, but power per machine stayed about the same. Only after 1990 did prices stabilize and computer power grow.

Conventional wisdom in Artificial Intelligence labs held that, with the right program, readily available computers could duplicate any human skill. That seemed obvious in the 1950s, when computers did the work of thousands; it remained defensible in the 1970s, as certain game-playing programs performed at modest human levels. But the perception was entirely different in the upstart subfields of computer vision and robotics. On 1 MIPS computers, single images crammed memory; simply scanning an image consumed seconds, and serious image analysis took hours. Human vision performed far more elaborate functions many times each second.

In hindsight, it is easy to explain this discrepancy. Computers do arithmetic using as few gates and switching operations as possible. Human calculation, by contrast, is a laboriously learned, ponderous, awkward behavior. Tens of billions of neurons in our vision and motor systems strain to process a digit a second. That is a flimsy exhibition of our brain power. If a mad computer designer with a future surgical tool could rewire our brains into 10 billion arithmetic circuits, each doing 100 calculations a second, we would outcompute early computers a millionfold. The illusion of computer power would be exposed. Robotics, in fact, was just such an exposé.

Though spectacular underachievers at the wacky new stunt of longhand calculation, we humans are veteran overachievers at perception and navigation. Our ancestors, across hundreds of millions of years, prevailed by being front-runners in the competition to find food, escape danger, and protect offspring. Existing robot-controlling computers are far too feeble to match the resulting prodigious perceptual inheritance. About how big is this shortfall?

The retina of the vertebrate eye is well-enough understood to be a kind of Rosetta stone for roughly equating a certain amount of nerve tissue with a certain computing power. Besides light detectors, the retina contains edge- and motion-detecting circuitry, packed into a little one-tenth-millimeter-thick, two-centimeter-wide patch (much smaller than a piece of rice) that simultaneously reports on a million image regions about 10 times a second via the optic nerve. In robot vision, similar detections, well coded, each require the execution of a few hundred computer instructions, making our retina’s detections worth more than 1,000 MIPS. In a risky extrapolation, which must serve until something better emerges, it would take about 50,000 MIPS (50 billion calculations per second) to imitate a rat-brain’s gram of neural tissue, and almost 100 million MIPS (or 100 trillion instructions per second) to emulate the 1,500-gram human brain. Personal computers in 1999 matched insect nervous systems, but fell short of the human retina—and even of a goldfish’s 0.1-gram brain. They were a millionfold too weak to do the job of a human brain.
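The arithmetic behind that extrapolation can be restated in a few lines. The sketch below uses only the figures quoted above; the 100-MIPS value assumed for a late-1990s personal computer is not stated directly but is implied by the millionfold shortfall.

```python
# Back-of-envelope restatement of the scaling argument above.
mips_per_gram = 50_000           # figure quoted above for one gram of neural tissue
human_brain_grams = 1_500

human_brain_mips = mips_per_gram * human_brain_grams
print(f"human brain ≈ {human_brain_mips:,} MIPS")      # 75,000,000, i.e. ~10^8 MIPS

pc_1999_mips = 100               # assumed: implied by the "millionfold" shortfall
print(f"1999 PC shortfall ≈ {human_brain_mips // pc_1999_mips:,}x")   # ~a millionfold
```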

[Image courtesy of Hans Moravec]

Dispiriting, perhaps, but the deficit does not warrant abandoning the goals of the Artificial Intelligence pioneers. In the 1990s, computer power for a given price began roughly to double each year, after doubling every 18 months in the 1980s and every two years before that. Two or three decades more at the present pace would close the millionfold gap to the power of a human brain. But useful robots need not have full human-scale brainpower; for many tasks a bird brain, for example, will do nicely.
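The “two or three decades” estimate follows directly from the doubling rate; a quick check, with the doubling period as the only assumption:

```python
import math

gap = 1_000_000                        # millionfold shortfall to human-scale brainpower
for doubling_years in (1.0, 1.5):      # doubling annually, or every 18 months
    years = math.log2(gap) * doubling_years
    print(f"doubling every {doubling_years:g} years: ~{years:.0f} years to close the gap")
# Prints roughly 20 and 30 years, the two-or-three-decade range given above.
```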

Re-Evolving Brain

The incremental growth of computing power implies an incremental approach to developing the robot brain. Since our only model in this case is the biological brain, perhaps the development of the robot brain will parallel the evolution of the human brain—but on a very fast track.

Unlike other approaches, this track demands no great theories or insights (helpful though they can be). Natural intelligence evolved in small steps through a chain of viable organisms. Artificial Intelligence can do the same. Nature performed evolutionary experiments at an approximately steady rate, even when evolved traits such as brain complexity grew exponentially. Similarly, a steady engineering effort should be able to support exponentially growing robot complexity (especially as ever more of the design search itself is delegated to increasingly powerful computers). The journey should be much easier the second time around. We have a guide, after all, with directions and distances, in the history of vertebrate nervous systems.

Readers may be surprised to learn that general industrial development in electronics, materials, manufacturing, and other fields has already carried us well along the way. A series of notable experimental animal-like robots, built with their day’s best techniques, stand like mileposts on the road to ever-greater Artificial Intelligence.

  1. Around 1950, the Bristol neurophysiologist William Grey Walter, a pioneer brain-wave researcher, built eight electronic “Tortoises,” each with a scanning phototube eye and two vacuum tube amplifiers that drove relays, which in turn switched steering and drive motors. Unprecedentedly lifelike, they danced around a lighted recharging hutch until their batteries ran low, then trundled inside. Simple bacteria show equally engaging patterns of action, or tropisms.
  2. In the early 1960s, the Johns Hopkins University Applied Physics Lab built the corridor-cruising, wall-outlet-recharging “Beast.” With specialized wall-ranging sonars, outlet-seeking photocell optics, and a wall-plate-feeling arm—all orchestrated by several dozen transistors—the Beast displayed multiple coordinated behaviors resembling those of a large nucleated cell—say, a bacteria-hunting paramecium.
  3. Big mobile robots, radio-controlled by huge computers, appeared around 1970 at Stanford University and Stanford Research Institute (SRI). While the 1950s Tortoise’s actions followed directly from two or three light-and-touch discriminations, and the 1960s Beast’s simply from a few dozen signals, Stanford’s “Cart” and SRI’s “Shakey” used TV images with thousands of pixels to choose actions after making millions of calculations. The Cart could adapt and make predictions in order to track dirty white lines in ambient light. Shakey, more ambitiously but less reliably, identified and reasoned about large blocks. The introduction of computer control vastly expanded the complexity of what robots could do, just as the advent of multicellular animals with nervous systems in the Cambrian explosion some 550 million years ago blew away the limits on the complexity of biological behavior.
  4. By 1980, a slightly faster computer and a more complex program allowed Stanford’s Cart, using stereoscopic vision, to map (sparsely) and negotiate obstacle courses, taking five hours to cover 30 meters. In complexity and speed, the performance was sluglike.
  5. Several research robots in the early 1990s navigated and mapped (in two dimensions) corridor networks as they went along. Some of these robots improved their interpretation of sensor data through learning. On-board and off-board 10-MIPS microprocessors conferred brainpower approximately equal to that of the tiniest fish or a middling insect.
  6. In 2000, a guppylike 1,000 MIPS and hundreds of megabytes of memory enabled robots in our laboratory at Carnegie Mellon University’s Robotics Institute to build dense, almost photorealistic three-dimensional maps of their surroundings. Navigation techniques built around this core spatial awareness will suffice, I believe, to guide mobile utility robots reliably through unfamiliar surroundings, suiting them for jobs in hundreds of thousands of industrial locations and eventually hundreds of millions of homes. Such abilities have so long eluded us that today only a few dozen small research groups are still pursuing them. The number of robot developers will no doubt balloon once a vigorous commercial industry emerges. The continued evolution of “robotkind” will then become a driver rather than a mere beneficiary of general technical development.

Simplicity Versus Complexity

Can we devise a valid comparison of the rate of development in biological systems and in technological systems? In both realms, evolving designs can range from the very simple to the very complex. The new designs that emerge in either realm are as likely to increase simplicity as complexity, but the simple end of the range tends to be crammed with competitors; the complex end is the stepping-off point into endless, unexplored potential for design.

Organisms or products that are slightly more complex than any before them sometimes succeed in the ecology (or the marketplace), raising the upper limit. Paleontological and historical records can be combed to identify this upper limit, which mostly rises over time. There are reversals in both records: notably, mass extinctions and the collapse of civilizations. After a recovery period, progress resumes, often faster than ever. One reason is that, even if complex entities succumb to disaster, many of their component innovations may survive somewhere. Classical learning weathered the collapse of Roman civilization in the remote Islamic world; some inactive DNA sequences seem to be archives of ancestral traits. Extinct large organisms may leave much of their heritage behind in smaller relatives, who can rapidly recover lost traits by simple mutations in regulator genes. The re-expression of old good ideas in odd combinations often initiates an explosion of innovation. This happened culturally in the Renaissance and biologically in the Paleocene, when birds and mammals ran riot in the post-dinosaur world.

Though creative explosions, catastrophic losses, and stagnant periods in both biology and technology—and ups and downs of investing in Artificial Intelligence—perturb the long-term trends, compare the growth of the biggest nervous systems since the Cambrian with the information capacity of common big computers since World War II:

  • Wormlike animals with perhaps a few hundred neurons evolved early in the Cambrian, over 570 million years ago. The first electromechanical computers, with a few hundred bits of telephone relay storage, were built around 1940.
  • The earliest vertebrates, very primitive fish with nervous systems probably smaller than the modern hagfish’s, perhaps 100,000 neurons, appeared about 470 million years ago. Computers acquired 100,000 bits of rotating magnetic memory by 1955.
  • Amphibians with perhaps a salamander’s few million neurons crawled out of the water 370 million years ago. Computers with millions of bits of magnetic core memory were available by 1965.
  • Small mammals showed up about 220 million years ago, with brains ranging up to several hundred million neurons, while enormous dinosaurs around them bore brains with several billion neurons, a situation that changed only slowly until the sudden extinction of the dinosaurs 65 million years ago. Our small primate ancestors arose soon after, with brains ranging up to several billion neurons. By 1975, many computer core memories had exceeded 10 million bits. By 1985, 100 million bits was common, though large mainframe computers were being largely displaced by small workstations and even smaller PCs.
  • Hominid apes with 20-billion-neuron brains appeared about 30 million years ago. By 1994, larger computer systems had several billion bits.
  • Humans have approximately 100 billion neurons. In 2000, some ambitious PC owners equipped their systems with tens of billions of bits of RAM. In five years, 100 billion bits of RAM will be standard in computers.

Plot these juxtaposed geologic and technology dates against one another (the alignment of bits to neurons is arbitrary, and can be shifted without affecting the overall picture) and you will discover that large computers’ capacities grew each decade about as much as the large nervous systems grew every hundred million years. In short, we seem to be re-evolving mind (in a fashion) at 10 million times (10⁷) the original speed.
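That factor is simply the ratio of the two time scales just described:

```python
# Computer capacities grew each decade about as much as the large nervous
# systems grew every hundred million years.
biological_step_years = 100_000_000     # one comparable jump in nervous-system size
technological_step_years = 10           # one comparable jump in computer capacity

speedup = biological_step_years / technological_step_years
print(f"re-evolution speed-up ≈ {speedup:.0e}")    # 1e+07, i.e. ten million times
```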

Earning a Living

Commercial mobile robots, which must perform reliably, have tended to use techniques developed about a decade earlier in the laboratory. The smartest robots, barely insect-like at 10 MIPS, have found few jobs. A paltry 10,000 work worldwide, and the companies that made them are now struggling or defunct. The largest class, Automatic Guided Vehicles, transport materials in factories and warehouses. Most follow buried signal-emitting wires and detect endpoints and potential collisions with switches—techniques introduced in the 1960s. It costs hundreds of thousands of dollars to install guide wires under concrete floors, and the routes are then fixed, making the robots economical only for large, exceptionally stable factories. Some robots made possible by the advent of microprocessors in the 1980s track softer cues, like patterns in tiled floors, and use ultrasonics and infrared proximity sensors to detect and negotiate their way around obstacles.

The most advanced industrial mobile robots to date, developed since the late 1980s, are guided by occasional navigational markers—for instance, bar codes that can be sensed by a laser—and by pre-existing features like walls, corners, and doorways. The hard-hat labor of laying guide wires is replaced by programming, which is carefully tuned for each route segment. The small companies that developed the robots discovered that many industrial customers were eager to automate transport, floor cleaning, security patrol, and other routine jobs. Alas, most buyers lost interest as they realized that installation and route changing required time-consuming and expensive work by experienced route programmers, who were not easily available. Technically successful, the robots fizzled commercially. But in failing, they revealed the essentials for success.

First, one needs reasonably priced physical vehicles to do various jobs. Fortunately, existing Automatic Guided Vehicles, forklift trucks, floor scrubbers, and other industrial machines designed for human riders (or to follow wires) can be adapted for autonomy. Second, the customer should be able, without assistance, to put a robot to work rapidly where needed. Floor cleaning and other mundane tasks cannot bear the cost and time of expert installation, or the limited availability of expertise. Third, the robots must work for at least six months between missteps. Customers routinely rejected robots that, after a month of flawless operation, wedged themselves in corners, wandered away lost, rolled over an employee’s foot, or fell down a flight of stairs. Six months without problems, on the other hand, would earn a machine the tolerance given an employee’s occasional sick day.

Robots that work faultlessly for years do exist. They have been perfected through an iterative process that identifies and fixes common failures, then reveals and corrects successively rarer ones. But, alas, this reliability has been achieved only for prearranged routes. An insectlike 10 MIPS is just enough to track a few hand-picked landmarks on each path segment. Such robots are easily confused by minor surprises—a shifted bar code or a blocked corridor—not unlike ants on scent trails or moths guided by the moon, which can be trapped by circling trails or streetlights.

Developing a Sense of Space

In the mid-1990s, robots that chart their own routes emerged from laboratories worldwide, made possible by microprocessors that reached 100 MIPS. Most of these robots construct two-dimensional maps from sonar or laser rangefinder scans and can locate and route themselves. The best seem able to navigate office hallways, sometimes for days between confusions, but to date fall far short of the six-month commercial criterion. Too often, different locations in coarse two-dimensional maps look alike to the robot. Or the same location, scanned at different heights, looks different. Or small obstacles or awkward protrusions are overlooked. But sensors, computers, and techniques are improving; success is in sight.

My small laboratory is in this race. In the 1980s, we devised a way to distill large amounts of noisy sensor data into reliable maps. This is done by accumulating statistical evidence of emptiness or occupancy in each cell of a grid that represents the surroundings. The approach worked well in two dimensions and now guides some of the robots just mentioned.
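The evidence-grid idea can be sketched in a few lines. The toy version below is not our actual formulation: the log-odds bookkeeping, the grid size, and the 0.7/0.3 sensor model are assumptions chosen for illustration. Each cell accumulates evidence toward “occupied” when a range reading ends there and toward “empty” when the reading’s beam passes through it.

```python
import math

GRID = 50                                        # 50 x 50 cells, e.g. 10 cm per cell
log_odds = [[0.0] * GRID for _ in range(GRID)]   # 0.0 means an "unknown" prior

HIT_EVIDENCE = math.log(0.7 / 0.3)    # assumed: P(occupied | beam ends here) = 0.7
MISS_EVIDENCE = math.log(0.3 / 0.7)   # assumed: P(occupied | beam passed through) = 0.3

def integrate_reading(x0, y0, x1, y1):
    """Fold in one range reading: a beam from (x0, y0) that stopped at (x1, y1)."""
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    for i in range(steps):                       # cells the beam crossed look emptier
        cx = round(x0 + (x1 - x0) * i / steps)
        cy = round(y0 + (y1 - y0) * i / steps)
        log_odds[cy][cx] += MISS_EVIDENCE
    log_odds[y1][x1] += HIT_EVIDENCE             # the cell that stopped the beam looks fuller

def occupancy(x, y):
    """Turn accumulated evidence back into a probability that the cell is occupied."""
    return 1.0 / (1.0 + math.exp(-log_odds[y][x]))

# Many noisy readings of the same wall cell gradually firm up the map.
for _ in range(20):
    integrate_reading(5, 5, 25, 25)
print(f"wall cell: {occupancy(25, 25):.2f}   free space: {occupancy(15, 15):.2f}")
```

Repeated uncertain readings reinforce one another, which is what lets cheap, noisy sensors yield a dependable map.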

Three-dimensional maps, a thousand times richer, promised to be even better, but for years seemed computationally out of reach. Then, in 1992, we found economies of scale and other tricks that reduced three-dimensional grid costs a hundredfold. We now have a test program that accumulates thousands of measurements from stereoscopic camera glimpses in order to map a room’s volume down to centimeter scale. With 1,000 MIPS, the program digests over a glimpse per second, adequate for slow indoor travel. This same 1,000 MIPS is just now appearing in high-end personal computers, but in a few years it will be found in smaller, cheaper computers fit for robots. We have begun an intensive, three-year project to develop a prototype commercial product. Highlights of this robot will be automatic learning processes to create the best evidence-weighing guidelines; programs to find clear paths, locations, floors, walls, doors, and other objects in the three-dimensional maps; and sample application programs that orchestrate discrete basic skills into tasks like delivery, floor cleaning, and patrol. The initial prototype is a small, mobile robot with trinocular cameras. Inexpensive digital camera chips promise to be the cheapest way to get the millions of measurements needed for dense maps.
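For a feel of why the third dimension long seemed out of reach, here is a rough cell count for a single room at centimeter scale; the room dimensions and the one byte of evidence per cell are invented for the estimate.

```python
cell_cm = 1                                  # centimeter-scale cells, as described above
room_x, room_y, room_z = 500, 400, 300       # a hypothetical 5 m x 4 m x 3 m room

cells_2d = (room_x // cell_cm) * (room_y // cell_cm)
cells_3d = cells_2d * (room_z // cell_cm)
print(f"2-D map: {cells_2d:,} cells; 3-D map: {cells_3d:,} cells, "
      f"about {cells_3d // 1_000_000} MB at one byte of evidence per cell")
```

That jump of two to three orders of magnitude per room is what the hundredfold cost reduction and 1,000-MIPS processors finally made affordable.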

As a first commercial product, we plan a basketball-sized “navigation head” to retrofit onto existing industrial vehicles. The head would have multiple stereoscopic cameras; 1,000 MIPS; generic mapping, recognition, and control software; an application-specific program; and a hardware connection to vehicle power, controls, and sensors. Head-equipped vehicles with transport or patrol programs could be taught new routes simply by leading them through once. Floor cleaning programs would be shown the boundaries of their work area. Introduced to a job location, the vehicles would understand their changing surroundings competently enough to work at least six months without debilitating mistakes. Ten thousand Automatic Guided Vehicles, a hundred thousand cleaning machines, and, possibly, a million forklift trucks are candidates for retrofit, and robotic autonomy may greatly enlarge those markets.

Income and experience from spatially aware industrial robots would set the stage for smarter yet cheaper ($1,000 rather than $10,000) consumer products, starting probably with small, patient robot vacuum cleaners that would automatically learn their way around a home, explore unoccupied rooms, and clean whenever needed. I imagine a machine low enough to fit under some furniture, with an even lower extendable brush. The machine would return to a docking station to recharge and disgorge its dust load. Such machines could open a true mass market for robots, with a hundred million potential customers.

Evolutionary Speedway

Commercial success will provoke competition and accelerate investment in manufacturing, engineering, and research. Vacuuming robots should beget smarter cleaning robots with dusting, scrubbing, and picking-up arms, followed by larger, multifunction utility robots with stronger, more dexterous arms and better sensors. Programs will be written to make such machines perform an array of chores, such as picking up clutter; storing, retrieving, and delivering things; taking inventory; guarding homes; opening doors; mowing lawns; and playing games. New applications will expand the market and, when robots fall short in acuity, precision, strength, reach, dexterity, skill, or processing power, spur new innovations. Capability, number of sales, engineering and manufacturing quality, and cost effectiveness will increase in a mutually reinforcing spiral.

Perhaps by 2010 the process will have produced the first broadly competent “universal robots,” as big as people but with lizard-like 5,000-MIPS minds that can be programmed for almost any simple chore. Like competent but instinct-ruled reptiles, first-generation universal robots will handle only contingencies explicitly covered in their current application programs. Unable to adapt to changing circumstances, they will often perform inefficiently or not at all. Still, so much physical work awaits them in businesses, streets, fields, and homes that commercial robotics could begin to overtake pure information technology.

A second generation of universal robots with a mouse-like 100,000 MIPS will adapt as the first generation does not, and even be trainable. Besides application programs, the robots would host a suite of software “conditioning modules” that would generate positive and negative reinforcement signals in predefined circumstances. Application programs would have alternatives for every step, small and large (grip under-/overhand, work in-/outdoors). As jobs were repeated, alternatives that had resulted in positive reinforcement would be favored, those with negative outcomes shunned. With a well-designed conditioning suite (for example, positive for doing a job fast or keeping the batteries charged and negative for breaking or hitting something) a second-generation robot will slowly learn to work better and better.
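A toy version of that conditioning loop might look like the sketch below; the step names, weights, reinforcement values, and learning rate are all invented for illustration rather than drawn from any actual design.

```python
import random

# Hypothetical alternatives for one step of a job, with learned preference weights.
preferences = {"grip_underhand": 1.0, "grip_overhand": 1.0}
LEARNING_RATE = 0.2

def choose_step():
    """Pick an alternative with probability proportional to its current weight."""
    steps, weights = zip(*preferences.items())
    return random.choices(steps, weights=weights)[0]

def reinforce(step, signal):
    """signal > 0 for positive reinforcement (job done fast, batteries kept charged);
    signal < 0 for negative reinforcement (something bumped or broken)."""
    preferences[step] = max(0.05, preferences[step] + LEARNING_RATE * signal)

# Repeated jobs: suppose the overhand grip tends to knock things over.
for _ in range(50):
    step = choose_step()
    reinforce(step, +1.0 if step == "grip_underhand" else -1.0)

print(preferences)   # the underhand alternative ends up strongly favored
```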

A monkey-like 5 million MIPS will enable a third generation of robots to learn very quickly from mental rehearsals in simulations that model physical, cultural, and psychological factors. Physical properties include shape, weight, strength, texture, and appearance of things and how to handle them. Cultural aspects include a thing’s name, value, proper location, and purpose. Psychological factors, applied to humans and to other robots, include goals, beliefs, feelings, and preferences. Developing the simulators will be a huge undertaking involving thousands of programmers and experience-gathering robots. The simulation would track external events, and tune its models to keep them faithful to reality. It should let a robot learn a skill by imitation, and afford a kind of consciousness. Asked why there are candles on the table, a third-generation robot might consult its simulation of house, owner, and self to reply honestly that it put them there because its owner likes candle-lit dinners and it likes to please its owner. Further queries would elicit more details about a simple inner mental life concerned only with concrete situations and people in its work area.

Fourth-generation universal robots with a human-like 100 million MIPS will be able to abstract and generalize. The first-ever Artificial Intelligence programs reasoned abstractly almost as well as people, albeit in narrow domains, and many existing expert systems outperform us. But the symbols these programs manipulate are meaningless unless interpreted by humans. For instance, a medical diagnosis program needs a human practitioner to enter a patient’s symptoms, and to implement a recommended therapy. Not so a third-generation robot, whose simulator provides a two-way conduit between symbolic descriptions and physical reality. Fourth-generation machines result from melding powerful reasoning programs to third-generation machines. They may reason about everyday actions by referring to their simulators, just as Herbert Gelernter’s 1959 geometry-theorem prover examined analytic-geometry “diagrams” to check special-case examples before trying to prove general geometric statements. Properly educated, the resulting robots are likely to become intellectually formidable.

Passing the Torch

Barring cataclysms, the near-term development of intelligent machines is inevitable. Every technical step toward intelligent robots has a rough evolutionary counterpart, and each step will benefit its creators, manufacturers, and users. Each advance will provide intellectual satisfactions, competitive advantages, and new wealth. Each can make the world a nicer place to live.

At the same time, by performing better and cheaper, the robots will displace humans from essential roles. If their capacities come to include self-replication (and why not?), they may displace us altogether. That was the pattern of our own emergence in competition with other human species, including Neanderthal Man and Homo erectus. Personally, I am not alarmed at this; these future machines will be our progeny, our mind children, built in our image and likeness, ourselves less flawed, more potent. Like biological children of previous generations, they will embody humanity’s best chances for long-term survival. It behooves us to give them every advantage and, when we have passed evolution’s torch, bow out.

As with biological children, however, we probably can bargain for some consideration in our retirement. Good children like to see their parents comfortable in their later years. “Tame” super intelligences could be created and induced to protect and support us, at least for a while. The relationship, however, requires advance planning and diligent maintenance. It is not too early to start paying attention.
