Computer Vision and the Dream of the Cyborg
The only electrical engineer who would agree to participate in the first project on computer-assisted human sight “couldn’t see any clear reason why it was totally impossible.” As the years of work rolled by, often the most that could be said for a successful interaction of brain and computer was that he still had not been shown to be wrong. This is an account of the present-day reality of the dream of a cyborg, or man-machine, that has intrigued and frightened writers, scientists, philosophers, and moralists for generations. The project’s goal is specific: to bypass the diseased rods and cones of the human retina, sending computer signals straight to the nervous system. Underlying that goal, however, is the challenge of creating direct communication between computer signals and human consciousness.
For decades, science fiction writers have explored the consequences of a general integration of the two great information-processing systems on the planet: brains and computers. From Philip K. Dick, author of the classic Do Androids Dream of Electric Sheep?, to the screenwriters of The Matrix, many have sensed that the moment of integration is coming and that the implications will be momentous, ranging from true artificial intelligence to cloned minds to distributed consciousness to virtual immortality (by backing up minds on disks and downloading them into new bodies) to such dystopian possibilities as thought spam.
All of these writers placed their stories in the future, when the distinctions among the various species of intelligence—natural, artificial, animal, copied, and augmented—have been erased, or nearly so. (Science fiction author Vernor Vinge refers to this point as “the Singularity.”) None of these writers says much about how the integration they imagine is actually achieved. But when we realize that they are imagining an engineering project that seems likely to change the nature of our species to its roots, and forever, it is difficult to read these stories without wondering: What was the first big step? Who took it? What were they thinking?
As good an answer as any might be the day in 1986 when Joseph Rizzo, M.D., a neurosurgeon at Massachusetts General Hospital, in Boston, cut a nerve in the retina of a rabbit and was immediately struck by two thoughts. Then, as now, Rizzo gave no thought to Singularity fantasies like mind cloning; his focus was on how to treat diseases of the retina. For two years, he had been exploring the therapeutic potential of transplanting cells from the retina of a healthy animal to a diseased one. The hope was that if the operation were done in just the right way, those cells might thrive and start to function, restoring sight. Progress ranged from slow to nonexistent, but the surgeon persevered. What alternative did he have?
The protagonist of this story, the retina, runs around the back half of the eyeball, where it discharges two responsibilities: measuring the intensities of incoming light at about a hundred million points and compressing those measurements so they can be conveyed more efficiently to the brain. It achieves this compression by recognizing changes in light intensity in three dimensions: changes in space, in time, and across the spectrum. If the retina spoke English, you might hear it whispering: “a small change across space in area #123, a medium change across the spectrum in area #234,” and so on. Upon receiving these messages, the visual cortex unpacks and reconstitutes the images as necessary in order for the scene to appear in our consciousness.
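A toy sketch can make the idea concrete. The fragment below, a minimal illustration assuming grayscale frames stored as NumPy arrays, reports only the points where intensity changed noticeably across time or space; the threshold, the message format, and the omission of the spectral (color) dimension are all simplifications invented here, not a model of real retinal circuitry.

```python
import numpy as np

def retina_summary(prev_frame, curr_frame, threshold=0.1):
    """Toy change-based compression: report only where intensity changed
    noticeably across time (frame to frame) or space (pixel to neighbor).
    A real retina also compares across the spectrum (color), omitted here."""
    messages = []
    # Change across time: difference between successive frames.
    temporal = np.abs(curr_frame - prev_frame)
    for y, x in zip(*np.nonzero(temporal > threshold)):
        messages.append(f"change across time in area #{y},{x}")
    # Change across space: difference between horizontally adjacent pixels.
    spatial = np.abs(np.diff(curr_frame, axis=1))
    for y, x in zip(*np.nonzero(spatial > threshold)):
        messages.append(f"change across space in area #{y},{x}")
    return messages  # far shorter than the full frame when little is moving
```

When most of the scene is static, the list of messages is tiny compared with the full frame, which is the whole point of the retina's strategy.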
On the day in question, Rizzo, although working on rabbits, was thinking about the human diseases he wanted to address. One was retinitis pigmentosa (RP), the leading cause of inherited blindness. Despite its name, RP is more a disease of the rods and cones than the whole retina; the neural tissue, the processing circuitry, typically remains normal for a long time after onset of RP, although it deteriorates eventually. At one point, the surgeon snipped an axon. It was healthy, because the rabbit was healthy. It struck Rizzo that even if the rabbit had had RP, the nerve would still be healthy, because RP does not attack axons. That meant that even if retinal cell transplantation could be made to work, it would involve cutting perfectly healthy nerves—a step to which good neurosurgeons, Rizzo included, have a visceral aversion.
As it happens, the circuitry responsible for the retina’s summarizing function lies on top of the photosensors, not under them. The reason most often advanced for this arrangement is that rods and cones need lots of blood. Placing the circuitry over the receptors allows an industrial-strength circulatory infrastructure unobstructed access to the receptors from underneath. So, if you picture the retina as a seven-story building (because it has seven tissue layers), the vascular plumbing would be in the basement, the rods and cones on floor one, and the processing cells stacked on the floors above. A million tiny wires (axons) carry the final results of all this computation up and out onto the roof, converging toward a hole in its middle. When they reach the hole, they bend down into it, wrap themselves into a cable, drop through the retina, and run out rearwards to the brain. This cable is of course the optic nerve. It was the axons crossing the roof whose sacrifice Rizzo was regretting.
Then, a second thought exploded behind those regrets: All the circuitry was there, directly under his hand. He was looking right at it. If the neurons, the ganglia, and their axons were healthy, why not bypass the retina as the signal-receiver and stimulate the neurons directly?
Over the next year, Rizzo did his best to interest engineers in building his device. He imagined it as having two parts. A camera-computer-transmitter would be worn outside, as a pair of glasses, and a receiver-electrode array would be permanently implanted in the eye. The camera would look at the world, transform the visual field into signals, and beam the signals to the implant. The implant would receive, reprocess, and distribute the signals over a two-dimensional array of electrodes, which would broadcast an electric field, exciting the nearby ganglion cells. These excitations, he hoped, would restore retinal function.
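In outline, that two-part design might look something like the following sketch; the grid size, threshold, and class names are illustrative assumptions, not the project's actual specifications.

```python
import numpy as np

GRID = 10  # hypothetical electrode-array dimension, for illustration only

class ExternalUnit:
    """The camera-computer-transmitter, worn as a pair of glasses."""

    def encode(self, scene: np.ndarray) -> np.ndarray:
        # Reduce the visual field to the implant's electrode resolution
        # by block-averaging; the result is what gets "beamed" inward.
        h, w = scene.shape
        scene = scene[: h - h % GRID, : w - w % GRID]  # crop to a multiple
        return scene.reshape(GRID, h // GRID, GRID, w // GRID).mean(axis=(1, 3))

class Implant:
    """The receiver-electrode array, permanently implanted in the eye."""

    def stimulate(self, signal: np.ndarray, threshold: float = 0.5) -> np.ndarray:
        # Each electrode whose signal exceeds the threshold broadcasts an
        # electric field, exciting the nearby ganglion cells.
        return signal > threshold
```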
Not Obviously “Totally Impossible”
Probably none of the engineers he spoke with used the actual words “piece of cake,” but, with one exception, they thought that Rizzo had the right idea at the right time. People had already done something that sounded similar for deaf patients, with a technology called cochlear implants. What could be so tough about extending the principle to vision? It would be like going from radio to television. “They had no doubt” the device could be built, the surgeon recalls.
Oddly, no one actually offered to take on the project, a reluctance that in retrospect might have raised some flags. But their reactions, plus Rizzo’s native optimism, kept him knocking on doors until eventually he reached the office of John Wyatt, Ph.D., a professor of electrical engineering at Massachusetts Institute of Technology. Here, Rizzo encountered the exception.
At the time, Wyatt was one of the few electrical engineers in America who had hands-on laboratory experience with retinal research. As a graduate student in Berkeley, he had become interested in how the retina works as a signal-processing system and joined the laboratory of Frank Werblin, Ph.D., one of the country’s top retinal neurophysiologists. “In the end,” Wyatt recalls, “the amount of hard work per unit idea was more than I could imagine for my career, and I went back and did my doctorate in electrical engineering.”
This background left him with a deep appreciation of the engineering quality of natural computational and communications systems and an abiding sense of how far man-made technology really was from equaling it. For one thing, devices made by humans require far more energy, and therefore dissipate more heat, than anything in nature delivering comparable performance. That inefficiency imposes drastic limitations on what a man-made neural prosthesis can accomplish. For another, for all our boasting about miniaturization, function for function, human machines still weigh much more than a layer of cells. What would happen when a patient with an implant bolted to the inside of his or her eyeball ran up and down a flight of stairs, week after week, month after month? Wouldn’t the implant just rip out? Or tear the whole retina off the eyeball? Spontaneously detached retinas are already a health problem, even without weighing the retina down with a backpack of electronics.
But suppose you could wave a magic wand to solve all these problems. Today’s technology cannot hope to replicate the output of more than the tiniest fraction of the eye’s natural receptors. Would that be enough? Would all this effort result in anything that blind people found useful? Wyatt found no encouragement in the success of cochlear implants, which was in any case fairly limited, because the retinal environment was both mechanically and chemically far more demanding than that of the inner ear, and the data management challenge was far more complex.
There might have been a kind of logic in the decision of those engineers who had been so sanguine but chose not to get involved. The observation that nothing is impossible for the man who does not have to do it himself probably dates to the pyramids. And perhaps it also makes sense that it was Wyatt, who saw nothing but immense problems ahead, who signed up for the actual engineering. As he dryly commented several years later: “I couldn’t see any clear reason why it was totally impossible.” Over the next few years, Wyatt and Rizzo pulled together the project’s infrastructure—grants, staff, facilities—while they organized preliminary research on Wyatt’s concerns.
The immediate question was whether enough power could be pushed into the eye to do anything useful without cooking the eye. Signals in the natural retina are carried by individual nerves to specific ganglion cells, one at a time, exactly as needed. That level of nanotechnology is far out of reach for today’s engineers, which meant that Rizzo’s electrodes had to work wirelessly, using electric fields to broadcast a sphere of energy. This is a much less efficient system. Fields affect everything: everything sucks up a bit of their power, which means that you have to pipe in more energy, and therefore dissipate more heat, to get a given result. In the worst—but perfectly possible—case, so much heat would be released in the name of getting a useful product that the cells doing the work would die.
No published research bore on the question. There was no protocol, no laboratory equipment specialized for the task, and no one to turn to with experience in that type of work. Rizzo did know an experimenter at the Southern College of Optometry in Memphis with a reputation for exceptionally meticulous research in retinal physiology, Ralph Jensen, Ph.D. That was probably as close as he was going to get, so, in the late 1980s, what had now become the Boston Retinal Implant Project cut a deal with Jensen to figure out whether the project had a future.
The Ultimate Complication: Generating Consciousness
Jensen’s day would start with cutting a bit of retina, including the optic nerve, from a rabbit’s eye, laying it in a nutrient bath, and placing a monitoring electrode as near to the nerve as possible. He would then point a tightly focused light at the excised retina and move it around until he got some activity in the monitoring electrode. This told him the light had found the particular ganglion cell attached to the axon running closest to the monitoring electrode. He then inserted a stimulating electrode at that position and turned it on. Once the setup was working, Jensen would adjust the intensity of the field, move the source of stimulation around, and record the effects. The work needed all his meticulousness. Retinas are sticky, so if the electrode got a bit too close, it would glue itself to the underlying tissue. (This was called “dragging the retina.”) However, if the electrode got just a bit too far from the tissue, the connection would break; once a connection was broken, it was hard to be certain of reconnecting with the same axon. Jensen worked on a vibration-isolation table, a surface that automatically senses and compensates for ambient motions, but, even so, it was usually just a matter of time before any given experimental session collapsed.
Unfortunately, the more Jensen worked, the more complications he found. Doubling the stimulation charge did not double the activation rate. A charge that gave you an effect in cell A would not necessarily give you an effect in cell B. Proximity to a stimulating electrode did not guarantee response. How a ganglion cell processed stimulation changed over time. Each new set of data revealed new complications; each complication required more experiments. Thousands of measurements, requiring years of laboratory time, were necessary before Wyatt felt that Jensen had enough for even a preliminary set of calculations.
The calculations yielded good news and bad news: Stimulating a cell with an electric field took a depressing amount of power (several hundred times more than the retina needs to perform the same function in its own way), but it did seem as though a prosthesis might be able to drive a modest array of electrodes at 30 frames a second without doing too much damage. In real life, inefficiencies and glitches would keep the levels of resolution even lower, but at least the implant project continued to pass the Wyatt test: it was not obviously, patently, totally impossible.
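The flavor of such a power-budget calculation can be conveyed in a few lines. Every number below is a placeholder invented for illustration; none of them are Jensen's measurements or the project's actual figures.

```python
# Back-of-the-envelope power budget for driving an electrode array.
# All values are illustrative placeholders, not project data.

charge_per_pulse = 100e-9   # coulombs per stimulation pulse (assumed)
electrode_voltage = 1.0     # volts across each electrode (assumed)
electrodes = 16             # a modest 4-by-4 array
frames_per_second = 30      # the frame rate mentioned above

energy_per_frame = charge_per_pulse * electrode_voltage * electrodes
power = energy_per_frame * frames_per_second  # heat the tissue must shed
print(f"{power * 1e6:.1f} microwatts")        # -> 48.0 microwatts
```

The real question, which only experiments like Jensen's could answer, is whether the tissue can dissipate that heat indefinitely without damage.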
Jensen’s results had cleared the way for Rizzo and Wyatt to grapple with by far the hardest problem in man-made computation: while biology can make what we call consciousness, human engineers do not have the faintest idea how to begin to do so. Consciousness, characterized as awareness or what are sometimes called “looks and feels,” is different from anything else in the known universe in that nobody has even a bad idea as to where it comes from or how to generate it. Nobody knows how to test for its presence, let alone measure its magnitudes or classify its variations, if any, the way you can test for heat or gravity or color. Nor is any solution on the horizon: No one knows what a consciousness meter would even look like, so there is no target, no development path to follow.
For decades, in fact, engineers derided philosophers trying to think about this problem for wasting their time on “metaphysics,” but the integration project has put at least a few engineers into the trenches. No matter what an engineer might want to do on this frontier—build an artificial hand, an eye that can see in the infrared part of the spectrum, a memory prosthesis, a cell phone patched into the nervous system to add telepathy to our communications options—that device will have to register its signal outputs in consciousness. The designing engineers will have to lose the attitude and grapple with the mystery.
In the context of the retinal project, imagine that a camera connected to a brain implant is looking at a T shape out in the world—perhaps a tree. Imagine further that the electrode array is a rudimentary 3-by-3 grid (just to keep things simple). The implant device receives the camera’s output and orders electrodes #1, #2, #3, #5, and #8 to fire, coarsely replicating the T shape on a grid numbered as follows:
1 2 3
4 5 6
7 8 9
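A few lines of code capture the thought experiment; the 6-by-6 input image, the firing threshold, and the function name are all invented here for illustration.

```python
import numpy as np

def electrodes_to_fire(image: np.ndarray, grid: int = 3) -> list[int]:
    """Downsample a binary camera image onto a grid-by-grid electrode
    array and return the numbers (1-9, row by row) of electrodes to fire."""
    h, w = image.shape
    fired = []
    for row in range(grid):
        for col in range(grid):
            block = image[row * h // grid:(row + 1) * h // grid,
                          col * w // grid:(col + 1) * w // grid]
            if block.mean() > 0.25:  # enough of the shape falls in this cell
                fired.append(row * grid + col + 1)
    return fired

# A 6-by-6 binary image of a T: a bar across the top, a stem down the middle.
t_shape = np.zeros((6, 6))
t_shape[0:2, :] = 1       # horizontal bar
t_shape[2:, 2:4] = 1      # vertical stem
print(electrodes_to_fire(t_shape))  # -> [1, 2, 3, 5, 8]
```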
The retina records this pattern of stimulation, compresses it, and mails its summary to the cortex. What does the person see? What is the patient conscious of?
What eventually emerges in consciousness depends less on what the retina “saw” than on what the cortex, after consulting its vast memory stores and powers of inference and imagination, thinks the retina saw. The cortex edits, and it edits in depth. So one possibility is that the cortex decides the retina has “lost it” and junks the report entirely, in which case the patient sees nothing. Recall that the project team was using electric fields as their stimulating agents, as opposed to wired connections, and fields by their nature broadcast. Thus, a given field might easily stimulate several ganglion cells at once. If you think of the natural retina as playing one note at a time, an artificial retina would play chords. A cortex might well dismiss a report composed of such “chords” as so much static. (Indeed, the retina might do the same, depending on how strict its in-house quality control might be.)
Even if the cortex accepted the retina’s message, the cortex might jump to any one of a variety of conclusions about what the retina meant. In our example, the cortex might show a crude tree, or a giant T, or something totally mystifying—something with no obvious “Tness” at all. If the patient did see a T, it might be organized out of single lines, or stripes, or solid bars, or something weirder, maybe a crucifix the patient once glimpsed in childhood. Nothing could be ruled out. Rizzo had shown that if you monitored a rabbit’s visual cortex (by means of electrodes taped to its skull) while its retina was stimulated, the cortex did indeed spike. So apparently the rabbit was seeing something. But what?
None of the usual methods employed to solve engineering problems worked here. The researchers could not infer from a circuit diagram what percepts, if any, would arise in consciousness. No formulas were available to make the calculation. There was no way of simulating the experiment on a computer. No matter how intelligent a computer may seem, you cannot learn about consciousness from it, because there is no way of excluding the possibility that it is “just” a smart zombie claiming to be conscious. You could not ask the rabbits.
“What Do You See?”
Medical research has a general prejudice against cutting up people who do not require surgery just to advance basic research. Here, there seemed to be no alternative. By the late 1990s, the project had recruited five people with retinitis pigmentosa and one with a cancer that would require a normal eye to be removed, all of whom were willing to let their eyes be cut open and have instruments, including a 4-by-4 electrode array, placed on or next to their retinas. All the researchers devoutly hoped to find a direct, consistent, one-to-one correlation between the pattern of stimulation and the contents of the visual field: one stimulation, one point of light in consciousness; two stimulations, two points of light with the same orientation, like a pair of headlights. With results like these, a working methodology would snap into place: Use patterns of stimulations to “spell out” the significant visual elements in the landscape, the way light-emitting diodes (LEDs) spell out graphics in a sign.
The time available for the sessions involving the first two volunteers (you could not keep poking in a person’s eye forever) was consumed in learning how to do the experiments, the first of their kind. The third subject, a 68-year-old woman, had been legally blind for 15 years. She was exposed to 50 single-stimulation trials spread over a bit more than two hours. In 33 of those trials, she saw nothing. On various other trials, she saw small clusters of two or three faint, flashing images, long straight lines, and “something real dim.” She saw a dot or point precisely once. The other subjects had slightly more encouraging results. Volunteer five, a 28-year-old man blind for 11 years, had 178 trials and reported a percept 109 times, of which 38 were dots. The champ was volunteer six, a 47-year-old man, legally blind 15 years: 88 trials, 59 percepts, 50 dots. More complicated patterns of electrode activation were tried, but the results made no sense. Certainly nobody saw any geometry, no T’s or L’s or X’s.
The bad news, as Wyatt pointed out, was that these results had no common thread. They offered no handle on how to do better, how to improve the odds of a patient’s seeing what he should be seeing. Differences among the volunteers might account for the differences in the results, but going down that path just shrank the pool of patients who would benefit from the procedure. The good news was that the bad news was not worse. At least the subjects saw some detail (the worst possible outcome would have been a blank, undifferentiated field), including some dots, and the results, although baffling, were generally reproducible in a given subject.
Rizzo, as always, was optimistic. Look at this from the point of view of the cortex, he urged. For years, these patients’ retinas were silent, and then one day some strange, garbled messages started flooding in—signals like nothing the cortex had seen even when the system was normal—only to stop after an hour or two. How would you perform under those conditions? The neurosurgeon’s experience left him with great faith in the adaptability of the brain; give it a consistent connection between a percept and the outside world and you could trust it to do the rest. All the cortex needed was time to figure out what was going on; then it would take charge of its own adaptation. After all, people with cochlear implants kept improving their word recognition for years. It might take as long to get a grip on the potential of retinal implants.
This was reasonable enough in the abstract, but awful from the perspective of project management. Academic engineering research is not the kind of engineering you read about in the newspaper—building a bridge or a consumer product. Typically, academic engineering research involves a small team, often just a senior professor and two or three graduate students, checking out the basic questions surrounding some innovative approach to an important problem and then writing up their results in what is called a “proof-of-concept” article. If they actually do a physical demonstration, it is usually a last-minute lash-up that no one really expects to work.
The millions of mind-numbing details that have to be addressed in making something real—otherwise known as implementation dreck—are usually left to a big company with lots of resources or a start-up firm with no mission but to make that one idea work. And the reality, of course, is that even those companies do not get all the details right—initially, or sometimes ever. Typically, the first few versions of a new technology are crude, fragile, and error- and accident-prone. Only after a few back-and-forths with large numbers of real users does the product begin to work as envisioned by that professor and his students when they began their conceptual research years ago. For every critic who says that companies “abandon” unfinished products to the market out of greed and impatience, an engineer can be found to argue that you just can’t do it any other way, that the complexity and variety of the experiences you get back from the market are essential to making a design of any complexity work dependably. To engineers, this model is about as basic to innovation as a paved road to a car.
Pushing Ahead
This model, however, was not available to the retinal project. If the project was going to implant machines into people’s eyes, those machines had to be close to perfect the first time they went in. You could not keep cutting into people’s eyes to tweak the prototype, especially not if the goal was to learn how the brain adapted to extended exposures. Not only would the experimental implants have to be built to a much higher standard than is usual in academic research, they would have to be built better than most new medical devices that big companies sell to patients. And that level of perfection was required just to get a handle on an issue—the nature of the link between electrode stimulation and consciousness—probably central to the basic design itself. In effect, the device had to be built to the highest imaginable standards of engineering without its designers fully understanding its function.
Still, there was no real doubt, by now, that the project would roll on. For one thing, by the end of the 1990s several other retinal prosthesis projects had started. In 1993, two presentations were made on the topic at the annual meeting of the Association for Research in Vision and Ophthalmology; in 1999, there were 33. Not all of these were retinal implant projects; some contemplated passing stimulating electrodes right through the skull directly into the visual cortex, bypassing the retina and optic nerve entirely. Direct cortical implants have the theoretical advantage of being able to address any and all diseases of the visual system, but they have the practical disadvantage of requiring the physical disruption of a poorly understood and highly critical set of tissues. Besides, it is not at all clear how people will get around in daily life—dancing, jogging, bouncing over potholes—with needles implanted in their brains.
With all the other teams suiting up, the retinal prosthesis business was beginning to feel like a nascent discipline, including a discipline’s competitive dynamics. Dropping out now would mean something worse than that the goal would not be reached; it would mean that if it were reached, it would be reached by someone else. Neither the Massachusetts Institute of Technology nor Massachusetts General Hospital became powerhouses by hiring people comfortable with dropping out of a hot competition. Besides, ratcheting up the standard of academic research engineering 10 notches might be difficult, but there was no obvious reason why it was flatly impossible. The project continued to squeak past the Wyatt test.
Wyatt and Rizzo turned themselves into grant-writing machines. The project became a multimillion-dollar enterprise, with staff drawn from eight institutions, including the renowned Cornell Nanofabrication Facility. Major financial support comes from the Veterans Administration, the National Science Foundation, and, not surprisingly, the Foundation Fighting Blindness.
Can We Build a Consciousness Driver?
Since the early days of the retinal implant project, research on brain-computer integration has flowered throughout the world, with projects underway today ranging from memory and sensory prostheses to thought-actuated machinery and large-scale neural simulations. It no longer stirs comment when a university launches a center for neural engineering. Research on the retinal prosthesis still lies at the heart of the entire integration project, however, because it directly addresses an essential aspect: technology reporting to consciousness. Regardless of what you want from integration—a retinal prosthesis, a new sensory modality, a better memory—the outputs are useless unless the user can be made aware of them.
The underlying goal might be characterized as building a consciousness driver. Drivers are programs that take general commands from many sources and translate them into instructions that activate specific devices. A printer driver can take a command to “print A” and calculate how much ink has to be sprayed by this particular brand and make of printer to form that specific letter in a specific font on a piece of paper. The consciousness driver would work on the same principle: Give it the name of a color or a shape, and it would know how to make that color or shape appear in a person’s awareness, or “inner eye.” A prosthesis for people with retinal diseases is one important application of the consciousness driver, but there will be a host of others.
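By analogy with the printer driver, the interface might one day look something like the speculative sketch below. Nothing like this API exists; every class and method name here is hypothetical.

```python
from abc import ABC, abstractmethod

class ConsciousnessDriver(ABC):
    """Hypothetical: translate a general percept request ("a red T",
    "a bright dot, upper left") into device-specific stimulation."""

    @abstractmethod
    def render(self, percept: str) -> None:
        """Make the named color or shape appear in the user's awareness."""

class RetinalProsthesisDriver(ConsciousnessDriver):
    """One imagined backend: an implanted retinal electrode array."""

    def render(self, percept: str) -> None:
        pattern = self._percept_to_electrodes(percept)
        self._fire(pattern)

    def _percept_to_electrodes(self, percept: str):
        # The hard, unsolved part: the mapping from a desired percept to
        # a stimulation pattern, presumably learned for each patient.
        raise NotImplementedError

    def _fire(self, pattern) -> None:
        # Drive the implanted array with the chosen pattern.
        raise NotImplementedError
```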
Building a consciousness driver will require field work—trial and error—since we have no universally accepted theory of consciousness to guide us. And, because we can never be sure that either animals or computers are conscious, most of that field work will have to be done on humans. The current goal of the Boston team is to build a device that can sit inside the head, maintaining a working connection to the nervous system, for very long periods. This would yield a set of basic data about how consciousness works, how it changes over time, how it interacts with other mental contents, the degree to which consciousness can control itself, how much we can expect from what sort of training, and the range in individual variation among consciousnesses—all things a consciousness driver will have to take into account.
An optimist would point out that we know a driver is possible, because we have the “existence proof” of biology right in front of us. But alas, at present we do not know how to imitate most biological solutions, even though we can observe them functioning. We are a little like 19th-century telephone engineers trying to hook up their technology to the 21st-century Internet. For instance, as Boston project member Luke Theogarajan (perhaps the only experienced circuit designer in the world with a concentration in organic chemistry) points out, nature communicates with neurotransmitters, not electrons. The virtue of this approach is that it allows you to mix specific messages addressed to more than one kind of cell (there are dozens of neurotransmitters) in the same channel, whereas electrons affect everything indiscriminately. But this system depends on exact dosages; even very slight errors can cause serious mental disturbances. You cannot just add chemicals; you either have to replicate the whole system, including all the regulation and uptake aspects, or find some midway approach more compatible with what we are capable of doing now.
The project has made enough progress that it hopes to have a functioning wireless test device implanted in an animal by spring 2005. “This will enable animal tests and perhaps further design revisions while we seek FDA approval for experimental implantation in a number of blind human volunteers a year or two thereafter,” Wyatt says.
A pessimist would point out, however, that the success of all these integration projects, including the prosthesis, depends on the differences between brains being minimal, so that we do not have to create a separate driver for every single person. We know these differences exist; they are the core of our individuality. The question is how important they are. If we find that every brain talks to its own body really differently, and that only a few of us, if any, are flexible enough to learn another “language,” the entire integration agenda will come to a screeching halt. We would have a simple reason why these projects cannot work. They would have flunked the Wyatt test.