The Fledgling Science of Consciousness: An Interview with Christof Koch

by Jim Schnabel

June 26, 2008

Christof Koch is the Troendle Professor of Cognitive and Behavioral Biology at the California Institute of Technology, where he manages a large neurobiology and engineering laboratory known as the “klab.” A leading proponent of the idea that consciousness—subjective experience—is a neuroscientific problem, not merely a metaphysical one, Koch has been conducting research in this area for two decades. His publications include the 2004 book The Quest for Consciousness, as well as many research and review papers written with Francis Crick, the co-discoverer of the structure of DNA, who turned to the study of consciousness late in life.

In broad terms, what can be said now about the brain regions that are necessary to conscious experience?

First of all, I think a great step that has been taken over the past twenty years is that people have moved this question from the domain of philosophical, armchair theorizing into the experimental domain, using brain-scan technology and implanted electrodes and so on. That in itself is a significant move forward.

Now, as to the details, many people have believed intuitively that the entire brain must be responsible for consciousness, that you cannot locate consciousness in a particular region because it’s a holistic property of the whole central nervous system.

But empirically we can ask, for example, whether you really need your spinal cord to be conscious. That’s part of the central nervous system, right? Well, it turns out that people who have lost the connection between the brain and the spinal cord, such as quadriplegics—think of the paralyzed actor Christopher Reeve—clearly are conscious. So you don’t need a spinal cord to be conscious.

Now, what about the cerebellum? Do you need it to be conscious? Well, it turns out that if you lose it, you won’t be doing ballet, you won’t be playing the piano, but your consciousness doesn’t seem to be impaired in any major way.

What about visual consciousness? Do you need your eyes? You might say yes, clearly you need them to see. But every night you dream, and you’re certainly conscious of your dreams, and yet your eyes are closed, and even in the daytime you can close your eyes and visualize things.

Francis Crick and I argued back in 1995 that even V1, the primary visual cortex, is not directly correlated to consciousness. Last year a paper by Lee, Heeger and Blake in Nature Neuroscience demonstrated a dissociation between what you see and fMRI BOLD activity in primary visual cortex. In other words, the primary visual cortex may be involved with consciousness but it doesn’t seem to be the site where consciousness is actually generated.

The same may be true for all sensory cortices. Imaging studies done on patients in persistent vegetative states show that strong auditory or somatosensory stimuli produce activity in the auditory or somatosensory cortex—but in these unconscious people the activity remains confined to the sensory cortices; it doesn’t spread from there.

So it may well be that none of the sensory cortices gives rise directly to subjective sensation; they may merely contribute information to the places where consciousness happens.

So what part of cortex is actually required? Is the front of the cerebral cortex needed, or can consciousness arise purely in cortical regions posterior to the central fissure, say, in the parietal cortex? We don’t know. And, of course, the parts of the thalamus and the claustrum that are associated with these cortical regions must also be critical in generating conscious percepts. Right now a lot of research is targeted at this question.

Of course, we need to ask ever more specific questions. Not just whether area X is necessary for, say, visual consciousness, but what circuits or cell types in area X are important to express visual consciousness? All neurons? Only the pyramidal neurons? Only a particular subset of pyramidal neurons that project into another part of the cortex? Ideally we’ll develop tools, for example, to selectively impair neural features to see if we can transiently block consciousness in animals—and ultimately perhaps in human subjects.

What sort of limits are you running up against now, in terms of research tools? MRI and PET don’t yet have very good resolution, either in space or in time.

Yes, MRI is useful but it’s very crude. Remember, per cubic millimeter of cortex there are 100,000 cells. As the typical image voxel in an fMRI protocol is more than ten cubic millimeters in volume, it covers on the order of a million neurons. And all you see is the hemodynamic response, the blood flow response. So we need to supplement such data with recordings from implanted electrodes, rarely in humans but more commonly in animals.
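The scale mismatch Koch describes can be checked with back-of-the-envelope arithmetic. The voxel dimensions below are illustrative assumptions (a 3 × 3 × 3 mm voxel is a common fMRI resolution); the neuron density is the figure cited above:

```python
# Rough estimate of how many neurons a single fMRI voxel covers.
# Assumes ~100,000 neurons per cubic millimeter of cortex (as stated above)
# and a hypothetical 3 x 3 x 3 mm voxel, a typical fMRI resolution.

NEURONS_PER_MM3 = 100_000          # cortical neuron density cited in the interview
voxel_volume_mm3 = 3 * 3 * 3       # 27 mm^3, i.e. "more than ten cubic millimeters"

neurons_per_voxel = NEURONS_PER_MM3 * voxel_volume_mm3
print(f"{neurons_per_voxel:,} neurons per voxel")  # 2,700,000, on the order of a million
```

So even a single voxel averages over millions of cells, which is why Koch argues that imaging data must be supplemented with electrode recordings.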

Also, we want to do more than just observe the system. We don’t want to say only that whenever you’re conscious, area X lights up, and when you’re not conscious area X doesn’t light up. We want to perturb area X. We want to inactivate area X and then see what happens. And of course we can’t do that in a human, ethically. So once again, this requires invasive experiments that are very difficult to do in humans but that we can do in animals using electrodes, pharmacology, genetic or optical tools.

Unfortunately with animals you can’t directly interrogate them about their conscious experiences.

But neither can you talk to a baby, or to someone in a persistent vegetative state, yet you can still study their conscious perception. You infer that a dog feels pain because what he does is similar to what you would do—he vocalizes, he crawls away under the bed, he licks his paw and so on. If I give him an aspirin it seems to help him just as it helps me. So from indirect signs like these we can study his conscious experiences. That is so even if the dog doesn’t lie there and think, hmm, I’m a dog and someday I’m going to die. Dogs, like most animals, live in the here and now. They don’t have a lot of self-consciousness, of reflexive consciousness. But they’re clearly conscious.

What we really need is a fundamental theory of consciousness. We need to know what systems are conscious, what functional advantage that gives them, and how we can measure that. We need to know whether consciousness depends on a cerebral cortex. Squid and octopus, for example, have quite complicated nervous systems. Their behaviors hint that they may even have a theory of mind, but they don’t have a cortex. And I don’t see any reason why you need a six-layered structure such as you find in mammals. But does consciousness occur in less complicated organisms such as Drosophila or C. elegans? Could I build it into a computer? Without an empirically verifiable, fundamental theory of consciousness we’re just groping for isolated facts.

Which of the theories out there do you like?

Giulio Tononi at the University of Wisconsin, a well-known sleep researcher, has an interesting theory of consciousness. He argues that it arises from a particular type of integrated, highly differentiated information-bearing system and that it is a fundamental aspect of reality, that any system that possesses at least some amount of integrated information has some experiences, some conscious states. According to this theory, such a system wouldn’t have to be biological; it could be structured in silicon, so that you could build an artificial device that has experience—what philosophers call qualia.
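The core intuition behind integrated information—that an integrated whole carries information its parts, taken separately, do not—can be sketched with a toy computation. This is not Tononi’s actual Φ measure, just an illustration of the flavor of the idea, using Shannon entropies over a hypothetical two-unit binary system:

```python
# Toy illustration of the intuition behind "integrated information":
# when two units are correlated, the joint state carries less total entropy
# than the parts summed separately; the difference is shared information.
# This is NOT Tononi's phi calculation, only a minimal sketch of the idea.

from collections import Counter
from math import log2

def entropy(states):
    """Shannon entropy (in bits) of an empirical distribution over states."""
    counts = Counter(states)
    n = len(states)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Two binary units observed over time. In the "integrated" record the units
# are perfectly correlated; in the "independent" record they vary freely.
integrated = [(0, 0), (1, 1), (0, 0), (1, 1)]
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]

for name, record in [("integrated", integrated), ("independent", independent)]:
    whole = entropy(record)                                   # joint entropy of the pair
    parts = entropy([a for a, _ in record]) + entropy([b for _, b in record])
    print(f"{name}: parts sum to {parts:.1f} bits, whole is {whole:.1f} bits, "
          f"shared information = {parts - whole:.1f} bits")
```

In the correlated case the parts sum to 2 bits but the whole only needs 1 bit, so 1 bit is shared across the system; in the independent case nothing is shared. Tononi’s actual theory builds a far more elaborate measure on this kind of comparison between a system and its partitions.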

It’s way too early to tell whether this theory is the right one, but I think at least the direction is the right one. Ever since I started working on this in the late 1980s, I’ve suspected that the answer lies in an information-theory approach, and that certain types of information-bearing systems can have qualia.

Such theories take a panpsychic view of consciousness, in the sense that any system, even a sea anemone or a plant, has some minimal amount of conscious experience. So consciousness is almost everywhere. This view also implies that we can build machines that have it. And I suspect that one day we will—and those will be interesting times.

Right now what do your instincts tell you about the role of qualia? Are they adaptive in an evolutionary sense? Are they mere spooky epiphenomena? Or are they the keys to the universe, as some would say?

Well, in a sense they are the keys to the universe, because the only way we experience the universe is through feeling, right? Otherwise we would move and procreate and whatever but we wouldn’t have any sensation. We’d be zombies.

And I suspect too that the qualia of consciousness have some advantage, some function in evolutionary terms. Otherwise our consciousness wouldn’t be so detailed, so rich, so consistent from day to day. It would just be mind-boggling if it evolved without a function.

With respect to consciousness, there are two quite opposing views of its origin in the physical world. Either it is an emergent property, so that a single cell doesn’t have it but, say, twenty billion cells wired together do; or it is a fundamental property that certain types of information-bearing systems all have.

Either way, it doesn’t need to have metaphysical or religious overtones. But it is somewhat like the idea of a soul. With one big difference: When the system that carries this soul dies, this soul dies with it.