Q&A with Terrence J. Sejnowski, Ph.D.
Institute for Neural Computation, University of California, San Diego
Computational Neurobiology Laboratory and Howard Hughes Medical Institute, Salk Institute for Biological Studies
Member, Dana Alliance for Brain Initiatives
You recently co-authored a review paper for Science entitled “Foundations for a New Science of Learning.” What do you mean by “new science of learning?”
Terrence J. Sejnowski: The new science of learning describes an emerging discipline that is applying sophisticated computational models to more traditional approaches to understanding learning, with the ultimate goal of improving educational practice.
The old science of learning has been largely based on animal behavior and child development. Neuroscientists have uncovered the brain mechanisms underlying several different kinds of learning, including motor learning, implicit learning, and how knowledge of facts and events is attained. We’ve parsed the different memory systems and are now beginning to understand how they work together synergistically to affect behavior, including how they shape children’s ability to learn efficiently – or, in some cases, to learn poorly.
Over the last ten years, we’ve also witnessed the unexpected emergence of a theory of learning, a mathematical framework that encompasses learning in machines and people. Machine learning is an area of mathematics and engineering that has been made possible by the great advances in digital computers, which have incredibly fast processors and enormously large memory capacities. This gives us, for the first time, the capability to sort through enormous amounts of data collected over decades and explore constraints on how systems could learn. Now we can use that same approach, including the very same theories and algorithms, to try to understand how biological learning fits in, and specifically, how we might improve learning in the classroom.
There has been a real convergence of a very powerful body of mathematics and theories inspired by psychology. This mathematics, initially developed to solve engineering problems, was then applied to understanding biological mechanisms in the human brain. That’s what we mean by the new science of learning. I think this is one of the great success stories in all of neuroscience and engineering over the last decade.
When we talk about “machine learning,” it’s easy to envision how engineering advances have contributed, but what role is neuroscience playing?
Neuroscience has inspired some of the most effective learning algorithms in machine learning. I am the president of the Neural Information Processing Systems Foundation, which organizes the annual NIPS conference. Now in its 23rd year, this is the premier machine-learning conference in the world. The impetus for the meeting started with neuroscientists, psychologists, physicists, mathematicians, and engineers trying to understand learning in neural networks. For example, brain researchers asked how networks of neurons could represent information, learn associations, and control complex objects, such as our limbs, in space. The difficult problem that animals solve is learning from masses of data in high-dimensional spaces in real time and in dynamic environments. In thinking about this domain, new questions were asked and the field took on a life of its own, developing a new theoretical framework for thinking about learning and building devices that learn.
One of the most exciting of these approaches is reinforcement learning, a thriving field that emerged from developing computational models of classical conditioning. It describes the process by which an individual learns to predict which actions it must take to maximize reward in a given environmental situation. This field has flourished both as an engineering discipline and as a way to interpret biological data.
One of the milestones in the convergence of these fields occurred about 15 years ago, when two postdoctoral fellows in my lab, Peter Dayan, Ph.D., and Read Montague, Ph.D., used an algorithm called temporal differences – a mathematical method for predicting future rewards in reinforcement learning – to try to understand how honeybees learn to forage. They were specifically interested in the roles played by a type of neuron in the bee brain that links odor processing to reward. Using this algorithm, they were able to very accurately model honeybee foraging and explained risk aversion in bees, a behavioral phenomenon in which uncertain food sources are avoided in favor of a smaller payoff that is steady and predictable. That was the first model of a neurobiological system based on temporal difference learning.
We then realized that we could also apply this method to dopamine neurons in mammals in order to understand the reward-learning system, which is the system that guides behavior in all vertebrates and is hijacked by drugs of addiction like cocaine or alcohol. The very same algorithm for temporal differences turned out to be the key to understanding the properties of dopamine neurons, which were discovered by Wolfram Schultz, M.D., Ph.D., and how they contribute to behavior. This is an example of how understanding something about a biological system that is a fundamental driver of human behavior was aided by a theory from engineering.
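The core of temporal-difference learning is a prediction error: the difference between the reward a state actually yields and the reward it was predicted to yield. The sketch below, with an invented reward schedule and parameters (it is not the Dayan–Montague honeybee model), shows the signature behavior that made the dopamine connection so compelling: with training, the prediction-error signal migrates from the time of the reward back to the cue that predicts it.

```python
# Minimal sketch of temporal-difference (TD) learning on a toy
# conditioning trial: a cue at step 0 reliably predicts a reward
# delivered at the final step. All parameters are illustrative.

def td_conditioning(n_trials=500, n_steps=5, alpha=0.1, gamma=1.0):
    """Learn a value estimate V[t] for each time step within a trial."""
    V = [0.0] * (n_steps + 1)        # V[n_steps] is a terminal dummy state
    reward = [0.0] * n_steps
    reward[n_steps - 1] = 1.0        # reward arrives only at the last step
    for _ in range(n_trials):
        for t in range(n_steps):
            # Prediction error: actual outcome vs. what was expected.
            delta = reward[t] + gamma * V[t + 1] - V[t]
            V[t] += alpha * delta    # nudge the estimate toward the outcome
    return V

V = td_conditioning()
# After training, value has propagated back to the cue: V[0] is near 1.0,
# so the prediction error now fires at cue onset rather than at reward
# delivery, mirroring the shift in dopamine responses Schultz observed.
```

The same update rule, scaled up, is what links Schultz’s dopamine recordings to the engineering theory described above.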
In what ways has the study of how children learn been used to solve engineering problems?
Children’s brains are still developing and we need to understand how that helps them to learn. One example is imitation learning, which has been studied by Andrew Meltzoff, Ph.D., at the University of Washington in Seattle, who is trying to understand what makes children such effective learners. Babies and children are really good at imitation. Right out of the womb, babies can imitate facial expressions. If you stick out your tongue, a baby who can barely see will repeat your action. Children have fantastic abilities to mimic actions and behaviors. They learn a lot simply by observing and mimicking, and they will try to repeat not only the action itself – say, reaching out with the arm – but the purpose of the action – say, picking up a ball.
This is something humans do much more effectively than any other animal.
Engineers, having seen that imitation is highly effective in humans, combined imitation learning with reinforcement learning to boost the performance of control systems. In apprenticeship learning, for example, a powerful computer tracks the actions of an expert human controlling a complex system, and then programs the reinforcement system to imitate and learn the very complex motor commands that the human makes. Engineers are now able to reproduce human skills that were previously thought beyond the reach of machines. For example, Andrew Ng, Ph.D., at Stanford has used apprenticeship learning with reinforcement to automatically control helicopters that do stunts like flying upside down.
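The combination described here can be sketched in miniature: clone a policy from expert demonstrations, then refine it with reinforcement learning that explores around the expert’s behavior. The one-dimensional task, the expert, and every parameter below are invented for illustration; Ng’s helicopter work uses far richer models.

```python
# Toy sketch of apprenticeship-style learning: imitate an expert's
# demonstrations, then refine with Q-learning. Entirely illustrative.
import random
from collections import Counter, defaultdict

N = 6                      # states 0..5; reaching state 5 yields reward

def step(s, a):            # action a is -1 (left) or +1 (right)
    s2 = max(0, min(N - 1, s + a))
    return s2, (1.0 if s2 == N - 1 else 0.0)

# 1) Imitation: clone the expert, who always moves right, by majority
#    vote over demonstrated (state, action) pairs.
demos = [(s, +1) for s in range(N - 1) for _ in range(3)]
votes = defaultdict(Counter)
for s, a in demos:
    votes[s][a] += 1
cloned = {s: votes[s].most_common(1)[0][0] for s in votes}

# 2) Reinforcement: Q-learning whose exploration is biased toward the
#    cloned policy, so learning starts from the expert's behavior.
Q = defaultdict(float)
random.seed(0)
for _ in range(200):
    s = 0
    for _ in range(20):
        if random.random() < 0.8 and s in cloned:
            a = cloned[s]                  # follow the imitated expert
        else:
            a = random.choice([-1, +1])    # occasionally explore
        s2, r = step(s, a)
        best_next = max(Q[(s2, -1)], Q[(s2, +1)])
        Q[(s, a)] += 0.5 * (r + 0.9 * best_next - Q[(s, a)])
        s = s2
        if r:                              # reached the goal state
            break

greedy = {s: max((-1, +1), key=lambda a: Q[(s, a)]) for s in range(N - 1)}
```

Starting the reinforcement phase from imitated behavior is what makes the approach practical: the learner spends its trials refining a competent policy rather than stumbling on the reward by chance.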
How is this kind of reinforcement learning in machines being applied to childhood education?
What is emerging out of this is a new generation of robots that interact with humans on their own terms. We call them social robots. Javier Movellan, Ph.D., at the University of California, San Diego’s Institute for Neural Computation is building social robots that interact with 18-month-old toddlers in the Early Child Education Center on campus. The current prototype is called “Rubi.”
The goal of this research program is to try to understand what gets the attention of toddlers. At that age, they are extremely distractible and it’s hard to hold their attention, so a lot of the teachers’ time is spent just keeping track of the children, making sure they don’t do any harm as they run around and interact with each other. The social robots are an experiment to see what it would take for a child that age to be able to interact with the robots and whether or not they would learn from them.
The first problem Javier encountered upon introducing Rubi the robot into the preschool environment was that the children pulled off the robot’s arms. The children saw it as a toy and wanted to see if they could pull it apart. To solve that problem, he programmed the robot to cry whenever the pressure reached a certain threshold. So the child would back off, because crying is a social signal for “hurt.” If the robot kept crying, the child would hug it, because that’s the way that children communicate with adults when they are hurt. Even at this young age, the child is able to pick up social cues, especially emotional ones.
The second step was to get the children to engage in a dialogue with the robot. They discovered that there is a very narrow window in which the child expects a response from the robot. If the robot’s response is too quick, the child ignores it. If the response is too late – if it is delayed by more than one to two seconds – the child loses interest and turns away. But if an appropriate response occurs within one or two seconds, then magic happens. The child looks at the robot and utters another sound. If the robot repeats it, the child begins to think about this robot as if it were someone the child can play with.
Once that interaction is established, the child can play a game with the robot, a game of shared attention. Shared attention is another important milestone in human development. It describes a situation in which children and adults focus on a third object, let’s say a ball. If the mother takes a ball and bounces it up and down, the child will look at the ball, see it bouncing up and down, then will reach for the ball and try to make it bounce up and down.
Rubi the robot can also share attention. When the child’s attention turns to an object, like a ball, Rubi will turn its head and look at the ball. Then if the child turns and points to a clock on the wall, Rubi will turn and point to the clock, and so on. The child now has established rapport with the robot. In other words, the robot has been able to communicate with the child. (To see this in action, visit the Proceedings of the National Academy of Sciences.)
Finally, once the child is engaged, more sophisticated interactions can occur. Rubi has a touch-sensitive computer screen on its belly and it can teach the child colors and songs and lots of wonderful things. The children really love this. They come in every day and expect to see Rubi, and are upset if Rubi is not there. They have accepted Rubi as a normal part of their lives. This is magic.
What are the next steps in this research?
What has been shown to date is a proof of the principle that it is possible to create robots that engage in social interactions with human beings, at least at the preschool age. Now the really important work begins, which is how to use that rapport to help the child learn and understand new concepts – in short, to perform individualized educational instruction. Social interactions will be absolutely critical to achieving that goal.
The possibilities are staggering. Using machine learning, this robot will be able to record every answer that the child gives and track it over time to determine what the child has mastered and what he or she is having trouble with. It will then be able to craft a teaching schedule that is optimal for that child. In this way, the robot is essentially creating an individualized curriculum based on all the information about that child that is being fed into the robot’s internal model of the child.
That’s what good teachers do. But if a teacher has 30 students in the class, the teacher can’t do that for every single child. The goal here is to provide each student with their own personal robot, which will become the expert on that child. Having this apprentice or teacher’s helper will in turn free up the time of the human teacher, who now will be able to concentrate on more important issues that come up – for instance, to work with children who have special needs. This greatly enhances what a single teacher can do.
One-on-one human tutoring can improve the performance of a child by two standard deviations (a measure of difference from the mean). This is an enormous enhancement over what can be taught in a large classroom, where often the best that a teacher can do is to try to teach to students who are in the middle range of the class in terms of achievement. Very often, that means that students who are below average may be lost while those who are above average may be bored. It’s one of the most difficult tasks of being a teacher. But by having an individual tutor – a social robot – for every student, you could have the best of both worlds.
Is the ultimate goal, then, to develop a battalion of Rubi the robots that could be deployed in classrooms?
There are already powerful computers sitting in classrooms of many schools, but they are not really being used effectively – the teacher may use them for delivering tests or for some other rote application. So really, the big change that is being advocated with the new science of learning is to turn the computer into a social robot. A face on the screen that exhibits emotional expressions and effectively interacts with the user could be useful for adults too.
Is there resistance to the idea of children being taught by robots?
In our experience, the children who have interacted with Rubi love her. The teachers find Rubi to be a helpful assistant. It may be that not every child will work well with a social robot, or you may need to individualize the form of the robot to the child. But these three fundamental principles that have been identified so far – shared attention, social imitation, and empathy – seem to be critical for getting young children to respond and be open to learning. As the child matures, there will likely be additional things that will be needed, such as attention to motivational issues. The task is likely to get considerably more complicated as we start developing these kinds of applications for older children, but based on the popularity of computer games among children, we think the possibilities for such applications are enormous.
Why are the social aspects of these robots seemingly so critical to their success?
Learning is gated by social interaction. It’s clear, for example, from work by a group at the University of Washington in Seattle led by Patricia Kuhl, Ph.D., that learning languages is gated by the interaction between the mother (or nanny) and child. She has shown that a Chinese-speaking nanny is able to teach a child some sounds in Chinese, but if you play a video of that same nanny to another child, the child doesn’t learn the sounds. So unless you have a human being actually there interacting with the child, the learning does not take place. Yet, so long as a robot passes these tests of interaction and the child’s brain is socially engaged – which involves a lot of things, including facial expressions that one might not be consciously aware of – then magic happens and learning occurs. When these social cues are missing, everything falls flat.
This is true for the teacher as well as for the student. I once taped a lecture for a class. I thought it would be easy, but it turned out to be one of the most difficult teaching experiences I ever had. I knew how bad it was going to be when I told my first joke and no one laughed. As humans we need that social feedback. If you don’t get it, you have no idea what is getting through. Teaching is all about getting that feedback and adjusting things accordingly. It’s a two-way street; it’s just as essential that the teacher gets feedback from the child as it is that the child gets feedback from the teacher.