Centering Human-Centered Artificial Intelligence

Report from Neuroscience 2019
Ann Whitman
October 21, 2019

Fei-Fei Li gives the Dialogues Between Neuroscience and Society lecture. Photo courtesy of the Society for Neuroscience

“AI is no longer a niche laboratory science,” said AI pioneer Fei-Fei Li, Ph.D., at Saturday’s Society for Neuroscience (SfN) Public Dialogues Lecture. “It has not only entered daily life but is driving the fourth industrial revolution.”

Li, a professor of computer science and co-director of Stanford University’s Human-Centered AI Institute, opened SfN’s annual meeting before a crowded auditorium in Chicago’s convention center, where attendees were eager to hear her take on AI and its impact on life as we know it.

The field of AI has been around for more than five decades, but three technological advances have led to this more recent “explosive” impact on industry, she said, even exceeding the expectations of many scientists. They are: hardware and computing, algorithms, and big data. Together they’ve contributed to “a real force of change in society.”

Changes can be positive, such as the Google self-driving car, which offers new freedom to blind and vision-impaired people, she said. But technology will not bring only forces of good, she warned. Job displacement, bias, and privacy infringement are real—and present—concerns.

To address these concerns, she’s developed an approach called Human-Centered AI, bringing values and awareness into the development and deployment of this technology. She outlined its three guiding principles:

  1. The development of AI must be guided by a concern for its human impact.
  2. AI should strive to augment and enhance us, not replace us.
  3. AI must be more inspired by human intelligence.

To follow the first principle, Li calls for an interdisciplinary approach to AI, to understand, anticipate, and guide its human impact. This means collaborating with a broad range of specialists, including historians, political scientists, education researchers, machine-learning researchers, and cognitive neuroscientists, she said.

Together these groups will need to address important and current issues, such as turning machine bias in areas such as race or gender into machine fairness, by working to improve fairness in data sets, algorithms, theories of decision-making, and ethics.

For those who fear that automation will replace human jobs, Li advocated for AI that augments humans rather than replacing them—her second principle. She spoke about her ongoing studies in hospitals and senior homes, where she thinks AI can enhance human care by lowering costs, improving safety, and giving clinicians and caretakers more time to focus on their patients. AI could relieve them of more mundane tasks, such as monitoring medication intake and hand sanitizing—a serious problem in hospitals that can be deadly for patients and is now often tracked only by a person with a clipboard.

To work closely with humans, AI will need to understand human intelligence, and at times think similarly to us. Yet human intelligence is dynamic, multi-sensory, complex, uncertain, and interactive, while today’s AI is very static, disembodied, with simple goals, said Li. How do we bridge that gap? In her third principle, Li calls for AI programming to be more inspired by human intelligence.

For example, humans are born with an innate curiosity; babies want to play with the world around them. To mirror this learning style, Li and others are developing an AI system that is intrinsically motivated by its environment. Early results show behavioral patterns that mimic aspects of human learning, such as learning in stages and learning to focus on objects without being directed.

She’s also interested in having AI learn from humans by asking questions and by understanding the more nuanced way we perceive the visual world. In her work, AI systems have learned to ask more engaging questions and to respond with specifics based on what they’ve learned—describing an image as a dahlia, not just a flower, for example.

While identifying a type of flower may seem far removed from high-level learning, it’s clear that AI momentum is building rapidly, with the financial backing to sustain it. As we look forward to a more technologically driven world, it’s comforting to know researchers such as Li are calling for collaboration and ethical review, while also educating the public on the potential benefits of AI.

For more details on Li’s human-centered AI approach, read her 2018 New York Times op-ed, and for a taste of her research on computer vision (teaching machines how to interpret what they “see”), watch her TED Talk “How we’re teaching computers to understand pictures” or visit her lab’s web page.

Here is SfN’s video from the talk.