Each year, tens of thousands of neuroscientists convene for the Society for Neuroscience’s annual conference, where researchers and companies share the year’s biggest scientific discoveries regarding the brain and showcase novel technologies with the potential to change not only the course of research but also how humans interact with the world around them. Neuroscience 2019 was no different.
Brain-related technologies are advancing swiftly and have great implications for what it may mean to be human in the future, said Dustin Tyler, PhD, a biomedical engineer at Case Western Reserve University, who has developed a neural prosthetic upper-extremity device for amputees.
“We have these advances in artificial intelligence (AI) and hear a lot about how it might replace humans. We think about genetic engineering techniques like CRISPR and how this synthetic biology may change our underlying human structure,” he said. “But instead of making this discussion about man vs. machine, or technology vs. humanity, maybe we should be moving away from a discussion about competition and more toward one about how we can develop a more symbiotic relationship between technology and humans, where there is a mutually beneficial relationship between man and machine – and maybe there isn’t even as much of a distinction between the two as we’ve had before.”
As neurotechnology advances at an unprecedented pace, scientists are in a unique position to consider the ethical, legal, and societal implications of its use, today and in the future. Neuroscience 2019 highlighted such vital dialogues with two engaging presentations: a “Dialogues Between Neuroscience and Society” lecture given by Fei-Fei Li, PhD, co-director of Stanford University’s Human-Centered Artificial Intelligence (HAI) Institute, and a social issues roundtable entitled “Ethical and Social Issues Raised by Neural-Digital Interfaces,” moderated by Tyler. (See video of the lecture and video of the roundtable.)
To Augment, Not Replace
Li, in her special lecture, discussed the “dizzying rise of AI,” including the development of computer algorithms that can mimic human intelligence to reason, problem solve, and/or make decisions. Technological advances, including faster hardware with more memory as well as “big data,” or immense data sets that can train AI algorithms, are driving enormous change, not only in the computing industry but across society. And because of that, she argued, it raises an important question: Has the technology exceeded our humanity?
“There are very serious changes coming. The social awareness now is rising to see what machines might impact, whether it be job displacement, machine learning bias and discrimination, or issues with personal and community privacy,” she said. “These are real issues happening today. And while as an AI scientist, I am privileged to contribute toward progress in AI and how it will likely innovate for the future, I know we need to ask, where is AI going? What is the next chapter we should write together as scientists, academicians, educators, and innovators about AI to make sure what we do is right for the future of our society and humanity?”
As the co-director of HAI, Li said her mission is to place human values into every step of AI development. She emphasized that AI should be guided by a concern for humanity and human impact – and that such tools should be inspired by human intelligence and designed to augment human abilities, not replace them. She discussed work that her lab is doing using AI technologies to solve common yet complex problems like stopping the spread of hospital-acquired infections and helping elderly people avoid falls at home.
“We have the ability to enhance human care using intelligent systems,” she said. “But what we make reflects who we are. And that’s why it is so important that this work is governed by human-centered principles and that we are engaging society at large to make sure such technologies are moving forward in the right ways.”
Contemplating Human Fusions
In the Social Issues Roundtable session, Tyler also promoted the importance of engagement as we move toward a brave new world of neurotechnologies that fuse the human nervous system with outside systems – engagement, he said, that needs to occur both within and outside the neuroscience community. Tyler and collaborator Douglas Weber, PhD, a biomedical engineer at the University of Pittsburgh, discussed how technology allows the human nervous system to be directly connected to prosthetics and robotic systems. Those advances raise important ethical and social questions regarding how such tools could and should be used, both said.
“The technology is moving ahead, and we can now interact with it in very different ways,” Tyler said. “It raises a bunch of social, economic, individual, and other issues that need to be addressed. It needs to be a multi-disciplinary conversation, and we need to think and talk bigger about what it means and what happens when we start connecting neural systems to technology in various ways.”
Tyler introduced Brandon Prestwood, a former millwright who lost his arm in an industrial accident and now is participating in the design and testing of a novel neuroprosthetic device being developed in Tyler’s laboratory. Prestwood described how it felt to use a system that restored sensation to his missing hand.
“The device gave me my fingers back,” he said. “It’s hard to explain what it meant to me to get those feelings back. It’s not something I ever expected.”
But Prestwood, who signed up for a five-year research stint, will leave the research program at the end of 2020 – and leave behind the prosthetic device, which he called “life-changing,” when he does. That’s one reason, said Suzanne Rivera, PhD, a neuroethicist at Case Western Reserve University, it is so important to have all stakeholders, including research participants, involved in conversations about the development of neurotechnology systems, so the variety of ethical dilemmas that may result can be considered.
“We need to carefully examine the human impacts of science, medicine, and technology, both for individuals and for society,” she said. “We need to make sure we are considering equality, inclusion, privacy, access, risk of physical or emotional harm, potential for nefarious use by bad actors, legal liability, and how these new technologies may change human capabilities – or even whether the changes to those capabilities may have the potential to change how we define humanity itself. And that requires inviting a lot of different voices to the conversation.”
Engaging Early and Often
Prestwood described the neuroprosthetic device he helped test as miraculous and hopes the technology will eventually become both available and accessible to other amputees. Yet, as noted by both Li and the scientists at the roundtable, that is unlikely to happen unless the many ethical and social concerns raised by its use are tackled early in the design process. To ensure what we make really does reflect who we are, they said, those discussions should continue throughout the development process, with all stakeholders having a voice in what could, should, and will be done with future discoveries and innovations.
“In some ways, the easy part is developing the technology,” said Tyler. “We did that, and we see the impact it has had on the people who have participated in our study. But there are a lot of other things that need to be considered, ranging from the economic to the regulatory, and there are no easy answers on how to address them. That’s why we need to have these discussions now and have to include different points of view from the start. Because, ultimately, it doesn’t matter what we can do from the technology side if we can’t help the people we want to help with these innovations. If we don’t have these discussions now, so we can help the people who need that help the most, we’ve failed.”