The keynote speaker at this year’s annual meeting of the International Neuroethics Society delivered a riveting explanation of how racism is deeply embedded in many technologies, from widely used apps to complex algorithms, which are presumed to be neutral or even beneficial but often heighten discrimination against Black people and other marginalized groups.
Ruha Benjamin, Ph.D., a sociologist and an associate professor of African American Studies at Princeton University, called this development “the new Jim Code” to emphasize that it is today’s version of the old “Jim Crow” laws, which were used in the Southern US to keep Black people subjugated and segregated for roughly a century after the Civil War.
Benjamin delivered the annual Fred Kavli lecture on Friday, Oct. 23, the capstone event of the two-day conference. The meeting was held virtually this year to protect participants from infection with the coronavirus. One benefit of an online meeting was that many more people were able to participate, nearly twice as many as any previous meeting, coming from 30 countries.
Benjamin approaches her subject from an interesting background. She is of Iranian heritage on her mother’s side, and African American on her father’s, as reported by Princeton’s office of communications. She was born in India before moving to the United States as a child and was raised in the Baha’i faith. She earned a bachelor’s degree from Spelman College, a historically Black college in Atlanta, and a doctorate in sociology from the University of California at Berkeley.
Benjamin laid out several stark examples of how “anti-Blackness” affects virtually all major institutions in US society. Some of her examples were no doubt familiar to those who have read her books or previous commentaries over the past decade, but most were new to me and presumably to many other registrants in the meeting as well.
For starters, she cited a horrific example that came to light in a newspaper report in 2015. The North Miami police department had been using mug shots of Black male criminals for target practice, a previously hidden instance of the kind of anti-Black sentiments that still distort policing.
The silver lining was that a group of clergymen offered their own faces as targets, saying “use me instead.” Such opposition was a crucial step toward exposing the lunacy of what the North Miami police were doing.
My personal favorite among her examples was a clever response to machine learning programs that use predictive policing technologies to pinpoint where street crime is apt to occur (usually in lower-income Black neighborhoods, which are more heavily policed to start with). A counter-narrative by a group of techies in 2017 mocked this approach by using data from a financial regulatory firm to identify neighborhoods where white-collar crime is apt to occur. They called them White Collar Crime Risk Zones, and users of their app could get warnings when they were about to enter a danger zone. They also used portraits of corporate chief executives to create composite images of what likely wrongdoers would look like, namely white men. “What if cops went to financial neighborhoods and stopped and frisked white guys in business suits?” the project’s leader wondered in an interview at the time.
Two academic studies cited by Benjamin showed how difficult it is to root out ingrained prejudices. Researchers at Yale asked a group of preschool teachers to watch video clips of children in a classroom and to look for signs of challenging behavior, the kind that might get kids tossed out of school or the classroom. Eye-tracking technology showed that the teachers spent more time looking at Black boys than at white children. In actuality, the children were actors, and the clips did not show any inclination to misbehave. “Implicit biases don’t begin with Black men and the police,” the lead researcher told a reporter at the time. “They begin with Black preschoolers and their teachers, if not earlier.”
Meanwhile, researchers at Stanford University found in 2014 that when white people were shown statistics about the vastly disproportionate number of Black people in prison, they did not become supportive of criminal justice reform to relieve injustices against Black people. Instead, they became more supportive of punitive policies, such as California’s Three Strikes Law and New York City’s stop-and-frisk policy, that were partly if not mainly responsible for the disproportionate incarceration rates. The white participants became even more fearful of Black criminals who they felt “deserve to be locked up.”
Benjamin warned against assuming that there will be an easy technical fix for racism. She cited Jonathan Kahn’s book, Race on the Brain, which argues that research involving implicit bias has increasingly been integrated with neuroscience and with attempts to locate racially coded responses in distinct brain regions through imaging technologies. That opens the way for a “pills for racism” approach divorced from the cultural, historical, and sociological factors that many consider the primary causes of racism.
Disease algorithms also came up. One seemingly objective algorithm that sought to identify the sickest patients who would need special attention to avoid hospitalization made the mistake of using health care costs as a proxy for sickness, assuming that if your care cost more you must be sicker. But Black people on average have lower health care costs than white people, so the software let healthier white patients into special programs ahead of sicker Black patients who needed them more. Millions of people were affected.
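The mechanism behind that failure can be illustrated with a minimal sketch. This is not the audited algorithm itself; it is a hypothetical simulation in which two groups of patients are equally sick, but one group generates lower billed costs for the same level of illness (a 30 percent gap is assumed purely for illustration). Ranking patients by cost rather than by sickness then shuts the sickest members of the lower-cost group out of the program.

```python
# Hypothetical illustration of cost-as-proxy bias. All numbers are
# invented; only the mechanism mirrors the case described above.

def make_patient(group, sickness, cost_per_unit):
    # cost_per_unit encodes unequal access to care: the same sickness
    # generates less billed care for the disadvantaged group.
    return {"group": group, "sickness": sickness,
            "cost": sickness * cost_per_unit}

def enroll_by_cost(patients, slots):
    """Select the `slots` patients with the highest recorded cost,
    as an algorithm using cost as a proxy for sickness would."""
    return sorted(patients, key=lambda p: p["cost"], reverse=True)[:slots]

# Identical sickness distributions (1..10) in both groups, but group B
# is assumed to generate only 70% of the cost per unit of sickness.
patients = ([make_patient("A", s, 1.0) for s in range(1, 11)] +
            [make_patient("B", s, 0.7) for s in range(1, 11)])

enrolled = enroll_by_cost(patients, 6)
print([(p["group"], p["sickness"]) for p in enrolled])
```

Selecting the six sickest patients directly would admit three from each group; selecting by cost admits four from group A and only two from group B, and the group-B patients squeezed out are sicker than some group-A patients who get in.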
Benjamin also pointed to a 2010 book entitled The Protest Psychosis: How Schizophrenia Became a Black Disease. It was written by a psychiatrist and cultural critic who, after analyzing decades of records from a Michigan mental institution, summarized his findings in interviews at the time. He argued that schizophrenia was once considered a disease of non-violent white petty criminals, but after the riots and civil rights protests of the 1960s was increasingly portrayed as a disease of hostile, aggressive, violent Black men. The social milieu in which they were diagnosed had clearly changed.
Steven Hyman, Ph.D., a distinguished service professor at Harvard and board chairman of the Dana Foundation, noted that the topic of algorithmic bias is starting to emerge as an extraordinary challenge in healthcare and medicine. In an email to this writer, he praised Benjamin for providing a sophisticated perspective on the challenges. An important point that she made, he said, is that most people tend to see resource allocation by algorithm as fair and objective, assuming that computers do not bring their own intentions into decision making and are free of emotion. But because the large databases used as training sets for machine learning are derived from our imperfect world with its many hidden, and often unintended biases, “machine learning may actually make such biases universal rather than removing them—and may make them hard to spot.”
What can we do to eliminate hidden racism from our technologies?
Benjamin lamented the “complicity” of science and of neuroscience in contributing to these distorted views, so there is a potentially important role for neuroethics to play. She also thinks that humanists and social scientists, not just technology wizards, should be involved in designing technologies.
Education could be a big part of the solution: teaching people inside and outside the tech world how to spot racism in society and in technologies, and how to root it out. Benjamin herself has founded an innovative Just Data Lab at Princeton, which brings together relevant people to rethink the data needed for justice. The lab is developing a “pandemic portal” to track the racial dimensions of the pandemic and, in collaboration with community organizations, to address racial inequities.
Lots of companies and organizations have issued statements supporting social justice and Black Lives Matter. But the perplexing question, “the million-dollar question,” she said, is how to force action and reach solutions.
Benjamin said that the nation has become racist by “itty-bitty steps” since the era of Jim Crow, and that racism won’t be eliminated by grand gestures. Instead, reformers need to “put a magnifying glass” on everyday patterns and practices that reinforce racism, including the way job offers are advertised and how racist attitudes are passed down within families. Grand statements won’t fix racism, she counseled. But we can begin to change the patterns of thought right now.
Read More: “Neuroscience Confronts Racism,” Philip Boffey’s column in the Fall issue of Cerebrum
Phil Boffey is former deputy editor of the New York Times Editorial Board and editorial page writer, primarily focusing on the impacts of science and health on society. He was also editor of Science Times and a member of two teams that won Pulitzer Prizes.
The views and opinions expressed are those of the author and do not imply endorsement by the Dana Foundation.