RE:WIRED 2021: Timnit Gebru Says Artificial Intelligence Needs to Slow Down
Artificial intelligence researchers are facing a problem of accountability: How do you try to ensure decisions are responsible when the decision maker is not a responsible person, but rather an algorithm? Right now, only a handful of people and organizations have the power, and the resources, to automate decision-making.
Organizations rely on AI to approve a loan or shape a defendant's sentence. But the foundations upon which these intelligent systems are built are susceptible to bias. Bias from the data, from the programmer, and from a powerful company's bottom line can snowball into unintended consequences. This is the reality AI researcher Timnit Gebru cautioned against at a RE:WIRED talk on Tuesday.
"There were companies purporting [to assess] someone's likelihood of determining a crime again," Gebru said. "That was terrifying for me."
Gebru was a star engineer at Google who specialized in AI ethics. She co-led a team tasked with standing guard against algorithmic racism, sexism, and other bias. Gebru also cofounded the nonprofit Black in AI, which seeks to improve inclusion, visibility, and health of Black people in her field.
Last year, Google forced her out. But she hasn't given up her fight to prevent unintended damage from machine learning algorithms.
Tuesday, Gebru spoke with WIRED senior writer Tom Simonite about incentives in AI research, the role of worker protections, and the vision for her planned independent institute for AI ethics and accountability. Her central point: AI needs to slow down.
"We haven't had the time to think about how it should even be built because we're always just putting out fires," she said.
As an Ethiopian refugee attending public school in the Boston suburbs, Gebru was quick to pick up on America's racial dissonance. Lectures referred to racism in the past tense, but that didn't jibe with what she saw, Gebru told Simonite earlier this year. She has found a similar misalignment repeatedly in her tech career.
Gebru's professional career began in hardware. But she changed course when she saw barriers to diversity and began to suspect that most AI research had the potential to bring harm to already marginalized groups.
"The confluence of that got me going in a different direction, which is to try to understand and try to limit the negative societal impacts of AI," she said.
For two years, Gebru co-led Google's Ethical AI team with computer scientist Margaret Mitchell. The team created tools to protect against AI mishaps for Google's product teams. Over time, though, Gebru and Mitchell realized they were being left out of meetings and email threads.
In June 2020, the GPT-3 language model was released and displayed an ability to sometimes craft coherent prose. But Gebru's team worried about the excitement around it.
"Let's build larger and larger and larger language models," said Gebru, recalling the popular sentiment. "We had to be like, 'Let's please just stop and calm down for a second so that we can think about the pros and cons and maybe alternative ways of doing this.'"
Her team helped write a paper about the ethical implications of language models, called "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?"
Others at Google were not happy. Gebru was asked to retract the paper or remove Google employees' names. She countered with an ask for transparency: Who had requested such harsh action, and why? Neither side budged. Gebru found out from one of her direct reports that she "had resigned."
The experience at Google reinforced in her a belief that oversight of AI's ethics should not be left to a corporation or government.
"The incentive structure is not such that you slow down, first of all, think about how you should approach research, how you should approach AI, when it should be built, when it should not be built," said Gebru. "I want us to be able to do AI research in a way that we think it should be done, prioritizing the voices that we think are actually being harmed."
Since leaving Google, Gebru has been developing an independent research institute to show a new model for responsible and ethical AI research. The institute aims to answer similar questions as her Ethical AI team, without the fraught incentives of private, federal, or academic research, and without ties to corporations or the Department of Defense.
"Our goal is not to make Google more money; it's not to help the Defense Department figure out how to kill more people more efficiently," she said.
At Tuesday's session, Gebru said the institute will be unveiled on December 2, the anniversary of her ousting from Google. "Maybe I'll just start celebrating this every year," she joked.
Slowing the pace of AI might cost companies money, she said. "Either put more resources to prioritize safety or [don't] deploy things," she added. "And unless there is regulation that prioritizes that, it's going to be very difficult to have all these companies, out of their own goodwill, self-regulate."
Still, Gebru finds room for optimism. "The conversation has really shifted, and some of the people in the Biden administration working on this stuff are the right people," she said. "I have to be hopeful. I don't think we have other options."
Watch the RE:WIRED conference on WIRED.com.