This week, a Google employee spoke to WIRED alleging discrimination she faced as a Deaf Black woman working in the company's Responsible AI division. In an explosive report, WIRED's Paresh Dave details how Google limited Jalon Hall's access to an American Sign Language interpreter and curtailed her ability to do her job. This is one of many documented cases in Google's long history of sidelining marginalized voices within its workforce—and it isn't the first time WIRED has dug into such accusations. For this week's Classics, we're bringing back another story of discrimination at the center of Google's responsible AI efforts. In "What Really Happened When Google Ousted Timnit Gebru," published in July 2021, WIRED senior editor Tom Simonite spoke to the computer scientist about her experience after she was hired to help spearhead the company's ethical AI research.
Simonite's cover feature details Gebru's influential work in machine learning and the culture she endured on Google's Ethical AI team. During her tenure, Gebru coauthored a paper warning of potential biases in large language models, the same class of models that would later power ChatGPT and Google's Gemini. Google responded by censoring Gebru's work and forcing her out of the company. Simonite's reporting depicts a corporation that disrespects and undervalues outspoken women employees. It's also a fascinating look at how Google treated Gebru's research team as an extraneous, disconnected voice: one that lent the company a sort of ethical halo and let it tell the public it cared about their concerns without actually having to change the way it did business.
Reading about Gebru's experience three years later in the midst of the generative AI boom adds even more layers to the story. "The most striking thing as I look back at this story is how prescient Gebru and her coauthors were," Simonite says today. "The paper was essentially a survey of reasons to be careful with a then little-known technology that is now the talk of every tech company. Many of the things Gebru and her coauthors warned us about are now society-wide problems spurring urgent tech policy debates around the world, such as biased AI outputs, murkily acquired training data, and even the risk of psychological harms from persuasive chatbots."
In light of this, I wonder whether we should be more concerned about how corporate researchers and investigators continue to shoulder the burden of protecting consumers and setting ethics standards in the field of AI. If companies like Google can't listen to the concerns raised by the few diverse voices within their workforces, or appropriately accommodate their disabled employees, how can they build equitable platforms? What are your thoughts? In addition to harming individual employees, can discrimination issues like the ones Gebru and Hall faced affect the ethics and equitability of the products tech companies release into the world? Let me know in the comments below the story or email me at samantha_spengler@wired.com.
See you next weekend.