All the data you can eat
Timnit Gebru

The fight for diversity in AI escalates with the axing of Timnit Gebru

Jason Norwood-Young
2020-12-10

It’s been quite a while since Google quietly sidelined its “Don’t Be Evil” slogan, but the axing of prominent AI ethics researcher Timnit Gebru over a paper highlighting the dangers of very large language models in machine learning really brought the point home: Google is now the Bad Guy™. (Microsoft, you can finally breathe a sigh of relief. Amazon, you’re not off the hook. Let’s not mention Facebook.)

A quick update in case you missed it: Gebru is a renowned AI ethicist who until last week worked for Google. A brouhaha erupted when the company refused to be associated with a paper she was co-authoring and told her to remove her name. She refused, and said she would resign at some point in the future if she couldn’t publish. The company replied that it accepted her resignation, effective immediately, and shut her out of its systems. (She was on holiday at the time. One of those “Don’t bother coming back in” situations.)

While the paper itself hasn’t been released, you can read a synopsis on MIT Technology Review. Essentially it deals with large-scale AI language models such as GPT-3 – natural language processing systems trained on very large datasets – and how this increasingly popular practice impacts marginalized communities.

Google, whose bread-and-butter is big NLP, evidently felt a bit attacked by this – but that’s the point of having someone like Gebru on staff: someone needs to be asking the hard questions.

The optics have been pretty terrible for Google – firing a black woman from your AI team because you don’t want her to point out the discrimination inherent in your AI work. Google CEO Sundar Pichai has made half an apology, with at least a promise to investigate how things got so bad so fast.

Meanwhile, a solidarity movement has grown around the incident, led by an open letter with over 5,000 signatures at the time of publication.

If this feels like a storm in a teacup, then you haven’t been paying enough attention to marginalization by sex and race over the last hundred years. While individuals like Joy Buolamwini of the Algorithmic Justice League have been ringing the warning bell for some time, the Gebru incident feels like an inflection point: this is a problem we need to get ahead of, because pretty soon AI is going to be so intrinsically embedded in our devices, systems and lives that fixing the problem post-deploy is going to be impossible.

Jason Norwood-Young
  • Journalist, developer, community builder, newsletter creator and international man of mystery.
