[Image: An apple with "Toaster" written on it]

Fight back against the AI with pen and paper

Jason Norwood-Young
2021-03-05

OpenAI’s new image-recognising neural network, CLIP, blows previous deep learning algorithms out of the water, especially when it comes to conceptual representations – recognising a drawing of a banana as well as an actual banana, for instance. It works similarly to a human brain, the researchers believe, firing “neurons” that respond to concepts rather than to very specific images. But it’s so good at picking up abstract representations that its creators have discovered its Achilles’ heel – simple text.

“Like the Adversarial Patch, this attack works in the wild; but unlike such attacks, it requires no more technology than pen and paper,” write the researchers. Imagine sticking a “Go” sign on a traffic light to fool the most advanced self-driving cars.
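
If you want to try the pen-and-paper attack yourself, here’s a rough sketch using OpenAI’s open-source clip Python package (github.com/openai/CLIP), which wraps the smaller publicly available CLIP models. The image filenames are hypothetical stand-ins – the idea is to photograph the same apple with and without a handwritten “toaster” note stuck to it:

# A minimal sketch of a typographic attack on CLIP, assuming the open-source
# clip package: pip install git+https://github.com/openai/CLIP.git
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

labels = ["an apple", "a toaster"]
text = clip.tokenize(labels).to(device)

# Hypothetical files: the same apple, plain and with "toaster" written on it
for path in ["apple.jpg", "apple_with_toaster_note.jpg"]:
    image = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        # CLIP scores each (image, label) pair by embedding similarity
        logits_per_image, _ = model(image, text)
        probs = logits_per_image.softmax(dim=-1).cpu().numpy()[0]
    print(path, dict(zip(labels, probs.round(3))))

# A handwritten word is often all it takes to flip the top prediction
# from "an apple" to "a toaster".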

This isn’t CLIP’s biggest problem, though. “We have observed, for example, a ‘Middle East’ neuron with an association with terrorism; and an ‘immigration’ neuron that responds to Latin America. We have even found a neuron that fires for both dark-skinned people and gorillas, mirroring earlier photo tagging incidents in other models we consider unacceptable.” For this reason, the researchers are holding back the release of the AI that’s so much like humans that it’s racist. (We’ll take that as a win for machine learning ethics.)

Jason Norwood-Young – journalist, developer, community builder, newsletter creator and international man of mystery.
