Meet the college student trying to protect you from an "A.I. Chernobyl"
Artificial intelligence is bringing us cute puppy videos — and deepfakes, disinformation, and a potentially existential threat to democracy. Sneha Revanur has a vision to fix it before it's too late
Artificial intelligence changes everything. Whether for better or worse is up for debate.
Until what feels like five minutes ago, A.I. was mostly the stuff of science fiction. Then, quite suddenly, it was a real thing — very real, sometimes too real.
Though its creators tend to speak of it in the breathless terms with which their predecessors once hawked the internet and social media and phones (while also warning of potentially grave risks), A.I. is already proving to be more than just a fun way to generate recipes or boost your productivity at work.
Real threats abound. A voice-cloning system has already been used in a robocall campaign impersonating President Biden this election season; A.I.-powered misinformation may have affected elections in Slovakia; even Taylor Swift has been victimized by pornographic deepfakes. And even when things function as intended, facial recognition systems and chatbots repeat our worst biases, while workers worry about just how quickly large language models might replace them.
Humanity may not need to worry — yet — about being turned into grey goo. But it’s definitely time to get a handle on A.I. development to make sure it serves people — not just tech CEOs.
That’s what Sneha Revanur is trying to do with her organization, Encode Justice, which mobilizes young people worldwide (like Revanur, most members are college students, part of the generation that will have to deal with A.I.’s short-term impacts and longer-term societal disruption) to advocate for a safer, more democratic “human-centered A.I.” managed to serve the public interest. Encode Justice has already had an impact on the Biden administration’s executive order on A.I., and the group is at work on more far-reaching goals, with an eye on influencing international cooperation on A.I. regulation.
With ever more sophisticated models — like Sora, OpenAI’s new photorealistic text-to-video tool — coming online just two months into the biggest election year in human history, we reached out to Revanur to talk about A.I.’s imminent challenges and looming existential threats, the current state of regulation, and what her plan to safeguard humanity’s future looks like.
“I don't want us to have to wait for an A.I. Chernobyl to start taking this seriously on a political level,” Revanur told us. You won’t want to miss this conversation below.
Are the threats we face from artificial intelligence — at least at this point — really political problems rather than newfangled technological problems that we have no way at all to solve?
In many ways, the problems we have with A.I. have to do with it amplifying the worst of human nature and human society and human institutions. If we look at its democracy and disinformation impacts, it's important to recognize that these technologies are being weaponized by political actors who already have bad intentions, who are already agents of disinformation. This is simply another tool in their arsenal to supercharge that and to spread their lies and further undermine the political process.
At the same time, though, there are new threats that were not within the realm of human capabilities before. For example, we've seen some reports that indicate A.I.s are, potentially, capable of generating instructions for creating biological weapons; that GPT-4 could be jailbroken to generate bomb-making instructions; and that A.I. used in drug discovery could be repurposed pretty easily to generate tens of thousands of lethal chemical warfare agents.
These may also be a matter of A.I. amplifying existing human problems, but I think in a way that could have a destabilizing effect on global politics, on the world order at large, as those effects start to trickle into societies around the world. So there’s a lot to be worried about.
Are there technological solutions to those problems? Or are the solutions political and human?