Deepfakes are Here


PSA: I generated the image for this post with AI and one prompt. It took 30 seconds.

It's fake. But it's also an illustration of just how easy it is to create something real-looking. It's not the first Biden fake we've encountered, and it won't be the last.

We knew it was coming, but we're seeing the first real signs of deepfakes attempting to influence the U.S. presidential election.

Bad actors used an AI clone of President Biden's voice to manipulate voters in New Hampshire, urging residents not to vote in the state's primary.

These dodgy voice-cloning scams have been everywhere over the past six months. The scary part is that bad guys only need a 3-second clip of your voice from your TikTok vlog or Instagram Live, and they can produce a very realistic clone of your voice. They can even add realistic emotions, like laughter or fear, to your voice. Yikes.

🚨 Get your Grandma a code word, pronto.

As an AI enthusiast and advocate for ethical adoption, I find myself grappling with a mix of fascination and concern, asking myself (yet again): how do we balance innovation with responsibility? How do we ensure that Generative AI serves humanity rather than slowly leading us into chaos?

Yes, AI holds immense potential for good, but misuse poses a real risk for democracy and trust.

I don't have answers.

What I do know is that we need robust ethical frameworks, transparent AI development, and an informed, AI-literate public that can discern AI-generated content. 

I don't think anyone has GOOD answers at this point. President Biden's 2023 Executive Order mandated watermarking for AI-generated content, but nobody really knows what that should mean in practice or how to do it.
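To see why watermarking is harder than it sounds, consider the simplest possible scheme: hiding a signature in the least significant bits of an image's pixels. This is a toy sketch of my own (the function names are mine, not from any standard or the Executive Order), and it shows the core weakness — one round of lossy re-encoding wipes the mark out.

```python
# Toy illustration of naive LSB watermarking and why it is fragile.
# This is a simplified sketch, not how any production system works.

def embed_watermark(pixels, bits):
    # Overwrite each pixel's least significant bit with a signature bit.
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def read_watermark(pixels, n):
    # Recover the first n signature bits from the LSBs.
    return [p & 1 for p in pixels[:n]]

def lossy_reencode(pixels):
    # Crude stand-in for JPEG-style re-compression: quantizing to
    # multiples of 4 discards the low bits entirely.
    return [(p // 4) * 4 for p in pixels]

pixels = [120, 133, 97, 250, 14, 181]
signature = [1, 0, 1, 1, 0, 1]

marked = embed_watermark(pixels, signature)
assert read_watermark(marked, 6) == signature      # survives a clean copy

stripped = lossy_reencode(marked)
assert read_watermark(stripped, 6) != signature    # one re-encode destroys it
```

Robust schemes (statistical biases in generated tokens, provenance metadata like C2PA) are active research precisely because trivial ones fail this way.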

I think all we can do right now is help develop robust ethical frameworks within our own spheres, loudly demand transparent AI development, and work to educate ourselves and those around us to adopt AI ethically.
