The Rising Threat of AI-Generated Audio: What Schools Need to Know

As PR and communications professionals in education, we know trust is crucial. Recently published research highlights an emerging technology that could erode that trust: AI-generated fake audio, known as “deepfakes.”

In experiments, over 500 listeners tried to identify fake audio clips of someone speaking. Alarmingly, they were only about 70% accurate on average, and showing them examples of fakes beforehand barely helped.

This demonstrates how challenging it is for humans to reliably detect audio deepfakes. As the technology progresses, the fakes will likely get harder to catch.

The implications for schools are serious. Fake audio could be used for hoaxes aimed at students, teachers, or administrators. As deepfakes enable new forms of misinformation, we must be proactive as communicators.

Familiarize yourself with the deepfake threat. Follow updates from organizations like the Partnership on AI. Verify information through multiple channels before acting or sharing. If an audio clip sounds questionable, consider getting technical assistance to analyze it.

We also need better detection tools and coordinated responses. Grassroots efforts have made progress in spotting visual deepfakes, but audio has received far less attention. Now is the time to get ahead of this challenge.

As PR professionals, we know trust is hard-earned and easily lost. By raising awareness in our school communities and pressing for solutions, we can work to ensure voices are not misused to undermine trust and learning.
