AI can mimic a human voice well enough that deepfakes can fool many people into thinking they’re hearing a person talk. Inevitably, AI voices have been exploited for automated phone calls. The US Federal Communications Commission (FCC) is trying to combat the more malicious versions of these attempts and has a proposal aimed at strengthening consumer protections against unwanted and illegal AI-generated robocalls.
The FCC’s plan would help define AI-generated calls as well as texts, allowing the commission to then set boundaries and rules, like mandating AI voices disclose that they are fake when calling.
Given AI’s utility in less-than-savory communications, it is unsurprising that the FCC is pursuing regulations for these calls. The proposal is also part of the FCC’s broader effort to combat robocalls, both as a nuisance and as a vehicle for fraud. Because AI makes these schemes harder to detect and avoid, the proposal would require the disclosure of AI-generated voices and words: a call would have to start with the AI explaining the artificial origins of both what it is saying and the voice used to say it. Any group failing to do so could face heavy fines.
The new plan builds on the Declaratory Ruling the FCC issued earlier this year, which held that using voice-cloning technology in robocalls is illegal without the consent of the person being called. That ruling was born of the deliberate confusion wrought by a deepfake voice clone of President Joe Biden, combined with caller ID spoofing, used to spread misleading information to New Hampshire voters ahead of the January 2024 primary election.
Calling on AI for Help
Beyond going after the sources of AI calls, the FCC said it also wants to roll out tools to alert people when they are receiving AI-generated robocalls and robotexts, particularly those that are unwanted or illegal. That might include better call filters that block such calls outright, AI-based detection algorithms, or enhanced caller ID that identifies and flags AI-generated calls. For consumers, the proposed regulations offer a welcome layer of protection against the increasingly sophisticated tactics used by scammers. By requiring transparency and improving detection tools, the FCC aims to reduce the risk of consumers falling victim to AI-generated scams.
Synthetic voices created with AI have been leveraged for many positive efforts, too. For instance, they can give people who have lost their voice the ability to speak again and open new communication options for people with visual impairments. The FCC acknowledged as much in its proposal, even as it cracks down on the negative impact the tools can have.
“Facing a rising tide of disinformation, roughly three-quarters of Americans say they are concerned about misleading AI-generated content. That is why the Federal Communications Commission has focused its work on AI by grounding it in a key principle of democracy – transparency,” said FCC Chairwoman Jessica Rosenworcel in a statement. “The concern about these technology developments is real. Rightfully so. But if we focus on transparency and taking swift action when we find fraud, I believe we can look beyond the risks of these technologies and harness the benefits.”