The use of AI in elections triggers a battle over guard rails

In Toronto, a candidate in this week’s mayoral election who vows to clear homeless encampments released a set of campaign promises illustrated by artificial intelligence, including fake dystopian images of people camped out on a downtown street and a fabricated image of tents set up in a park.

In New Zealand, a political party posted a realistic-looking rendering on Instagram of fake robbers rampaging through a jewelry store.

In Chicago, the April mayoral runner-up complained that a Twitter account posing as a news outlet had used AI to clone his voice in a way that suggested he condoned police brutality.

What started a few months ago as a slow drip of fund-raising emails and promotional images composed by AI for political campaigns has turned into a steady stream of campaign materials created by the technology, rewriting the political playbook for democratic elections around the world.

Increasingly, political consultants, election researchers and lawmakers say that setting up new guardrails, such as legislation reining in synthetically generated ads, should be an urgent priority. Existing defenses, such as social media rules and services that claim to detect AI content, have done little to slow the tide.

As the 2024 US presidential race heats up, some campaigns are already testing the technology. The Republican National Committee released a video with artificially generated images of doomsday scenarios after President Biden announced his re-election bid, while Gov. Ron DeSantis of Florida posted fake images of former President Donald J. Trump with Dr. Anthony Fauci, the former health official. The Democratic Party experimented with fund-raising messages drafted by artificial intelligence in the spring, and found that they were often more effective at encouraging engagement and donations than copy written entirely by humans.

Some politicians see artificial intelligence as a way to cut campaign costs, using it to generate instant responses to debate questions, produce attack ads, or analyze data that might otherwise require expensive experts.

At the same time, the technology has the potential to spread disinformation to a wide audience. An unflattering fake video, an email blast full of computer-generated false narratives, or a fabricated image of urban decay can reinforce prejudices and widen partisan divides by showing voters what they expect to see, experts say.

The technology is already far more powerful than manual manipulation; it is not perfect, but it is evolving rapidly and easy to learn. In May, Sam Altman, the chief executive of OpenAI, whose company sparked an artificial intelligence boom last year with its popular ChatGPT chatbot, told a Senate subcommittee that he was nervous about election season.

He said the technology’s ability to “manipulate, persuade and provide a sort of interactive one-on-one disinformation” is “a significant area of concern.”

Rep. Yvette D. Clarke, a Democrat from New York, said in a statement last month that the 2024 election cycle “will be the first election where AI-generated content is prevalent.” She and other Democrats in Congress, including Senator Amy Klobuchar of Minnesota, have introduced legislation that would require political ads using artificially generated material to include a disclaimer. A similar bill was recently signed into law in Washington State.

The American Association of Political Consultants recently condemned the use of deepfake content in political campaigns as a violation of its code of ethics.

“People will be tempted to push the envelope and see where they can go,” said Larry Huynh, the group’s new president. “As with any tool, using it to lie to voters, mislead them, or make them believe in something that doesn’t exist can result in bad ends and bad actions.”

The technology’s recent incursion into politics came as a surprise in Toronto, a city that supports a thriving ecosystem of artificial intelligence research and start-ups. The mayoral election takes place on Monday.

Anthony Furey, a conservative candidate in the race and a former news columnist, recently laid out his platform in a document that was dozens of pages long and filled with synthetically generated content that reinforced his tough stance on crime.

Upon closer inspection, it was clear that many of the images were not real: one laboratory scene featured scientists who looked like alien blobs. In another rendering, a woman wore a pin with illegible writing on her cardigan; similar markings appeared in an image of caution tape at a construction site. Mr. Furey’s campaign also used a synthetic portrait of a seated woman with two arms crossed and a third arm touching her chin.

The other candidates mined that image for laughs in a debate this month: “We actually use real images,” said Josh Matlow, who shared a photo of his family, adding that “no one in our pictures has three arms.”

Nonetheless, the sloppy renderings were used to amplify Mr. Furey’s argument. He gained enough momentum to become one of the most recognizable names in an election with more than 100 candidates. In the same debate, he acknowledged using the technology in his campaign, adding, “We’ll have a few laughs here as we learn more about AI.”

Policy experts fear that artificial intelligence, if misused, could have a corrosive effect on the democratic process. Misinformation is a constant risk; one of Mr. Furey’s rivals said in a debate that while her staff used ChatGPT, they always fact-checked the results.

“If someone can make noise, create uncertainty, or create false narratives, it could be a powerful way to influence voters and win the race,” Darrell M. West, a senior fellow at the Brookings Institution, wrote in a report last month. “With the 2024 presidential election potentially coming down to tens of thousands of voters in a few states, anything that can move people in one direction or another could end up being crucial.”

Increasingly sophisticated AI content is appearing with growing frequency on social networks, most of which have been unwilling or unable to police it, said Ben Colman, the chief executive of Reality Defender, a company that offers AI-detection services. The weak oversight allows unlabeled synthetic content to do “irreversible damage” before it is addressed, he said.

“Explaining to millions of users that content they already saw and shared was fake, well after the fact, is too little, too late,” Mr. Colman said.

For several days this month, a Twitch livestream has run a nonstop, not-safe-for-work debate between synthetic versions of Mr. Biden and Mr. Trump. Both were clearly identified as simulated “AI entities,” but if an organized political campaign created such content and spread it widely without disclosure, it could easily degrade the value of real material, disinformation experts said.

Politicians may shrug off accountability by claiming that authentic footage of compromising acts is fake, a phenomenon known as the “liar’s dividend.” Ordinary citizens could make their own fakes, while others could entrench themselves more deeply in polarized information bubbles, believing only the sources they choose to believe.

“If people can’t trust their eyes and ears, they may just say, ‘Who knows?’” wrote Josh A. Goldstein, a research associate at Georgetown University’s Center for Security and Emerging Technology, in an email. “This could encourage a move from the healthy skepticism that fosters good habits (like lateral reading and searching for reliable sources) to an unhealthy skepticism that it is impossible to know what is true.”
