Seeing is no longer believing. Photos have been faked and manipulated for nearly as long as photography has existed.
Now images don’t even need reality to look authentic – just artificial intelligence responding to a prompt. Even experts sometimes have trouble telling whether an image is real. Can you?
The rapid advent of artificial intelligence has raised alarms that the technology used to trick people is advancing much faster than the technology that can identify the tricks. Tech companies, researchers, photo agencies and news organizations are scrambling to catch up, trying to establish standards for content provenance and ownership.
The advances are already fueling disinformation and deepening political divisions. Authoritarian governments have created seemingly realistic news broadcasts to advance their political goals. Last month, some people were taken in by images showing Pope Francis donning a baggy Balenciaga jacket and an earthquake devastating the Pacific Northwest, though neither event had happened. The images were created using Midjourney, a popular image generator.
On Tuesday, as former President Donald J. Trump turned himself in at the Manhattan district attorney’s office to face criminal charges, artificial intelligence-generated images surfaced on Reddit showing the actor Bill Murray as president in the White House. Another image, showing Mr. Trump marching in front of a large crowd with American flags in the background, was quickly reshared on Twitter without the disclosure that had accompanied the original post, which noted that it was not actually a photograph.
Experts fear the technology could accelerate an erosion of trust in the media, in government and in society. If any image can be fabricated – and manipulated – how can we believe anything we see?
“Tools are getting better, they’re getting cheaper, and there will come a day when you can’t believe anything you see on the internet,” said Wasim Khaled, chief executive of Blackbird.AI, a company that helps clients fight disinformation.
Artificial intelligence lets virtually anyone create complex works of art, like one now on exhibit at the Gagosian gallery in New York, or lifelike images that blur the line between reality and fiction. Type in a text description, and the technology can generate a corresponding image – no special skills required.
There are often telltale signs that a viral image was created by a computer rather than captured in real life: the luxuriously dressed pope, for example, had glasses that seemed to melt into his cheek and blurred fingers. AI art tools also often produce nonsensical text.
Rapid advances in the technology, however, are eliminating many of these flaws. The latest version of Midjourney, released last month, can render realistic hands, a feat that conspicuously eluded earlier image tools.
Days before Mr. Trump faced criminal charges in New York City, images of his “arrest” circulated on social media. They were created by Eliot Higgins, a British journalist and founder of Bellingcat, an open-source investigative organization. He used Midjourney to imagine the former president’s arrest, trial, imprisonment in an orange jumpsuit and escape through a sewer. He posted the images on Twitter, clearly labeling them as creations. They have since spread widely.
The images weren’t meant to fool anyone. Instead, Mr. Higgins wanted to draw attention to the tool’s power – even in its infancy.
Midjourney’s images, he said, were able to pass screening in facial recognition programs, which Bellingcat uses to verify identities, typically of Russians who have committed crimes or other abuses. It is not difficult to imagine governments or other nefarious actors fabricating images to harass or discredit their enemies.
At the same time, Mr. Higgins said, the tool struggles to create convincing images of people who aren’t photographed as often as Mr. Trump, like Britain’s new prime minister, Rishi Sunak, or the comedian Harry Hill, who is probably not well known outside the U.K.
Midjourney was evidently not amused. Mr. Higgins’s account was suspended without explanation after the images went viral. The company did not respond to requests for comment.
The flaws in generative images make them relatively easy to spot for news organizations and others attuned to the risk – at least for now.
Still, photo agencies, government regulators and a music industry trade group have moved to protect their content from unauthorized use, but the technology’s powerful ability to mimic and adapt makes those efforts difficult.
Some AI image generators have even reproduced images – a queasy “Twin Peaks” homage; Will Smith eating a fistful of pasta – with distorted versions of the watermarks used by companies like Getty Images or Shutterstock.
In February, Getty accused Stability AI of illegally copying more than 12 million Getty photos, along with captions and metadata, to train the software behind its Stable Diffusion tool. In its lawsuit, Getty argued that Stable Diffusion diluted the value of the Getty watermark by incorporating it into images that “ranged from the bizarre to the grotesque.”
Getty said the “brazen theft and freeriding” was carried out “on an astounding scale.” Stability AI did not respond to a request for comment.
Getty’s lawsuit reflects concerns raised by many individual artists — that AI companies are becoming a competitive threat by copying content they do not have permission to use.
Trademark infringement has also become a problem: artificially generated images have recreated NBC’s peacock logo, albeit with garbled letters, and featured Coca-Cola’s familiar curvy logo with added O’s in the name.
In February, the U.S. Copyright Office weighed in on artificially generated images when evaluating the case of “Zarya of the Dawn,” an 18-page comic book written by Kristina Kashtanova with art created by Midjourney. The office decided that the comic’s text deserved copyright protection, but its art did not.
“Because of the significant distance between what a user may direct Midjourney to create and the visual material Midjourney actually produces, Midjourney users lack sufficient control over the generated images to be treated as the ‘mastermind’ behind them,” the office said in its decision.
The threat to photographers is quickly outpacing the development of legal protections, said Mickey H. Osterreicher, general counsel of the National Press Photographers Association. Newsrooms will increasingly struggle to authenticate content. Social media users will ignore labels that clearly mark images as artificially generated, choosing to believe they are real photographs, he said.
Generative AI could also make fake videos easier to produce. A video appeared online this week that seemed to show Nina Schick, a writer and expert on generative AI, explaining how the technology was “creating a world where shadows are mistaken for the real thing.” Ms. Schick’s face then flickered as the camera pulled back to reveal a body double in her place.
The video explained that the deepfake had been created, with Ms. Schick’s consent, by the Dutch company Revel.ai and Truepic, a California company exploring broader digital content verification.
The companies described their video as the “first digitally transparent deepfake,” stamped as computer-generated. The data is cryptographically sealed into the file; tampering with the image breaks the digital signature and prevents the credentials from appearing in trusted software.
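The mechanism described here – a credential that becomes invalid the moment the file is altered – can be sketched in a few lines. This is a simplified stand-in, not Truepic’s implementation: it uses an HMAC with a made-up shared key rather than the public-key signatures a real content credential would carry, and the `seal` and `verify` names are hypothetical.

```python
import hashlib
import hmac

# Hypothetical key held by the sealing service. A real content credential
# uses public-key signatures; an HMAC is a simplified stand-in here.
SIGNING_KEY = b"demo-provenance-key"

def seal(image_bytes: bytes) -> bytes:
    """Compute a tamper-evident seal over the exact image contents."""
    return hmac.new(SIGNING_KEY, image_bytes, hashlib.sha256).digest()

def verify(image_bytes: bytes, credential: bytes) -> bool:
    """Return True only if the image is byte-for-byte unmodified."""
    expected = hmac.new(SIGNING_KEY, image_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, credential)

original = b"\x89PNG...image bytes..."
credential = seal(original)
assert verify(original, credential)             # untouched image verifies
assert not verify(original + b" ", credential)  # any edit breaks the seal
```

Because the seal is computed over every byte of the file, even a single-pixel edit produces a mismatch, which is why trusted software can refuse to display the credential after tampering.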
The companies hope the badge, which will carry a fee for commercial clients, will be adopted by other content creators to help establish a standard of trust for AI images.
“The magnitude of this problem will accelerate so rapidly that it will drive consumer education very quickly,” said Jeff McGregor, Truepic’s chief executive.
Truepic is part of the Coalition for Content Provenance and Authenticity, a project created through an alliance of companies including Adobe, Intel and Microsoft to better trace the origins of digital media. Last month, the chipmaker Nvidia announced that it was working with Getty to help train “responsible” AI models using Getty’s licensed content, with royalties paid to artists.
On the same day, Adobe introduced its own image-generation product, Firefly, which is trained only on licensed, proprietary or out-of-copyright images. Dana Rao, the company’s chief trust officer, said on its website that the tool would automatically add content credentials – “like a nutrition label for imaging” – that identify how an image was made. Adobe said it also planned to compensate contributors.
Last month, model Chrissy Teigen wrote on Twitter that she had been fooled by the Pope’s baggy jacket, adding that “there is no way I will survive the future of technology”.
Last week, a new set of AI images showed the Pope, back in his usual robes, enjoying a tall pint of beer. The hands looked mostly normal – save for the wedding band on the Pope’s ring finger.
Additional production by Jeanne Noonan DelMundo, Aaron Krolik and Michael Andre.