People who don’t exist look more real than real people, a study finds

Even if you think you’re good at analyzing faces, research shows that many people cannot reliably distinguish between photos of real faces and computer-generated images.

This is particularly problematic because computer systems can create realistic-looking photos of people who don’t exist.

Recently, for example, a fake LinkedIn profile with a computer-generated profile picture made headlines because it successfully connected with US officials and other influential people on the networking platform. Counterintelligence experts even say that spies routinely create phantom profiles with such images to locate foreign targets through social media.

These deepfakes are becoming more pervasive in everyday culture, which means people should be more aware of how they are used in marketing, advertising, and social media. The images are also used for malicious purposes, such as political propaganda, espionage, and information warfare.

These realistic faces are all computer generated. (NVIDIA/thispersondoesnotexist.com)

Their creation involves what is known as a deep neural network, a computer system that mimics the way the brain learns. It is “trained” by being exposed to ever-larger data sets of real faces.

In fact, two deep neural networks compete against each other to produce the most realistic images. As a result, the end products are referred to as GAN images, where GAN stands for Generative Adversarial Networks. The process produces novel images that are statistically indistinguishable from the training images.
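For readers curious about the mechanics, a minimal sketch of this adversarial loop is shown below in PyTorch. Everything in it is an illustrative assumption (the tiny fully connected networks, the flattened 64×64 images, the names G and D); real face generators are vastly larger and train on millions of photos.

```python
# Minimal GAN training loop (illustrative only; real face generators
# such as NVIDIA's StyleGAN are far larger and more sophisticated).
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM = 64 * 64, 100  # flattened 64x64 grayscale images

# The generator maps random noise to a fake image.
G = nn.Sequential(nn.Linear(NOISE_DIM, 256), nn.ReLU(),
                  nn.Linear(256, IMG_DIM), nn.Tanh())

# The discriminator scores how "real" an image looks.
D = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))  # raw logit: higher = more real

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_images):
    batch = real_images.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1) Train the discriminator to tell real images from generated ones.
    fakes = G(torch.randn(batch, NOISE_DIM)).detach()  # no grads into G here
    d_loss = loss_fn(D(real_images), ones) + loss_fn(D(fakes), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fakes = G(torch.randn(batch, NOISE_DIM))
    g_loss = loss_fn(D(fakes), ones)  # G wants D to say "real"
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Example: one step on a batch of stand-in "real" images in [-1, 1).
d_loss, g_loss = train_step(torch.rand(32, IMG_DIM) * 2 - 1)
```

The two losses push against each other: as the discriminator gets better at spotting fakes, the generator is forced to produce images that are statistically ever closer to the training faces, which is exactly the property that makes GAN faces so hard to spot.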

In our study, published in iScience, we showed that the inability to distinguish these artificial faces from real ones has implications for our online behavior. Our research suggests that fake images can erode our trust in others and fundamentally change the way we communicate online.

My colleagues and I found that people judged GAN faces to be even more real-looking than genuine photographs of real faces. While it is not yet clear why this is, the finding does highlight recent advances in the technology used to create artificial images.

We also found an interesting association with attractiveness: faces rated as less attractive were also rated as more real.

Less attractive faces might be considered more typical, and the typical face can be used as a reference against which all other faces are evaluated. These GAN faces may therefore look more real because they are closer to the mental templates that people have built up from everyday life.

But perceiving these artificial faces as authentic can also have consequences for the general trust we extend to a circle of unfamiliar people, a concept known as “social trust”.

We often read too much into the faces we see, and the first impressions we form guide our social interactions. In a second experiment that formed part of our latest study, we found that people are more likely to trust information conveyed by faces they had previously judged to be real, even if those faces were artificially generated.

Unsurprisingly, people place more trust in faces they believe to be real. But we found that once people were made aware that artificial faces may be present in online interactions, trust was eroded: they then showed a lower level of trust overall, regardless of whether the faces were real or not.

This result could be considered useful in one sense, since it made people more suspicious in an environment where fake users may operate. From another perspective, however, it may gradually undermine the way we communicate.

In general, we tend to operate on a default assumption that other people are basically honest and trustworthy. The rise of fake profiles and other artificial online content raises the question of how much their presence, and our knowledge of them, can alter this “truth default” state and eventually erode social trust.

Changing our default settings

The transition to a world in which what is real becomes indistinguishable from what is not could also shift the cultural landscape from being primarily truthful to being primarily artificial and deceptive.

Regularly questioning the veracity of what we experience online may require shifting our mental effort from processing the messages themselves to processing the messenger’s identity. In other words, the widespread use of very realistic but artificial online content could force us to think differently – in ways we didn’t expect.

In psychology, we use the term “reality monitoring” for how we correctly distinguish whether something comes from the outside world or from within our own minds. The advance of technologies that can generate fake but highly realistic faces, images, and video calls means that reality monitoring must be based on information other than our own judgment.

It also prompts a broader debate about whether humanity can still afford to default to the truth.

It is crucial that people become more critical when evaluating digital faces. This can include using reverse image searches to check whether photos are genuine, being wary of social media profiles with little personal information or large numbers of followers, and being aware of the potential for deepfake technology to be used for nefarious purposes.

The next frontier for this area should be improved algorithms for detecting fake digital faces. These could then be embedded in social media platforms to help us tell the real from the fake when it comes to the faces of new connections.
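To make that idea concrete, here is one hedged illustration of what such a detector might look like: an ordinary binary image classifier fine-tuned to separate real photos from GAN output. The folder layout, the ResNet-18 backbone, and the training details are all assumptions made for the sake of the sketch, not a description of any system actually deployed by a platform.

```python
# Illustrative fake-face detector: a standard binary classifier
# fine-tuned to label images as real or GAN-generated.
# Assumes a folder layout like data/fake/*.jpg and data/real/*.jpg.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    # Standard ImageNet normalization, matching the pretrained backbone.
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
data = datasets.ImageFolder("data", transform=tfm)  # labels from folder names
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

# Start from a pretrained backbone and swap in a two-class head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass over the labelled data
    loss = loss_fn(model(images), labels)
    opt.zero_grad(); loss.backward(); opt.step()
```

In practice, detectors like this chase a moving target: each new generation of GANs is trained, in part, to defeat the best current discriminators, so any platform-level tool would need continual retraining.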

Manos Tsakiris, Professor of Psychology and Director of the Centre for the Politics of Feelings, Royal Holloway, University of London

This article is republished from The Conversation under a Creative Commons license. Read the original article.
