News

“It’s a nightmare of disinformation”

28 Jun 2021

Modern technology is revolutionizing the ways in which pictures and videos can be faked. The consequences can be serious, warns communication scholar Viorela Dan.

Deepfake videos:

“The use of AI is what’s new. This allows creators to mimic a person’s voice, words and facial expressions,” says Dan. | © www.interfoto.at

Three years ago, American actor Jordan Peele caused a stir with a video. In it, former U.S. President Barack Obama is seen calling his successor Donald Trump a “total and complete dipshit.” Would you have recognized that this was a fake?

Viorela Dan: Not really from looking at it, as the video looks deceptively authentic. But I would quickly have become suspicious of the words, especially since they are spoken directly into the camera.

Deepfakes is what we call such fake videos. What is new about them?

Dan: The use of artificial intelligence (AI) is what’s new. This allows creators to mimic a person’s voice, words and facial expressions quite convincingly. That’s why deepfakes are a dream come true for people who want to spread disinformation. And a nightmare for those fighting against it.

How do you make a deepfake that will mislead people?

Dan: If you wanted to create a scandal, you would probably have to make a video that looked as if it was recorded with a hidden camera. Not like the Obama deepfake, where the ex-president looks directly into the camera. And the statements would have to be credible. How far would a politician really go when they otherwise weigh every word they say? We would be faced with a major problem if plausible words were put into the mouths of synthesized versions of politicians.

Such falsifications could also be made in other ways, such as with faked audio recordings. Why are videos so effective?

Dan: Because we tend not to question the authenticity of videos. We believe what we see. We may now understand that videos are sometimes staged. But we don’t fundamentally doubt the authenticity of the people in them. Think of the importance we attach to CCTV footage. No one would think of saying, “That person we’re seeing is not real.”



Jordan Peele is faking Barack Obama.


A small flag makes the difference:

The fictional politician Peter Behrens in campaign mode; in an experiment, he appears sometimes liberal, sometimes conservative. | © Dan/Arendt

It is even easier to manipulate still images. The term “subtle backdrop cues” (SBCs) has become established in communications science. What does it actually mean?

Dan: These are things like symbols or images that can be seen in the background of a photo. If you saw me in a picture and there was a cross behind me, you’d assume I was voting for the CDU or CSU. But politicians also use SBCs to convey messages that might not be socially acceptable.

Can you give us an example?

Dan: During the 2016 election campaign, Donald Trump tweeted a picture of his rival Hillary Clinton with a pile of banknotes behind her, and the words “Most corrupt candidate ever!” inside a six-pointed star, which could be interpreted as a Jewish Star of David. So Trump was suggesting that Hillary Clinton was financed and controlled by Jewish business circles.

And how did people react?

Dan: Trump was immediately called out for it by the liberal media. He then denied having had such intentions. That is exactly what’s special about persuasive communication that’s basically visual. You can always claim: “I never said that. I can’t control what you read into the picture. But okay, I retract the statement.” However, the target group you wanted to reach with it has been reached. That’s why SBCs are like a kind of dog whistle: the messages get through to the target audience, but not necessarily to the general population.

These are questions that PR consultants have no doubt been thinking about for some time now. What are the new developments in this area?

Dan: What is new is that we’re now seeing this phenomenon at all political levels. In the past, such strategies were used by people like the President or the Chancellor. But the fact that someone like the governor of the Austrian state of Vorarlberg, Markus Wallner, is also doing it is new.

What happened there?

Dan: A photo was taken during a meeting between Wallner and Austria’s Chancellor Sebastian Kurz. Behind Wallner there was a picture of an old woman smoking some kind of cigar. But you could have thought it was a joint. And that was replaced by a landscape painting. It came to the public’s attention because the original version was also released. Wallner then justified himself by saying that he and his party had a very clear stance on cannabis. If a woman had been seen behind him smoking a joint, people could have thought that he was in favor of legalizing cannabis. And that is what he wanted to avoid.


This interview is taken from the recently published issue of the research magazine EINSICHTEN (in German).

How do you explore the impact of subtle backdrop cues?

Dan: Experimentally, just like you explore the impact of deepfakes. We show people images with different SBCs (say, liberal versus conservative) that we put in the Twitter feed of a fictional politician; a control group is shown the same foreground images, without any SBCs. We then ask the different groups the same questions and compare the results.

What kind of messages are you exploring there?

Dan: We were able to show, for example, that a politician with a German flag in the background is more likely to be considered conservative, even if the flag is really inconspicuous. Perhaps that’s unsurprising, so here’s another example: We showed a picture of a politician in his office, with a framed photo of a woman on his desk. One time the woman is sitting at the computer, and in the other photo she’s standing at the stove, taking a roast out of the oven. And in fact, people assumed that the politician, about whom they had no other clues, was left-wing in one case and right-wing in the other. The right-wing one was the one with the woman at the stove.

Can deepfakes and subtle backdrop cues be dangerous?

Dan: Yes. They can lead to politicians being elected who do not communicate their views openly (SBCs) or to politicians not being elected or being ousted for the wrong reasons (deepfakes). In Gabon, for example, a video said to be a deepfake triggered an attempted coup. The president, Ali Bongo, had not appeared in public for some time. A video showing him delivering a New Year’s address was then used by his opponents to claim that his political camp had created a fake version of the president because he had been unable to deliver the New Year’s address himself. There was speculation that he was seriously ill or had died. And this supposed power vacuum led the military to attempt a coup.

In the Obama deepfake he co-created, American actor Jordan Peele warns viewers against being too gullible. Is it that simple? Or should making fake videos be made a punishable offense?

Dan: We shouldn’t paint too gloomy a picture, but we do have to think about solutions. For a long time, people looked primarily to information technology for these solutions: money has been invested in algorithms that are supposed to detect and remove disinformation. But we will not be able to “decontaminate” the Internet. Even if we had the best algorithms, it is completely unrealistic to think that we could. Not to mention the fact that deleting posts or social media accounts causes more problems than it solves. I think we should focus more on media literacy. And we need to try to limit the damage caused by deepfakes, such as through fact-checking, or with videos that show how deepfakes are created. Automatic identification and regulation are important, but we also need to study their impact.

Interview: Nikolaus Nützel

Dr. Viorela Dan is a postdoctoral researcher in the Department of Media and Communication (IfKW) at LMU Munich. During the summer term 2021 she is Junior Researcher in Residence at LMU's Center for Advanced Studies (CAS).
