The Unstoppable Rise of Synthetic Advertising.

Written by Graeme Murray

Amid the AI arms race we are currently witnessing, one development that has received far less hype is the dawn of a new era of ‘synthetic advertising’, born of the rise of artificial intelligence, machine learning and algorithms.

These ads are “generated or edited through the artificial and automatic production and modification of data” (Campbell et al., 2021).

Here are three recent examples:

Coca-Cola collaborated with OpenAI, using its DALL-E 2 image-generation model and ChatGPT to bring to life some of the world’s most famous works of art.

VW touched the hearts of many people in Brazil by using an actress and deepfake face-swapping technology to make it appear that Elis Regina Carvalho Costa, a legendary Brazilian singer who died over 40 years ago, was performing her 1976 hit Como Nossos Pais in a duet with her daughter while driving a VW van.

Orange in France used AI-generated deepfakes to address sexism in football and show its support for the French women’s national team.

The use of manipulation in advertising is nothing new. Make-up, lighting and good editing were the tools of the trade in the analogue era. These evolved into CGI, Photoshop and filters in the digital era. Today’s synthetic era is characterised by deepfakes, AI and machine learning (Campbell et al., 2021).

Advertisers and agencies seem bullish. Synthetic advertising represents a golden opportunity to raise the creative bar. And there is evidence to suggest that heightened realism can make advertising more persuasive.

On the flip side, these techniques risk heightening consumers’ awareness of ad fakery, and the more fakery consumers perceive, the less persuasive advertising may become. It is a double-edged sword, but many brands, such as Coca-Cola, VW and Orange, are already firm believers and are seizing the opportunity.

Beauty and fashion brands like Maybelline and Jacquemus have recently pushed the boundaries of fake out-of-home advertising on platforms such as Instagram and TikTok. Both brands’ content invites the same question: is this real or not?

This curiosity fuels virality, something we have already witnessed in early viral videos like Quiksilver’s ‘Dynamite Surfer’ from 2007, which attracted over 20 million views online.

Brands see this as an opportunity to be inventive, and one could argue that these TikToks and the like are simply the latest incarnation of brief brand entertainment on social media. There is value in that. It is purely an entertainment play, made at the expense of authenticity, and perhaps the attraction is the (apparent) artificiality of the idea itself.

However, synthetic advertising will require strong legislation and responsible use to protect consumers. The risks are genuine: deepfakes, fake news and misinformation, damaged reputations, manipulation and harm. AI systems can carry subliminal, manipulative or exploitative risks.

Synthetic advertising raises many tough moral questions that need to be considered and answered:

Is it right to use the image of someone famous who is no longer alive and cannot consent to its use?

Where does the source material that the AI model uses come from, who owns it, and are there any copyright or legal implications? 

Has the AI been trained on datasets biased in terms of gender and race, and how are these biases being addressed to minimise stereotypes?

Could such ads cause some people, particularly younger children and teenagers, to confuse fiction with reality?

Will consumers still be able to make informed decisions when what they see no longer reflects reality?

Will consumers still be able to trust brands if there is no discernible difference between real and fake?

Will brands and their agencies act responsibly, ethically and with care, or will they let consumers decide for themselves?

What’s clear is that the synthetic advertising space is evolving rapidly. Analyst firm Gartner expects that 30% of marketing messages from large organisations will be synthetically generated by 2025, up from less than 2% in 2022.

And it will not end here. This is a taste of a not-so-distant future in which we inhabit immersive virtual worlds, disconnected from physical reality: think of the 2018 film Ready Player One, where humans use virtual reality to escape the real world.

Meta’s Mark Zuckerberg has spent billions of dollars trying to shape, build and popularise his Metaverse. Apple’s Tim Cook is convinced we will want to wear an Apple Vision Pro headset and play in Apple’s virtual (or, as Apple calls it, spatial) world. 

The gaming worlds of Roblox, Minecraft and Fortnite prove that there is an appetite for such worlds when they are done well.

But what if, in the future, we can no longer tell what is real and what is artificial? Does that change the meaning and origin of what we see and believe? Does synthetic advertising lead to a fake experience or an enhanced one? Are we a step closer to living in a real Matrix?

Maybe by then, the question ‘Is it real?’ won’t matter anymore, because we will be living in a mixed-reality world.

Until then, we must hope that synthetic advertising is used as a force for good for the industry, not purely for human exploitation and corporate profiteering.

We have already had one warning from watching social media develop over the last decade or so. We would be wise not to let synthetic advertising follow a similarly destructive route.
