Liquid Thinking

AI: What's the reputational risk?

3rd May 2023

If you’re nervous about the capabilities of AI, you’re right to be. But set the existential dread aside for a moment. One of the most immediate threats of AI’s rapid ascension appears to be the ability to borrow and manipulate someone or something else’s likeness. As we watch the new powers of AI unfold in real time, and as more people get access to its most advanced capabilities, we ask: where could this new world take brands, whether they like it or not? 

What even is real anymore? You see! I told you robots were going to take over the world and we’d end up working for them, rather than them for us. Deep breath. If you too are fraught with anxiety about the seemingly endless capabilities of AI, buckle up. So much has happened in the past month – scrap that, in just the past few days – that it’s time we did a bit of a recap. 

Our favourite AI-related news headlines from just the past fortnight include: “‘We don’t understand’ new AI systems and can’t control them, top computer scientist warns”, “AI: what are the jobs at risk from technology?”, “Google boss admits AI dangers ‘keep me up at night’”, “Could AI get out of our control?” and, confusingly, “Can a chatbot be as funny as Stephen Colbert?”. We think the answer is yes.

Power + access = a revolution?

In the past few months, the capabilities of AI have developed rapidly, while access to the more powerful aspects of the technology has also massively increased. ChatGPT, the artificial intelligence chatbot developed by OpenAI and originally released in November 2022, opened the floodgates for sure. For now, anyone can use it, free of charge. It can write your essays, write computer code, build you a website, or have detailed philosophical debates with you, etc, etc, etc… etc. The latest version, GPT-4, was released in mid-March. And there’s a pay-to-access pro version, ChatGPT Plus, now too. 

But allowing users to generate human-like conversations is just the tip of the iceberg. OpenAI is also behind the AI art generator DALL-E 2 and the automatic speech recognition system Whisper. And it’s this widespread, easy access to these and other tools that is rapidly taking us into new and uncharted territory, generating realistic photographs, videos, songs and more. In short, the genie is out of the bottle. 

A lot of weird shit 

And it's this ‘image borrowing’ that is of immediate concern. Because the problem is, if you can think it, you can probably do it. And, well, people can think up a lot of weird shit. AI photos of supposed historical events have started doing the rounds. So far it’s been mostly relatively harmless pics of a giant horse or embracing friends. It turns out some of these supposed historical images, including an entry submitted to the Sony World Photography Awards, were fakes created by German artist Boris Eldagsen. He claims he wanted to start a conversation about the future of photography. Consider it sparked. And because the internet is the internet, there are now sobering reports of fake pornography being created with the faces of real people. 

But among the most audacious examples of ‘image borrowing’ is the firing, in April, of the editor of German celebrity magazine Die Aktuelle. Her crime? Using AI to generate a fake interview with former F1 champion Michael Schumacher. And this week rapper Drake is in a tizzy, declaring, “This is the final straw, AI”. And you can’t blame him. Content creator @ghostwriter used AI to clone the voices of Drake and fellow Canadian The Weeknd, using them to perform an entirely fabricated song, which then rapidly travelled across the internet. It made it to platforms including TikTok, YouTube, Apple Music, Spotify, Deezer and Tidal before being pulled. In short, anyone can already use AI to realistically represent you visually, or to put words in your mouth, and then spread that content across the internet. Which is terrifying. 

Good vs evil is in the hands of the user, right? 

There’s no doubt AI can be a powerful tool for good, helping diagnose illnesses early, boosting efficiency and removing risk from some of the world’s most dangerous jobs; these are just a few of the most recent examples. It can disrupt, and already is disrupting, the workforce. Investment bank Goldman Sachs recently published research that found AI is set to boost the annual value of global goods and services produced by 7%. But it will also affect 300 million jobs. And there’s the rub. What do we really want AI to be for? And are we past the point where that matters? 

As ‘Godfather of AI’ Geoffrey Hinton said when he quit his job at Google this week, the rippling, momentum-building effects of this fast-developing technology could be a threat to privacy, security, jobs and, ultimately, humanity. So, sleep tight. 

Speaking to the New York Times, he said: "The idea that this stuff could actually get smarter than people - a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."

An era-defining moment

It's clear that we’re already at an era-defining moment. In the same way that computers, the internet and smartphones changed our lives, AI can change, and already is changing, the world. And with that come some moral and philosophical questions. 

Drinks brands are already using AI to better interact with consumers and create personalised recommendations and experiences. And they’ve been doing so for some time. As far back as 2020, Diageo used AI to help consumers identify which Scotch best suits their tastes. In just the past few months, Pernod Ricard has used it for travel retail activations, giving shoppers personalised serve recommendations for its Martell brand. And Bacardi recently used it for Patron Tequila at an activation in New York to help create bespoke margaritas for consumers. AI can help brands build better, deeper relationships with consumers.

But what happens when, eventually, the likeness or image of a brand is appropriated and used to represent it without authorisation? Whether light-hearted or nefarious, this will happen. The new age we’re now entering raises the question: who really owns our image, or that of something we create or possess? As the realism of AI develops and access to it increases, this is a real concern. 

In the UK, it’s been announced that there will be no dedicated AI regulator; the government has instead set out plans to regulate artificial intelligence with new guidelines on "responsible use". For now – and let’s hope it’s just until AI’s full capabilities are better understood – we’re in the wild west. 

Unlike anything before it, AI’s major risk lies in it slipping out of control. Whether it is used for ‘good’ or not depends on the intentions of its creators or users. But only to an extent. It’s clear that AI’s potential for unintended and harmful consequences is very real, rendering good intent a bit redundant. Reputational lawsuits seem to be on the horizon. And as AI develops to become more autonomous, and independently intelligent, human intent may cease to matter altogether. And if Geoffrey Hinton is already saying that, you’d better believe it’s real. 


Interested in finding out more about what this might mean for you and your business?

Please contact us or call 0207 101 3939