12 Jun 2025
Image: Midjourney x The Sandy Times
We have all indulged in a bit of AI play — from ageing ourselves to turning into cartoon dogs or Renaissance aristocrats (well, I did). AI image filters have become the digital equivalent of a novelty mirror: entertaining, mostly harmless, and rarely confused with reality. But while the average user delights in these fleeting fantasies, the stakes become considerably higher when the subject is a household name with millions of followers, endorsement deals, and a reputation to uphold. What begins as a clever visual trick can, quite suddenly, turn into a reputational landmine.
Billie Eilish at the Met Gala? Not quite
In May 2025, the internet briefly swooned over a stunning image of Billie Eilish gracing the Met Gala steps in a navy-blue suit embellished with intricate rose motifs. The look read as a departure from her previous Met Gala silhouettes, considerably more pared back by comparison. There was a catch, of course: Billie Eilish wasn’t there. She was, at the exact moment of her supposed red carpet appearance, very much in Amsterdam, performing for a sold-out arena as part of her Hit Me Hard and Soft world tour. The image had been entirely generated by artificial intelligence, blending past Met Gala aesthetics with speculative couture and a hefty dose of imagination.
Eilish responded on Instagram, calling the image “disrespectful” and “weird”. She didn’t mince words: these fabrications weren’t just misleading; they distorted public perception of her, her work schedule, and even her personal style. For celebrities, a Met Gala look isn’t just a fun outfit — it is a coordinated PR move, tied into designer collaborations, sponsorships, and media strategy. AI fakes don’t just disrupt the fashion cycle; they throw off the entire business ecosystem behind the image.
Katy Perry and the parade
Eilish isn’t alone in her frustration. Katy Perry, no stranger to Met Gala theatrics, was also deepfaked into a surreal fantasy in 2024. An AI-generated photo placed her at the event in a larger-than-life floral gown, somewhere between haute couture and a botanical hallucination. The image was so convincing that Perry’s own mother texted her in admiration, saying she looked “like something out of the Rose Parade.” Katy, clearly amused and mildly unsettled, reposted both the image and the maternal message on Instagram, adding her own caption: “Wasn’t me.”
It was a moment that crystallised the strange new reality celebrities face: being somewhere they have never been, wearing something they have never worn, and being praised (or criticised) for choices they never made. For Perry, it was harmless — even funny. But the implications are less amusing when these fabrications start to cross into manipulation, fraud, or defamation.
From red carpets to red flags
The celebrity deepfake problem has evolved far beyond virtual fashion faux pas. Increasingly, AI-generated images and videos are being used in far more sinister ways — especially in unauthorised advertisements. Public figures like Tom Hanks, MrBeast, and Gayle King have all recently spoken out after discovering their faces had been used without permission to promote everything from miracle dental procedures to dubious diet supplements. The goal? Lend an air of legitimacy to products that, frankly, wouldn’t survive without a famous face attached.
These aren’t just still images either. Generative AI tools now allow scammers to create moving, speaking versions of celebrities, seemingly endorsing products in highly polished, persuasive videos. The impersonations are so accurate that casual viewers — especially older or less tech-savvy users — may not notice anything amiss. The result? Real financial harm.
In one particularly tragic case, a woman was led to believe she was in a romantic relationship with actor Owen Wilson via a series of AI-generated video calls. The "relationship" escalated to the point where she began sending him money. Eventually, the ruse unravelled — but not before she suffered emotional and financial devastation. The con was part catfish, part tech horror story. And yes, it ended with the classic punchline: Owen Wilson doesn't know who she is. Not wow.
Image: Midjourney x The Sandy Times
Scarlett, Kanye, and the legal reckoning
In February 2025, Scarlett Johansson publicly condemned the use of her AI-generated likeness in a viral video that depicted her and other Jewish celebrities denouncing Kanye West’s antisemitic remarks. The clip, made by a generative AI creator, showed a deepfake Johansson, alongside altered versions of stars like Jerry Seinfeld, David Schwimmer, and Mila Kunis, set to a techno remix of Hava Nagila.
While Johansson affirmed her stance against hate speech, she criticised the misuse of AI — even for good causes — warning that “the potential for hate speech multiplied by AI is a far greater threat than any one person.” Her comments follow a growing pattern: last year, she also pushed back against OpenAI for using a voice resembling hers in ChatGPT after she declined to participate. Johansson has urged lawmakers to act, warning that the AI threat is real, bipartisan, and growing faster than regulation.
Governments taking action
In 2025, governments are finally beginning to treat deepfakes as the serious issue they are. The United States passed the TAKE IT DOWN Act in May 2025, requiring platforms to remove AI-generated sexually explicit content involving real people, especially minors, upon request. It is a start — focused more on intimate imagery than impersonations in public settings — but it signals a policy shift. Other jurisdictions, including several US states and EU member countries, are now exploring legislation targeting impersonation, deceptive AI advertising, and consent-based use of likeness.
Legal experts say the floodgates may just be opening. Celebrities like Johansson are paving the way for broader legal challenges, setting precedents that could define how the law treats digital likenesses. If someone steals your identity to sell a foot cream, is it libel, fraud, or both?
Performed by professionals
Of course, AI isn't only a menace. In cinema, tools related to deepfake tech have revolutionised visual storytelling. When Martin Scorsese’s The Irishman premiered in 2019, audiences marvelled at the de-ageing effects used to transform Robert De Niro and Joe Pesci into their younger selves. That technology — once a high-budget novelty — has now become far more accessible and sophisticated.
Beyond Hollywood, filmmakers in the UAE and Saudi Arabia are using AI not to fake stars, but to enhance storytelling: improving lighting in post-production, restoring historic footage, and translating lip movements to match dubbed dialogue with uncanny realism. It is AI in service of art, rather than illusion.
Used ethically, AI can extend creative freedom. Used carelessly — or maliciously — it becomes a weaponised mirror, reflecting not who we are, but what someone wants us to believe.
How to spot a fake before you share it
In a world where seeing is no longer believing, a few sanity-saving tips go a long way:
- Cross-check with reputable sources: If a jaw-dropping scandal isn’t being reported by BBC, Reuters, The Guardian, or Variety, it might just be jaw fiction.
- Check the celebrity’s own social media: Most stars are quick to post their looks, appearances, and interviews themselves. If they aren't talking about it, tread carefully.
- Never, under any circumstances, send money to someone claiming to be a celebrity online: Brad Pitt is not asking for petrol money, and Beyoncé doesn't need your Apple Pay.
- Use detection tools: Platforms like Deepware, Reality Defender, and Hive Moderation can analyse audio and video content for signs of tampering.
- Resist the urge to repost before confirming: Yes, likes are tempting. But so is chaos. Don’t amplify what you can’t verify.
Reality might not be as dazzling as fiction — but at least it won’t steal your pension, hijack your likeness, or impersonate your daughter at the Met Gala.