Reports have emerged of deceptive social media advertisements falsely claiming that Taylor Swift is giving away free Le Creuset cookware. The posts, found on platforms including TikTok and in Meta's Ad Library, use deepfake technology to clone Swift's voice and pair it with clips suggesting the artist is offering free cookware sets. The AI-generated voice addresses Swift's fans, known as "Swifties," and invites them to claim the supposed giveaway.
The misleading ads direct users to counterfeit versions of websites such as the Food Network's, complete with fabricated articles and testimonials about the supposed Le Creuset giveaway. Visitors are prompted to pay $9.96 in shipping to receive the supposedly free products. No cookware ever arrives, however, and victims later find recurring monthly charges on their cards. Le Creuset has confirmed that no such giveaway is taking place. The scam is a reminder to exercise caution and verify enticing offers encountered online, especially those involving celebrities.
The co-opting of celebrity voices with AI is not exclusive to Taylor Swift; other public figures, including Joanna Gaines, have been targeted in scams that appear in verified or sponsored posts. The Better Business Bureau warned in April 2023 about how convincing ads featuring AI-generated versions of celebrities had become. Since then, scammers have used deepfake technology to promote various products, using the likenesses of Luke Combs for weight loss gummies, Tom Hanks for dental plans, and Gayle King for other weight loss products.
Regulation and penalties for creating deepfakes remain limited. Platforms such as YouTube have taken some steps to address the issue, including offering ways to report deepfakes. Some platforms are also working with musicians willing to lend their voices for AI-generated versions, fostering interest in the technology.
In Congress, two bills were introduced last year to tackle deepfakes: the No Fakes Act and the Deepfakes Accountability Act. The future of both pieces of legislation remains uncertain. For now, only a handful of states, including California and Florida, have enacted regulations specifically addressing AI. As deepfake technology continues to evolve, so will the need for effective measures and regulations to combat its misuse.