There’s no rogue McDonald’s AI bot, but ‘prompt injection’ is still a risk for companies

There appears to be a recent epidemic of users hijacking companies’ AI-powered customer service bots to turn them into generic AI assistants. The goal is to get the branded bots to do their bidding without paying for an AI subscription. Sometimes, people force the bots to do things they are not supposed to do, like offering extraordinary product deals or even helping them take legally problematic actions.

Most recently, a wave of LinkedIn posts and social media videos went viral claiming that users had tricked McDonald’s customer service virtual assistant into abandoning its burger-centric purpose to instead debug complex Python programming code. One post read: “Stop paying $20 a month for Claude. McDonald’s AI is FREE.”

On Instagram, videos and images popped up claiming the same thing, all featuring the same image as proof. The claim went viral, as Grok summarized in a trending news post on X: “McDonald’s AI customer support agent named Grimace gained massive attention with 1.6 million views and 30,000 likes after users tested it with out-of-script requests like debugging, Python scripts, and architecture questions.”

A source familiar with the matter told Fast Company that an internal investigation found no evidence of the exploit, and that the circulating screenshots and videos are believed to be fraudulent. McDonald’s doesn’t even have an AI customer assistant in its app.

This isn’t the first time something like this has happened. In March, a nearly identical viral narrative surfaced about Chipotle’s customer service bot, Pepper, claiming that the bot could write software code for users. Sally Evans, Chipotle’s external communications manager, told the industry publication CIO that “the viral post was Photoshopped. Pepper neither uses gen AI nor has the ability to code.”

But that doesn’t mean it can’t happen. The technical vulnerability these memes describe, formally known as prompt injection, is entirely real and genuinely dangerous. When a company deploys an AI model, it configures the bot with a system prompt: background instructions, invisible to the user, that define the bot’s personality and restrictions, like telling a model it is a fast-food helper that only discusses menu items.
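As a rough sketch of how this setup typically looks (the persona, wording, and function name here are invented for illustration, not any company’s actual configuration), deployments built on chat-style LLM APIs keep the hidden system prompt in a separate “system” message alongside whatever the user types:

```python
# Illustrative only: "BurgerBot" and its rules are hypothetical, not any
# real company's system prompt.
def build_messages(user_input: str) -> list[dict]:
    """Assemble the message list a chat-style LLM API typically expects."""
    system_prompt = (
        "You are BurgerBot, a fast-food helper. Only discuss menu items, "
        "orders, and store hours. Refuse all other requests."
    )
    return [
        # The system message is never shown to the user.
        {"role": "system", "content": system_prompt},
        # The user message is whatever the customer typed into the chat.
        {"role": "user", "content": user_input},
    ]

messages = build_messages("What comes on a double cheeseburger?")
```

The separation is organizational, not enforced: both messages ultimately reach the same model as text, which is what prompt injection exploits.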

Prompt injection is when a user crafts a specific input that overrides those hidden rules, stripping the bot of its corporate identity and exposing the raw, general-purpose language model underneath. This is called a “capability leak,” and the reason it is so hard to prevent is that large language models are engineered to respond fluidly to human language rather than rigid commands. Unlike traditional software with fixed rules, generative AI interprets context dynamically, making it nearly impossible to anticipate every phrase a determined user might try.

Real danger

Amazon’s retail assistant Rufus is proof that the real thing is far messier and more damaging than any fake meme designed to grab eyes. Between late 2025 and early 2026, users successfully bypassed Rufus’s shopping directives to extract content that had nothing to do with buying products.

Researchers demonstrated that the bot’s internal logic could be broken entirely: in one instance, Rufus firmly refused to help a customer locate a basic clothing item, but then produced a detailed list of places to acquire dangerous chemicals. In another, it drafted methods for minors to unlawfully purchase alcohol.

But it wasn’t just researchers breaking the bot. In late 2025, communities on Reddit discovered that the Rufus assistant was actually powered by Anthropic’s Claude language model, and that Amazon was using a simple keyword filter to block generic access to the LLM engine underneath. Redditors claimed that by using prompt injection to logically corner the bot, or by simply instructing the software to drop its refusal tokens entirely, they managed to shed the Rufus persona.
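A keyword filter of the kind Redditors described is easy to defeat, because natural language offers endless paraphrases for any blocked phrase. A minimal sketch (the blocklist terms here are invented; Amazon’s actual filter has not been published):

```python
# Hypothetical blocklist filter; the terms are invented for illustration.
BLOCKED_TERMS = {"write code", "python", "ignore your instructions"}

def passes_filter(user_input: str) -> bool:
    """Return True if no blocked phrase appears verbatim in the input."""
    text = user_input.lower()
    return not any(term in text for term in BLOCKED_TERMS)

passes_filter("Write code to sort a list")          # blocked
passes_filter("Compose a short script that sorts")  # slips through
```

Any rephrasing that avoids the exact blocked substrings sails past the filter, which is why string matching alone cannot contain a model that understands paraphrase.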

Once the bot broke character, users had unrestricted, unpaid access to a premium language model directly through the Amazon app. As Lasso Security researchers reported, the exploit forced the bot to “entertain users with responses to almost any question under the sun,” racking up hefty processing costs in an “expensive computational climate.”

While Amazon dealt with exploitation, other companies discovered that a poorly deployed AI can be weaponized directly against them. In late 2023, a user visiting a Chevrolet dealership’s website in Watsonville, California, instructed the company’s ChatGPT-powered sales bot to agree with every statement the user made, eventually maneuvering the system into committing to sell a $76,000 Chevy Tahoe for one dollar.

Similarly, Air Canada’s chatbot fabricated a discount protocol that did not exist in early 2024, leading a customer to purchase full-price tickets under the assumption they would receive a partial refund later. When the airline refused to pay, arguing its own bot was a separate legal entity not under the company’s control, a Canadian civil tribunal rejected that defense entirely, ruling that a business is fully responsible for every statement made on its own website.

The gap between what these systems promise and what they actually deliver will keep producing new embarrassing snafus, whether they go viral or not. The legal bills, the reputational wreckage, and the computing costs racked up by users treating corporate bots as free AI subscriptions may ultimately make these automated customer experiences far more expensive than simply paying a person to do the job. But that ship has sailed, I suppose, and we will keep enjoying new consumer-experience disasters in the future.

Update 4/24/26: This story was updated to clarify that McDonald’s does not have an AI customer assistant.
