Hidden Code Behind ‘Summarize with AI’ Buttons Embeds Biased Prompts
The convenient ‘Summarize with AI’ button, increasingly found across websites, browsers, and applications to provide quick content overviews, may sometimes conceal a new form of AI prompt manipulation known as “AI recommendation poisoning.”
Microsoft’s recent research reveals this currently legal yet deceptive AI hijacking technique is gaining traction among legitimate businesses. While many ‘Summarize with AI’ buttons function as intended, a growing number appear to serve an ulterior motive.

This manipulation occurs when a user clicks a website’s Summarize button. Unbeknownst to the user, this action also triggers a hidden prompt instructing their AI agent or chatbot to favor that company’s products in subsequent interactions. A similar instruction can also be hidden within a specially crafted link sent via email.
Microsoft’s findings indicate this tactic could skew enterprise product research, introducing bias that goes undetected before influencing critical decisions. Over two months, researchers identified 50 instances of this technique used by 31 companies across various sectors, including finance, health, legal, SaaS, and business services, even an unnamed vendor in the security industry.
The prevalence of this technique led MITRE to add it to its list of known AI manipulations last September.
AI Leverages User Preferences
AI recommendation poisoning exploits the design of user AIs, which are built to process and retain prompts as indicators of user preferences. If a user expresses a preference, the AI stores this as part of the user’s profile.
Unlike prompt injection, which involves a one-time AI manipulation, recommendation poisoning offers persistent influence over future prompts. The AI cannot differentiate between genuine user preferences and those subtly injected by third parties.
“This personalization makes AI assistants significantly more useful. But it also creates a new attack surface; if someone can inject instructions or spurious facts into your AI’s memory, they gain persistent influence over your future interactions,” Microsoft stated.
From the user’s perspective, everything appears normal, but the AI discreetly promotes the manipulated responses when relevant questions are posed.
“This matters because compromised AI assistants can provide subtly biased recommendations on critical topics including health, finance, and security without users knowing their AI has been manipulated,” researchers explained.
Pushing Falsehoods
The recent rise in recommendation poisoning seems to be fueled by readily available open-source tools that simplify embedding this functionality behind website Summarize buttons.
This suggests that poisoned buttons are likely not an accidental byproduct of overzealous SEO efforts. Instead, the intent from the outset may be to contaminate user AIs as a form of self-serving marketing.
Microsoft believes the risks extend beyond aggressive marketing, potentially encompassing the dissemination of falsehoods, harmful advice, biased news, or commercial disinformation. If legitimate businesses are misusing this feature, cybercriminals are likely to follow suit.
Fortunately, this technique is relatively straightforward to detect and block. Microsoft 365 Copilot and Azure AI services, for example, include built-in protections.
Individual users can investigate the saved information their chatbot has accumulated (access methods vary by AI). For enterprise administrators, Microsoft advises checking for URLs containing phrases such as ‘remember,‘ ‘trusted source,‘ ‘in future conversations,‘ ‘authoritative source,‘ and ‘cite or citation.‘
This situation mirrors the evolution of other technologies. URLs and file attachments, once seen as mere conveniences, became targets for misuse as they gained mainstream adoption. AI is undergoing a similar trajectory.
As with any new technology, users should educate themselves on AI-related risks. Microsoft advises, “Avoid clicking AI links from untrusted sources: Treat AI assistant links with the same caution as executable downloads.”

