A recent study by researchers at Israel’s Ben-Gurion University has shed light on significant privacy vulnerabilities in several AI chatbots, raising concerns about the security of private conversations.
According to Yisroel Mirsky, head of the Offensive AI Research Lab at Ben-Gurion University, malicious actors can exploit these vulnerabilities to eavesdrop on chats conducted through platforms like ChatGPT.
Mirsky highlighted that anyone sharing the same Wi-Fi network or local area network (LAN) as a chat participant, as well as remote malicious actors, can intercept and monitor conversations without detection.
The research report identifies these exploits as “side-channel attacks,” a method wherein third parties gather data passively through metadata or other indirect means rather than breaching security barriers.
Unlike traditional hacks that penetrate a firewall or break encryption outright, side-channel attacks exploit information that leaks around the encryption. Although AI developers like OpenAI encrypt chatbot traffic, Mirsky’s team found that the way the encryption is applied still allows message content to be inferred.
While side-channel attacks are generally less invasive than direct intrusions, they still pose significant risks: the researchers were able to infer chat prompts with 55 per cent accuracy, making conversations about sensitive topics easily detectable to malicious actors.
Although the study primarily scrutinizes OpenAI’s encryption practices, it suggests that most chatbots, excluding Google’s Gemini, are susceptible to similar exploits.
Central to these vulnerabilities are “tokens,” the small chunks of text that chatbots use to exchange messages with AI models efficiently. Although chatbot transmissions are typically encrypted, services send tokens to users one at a time as they are generated, and that streaming behaviour creates a vulnerability that was previously overlooked.
Access to real-time token data enables malicious actors to infer conversation prompts, akin to overhearing a conversation through a closed door.
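To make the idea concrete, the sketch below shows how an eavesdropper might translate captured packet sizes into token lengths. It assumes, purely for illustration, that each token is streamed in its own encrypted record and that per-record framing overhead is a fixed number of bytes; both assumptions, and the overhead value used, are simplifications rather than details from the study.

```python
# Toy illustration: recover approximate token lengths from the sizes of
# encrypted packets, assuming one token per streamed record and a constant
# framing overhead. The overhead value below is hypothetical.

OVERHEAD_BYTES = 21  # assumed fixed TLS/framing overhead per record (hypothetical)

def infer_token_lengths(packet_sizes, overhead=OVERHEAD_BYTES):
    """Estimate each streamed token's length by stripping framing overhead."""
    return [max(size - overhead, 0) for size in packet_sizes]

# A passively captured sequence of ciphertext record sizes (made up for the demo).
observed_packets = [24, 26, 23, 29, 25]
print(infer_token_lengths(observed_packets))  # -> [3, 5, 2, 8, 4]
```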
To substantiate their findings, Mirsky’s team employed a second AI model to analyze raw data acquired through the side-channel. Their experiments revealed a high success rate in predicting conversation prompts, underscoring the severity of the vulnerability.
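The researchers’ reconstruction step relied on a trained AI model; as a rough stand-in, the toy sketch below simply ranks a few invented candidate phrases by how closely their word lengths match an inferred length sequence. The candidates and the scoring heuristic are hypothetical and far cruder than the paper’s method, but they show why a sequence of lengths alone can betray what was said.

```python
# Toy stand-in for the reconstruction step: rank candidate phrases by how
# closely their word lengths match the inferred token-length sequence.

def length_signature(text):
    """Word lengths of a candidate phrase (a crude proxy for token lengths)."""
    return [len(word) for word in text.split()]

def mismatch(candidate, observed):
    """Total absolute difference between candidate and observed lengths."""
    signature = length_signature(candidate)
    if len(signature) != len(observed):
        return float("inf")
    return sum(abs(a - b) for a, b in zip(signature, observed))

observed_lengths = [3, 5, 2, 8, 4]   # e.g. the output of infer_token_lengths above
candidates = [                       # invented examples, not taken from the study
    "how could he tolerate this",
    "the cheap car broke down",
    "she moved to another city",
]
print(min(candidates, key=lambda c: mismatch(c, observed_lengths)))
# -> "how could he tolerate this"
```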
Responding to these concerns, Microsoft assured users that personal details are unlikely to be compromised by the exploit affecting its Copilot AI. However, the company pledged to address the issue promptly with updates to safeguard customers.
The implications of these vulnerabilities are profound, particularly concerning sensitive topics such as abortion and LGBTQ issues, where privacy is paramount. Exploitation of these vulnerabilities could have serious consequences, potentially endangering individuals seeking information on such topics.
As the debate surrounding AI ethics and privacy intensifies, these findings underscore the urgent need for robust security measures to protect users’ privacy in AI-driven interactions.