Researchers unveil serious attack on AI chatbots


The Vulnerability of AI Assistants in Protecting Private Information

Artificial intelligence (AI) assistants have become increasingly prevalent over the past year, and users routinely entrust them with their most confidential thoughts and business dealings. Individuals seek advice on matters such as pregnancy, divorce, and drug addiction, and even share proprietary trade secrets through these AI-powered chat services. Recognizing the sensitivity of these conversations, service providers encrypt user interactions to shield them from prying eyes.

The Discovery of a Novel Attack on AI Assistant Responses

Recent research has uncovered a sophisticated attack that can decrypt AI assistant responses with remarkable precision. The attack exploits a side channel present in every major AI assistant except Google Gemini, and uses language models trained specifically for the task to reconstruct responses. From a passive adversary-in-the-middle position, an attacker monitoring the encrypted packets exchanged between an AI assistant and its user can deduce the specific topic of 55% of all captured responses, often with high word accuracy, and can reconstruct responses with perfect word accuracy 29% of the time.

Yisroel Mirsky, the head of the Offensive AI Research Lab at Ben-Gurion University in Israel, highlighted the implications of this vulnerability. Malicious actors, whether on the same Wi-Fi network or remotely on the Internet, can intercept and read private messages sent through AI chat services such as ChatGPT and Microsoft Copilot. While OpenAI encrypts its traffic to thwart eavesdropping attempts, the study reveals flaws in its encryption methodology, leaving message content exposed.

The Significance of Tokens in AI Assistant Communication

The attack hinges on a token-length sequence side channel in the way AI assistants stream their replies. Tokens are the units, roughly words or word fragments, that language models process, and assistants transmit them to the user one by one, in real time, as each is generated. Although each token is encrypted in transit, encryption hides only the content, not the size: the length of each encrypted packet reveals the length of the token inside it, and the resulting sequence of token lengths forms the side channel the attack exploits. This vulnerability underscores the importance of fortifying the security layers of AI assistants to safeguard users' privacy.
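The leak described above can be illustrated with a minimal sketch. The code below is a hypothetical simulation, not the researchers' actual tooling: it assumes each token is streamed in its own encrypted packet, that the cipher preserves plaintext length (as stream ciphers and AES-GCM do), and that per-packet protocol overhead is a constant the attacker can estimate. Under those assumptions, an on-path observer recovers the full token-length sequence without decrypting anything.

```python
def observed_packet_sizes(tokens, overhead=29):
    """Simulate what a passive eavesdropper sees: one encrypted packet
    per streamed token, whose size is the token's byte length plus a
    (hypothetical) constant header/tag overhead."""
    return [len(t.encode("utf-8")) + overhead for t in tokens]

def recover_token_lengths(sizes, overhead=29):
    """Attacker side: subtract the estimated overhead from each packet
    size to recover the token-length sequence -- the side channel."""
    return [s - overhead for s in sizes]

# Example response, tokenized roughly the way assistants stream text.
response_tokens = ["The", " answer", " is", " confidential", "."]
sizes = observed_packet_sizes(response_tokens)
print(recover_token_lengths(sizes))  # [3, 7, 3, 13, 1]
```

In the real attack, this recovered length sequence is then fed to a language model trained to guess which plausible sentence matches it, which is how the researchers reconstruct response text and topics at the accuracy rates reported above.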

By shedding light on these vulnerabilities, researchers aim to stimulate discourse on enhancing the security posture of AI assistants to preserve user confidentiality while leveraging the benefits of AI technology.


About Post Author

Chris Jones
