Hackers can read private AI-assistant chats even though they’re encrypted (arstechnica.com)
from pelespirit@sh.itjust.works to technology@lemmy.ml on 14 Mar 2024 17:48
https://sh.itjust.works/post/16219124

But now, researchers have devised an attack that deciphers AI assistant responses with surprising accuracy. The technique exploits a side channel present in all of the major AI assistants, with the exception of Google Gemini. It then refines the fairly raw results through large language models specially trained for the task. The result: Someone with a passive adversary-in-the-middle position—meaning an adversary who can monitor the data packets passing between an AI assistant and the user—can infer the specific topic of 55 percent of all captured responses, usually with high word accuracy. The attack can deduce responses with perfect word accuracy 29 percent of the time.
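The side channel in question is token length: the affected assistants stream replies one token at a time, each in its own encrypted packet, and because the ciphers in use add only a constant overhead, packet sizes map directly onto token sizes. Below is a minimal sketch of that first extraction stage, assuming a hypothetical fixed per-record framing overhead; real traffic would require packet capture and protocol-specific parsing, and the constant here is illustrative, not taken from the research.

```python
# Minimal sketch of the token-length side channel described above.
# Assumptions (not from the article): one token per encrypted record,
# and a fixed framing overhead per record. AEAD ciphers like AES-GCM
# add a constant number of bytes, so ciphertext size still tracks
# plaintext size exactly.

FRAMING_OVERHEAD = 21  # hypothetical constant bytes added per record

def infer_token_lengths(record_sizes: list[int]) -> list[int]:
    """Recover the plaintext token lengths from observed record sizes."""
    return [size - FRAMING_OVERHEAD for size in record_sizes]

# Example: sizes of five captured records from one streamed reply.
captured = [24, 26, 22, 28, 25]
print(infer_token_lengths(captured))  # -> [3, 5, 1, 7, 4]
```

The second stage of the attack then feeds this recovered length sequence to LLMs fine-tuned to reconstruct likely plaintext from it. The natural mitigation, which providers reportedly adopted after disclosure, is padding records to a fixed size or batching tokens so that packet sizes no longer map one-to-one onto token lengths.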

#technology


Capricorn@lemmy.today on 14 Mar 2024 19:05

Nice, this may be the path to killing AI made for money rather than for helping people

SnotFlickerman@lemmy.blahaj.zone on 14 Mar 2024 20:31

The endless evolutionary arms race between Control and Resistance.

elephantintheroom@lemmy.ml on 14 Mar 2024 22:31

Good thing I’m running my LLMs locally on a heavily encrypted PC with no network capabilities at all. Only way to not have my data siphoned, be it by hackers or big tech.

JackGreenEarth@lemm.ee on 14 Mar 2024 22:46

Does this affect locally run LLMs through summat like Jan? I already knew cloud-based LLMs weren't at all private, and thus don't use them for anything I'd mind the public knowing about.