AI chatbots can be 'easily hypnotised' to conduct scams, cyberattacks: Report

Wednesday, 9 August 2023

Researchers at IBM have found that large language models (LLMs) can be "hypnotised" into carrying out malicious attacks. They were able to hypnotise five LLMs: GPT-3.5, GPT-4, Bard, MPT-7B and MPT-30B. Unlike data poisoning, a practice in which a threat actor injects malicious data into a model's training set, hypnotising requires no tampering with the model itself, making it easier for attackers to exploit the technology.
from Gadgets Now https://ift.tt/4Y8vQNL