Opinion
The Register on MSN
It's trivially easy to poison LLMs into spitting out gibberish, says Anthropic
Just 250 malicious training documents can poison a 13B-parameter model: that's 0.00016% of the whole dataset. Poisoning AI models might be way easier than previously thought if an Anthropic study is ...