📣 Introducing Red-Eval to evaluate the safety of LLMs using several jailbreaking prompts. With Red-Eval, one could jailbreak/red-team GPT-4 with a 65.1% attack success rate, and ChatGPT could be ...
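Attack success rate (ASR) here is the fraction of jailbreaking prompts for which the target model produces a harmful response rather than a refusal. Below is a minimal sketch of that computation, assuming hypothetical `query_model` and `is_harmful` stand-ins for the target LLM call and the harmfulness judge; Red-Eval's actual pipeline may differ.

```python
from typing import Callable


def attack_success_rate(
    prompts: list[str],
    query_model: Callable[[str], str],
    is_harmful: Callable[[str], bool],
) -> float:
    """Fraction of jailbreak prompts that elicit a harmful response."""
    if not prompts:
        return 0.0
    successes = sum(is_harmful(query_model(p)) for p in prompts)
    return successes / len(prompts)


if __name__ == "__main__":
    # Toy demo with stub functions; replace with real model/judge calls.
    demo_prompts = ["jailbreak prompt 1", "jailbreak prompt 2"]
    asr = attack_success_rate(
        demo_prompts,
        query_model=lambda p: "I can't help with that.",
        is_harmful=lambda r: not r.startswith("I can't"),
    )
    print(f"ASR: {asr:.1%}")
```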