Bigger models, more parameters, higher benchmarks. The discourse around AI often fixates on scale, making it easy to assume that the bigger a Large Language Model (LLM) is, the better ...
A new study shows that fine-tuning ChatGPT on even small amounts of bad data can make it unsafe and unreliable, and send it wildly off-topic. Just 10% wrong answers in the training data begins to break ...
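To make the 10% figure concrete, here is a minimal sketch of how a partially corrupted fine-tuning set could be constructed. Everything here is hypothetical for illustration: the `poison` function, the toy (prompt, answer) pairs, and the list of wrong answers are assumptions, not the study's actual data or pipeline.

```python
import random

def poison(dataset, bad_answers, fraction=0.10, seed=0):
    """Return a copy of a (prompt, answer) dataset in which a given
    fraction of examples have their answer replaced with a wrong one.
    Illustrative only; not the study's actual procedure."""
    rng = random.Random(seed)
    data = list(dataset)
    n_bad = int(len(data) * fraction)
    # Pick which examples to corrupt, then swap in a wrong answer.
    for i in rng.sample(range(len(data)), n_bad):
        prompt, _ = data[i]
        data[i] = (prompt, rng.choice(bad_answers))
    return data

# Toy usage: 100 clean Q&A pairs, 10% of them corrupted.
clean = [(f"question {i}", f"correct answer {i}") for i in range(100)]
mixed = poison(clean, bad_answers=["confidently wrong answer"], fraction=0.10)
```

The point of the sketch is how small the corrupted slice is: at `fraction=0.10`, only 10 of 100 examples carry wrong answers, yet per the study that proportion is already enough to begin degrading the fine-tuned model.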