A Mass General Brigham-led study found that large language models (LLMs) often fail to challenge illogical medical prompts because of sycophantic behavior, posing a risk of spreading medical misinformation. The researchers ...