News
A Democratic senator is calling for Meta to roll back its access to AI chatbots for minors, and says the company ignored his ...
These days, it's not unusual to hear stories about people falling in love with artificial intelligence. People are not only using AI to solve equations or plan trips; they are also telling chatbots ...
After a California teenager spent months on ChatGPT discussing plans to end his life, OpenAI said it would introduce parental ...
The chatbot will avoid discussing illegal activities, hate speech, violence, self-harm, sexual content, personally identifiable information, illegal drug use, and political extremism.
We are deploying digital pseudo-therapists at an unprecedented scale, and those most at risk of negative outcomes are teens.
The company will limit its AI characters and train the chatbot not to discuss self-harm or suicide, or have romantic conversations with children.
AI chatbots can fuel emotional dependence and blur boundaries. Emerging research highlights significant mental health risks.
The parents of 16-year-old Adam Raine sued OpenAI and its CEO, alleging ChatGPT coached the boy in planning and taking his own life.
AI models can confidently generate information that looks plausible but is false, misleading or entirely fabricated. Here's ...
A new study from researchers at the University of Pennsylvania shows that AI models can be persuaded to break their own rules ...