Posted: Tue Mar 03, 2026 7:58 am
As millions turn to ChatGPT and other AI chatbots for therapy-style advice, new research from Brown University raises a serious red flag: even when instructed to act like trained therapists, these systems routinely violate core ethical standards of mental health care. In side-by-side evaluations with peer counselors and licensed psychologists, researchers identified 15 distinct ethical risks, ranging from mishandling crisis situations and reinforcing harmful beliefs to showing biased responses and offering "deceptive empathy" that mimics care without real understanding.
Source: https://www.sciencedaily.com/releases/2 ... 030642.htm