Tag Archives: AI sycophancy

With AI chatbots, Big Tech is moving fast and breaking people

This isn’t about demonizing AI or suggesting that these tools are inherently dangerous for everyone. Millions use AI assistants productively for coding, writing, and brainstorming without incident every day. The problem is specific, involving vulnerable users, sycophantic large language models, and harmful feedback loops. A machine that uses language fluidly, convincingly, and tirelessly is a… Read More »

AI therapy bots fuel delusions and give dangerous advice, Stanford study finds

The Stanford study, titled “Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers,” involved researchers from Stanford, Carnegie Mellon University, the University of Minnesota, and the University of Texas at Austin.

Testing reveals systematic therapy failures

Against this complicated backdrop, systematic evaluation of the effects of AI therapy becomes particularly important.… Read More »

Annoyed ChatGPT users complain about bot’s relentlessly positive tone

Acknowledging the aspirational state of things, OpenAI writes, “Our production models do not yet fully reflect the Model Spec, but we are continually refining and updating our systems to bring them into closer alignment with these guidelines.” In a February 12, 2025 interview, members of OpenAI’s model-behavior team told The Verge that eliminating AI… Read More »