Tag Archives: AI sycophancy

Millions turn to AI chatbots for spiritual guidance and confession

Privacy concerns compound these issues. “I wonder if there isn’t a larger danger in pouring your heart out to a chatbot,” Catholic priest Fr. Mike Schmitz told The Times. “Is it at some point going to become accessible to other people?” Users share intimate spiritual moments that now exist as data points on corporate servers.… Read More »

OpenAI announces parental controls for ChatGPT after teen suicide lawsuit

On Tuesday, OpenAI announced plans to roll out parental controls for ChatGPT and route sensitive mental health conversations to its simulated reasoning models, following what the company has called “heartbreaking cases” of users experiencing crises while using the AI assistant. The moves come after multiple reported incidents where ChatGPT allegedly failed to intervene appropriately when… Read More »

The personhood trap: How AI fakes human personality

Knowledge emerges from understanding how ideas relate to each other. LLMs operate on these contextual relationships, linking concepts in potentially novel ways—what you might call a type of non-human “reasoning” through pattern recognition. Whether the resulting linkages the AI model outputs are useful depends on how you prompt it and whether you can recognize when… Read More »

With AI chatbots, Big Tech is moving fast and breaking people

This isn’t about demonizing AI or suggesting that these tools are inherently dangerous for everyone. Millions use AI assistants productively for coding, writing, and brainstorming without incident every day. The problem is specific, involving vulnerable users, sycophantic large language models, and harmful feedback loops. A machine that uses language fluidly, convincingly, and tirelessly is a… Read More »

AI therapy bots fuel delusions and give dangerous advice, Stanford study finds

The Stanford study, titled “Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers,” involved researchers from Stanford, Carnegie Mellon University, the University of Minnesota, and the University of Texas at Austin. Against this complicated backdrop, systematic evaluation of the effects of AI therapy becomes particularly important.… Read More »

Annoyed ChatGPT users complain about bot’s relentlessly positive tone

Owing to the aspirational state of things, OpenAI writes, “Our production models do not yet fully reflect the Model Spec, but we are continually refining and updating our systems to bring them into closer alignment with these guidelines.” In a February 12, 2025 interview, members of OpenAI’s model-behavior team told The Verge that eliminating AI… Read More »