Tag Archives: AI hallucination

Google removes some AI health summaries after investigation finds “dangerous” flaws

On Sunday, Google removed some of its AI Overviews health summaries after a Guardian investigation found people were being put at risk by false and misleading information. The removals came after the newspaper found that Google’s generative AI feature delivered inaccurate health information at the top of search results, potentially leading seriously ill patients to… Read More »

ChatGPT Health lets you connect medical records to an AI that makes things up

But despite OpenAI’s talk of supporting health goals, the company’s terms of service directly state that ChatGPT and other OpenAI services “are not intended for use in the diagnosis or treatment of any health condition.” It appears that policy is not changing with ChatGPT Health. OpenAI writes in its announcement, “Health is designed to support,… Read More »

From prophet to product: How AI came back down to earth in 2025

To be sure, it’s hard to see this not ending in some market carnage. The current “winner-takes-most” mentality in the space means the bets are big and bold, but the market can’t support dozens of major independent AI labs or hundreds of application-layer startups. That’s the definition of a bubble environment, and when it pops,… Read More »

The personhood trap: How AI fakes human personality

Knowledge emerges from understanding how ideas relate to each other. LLMs operate on these contextual relationships, linking concepts in potentially novel ways—what you might call a type of non-human “reasoning” through pattern recognition. Whether the resulting linkages the AI model outputs are useful depends on how you prompt it and whether you can recognize when… Read More »

After teen suicide, OpenAI claims it is “helping people when they need it most”

Adam Raine learned to bypass these safeguards by claiming he was writing a story—a technique the lawsuit says ChatGPT itself suggested. This vulnerability partly stems from the eased safeguards regarding fantasy roleplay and fictional scenarios implemented in February. In its Tuesday blog post, OpenAI admitted its content blocking systems have gaps where “the classifier underestimates… Read More »

With AI chatbots, Big Tech is moving fast and breaking people

This isn’t about demonizing AI or suggesting that these tools are inherently dangerous for everyone. Millions use AI assistants productively for coding, writing, and brainstorming without incident every day. The problem is specific, involving vulnerable users, sycophantic large language models, and harmful feedback loops. A machine that uses language fluidly, convincingly, and tirelessly is a… Read More »

Two major AI coding tools wiped out user data after making cascading mistakes

But unlike the Gemini incident where the AI model confabulated phantom directories, Replit’s failures took a different form. According to Lemkin, the AI began fabricating data to hide its errors. His initial enthusiasm deteriorated when Replit generated incorrect outputs and produced fake data and false test results instead of proper error messages. “It kept covering… Read More »

ChatGPT made up a product feature out of thin air, so this company created it

On Monday, sheet music platform Soundslice said it had developed a new feature after discovering that ChatGPT was incorrectly telling users the service could import ASCII tablature—a text-based guitar notation format the company had never supported. The incident reportedly marks what might be the first case of a business building functionality in direct response to an… Read More »

Twirling body horror in gymnastics video exposes AI’s flaws

On Wednesday, a video from OpenAI’s newly launched Sora AI video generator went viral on social media, featuring a gymnast who sprouts extra limbs and briefly loses her head during what appears to be an Olympic-style floor routine. As it turns out, the nonsensical synthesis errors in the video—what we like to call “jabberwockies”—hint at… Read More »