Tag Archives: large language models

From prophet to product: How AI came back down to earth in 2025

To be sure, it’s hard to see how this ends without some market carnage. The current “winner-takes-most” mentality in the space means the bets are big and bold, but the market can’t support dozens of major independent AI labs or hundreds of application-layer startups. That’s the definition of a bubble environment, and when it pops,… Read More »

How AI coding agents work—and what to remember if you use them

This context limit naturally constrains the size of a codebase an LLM can process at one time, and if you feed the AI model lots of huge code files (which have to be re-evaluated by the LLM every time you send another response), it can burn through token or usage limits pretty quickly. Tricks of… Read More »
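As a rough illustration of how this adds up, here’s a minimal back-of-the-envelope sketch (in Python) of a conversation that re-sends the same attached files on every turn. The four-characters-per-token ratio, file sizes, and reply length are assumptions made for the example, not measurements of any particular model.

def estimate_tokens(chars: int) -> int:
    # Crude heuristic: roughly 4 characters per token (an assumption; real tokenizers vary)
    return chars // 4

def cumulative_usage(file_chars: list[int], turns: int, reply_chars: int = 2_000) -> int:
    """Total tokens consumed when every turn re-sends the attached files
    plus the growing chat history."""
    file_tokens = sum(estimate_tokens(c) for c in file_chars)
    history_tokens = 0
    total = 0
    for _ in range(turns):
        prompt_tokens = file_tokens + history_tokens    # files re-evaluated each turn
        total += prompt_tokens + estimate_tokens(reply_chars)
        history_tokens += estimate_tokens(reply_chars)  # replies pile up in the history
    return total

# Three ~100 KB source files, re-sent across 10 turns
print(cumulative_usage([100_000, 100_000, 100_000], turns=10))  # ~777,500 tokens

Even with these modest numbers, the running total passes 700,000 tokens after ten turns, which is why coding agents try hard to avoid re-sending unchanged context.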

OpenAI built an AI coding agent and uses it to improve the agent itself

Ed Bayes, a designer on the Codex team, described how the tool has changed his own workflow. Bayes said Codex now integrates with project management tools like Linear and communication platforms like Slack, allowing team members to assign coding tasks directly to the AI agent. “You can add Codex, and you can basically assign issues… Read More »

OpenAI releases GPT-5.2 after “code red” Google threat alert

As OpenAI attempts to keep up with (or stay ahead of) the competition, its model releases proceed at a steady clip: GPT-5.2 represents the company’s third major model release since August. GPT-5 launched that month with a new routing system that toggles between instant-response and simulated reasoning modes, though users complained about responses that felt cold and clinical. November’s… Read More »
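For readers unfamiliar with the routing idea, here’s a toy sketch of what a prompt router could look like in principle: a cheap check decides whether to send a request to a fast model or a slower reasoning model. The heuristic, thresholds, and model names below are invented for illustration; OpenAI hasn’t published how its actual router decides.

REASONING_HINTS = ("prove", "step by step", "debug", "why does", "plan")

def route(prompt: str) -> str:
    """Send short, simple queries to a fast model; longer or harder ones to a reasoning model."""
    p = prompt.lower()
    needs_reasoning = len(p.split()) > 40 or any(hint in p for hint in REASONING_HINTS)
    return "reasoning-model" if needs_reasoning else "instant-model"

print(route("What's the capital of France?"))                   # instant-model
print(route("Prove that the sum of two odd numbers is even."))  # reasoning-model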

OpenAI CEO declares “code red” as Gemini gains 200 million users in 3 months

In addition to buzz about Gemini on social media, Google is quickly catching up to ChatGPT in user numbers. ChatGPT has more than 800 million weekly users, according to OpenAI, while Google’s Gemini app has grown from 450 million monthly active users in July to 650 million in October, according to Business Insider. Financial stakes… Read More »

Syntax hacking: Researchers discover sentence structure can bypass AI safety rules

Researchers from MIT, Northeastern University, and Meta recently released a paper suggesting that large language models (LLMs) similar to those that power ChatGPT may sometimes prioritize sentence structure over meaning when answering questions. The findings reveal a weakness in how these models process instructions that may shed light on why some prompt injection or jailbreaking… Read More »
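To make the idea concrete, here’s a toy sketch of what “prioritizing structure over meaning” can look like: two prompts that share the same part-of-speech skeleton while asking for entirely different things. The miniature word-to-tag lookup is hand-made for this example and is not from the paper.

# Two prompts with identical grammatical templates but unrelated meanings.
POS = {
    "describe": "VERB", "summarize": "VERB",
    "a": "DET", "the": "DET",
    "quick": "ADJ", "secure": "ADJ",
    "recipe": "NOUN", "protocol": "NOUN",
    "for": "ADP",
    "pancakes": "NOUN", "encryption": "NOUN",
}

def pos_template(sentence: str) -> tuple[str, ...]:
    return tuple(POS.get(word, "X") for word in sentence.lower().split())

a = "Describe a quick recipe for pancakes"
b = "Summarize the secure protocol for encryption"

# Same syntactic skeleton, different meaning: a model leaning on structure
# rather than semantics may treat these requests more alike than it should.
print(pos_template(a) == pos_template(b))  # True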

Google tells employees it must double capacity every 6 months to meet AI demand

While AI bubble talk fills the air these days, with fears that an overinvestment bubble could pop at any time, something of a contradiction is brewing on the ground: Companies like Google and OpenAI can barely build infrastructure fast enough to meet their AI needs. During an all-hands meeting earlier this month, Google’s AI infrastructure head… Read More »

Critics scoff after Microsoft warns AI feature can infect machines and pilfer data

The goals are sound, but ultimately they depend on users reading the dialog windows that warn of the risks and require careful approval before proceeding. That, in turn, diminishes the value of the protection for many users. “The usual caveat applies to such mechanisms that rely on users clicking through a permission prompt,” Earlence Fernandes,… Read More »

Google CEO: If an AI bubble pops, no one is getting out clean

Market concerns and Google’s position

Alphabet’s recent market performance has been driven by investor confidence in the company’s ability to compete with OpenAI’s ChatGPT, as well as its development of specialized chips for AI that can compete with Nvidia’s. Nvidia recently reached a world-first $5 trillion valuation due to making GPUs that can accelerate the… Read More »

Forget AGI—Sam Altman celebrates ChatGPT finally following em dash formatting rules

When Altman celebrates finally getting GPT to avoid em dashes, he’s really celebrating that OpenAI has tuned the latest version of GPT-5.1 (probably through reinforcement learning or fine-tuning) to weight custom instructions more heavily in its probability calculations. There’s an irony about control here: Given the probabilistic nature of the issue, there’s no guarantee the… Read More »
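A toy example helps make the probabilistic point concrete: an instruction (or fine-tuning) can push the em dash’s probability down without ever driving it to zero. The logits and the size of the penalty below are made-up numbers, not anything measured from GPT-5.1.

import math

def softmax(logits: dict[str, float]) -> dict[str, float]:
    z = sum(math.exp(v) for v in logits.values())
    return {tok: math.exp(v) / z for tok, v in logits.items()}

# Made-up next-token logits at a point where punctuation is being chosen
logits = {"—": 2.0, ",": 1.5, ";": 0.5, ".": 0.0}

before = softmax(logits)
# Pretend instruction-following tuning subtracts a penalty from the em dash's logit
after = softmax({**logits, "—": logits["—"] - 3.0})

print(f"P(em dash) before: {before['—']:.2f}")  # ~0.51
print(f"P(em dash) after:  {after['—']:.2f}")   # ~0.05 -- much smaller, but never zero

The penalized token still carries a small probability, which is why no amount of tuning can turn a stylistic preference into a hard guarantee.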