SAN FRANCISCO—On Tuesday, dozens of speakers gathered in San Francisco for the first TED conference devoted solely to the subject of artificial intelligence, TED AI. Many speakers think that human-level AI—often called AGI, for artificial general intelligence—is coming very soon, although there was no solid consensus about whether it will be beneficial or dangerous to humanity. But that debate was just Act One of a very long series of 30-plus talks that organizer Chris Anderson called possibly “the most TED content in a single day” presented in TED’s nearly 40-year history.
Hosted by Anderson and entrepreneur Sam De Brouwer, the first day of TED AI 2023 featured a marathon of speakers split into four blocks by general subject: Intelligence & Scale, Synthetics & Realities, Autonomy & Dependence, and Art & Storytelling. (Wednesday featured panels and workshops.) Overall, the conference gave a competent overview of current popular thinking related to AI that very much mirrored Ars Technica’s reporting on the subject over the past 10 months.
Indeed, some of the TED AI speakers covered subjects we’ve previously reported on as they happened, including Stanford PhD student Joon Sung Park’s Smallville simulation and Yohei Nakajima’s BabyAGI, both in April of this year. Controversy and angst over impending AGI or AI superintelligence were also strongly represented in the first block of talks, with optimists like veteran AI computer scientist Andrew Ng painting AI as “the new electricity” and nothing to fear, contrasted with a far more cautious take from leather-bejacketed AI researcher Max Tegmark, who said, “I never thought governments would let AI get this far without regulation.”
The elephant in the room was OpenAI, which loomed over the event in an oddly indirect way. There was consensus among most speakers that ChatGPT, released a mere 10 months ago, was the reason they were all there. The speed at which a general-purpose chatbot had been achieved caught everyone, including longtime AI researchers, off guard. The only representative from OpenAI to speak was Chief Scientist Ilya Sutskever, perhaps one of the most influential minds in AI research today, who concluded session one’s talks with his trademark intensity, staring down the entire audience while pausing on stage for seconds that felt like minutes before he began talking. He earned the rapt attention of the entire 108-year-old Herbst Theatre auditorium, which is often used for opera performances. In a structure that’s positively ancient by post-1906-quake San Francisco standards, the history of the future was seemingly being written.
As a counterpoint to OpenAI’s relatively closed methods of late, which we have covered in the past, several speakers made prominent arguments for truly open AI models. During the first block of talks, Percy Liang, the director of the Stanford Center for Research on Foundation Models, made a passionate case for transparency in AI. Later in the day, open source advocate Heather Meeker sent a zinger toward OpenAI by saying, “Let’s talk about open source AI. Not OpenAI—that’s just a company name.” She addressed the need for new terminology related to open-weights AI models, since the term “open source” isn’t quite accurate—something we covered in an update to our launch coverage of Meta’s Llama 2 language model.