The OpenAI CEO is on a world tour to talk up the benefits of AI and the need for regulation — but not too much. Some, though, think Altman’s vision is dangerous.
The queue to see OpenAI CEO Sam Altman speak at University College London on Wednesday stretched hundreds deep into the street. Those waiting gossiped in the sunshine about the company and their experience using ChatGPT, while a handful of protesters delivered a stark warning in front of the entrance doors: OpenAI and companies like it need to stop developing advanced AI systems before they have the chance to harm humanity.
“Look, maybe he’s selling a grift. I sure as hell hope he is,” one of the protesters, Gideon Futerman, a student at Oxford University studying solar geoengineering and existential risk, said of Altman. “But in that case, he’s hyping up systems with enough known harms. We probably should be putting a stop to them anyway. And if he’s right and he’s building systems which are generally intelligent, then the dangers are far, far, far bigger.”
When Altman took to the stage inside, though, he received an effusive welcome. The OpenAI CEO is currently on something of a world tour following his (equally affable) US Senate hearing last week. So far, he’s met with French President Emmanuel Macron, Polish Prime Minister Mateusz Morawiecki, and Spanish Prime Minister Pedro Sánchez. The purpose seems twofold: calm fears after the explosion of interest in AI caused by ChatGPT and get ahead of conversations about AI regulation.
In London, Altman repeated familiar talking points, noting that people are right to be worried about the effects of AI but that its potential benefits, in his opinion, are much greater. Again, he welcomed the prospect of regulation — but only the right kind. He said he wanted to see “something between the traditional European approach and the traditional US approach.” That is, a bit of regulation but not too much. He stressed that too many rules could harm smaller companies and the open source movement.
“On the other hand,” he said, “I think most people would agree that if someone does crack the code and build a superintelligence — however you want to define that — [then] some global rules on that are appropriate … I’d like to make sure we treat this at least as seriously as we treat, say, nuclear material; for the megascale systems that could give birth to superintelligence.”
According to OpenAI’s critics, this talk of regulating superintelligence, otherwise known as artificial general intelligence, or AGI, is a rhetorical feint — a way for Altman to pull attention away from the current harms of AI systems and keep lawmakers and the public distracted with sci-fi scenarios.
People like Altman “position accountability right out into the future,” Sarah Myers West, managing director of the AI Now Institute, told The Verge last week. Instead, says West, we should be talking about current known threats created by AI systems — from faulty predictive policing to racially biased facial recognition to the spread of misinformation.
Altman did not dwell much on current harms but did address the topic of misinformation at one point during the talk, saying he was particularly worried about the “interactive, personalized, persuasive ability” of AI systems when it comes to spreading misinformation. His interviewer, author Azeem Azhar, suggested one such scenario might involve an AI system calling someone using an artificial voice and persuading the recipient to some unknown end. Said Altman: “That’s what I think would be a challenge, and there’s a lot to do there.”
However, he said, he was hopeful about the future. Extremely hopeful. Altman says he believes even current AI tools will reduce inequality in the world and that there will be “way more jobs on the other side of this technological revolution.”
“My basic model of the world is that the cost of intelligence and the cost of energy are the two limited inputs, sort of the two limiting reagents of the world. And if you can make those dramatically cheaper, dramatically more accessible, that does more to help poor people than rich people, frankly,” he said. “This technology will lift all of the world up.”
He was also optimistic about the capacity of scientists to keep increasingly powerful AI systems under control through “alignment.” (Alignment is a broad area of AI research that can be described simply as “make software do what we want and not what we don’t.”)
“We have a lot of ideas that we’ve published about how we think alignment of superintelligent systems works, but I believe that is a technically solvable problem,” said Altman. “And I feel more confident in that answer now than I did a few years ago. There are paths that I think would be not very good, and I hope we avoid those. But honestly, I’m pretty happy about the trajectory things are currently on.”
Outside the talk, though, protesters were not convinced. One, Alistair Stewart, a master’s student at UCL studying political science and ethics, told The Verge he wanted to see “some kind of pause or moratorium on advanced systems” — the same approach advocated in a recent open letter signed by AI researchers and prominent tech figures like Elon Musk. Stewart said he didn’t necessarily think Altman’s vision of a prosperous AI-powered future was wrong but that there was “too much uncertainty” to leave things to chance.
Can Altman persuade this faction? Stewart says the OpenAI CEO came out to talk to the protesters after his time onstage but wasn’t able to change Stewart’s mind. He says they chatted for a minute or so about OpenAI’s approach to safety, which involves developing the capabilities of AI systems and their guardrails simultaneously.
“I left that conversation slightly more worried than I was before,” said Stewart. “I don’t know what information he has that makes him think that will work.”