In a recent interview, Altman discussed the hype surrounding the as-yet-unannounced GPT-4 but declined to confirm whether the model will even be released this year.
OpenAI CEO Sam Altman has addressed rumors regarding GPT-4 — the company’s as-yet-unreleased language model and the latest in the GPT series that forms the foundation of the AI chatbot ChatGPT — saying that “people are begging to be disappointed and they will be.”
During an interview with StrictlyVC, Altman was asked if GPT-4 will come out in the first quarter or half of the year, as many expect. He declined to offer a firm timeframe. “It’ll come out at some point, when we are confident we can do it safely and responsibly,” he said.
GPT-3 came out in 2020, and an improved version, GPT-3.5, was used to create ChatGPT. The launch of GPT-4 is much anticipated, with more excitable members of the AI community and Silicon Valley already declaring it to be a huge leap forward. Making wild predictions about the capabilities of GPT-4 has become something of a meme in these circles, particularly when it comes to guessing the model’s number of parameters (a metric that corresponds to an AI system’s complexity and, roughly, its capability — but not in a linear fashion).
When asked about one viral (and factually incorrect) chart that purportedly compares the number of parameters in GPT-3 (175 billion) to GPT-4 (100 trillion), Altman called it “complete bullshit.”
“The GPT-4 rumor mill is a ridiculous thing. I don’t know where it all comes from,” said the OpenAI CEO. “People are begging to be disappointed and they will be. The hype is just like… We don’t have an actual AGI and that’s sort of what’s expected of us.”
(AGI here refers to “artificial general intelligence” — shorthand for an AI system with at least human-equivalent capabilities across many domains.)
In the interview, Altman addressed a number of topics, including when OpenAI will build an AI model capable of generating video. (Meta and Google have already demoed research in this area.) “It will come, I wouldn’t want to make a confident prediction about when,” said Altman on generative video AI. “We’ll try to do it, other people will try to do it … It’s a legitimate research project. It could be pretty soon, it could take a while.”
The full interview can be watched in two parts, here and here (with the second part focusing more on OpenAI the company and AI more generally), but we’ve picked out some of Altman’s most notable statements below:
- On the money OpenAI is currently making: “Not much. We’re very early.”
- On the need for AI with different viewpoints: “The world can say, ‘Okay here are the rules, here are the very broad absolute rules of a system.’ But within that, people should be allowed very different things that they want their AI to do. If you want the super, never-offend, safe-for-work model, you should get that, and if you want an edgier one that is creative and exploratory but says some stuff you might not be comfortable with, or some people might not be comfortable with, you should get that. And I think there will be many systems in the world that will have different settings of the values they enforce. And really what I think — but this will take longer — is that you as a user should be able to write up a few pages of ‘here’s what I want; here are my values; here’s how I want the AI to behave’ and it reads it and thinks about it and acts exactly how you want because it should be your AI.”
(This point is notable given ongoing conversations about AI and bias. Systems like ChatGPT tend to regurgitate many social biases, like sexism and racism, which they internalize from their training data. Companies like OpenAI try to mitigate these biases by stopping the systems from repeating such ideas. However, some conservative writers have accused ChatGPT of being “woke” because of its answers to certain political and cultural questions.)
- On AI changing education and the threat of AI plagiarism: “We’re going to try and do some things in the short term. There may be ways we can help teachers be a little bit more likely to detect output of a GPT-like system, but a determined person will get around them, and I don’t think it’ll be something society can or should rely on long term. We’re just in a new world now. Generated text is something we all need to adapt to, and that’s fine. We adapted to calculators and changed what we tested in maths class, I imagine. This is a more extreme version of that, no doubt. But also the benefits of it are more extreme as well.”
- On his own use of ChatGPT: “I would much rather have ChatGPT teach me something than go read a textbook.”
- On how far we are from developing AGI: “The closer we get, the harder time I have answering. Because I think it’s going to be much blurrier and much more of a gradual transition than people think.”
- On predictions that ChatGPT will kill Google: “I think whenever someone talks about a technology being the end of some other giant company, it’s usually wrong. I think people forget they get to make a countermove here, and they’re like pretty smart, pretty competent. I do think there’s a change for search that will probably come at some point but not as dramatically as people think in the short term.”