Davos: Amid concerns raised in various quarters about the risks posed by AI, ChatGPT creator OpenAI's CEO Sam Altman on Thursday said artificial intelligence won't replace humans' care for one another, just as computers didn't kill the game of chess.
Speaking at a session on 'Technology in a Turbulent World' at the World Economic Forum Annual Meeting 2024 here, he said that even with its very limited current capability and very deep flaws, people are finding ways to use the tool for great productivity gains and other benefits, while understanding its limitations.
"People understand tools and the limitations of tools more than we often give them credit for. People have found ways to make ChatGPT super useful to them and understand what not to use it for, for the most part," he said.
Altman said AI has been somewhat demystified because people really use it now, and that is always the best way to pull the world forward with new technology.
The OpenAI CEO said AI will be able to explain its reasoning to us.
"I can't look in your brain to understand why you're thinking.. what you're thinking. But I can ask you to explain your reasoning and decide if that sounds reasonable to me or not."
"I think, our AI systems will also be able to do the same thing. They'll be able to explain to us in natural language the steps from A to B, and we can decide whether we think those are good steps, even if we're not looking into it to see each connection," he explained.
Altman said that when IBM's chess computer Deep Blue beat world champion Garry Kasparov in 1997, commentators said it would be the end of chess, and that no one would bother to watch or play chess again.
But chess has never been more popular than it is now, and almost no one watches two AIs play each other; people remain very interested in what humans do, he said.
"Humans know what other humans want. Humans are going to have better tools. We've had better tools before, but we're still very focused on each other," Altman said.
He said humans will deal more with ideas, and AI will change certain roles by giving people the space to generate ideas and curate decisions.
He also welcomed the scrutiny of AI technology.
"I think it's good that we and others are being held to a high standard. We can draw on lessons from the past about how technology has been made to be safe and how different stakeholders have handled negotiations about what safe means," he added.
Altman said it was the tech industry's responsibility to get input from society on decisions such as what the values and safety thresholds should be, so that the benefits outweigh the risks.
"I have a lot of empathy for the general nervousness and discomfort of the world towards companies like us...We have our own nervousness, but we believe that we can manage through it, and the only way to do that is to put the technology in the hands of people."
"Let society and the technology co-evolve and sort of step by step with a very tight feedback loop and course correction, build these systems that deliver tremendous value while meeting safety requirements," he said.
Altman predicted that new economic models for content would develop.