Sam Altman speaking at MIT (photo credit: John Werner)
At Cambridge, Sam Altman answered questions from Sally Kornbluth, MIT's president and a cell biologist.
Her first question was: “What is your p(doom)?” In other words, what are the odds that AI will eliminate all human life?
Responding, Altman criticized the construct of having people rate their doomsday predictions on a scale from one to 100.
“Whether you think it’s 2, or 10, or 90, it’s not zero,” he said, also pointing out that assigning a p(doom) assumes a static system.
“Society always holds space for doomsayers,” he said, expressing his approval for that tolerance, but also suggesting that instead of rating our potential for doom, we should be asking key questions that might help us to avoid an AI disaster.
“What needs to happen to navigate safely?” he said. “Take it in, confront it, take it seriously.”
As for ChatGPT, he said, the model will be better in time.
“We have a ton of work in front of us,” he said.
In contrast to his earlier notion of a sort of mythical, superintelligent AI creature that “rains money” on people, he described the current technology revolution as a fundamental trend.
“We have a new tool in the tech tree of humanity, and people are using it to create many things,” he said. “I think it will continue to get way more capable, and it’s just going to integrate into society in an important and transformative way.”
Comparing AI to human cognition, he noted that the combination of the two will ultimately be capable of a lot.
“If we make something that is as smart as all of the super-smart students here, that’s a great accomplishment in some sense,” he said. “There are already a lot of smart people in the world.”
As for predictions about the longer-term effects, as AI helps us reinvent our environment, Altman was hopeful.
Sam Altman (photo credit: John Werner)
“Quality of life goes up,” he said, “the economy cycles a little bit faster.”
In answer to another question from Kornbluth, Altman also addressed bias, saying we’ve made surprisingly good progress in dealing with its influence.
“For as much as people like to talk about this and say ‘oh, we can’t use these things because they’re just spewing toxic waste all the time,’” he said, “GPT works well (in some ways) …and… who decides what bias means? How do we decide what the system is supposed to do?”
Then he mentioned an important trade-off: where do you put the hard limits for AI use?
“I think it’s important to give people a lot of control,” he said. “That said, there are some things that a system just shouldn’t do.”
Kornbluth asked him how to navigate between privacy and the need for shared data.
In response, Altman predicted that the future involves personalized AI with “a full recording of your life.”
“You can imagine that will be a super helpful thing to have,” he said. “You can also imagine the privacy concerns that it would present. … If we stick on that, how are we going to navigate the privacy versus utility versus safety trade-off, or security trade-offs that come with that?”
MIT President Sally Kornbluth interviewing OpenAI CEO Sam Altman (photo credit: John Werner)
He mused that AI might, for example, testify against you or be subpoenaed by a court.
“That will be a new thing for society to navigate,” he said. “We already have (some of this privacy issue) with services we all use: AI makes it higher stakes, higher trade-offs.”
That’s the first part of Altman’s talk. We’ll cover the rest in future articles.
{URL}https://www.forbes.com/sites/johnwerner/2024/05/03/sam-altman-answers-the-burning-questions-about-ai-bias-privacy-etc/{/URL}
{Author}John Werner, Contributor{/Author}