“Eventually you say to the computer, ‘learn everything and do everything,’” Schmidt said Sunday in an interview with ABC News. “And that’s a dangerous point. When the system can self-improve, we need to seriously think about unplugging it.”

Schmidt predicted that AI will advance from task-specific agents like Microsoft Copilot to more complex systems that can make decisions on their own. When AI reaches that stage, it will be time for humans to step in and consider turning off the system, Schmidt said. Humans need to ensure that the AI itself can’t counteract efforts to shut it down.

“In theory, we better have somebody with the hand on the plug, metaphorically,” Schmidt said.

Schmidt worked in Silicon Valley for decades. In 2001, Google founders Sergey Brin and Larry Page brought him in to help scale their business. At the time, Brin and Page felt they needed a steady hand with high-profile executive experience to shepherd their company while they dedicated themselves to research and development. Over his career, Schmidt had a front-row seat to the tech industry’s many waves of innovation.

Schmidt’s fears about the worst-case scenario are shared by other AI luminaries. Nobel laureate Geoffrey Hinton, whose discoveries earned him the moniker the Godfather of AI, said in 2023 he “couldn’t see a path that guaranteed safety” because AI could think for itself. OpenAI CEO Sam Altman said the worst-case scenario of artificial general intelligence is “lights out for all of us.” Elon Musk, an OpenAI cofounder turned rival, warned AI could lead to the destruction of civilization.

AI is “most likely going to be great,” Musk said in October. “There’s this sub chance, that could be 10% to 20%, that it goes bad. The chances aren’t zero that it goes bad.”

Schmidt, too, highlighted the positives of AI.
“The important thing is that the power of this intelligence means that each and every person is going to have the equivalent of a polymath in their pocket,” Schmidt told ABC’s George Stephanopoulos. “In addition to your notes and writers, you’re going to have an Einstein and a Leonardo da Vinci to give you advice on your show. That will be true for everyone on the planet.”

To ensure mankind reaps those benefits without incurring significant damages, Schmidt believes governments will have to start regulating AI. He referenced conversations on the topic he’d had with the late Henry Kissinger, with whom he cowrote an upcoming book, Genesis: Artificial Intelligence, Hope, and the Human Spirit.

“Government has a role,” Schmidt said. “Dr. Kissinger felt very strongly that the future of intelligence should not be left to people like me. The technologists should not be the only ones making these decisions.”

So far, the U.S. has not implemented any AI regulation at the federal level. However, California put forth a series of AI bills to protect the film industry and cracked down on deepfakes made using the technology. But California Gov. Gavin Newsom vetoed a comprehensive bill known as SB 1047, which venture capitalists and tech executives heavily opposed due to the stringent reporting requirements it would have imposed on developers.