OpenAI is racing to bring down the prohibitively high cost of its latest breakthrough, the text-to-video tool Sora, but has acknowledged that the tool’s potential to spread misinformation during a crucial election year will shape its commercial launch plans.
OpenAI’s Chief Technology Officer, Mira Murati, told the Wall Street Journal that much work remains to streamline the powerful tool so it matches the size and complexity of the company’s image generator, DALL-E. The effort aims to reduce compute requirements and, with them, the price tag.
“We’re trying to make it available at a similar cost eventually to what we saw with DALL-E,” Murati said, adding that Sora is still “much, much more expensive” than DALL-E.
Only a year ago, videos generated from written prompts were amateurish and immediately recognizable as synthetic. But Sora creates 60-second clips so realistic that the untrained eye can barely tell they are fake.
While humans have become better at spotting the telltale signs that an image is not authentic, we have little experience with advanced AI-generated video, which makes it perfect fodder for malicious actors looking to influence public opinion.
Asked whether she therefore felt comfortable releasing Sora before November, when the entire House, a third of the Senate, and the White House are up for grabs, Murati said the possibility that the company’s proprietary technology could be abused was a concern.
“That’s certainly a consideration dealing with the issues of misinformation and harmful bias, and we will not be releasing anything that we don’t feel confident on when it comes to how it might affect global elections,” said Murati, who briefly stepped in as interim CEO during November’s leadership crisis.
‘We need to figure out these issues’
Take, for example, Mark Kaye, a white conservative radio show host in Florida.
BBC Panorama reported this month on how he created AI-generated images of Donald Trump surrounded by a crowd of adoring Black voters and shared them with his more than one million Facebook followers as evidence that the former president was popular with all demographics.
It’s not just Americans going to the polls, however.
The United Kingdom is likely to hold a general election, perhaps as early as May, while European Union citizens will cast their ballots for the next EU Parliament in June.
That is also likely when Indians will have their shot at saying whether Narendra Modi’s Hindu nationalist party continues to lead the world’s largest democracy.
All told, more than half of Earth’s population is set to cast a ballot in 2024, in what Time magazine has called a “make-or-break year for democracy”.
Thanks to AI, however, voters will be more challenged than ever to decide whether the content they see and hear is real or not.
“We need to figure out these issues before we can confidently deploy them broadly,” Murati said.
Does OpenAI use copyrighted content to train its models?
That’s because OpenAI, with the help of large shareholder Microsoft, has poured its effort into making the most convincing generative AI tools around, even if Sora’s videos don’t yet come with corresponding audio.
“Our goal was to really focus on developing the best capability,” she said, explaining why Sora is too expensive to market to consumers yet. “Now we will start looking into optimizing the technology so people can use it at low cost and make it easy to use.”
When it came to the origin of the data sets used to train Sora, however, Murati could say little beyond claiming they were all publicly available or licensed.
“I’m actually not sure about that,” she said, when asked whether they utilized YouTube videos in the process.
Authors such as George R.R. Martin, as well as media outlets including the New York Times, have recently sued OpenAI for copyright infringement, arguing their content was used without consent or compensation to develop commercial products like ChatGPT. (OpenAI has also been sued by Elon Musk over a separate issue.)
When pressed further as to the source of Sora’s training data, Murati declined to comment.
“I’m just not going to go into the details of the data that was used,” she awkwardly answered, “but it was publicly available or licensed data.”