Stanford Professor Accused of Using AI to Write Expert Testimony Criticizing Deepfakes

In what appears to be an embarrassing and ironic gaffe, a top Stanford University professor has been accused of spreading AI-generated misinformation while serving as an expert witness in support of a law designed to keep AI-generated misinformation out of elections.
Jeff Hancock, the founding director of Stanford’s Social Media Lab, submitted his expert opinion earlier this month in Kohls v. Ellison, a lawsuit filed by a YouTuber and Minnesota state representative who claim the state’s new law criminalizing the use of deepfakes to influence elections violates their First Amendment right to free speech.
His opinion included a reference to a study that purportedly found “even when individuals are informed about the existence of deepfakes, they may still struggle to distinguish between real and manipulated content.” But according to the plaintiffs’ attorneys, the study Hancock cited—titled “The Influence of Deepfake Videos on Political Attitudes and Behavior” and published in the Journal of Information Technology & Politics—does not actually exist.

“The citation bears the hallmarks of being an artificial intelligence (AI) ‘hallucination,’ suggesting that at least the citation was generated by a large language model like ChatGPT,” the plaintiffs wrote in a motion seeking to exclude Hancock’s expert opinion. “Plaintiffs do not know how this hallucination wound up in Hancock’s declaration, but it calls the entire document into question, especially when much of the commentary contains no methodology or analytic logic whatsoever.”
The accusations about Hancock’s use of AI were first reported by the Minnesota Reformer. Hancock did not immediately respond to Gizmodo’s request for comment.

Minnesota is one of 20 states to have passed laws regulating the use of deepfakes in political campaigns. Its law prohibits knowingly disseminating a deepfake, or doing so with reckless disregard, in the 90 days before an election if the material was made without the consent of the person depicted and is intended to influence the results of the election.
The lawsuit challenging the law was filed by a conservative law firm on behalf of Minnesota state Representative Mary Franson and Christopher Kohls, a YouTuber who goes by the handle Mr Reagan.

A separate lawsuit filed by Kohls challenging California’s election deepfake law led a federal judge to issue a preliminary injunction last month blocking that law from taking effect.

{URL}https://gizmodo.com/stanford-professor-accused-of-using-ai-to-write-expert-testimony-criticizing-deepfakes-2000527975{/URL}
{Author}Todd Feathers{/Author}
