The Los Angeles Times newspaper headquarters in El Segundo, California, on January 18, 2024. (Photo by Patrick T. Fallon/AFP via Getty Images)
Life is always full of surprises. When you think you've seen it all, there's always some interesting twist. I recently learned, for instance, that Los Angeles Times owner Patrick Soon-Shiong plans to introduce an AI-powered bias meter for news. It's not entirely clear how it would work, but it is supposed to analyze a story to determine whether it is skewed in favor of a specific point of view, and then provide, on demand, a different version of that story from another perspective.
The rationale behind the feature seems to be the idea that news and opinion are not separated clearly enough nowadays, and that readers are losing trust in the media. There's certainly some truth in that, but I can see why journalists feel outraged and think that "the newspaper's owner has publicly suggested his staff harbors bias, without offering evidence or examples".
However, what interests me here is not so much the debate itself as the proposed cure for the supposed problem. Using some kind of algorithm to make up for a perceived lack of credibility is precisely the idea a tech (biotech, in this case) entrepreneur would have – and in fact the tool will rely on the same technology developed at his other companies.
On a deeper level, it seems to fit into the "solutionist" line of thinking made famous by Evgeny Morozov a few years ago – the idea that there’s a technological solution to any kind of problem. Even typically human issues, such as the lack of trust in a relationship (the one between readers and writers, in this case), could be solved frictionlessly with the right app or code.
This is problematic on many levels. First, one could argue that the mutual distrust we are currently witnessing – in society in general, and between mainstream media and a significant part of the audience in particular – is actually fueled and amplified by technology.
While media mistrust has been building for decades, the dominant role played by social media platforms over the last decade has not only eroded publishers' business models but also allowed anyone to create their own "news bubble", made up only of people and content they agree with, effectively confirming and even amplifying their biases.
The digital landscape itself, with its abundance of fake news, misinformation, and AI-generated content, has made it more challenging for individuals to discern credible information from falsehoods, only amplifying the trust crisis. The introduction of AI tools in newsrooms is not necessarily going to improve the situation.
A recent survey of UK adults by international research group YouGov found that only 6% of Britons think the benefits of using AI in journalism will outweigh the drawbacks, and 60% of respondents said they would not trust an article created with AI support. To be clear, we are not talking about stories that are entirely AI-generated here, but about situations in which the AI plays the role of the journalist, collaborating with a human editor, or the other way around.
More generally, in the U.S., approximately two-thirds of adults express low confidence in the ability of AI tools, including chatbots and search engines, to provide reliable and factual information. So it's unclear how introducing an AI tool would improve trust levels if the technology itself is not trusted by the audience in the first place.
There's another, more serious issue to consider. To be credible and reliable, an AI bias meter should be able to explain how it reached the conclusion that an article is "biased" and – it should go without saying – be free of biases itself. But we know that one of the big problems with using AI tools for decision-making is that it is sometimes hard, even for those who programmed them, to tell how they came to a certain conclusion.
There are so many parameters at play, and the way these AI tools create connections among them is so fast and, to an extent, so unpredictable, that they effectively behave as "black boxes"; so much so that there's an entire research field, called explainable AI, trying to mitigate the issue.
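To make the explainability point concrete, here is a minimal sketch – not the LA Times system, whose workings have not been disclosed – of a toy "bias meter" built on a deliberately simple, interpretable model. The handful of training sentences and the "left"/"right" labels are invented for illustration; the point is only that with a linear model the words driving each verdict can at least be listed, something a deep black-box model does not offer without extra explainable-AI tooling.

```python
# A toy, hypothetical "bias meter": logistic regression over word counts.
# All texts and labels below are invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = [
    "government must expand public healthcare and raise the minimum wage",
    "unions and climate regulation protect workers and the planet",
    "tax cuts and deregulation will unleash small business growth",
    "secure the border and defend gun rights and free markets",
]
train_labels = ["left", "left", "right", "right"]  # hypothetical annotations

vectorizer = CountVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(train_texts), train_labels)

article = "the new bill raises taxes to fund public healthcare"
probs = model.predict_proba(vectorizer.transform([article]))[0]
print({c: round(float(p), 2) for c, p in zip(model.classes_, probs)})  # the meter reading

# Because the model is linear, each word's learned weight is directly visible:
# negative coefficients push the score toward "left", positive toward "right".
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
for word in sorted(weights, key=weights.get)[:5]:
    print(word, round(float(weights[word]), 2))
```

A real tool would be far more sophisticated, but the contrast stands: the simpler and more transparent the model, the easier it is to justify a verdict, and the more powerful the model, the harder that explanation becomes.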
Not only that: far from being omniscient, impersonal, and impartial oracles, machine learning systems produce results that are heavily conditioned by the quality of the input data and by the assumptions built into the algorithm's modeling. Using flawed data, or unfairly giving some aspects of a dataset more weight than others, can lead to the wrong outcomes.
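Continuing the toy sketch above – again with entirely invented data, and no claim about the actual tool – here is how much the verdict can hinge on the labeling choices alone: two annotation teams that disagree only on how to tag a single borderline training story end up with meters that read the same article differently.

```python
# Same hypothetical setup as before; only the human-assigned labels differ.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "budget increases healthcare spending",        # the borderline story
    "minimum wage raised after union campaign",
    "tax cuts promised to small businesses",
    "border security funding doubled",
]
labels_team_a = ["neutral", "left", "right", "right"]  # one team's annotation
labels_team_b = ["left", "left", "right", "right"]     # another team's annotation

article = ["new healthcare spending announced in the budget"]

for name, labels in [("team A", labels_team_a), ("team B", labels_team_b)]:
    vec = CountVectorizer()
    model = LogisticRegression().fit(vec.fit_transform(texts), labels)
    print(name, "meter says:", model.predict(vec.transform(article))[0])
```

Nothing about the algorithm changed between the two runs; the differing verdicts come entirely from the annotators' definitions of what counts as slanted coverage.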
Labeling articles as veering too much to the left or to the right would mean having clear, unambiguous data on what counts as belonging to one camp or the other, and weighting that data fairly and appropriately, which is no easy task. It's entirely possible that Soon-Shiong's team has found a good solution for this, or that the bias meter he briefly outlined in a podcast is more nuanced than it seems.
Still, until more details are released, I would maintain a healthy skepticism about well-intentioned efforts to use AI meters to combat journalists' alleged biases – not least because such tools could easily be employed to project a false impression of impartiality while hiding biases of their own, in ways the audience might not even be aware of.