Opinion | What artificial intelligence means for internet freedom – The Washington Post

The increasing power of artificial intelligence tools is both good news and bad news for internet freedom. That is one glaring implication of the latest Freedom on the Net report from global watchdog Freedom House, which found that, in 2023, for the 13th year in a row, the internet became less free. AI could make that picture better or worse, depending on how governments and everyday people use it.

Much of this year’s deterioration involves more established technologies. Of the 70 countries Freedom House’s investigation covers, 29 saw worse conditions for civil liberties this year than last. Citizens in a record 55 countries faced legal consequences for what they said online. About 4.9 billion people have access to the internet, and 78 percent of them live in nations where individuals were arrested or imprisoned for their posts.
For its part, AI does not so much enable new types of repression as it makes old types easier to carry out.
Certainly, this is true of censorship. Freedom House found that, in at least 22 countries, social media companies were required to use automated systems to moderate content on their services, often to comply with draconian laws. India, for instance, has enshrined in its information technology framework the employment of automated systems to root out broad categories of content: speech that could undermine public order as well as decency or morality. These tools, operating silently behind the scenes, can also “mask the role of the state” in quashing dissent — allowing regimes to avoid the so-called dictator’s dilemma of risking public anger at restrictions.

The story is scarier still for information warfare, a favorite tactic of the insecure autocrat. The propaganda campaign predates the semiconductor, but increasingly clever AI can create increasingly convincing lies. Already, states are seizing on these burgeoning capabilities; Freedom House discovered in at least 16 countries the distorting use of AI tools that can churn out images, text or audio. Venezuela’s state outlets have distributed pro-government synthetic videos supposedly depicting news anchors for an international English-language channel. The Israel-based outfit known as Team Jorge has generated text based on keywords and stood up an army of false accounts to disseminate it — casting doubt, for instance, on troubling allegations against the director of Mexico’s criminal investigations unit.
The buzziest development in machine learning this year might also best illustrate the way these technologies can empower or disempower depending on who’s in charge: ChatGPT and other generative large-language models can, in the best of circumstances, let users in repressive states evade restrictions on the free flow of information. The training data for today’s mainstream systems comes not from the strict speech environments of authoritarian states but instead from the global internet.
But the censorious regimes know this; that’s why Vietnam has cautioned its citizens against ChatGPT on the grounds that it smears the Communist Party. And it’s why Chinese regulators have instructed the country’s biggest tech conglomerates to ensure the model isn’t part of their services, or available via any app store. China is also manipulating its homegrown models, mandating “truth, accuracy, objectivity, and diversity” (as defined by the Chinese Communist Party) in training data for chatbots as well as clamping down on the output they produce. Ask Baidu’s Ernie Bot to engage on the subject of Tiananmen Square, and it will refuse.
The same principle applies to other forms of AI. Think back to those automated systems for content moderation: These tools could be deployed in ways that enhance expression rather than curtail it — to thwart disinformation campaigns, say, or to elevate the facts about human rights abuses. To ensure these better outcomes, countries such as the United States should develop domestic frameworks for AI tool creators centered on civil liberties, complete with transparency obligations about what material chatbots are trained on, as well as safeguards against discrimination and excessive surveillance. The White House is expected to advance its blueprint for an AI bill of rights; legislation from Congress would be even better.
The United States and like-minded allies should champion their approach around the world — incorporating AI and the internet into their democracy assistance efforts as well as bringing their joint influence to bear in international forums. China and Russia have had substantial success in guiding the United Nations’ convention on cybercrime toward overreach; the United States and its friends shouldn’t let that happen on AI.
Democratic leaders in the internet’s early days generally assumed the web would enhance freedom around the world, and they didn’t do enough to fight for it. The good news amid the bad about AI is that now, with a new technology ascendant, there’s an opportunity to get things right this time.
The Post’s View | About the Editorial Board
Editorials represent the views of The Post as an institution, as determined through discussion among members of the Editorial Board, based in the Opinions section and separate from the newsroom.
Members of the Editorial Board: Opinion Editor David Shipley, Deputy Opinion Editor Charles Lane and Deputy Opinion Editor Stephen Stromberg, as well as writers Mary Duenwald, Shadi Hamid, David E. Hoffman, James Hohmann, Heather Long, Mili Mitra, Keith B. Richburg and Molly Roberts.

