Algorithmic threats

Need to be watched for and mitigated

Another passage from an author, this time about algorithmic threats to humanity, seemed worth sharing to offer a little more perspective, from different voices, on the contemporary issues facing us.

Taking responsibility (collectively, transparently and accountably) and not running away with progress just for the sake of, well, invariably monetary profit, possibly even power and status . . . your device can now receive the latest algorithm and will make your life that much more controllable, but not by you, despite appearances . . . do you wish to upgrade?

In the face of the threat algorithms pose to the democratic conversation, democracies are not helpless. They can and should take measures to regulate AI and prevent it from polluting our infosphere with fake people spewing fake news. The philosopher Daniel Dennett has suggested that we can take inspiration from traditional regulations in the money market.52 Ever since coins and later banknotes were invented, it was always technically possible to counterfeit them. Counterfeiting posed an existential danger to the financial system, because it eroded people’s trust in money. If bad actors flooded the market with counterfeit money, the financial system would have collapsed. Yet the financial system managed to protect itself for thousands of years by enacting laws against counterfeiting money. As a result, only a relatively small percentage of money in circulation was forged, and people’s trust in it was maintained.53

What’s true of counterfeiting money should also be true of counterfeiting humans. If governments take decisive action to protect trust in money, it makes sense to take equally decisive measures to protect trust in humans. Prior to the rise of AI, one human could pretend to be another, and society punished such frauds. But society didn’t bother to outlaw the creation of counterfeit humans, since the technology to do so didn’t exist. Now that AI can pass itself off as human, it threatens to destroy trust between humans and to unravel the fabric of society. Dennett suggests, therefore, that governments should outlaw fake humans as decisively as they have previously outlawed fake money.54

The law should prohibit not just deepfaking specific real people – creating a fake video of the US president, for example – but also any attempt by a nonhuman agent to pass itself off as a human. If anyone complains that such strict measures violate freedom of speech, they should be reminded that bots don’t have freedom of speech. Banning human beings from a public platform is a sensitive step, and democracies should be very careful about such censorship. However, banning bots is a simple issue: it doesn’t violate anyone’s rights, because bots don’t have rights.55

None of this means that democracies must ban all bots, algorithms and AIs from participating in any discussion. Digital agents are welcome to join many conversations, provided they don’t pretend to be humans. For example, AI doctors can be extremely helpful. They can monitor our health twenty-four hours a day, offer medical advice tailored to our individual medical conditions and personality, and answer our questions with infinite patience. But the AI doctor should never try to pass itself off as a human.

Another important measure democracies can adopt is to ban unsupervised algorithms from curating key public debates. We can certainly continue to use algorithms to run social media platforms; obviously, no human can do that. But the principles the algorithms use to decide which voices to silence and which to amplify must be vetted by a human institution. While we should be careful about censoring genuine human views, we can forbid algorithms to deliberately spread outrage. At the very least, corporations should be transparent about the curation principles their algorithms follow. If they use outrage to capture our attention, let them be clear about their business model and about any political connections they might have. If the algorithm systematically disappears videos that aren’t aligned with the company’s political agenda, users should know this.

These are just a few of numerous suggestions made in recent years for how democracies could regulate the entry of bots and algorithms into the public conversation. What is clear is that democracies can regulate the information market and that their very survival depends on these regulations. The naive view of information opposes regulation and believes that a completely free information market will spontaneously generate truth and order. This is completely divorced from the actual history of democracy. Preserving the democratic conversation has never been easy, and all venues where this conversation has previously taken place – from parliaments and town halls to newspapers and radio stations – have required regulation. This is doubly true in an era when an alien form of intelligence threatens to dominate the conversation.

Yuval Noah Harari, Nexus