Imagine that in your city there is a square where you can speak in public, just like the ancient Greek agora. Here you can freely share your ideas without censorship.
You may draw only a few listeners when you speak, while someone with similar ideas commands a large audience. But there is one important difference: for their own economic benefit, someone decides who gets to listen to which speeches and which speakers. And this is not disclosed when you enter the square.
Would this be free speech?
This is an important question, because modern agoras are social media platforms – and this is how they organize speech. Social media platforms don’t simply present users with the posts of those they follow, in the order they were posted.
Instead, algorithms determine what content is displayed and in what order. In our research, we have called this ‘algorithmic audiencing’. And we believe it deserves a closer look in the debate over how free speech plays out online.
Our understanding of freedom of speech is too limited
The free speech debate has been rekindled by news of Elon Musk’s plans to take over Twitter, his pledge to reduce content moderation (including by restoring Donald Trump’s account), and, more recently, speculation that he could get out of the deal if Twitter can’t prove the platform isn’t overrun with bots.
Musk’s approach to free speech is typical of how this issue is often framed: as a matter of content moderation, censorship, and deciding what speech is allowed to enter and remain on the platform.
But our research shows that this focus overlooks how platforms systematically interfere with free speech on the audience’s side, rather than the speaker’s side.
Outside the social media debate, freedom of expression is understood as the “free trade in ideas”. Free speech is about discourse, not just the right to speak. Algorithmic interference in who gets to hear which speech directly undermines this free and fair exchange of ideas.
If social media platforms are “the digital equivalent of a town square” committed to defending free speech, as both Facebook’s Mark Zuckerberg and Musk argue, then algorithmic audiencing must be taken into account for speech to be free.
How it works
Algorithmic audiencing works through algorithms that amplify or restrict the reach of every message on a platform. This happens by design, based on the monetization logic of the platform.
Newsfeed algorithms amplify content that keeps users most “engaged” because engagement leads to more user attention to targeted ads and more opportunities for data collection.
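The difference between a chronological feed and an engagement-ranked one can be illustrated with a toy sketch. The posts, authors, and engagement counts below are invented for illustration; real newsfeed ranking uses far more signals, all of them opaque to outsiders.

```python
from datetime import datetime, timezone

# Hypothetical posts: (author, text, engagement count, time posted).
posts = [
    ("A", "niche essay",   12,   datetime(2022, 6, 1, 9, 0, tzinfo=timezone.utc)),
    ("B", "outrage bait",  4800, datetime(2022, 6, 1, 7, 0, tzinfo=timezone.utc)),
    ("C", "family photo",  310,  datetime(2022, 6, 1, 8, 0, tzinfo=timezone.utc)),
]

# Chronological feed: newest first; every follower sees the same order.
chronological = sorted(posts, key=lambda p: p[3], reverse=True)

# Engagement-ranked feed: content that keeps users "engaged" is amplified,
# regardless of when it was posted.
engagement_ranked = sorted(posts, key=lambda p: p[2], reverse=True)

print([p[0] for p in chronological])      # ['A', 'C', 'B']
print([p[0] for p in engagement_ranked])  # ['B', 'C', 'A']
```

In the engagement-ranked feed, the oldest post jumps to the top simply because it provokes the most reactions – which is the mechanism by which some speakers reach large audiences while others, with similar ideas, are barely seen.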
This explains why some users gain a large audience while others with similar ideas are barely noticed. Those whose speech plays to the algorithm achieve the greatest diffusion of their ideas. This amounts to social engineering at scale.
At the same time, the workings of Facebook and Twitter’s algorithms remain largely opaque.
How it disrupts freedom of expression
Algorithmic audiencing has a material effect on public discourse. While content moderation applies only to malicious content (which makes up just a small fraction of all speech on these platforms), algorithmic audiencing applies systematically to all content.
This kind of interference with free speech has been overlooked because it is unprecedented. This was not possible in traditional media.
And it’s relatively recent for social media, too. Previously, messages were sent to one’s follower network rather than subjected to algorithmic distribution. Facebook, for example, only started populating news feeds using algorithms that optimize for engagement in 2012, after it was publicly listed and under increasing pressure to make money.
It is only in the last five years that algorithmic audiencing has become a widespread problem. At the same time, the magnitude of the problem is not fully known, as it is nearly impossible for researchers to access platform data.
But we know it is important to address, as it can encourage the spread of harmful content such as misinformation and disinformation.
We know that such content is commented on and shared more, which attracts further amplification. Facebook’s own research has found that its algorithms can encourage users to join extremist groups.
What can be done?
Individually, Twitter users can heed Elon Musk’s recent advice to switch their news feeds to chronological order, which curbs the extent of algorithmic audiencing.
You can also do this on Facebook, but not as a default setting – so you’ll have to choose this option every time you use the platform. The same goes for Instagram (also owned by Facebook’s parent company, Meta).
But switching to chronological order will only go so far in reducing algorithmic audiencing – you will still be served other content (beyond what you directly subscribed to) that targets you based on the platform’s monetization logic.
And we also know that only a fraction of users change their default settings. Ultimately, regulation is needed.
Although social media platforms are private companies, they enjoy far-reaching privileges to moderate content on their platforms under Section 230 of the U.S. Communications Decency Act.
In return, the public expects platforms to facilitate a free and fair exchange of ideas, since these platforms provide the space for public discourse. Algorithmic audiencing is an abuse of this privilege.
As U.S. lawmakers ponder social media regulation, tackling algorithmic audiencing should be on the table. Yet it has barely featured in a debate that focuses almost entirely on content moderation.
Any serious regulation will have to challenge the platforms’ entire business model, as algorithmic audiencing is a direct result of the logic of surveillance capitalism – in which platforms capture and commodify our content and data to predict (and influence) our behavior – all to make a profit.
Until we regulate this use of algorithms and the monetization logic that underlies it, social media speech will never be free in the true sense of the word.
Kai Riemer, Professor of Information Technology and Organization, University of Sydney, and Sandra Peter, Director, Sydney Business Insights, University of Sydney
This article is republished from The Conversation under a Creative Commons license. Read the original article.