On April 25, 2022, Elon Musk finally purchased Twitter in a whopping USD 44 billion deal, amid much fanfare and speculation about the future of the company. Much has been said about what Elon's "free speech absolutist" version of Twitter could look like and what it means for democracy and society. In this article, I'd like to discuss another change Elon has proposed: making the Twitter algorithm "open source".
Twitter uses an algorithm to curate your timeline with tweets (and advertisements) based on topics that you’ve shown interest in or follow. Elon has said that this algorithm should be publicly available on GitHub, a code-hosting platform popular with programmers, for anyone to read and copy. A day after Elon’s takeover was announced, Twitter added a GitHub repository called “the algorithm” and then inexplicably deleted it.
Making the Twitter algorithm public has long been the demand of conservatives in the United States, who claim that “Big Tech” silences them and limits their reach with biased algorithms, although Twitter’s own research shows that its algorithm actually amplifies conservative views.
What experts have to say
Experts have already pointed out that making the algorithm public will not, by itself, improve our understanding of how Twitter works. Few people outside AI/ML circles can make sense of complex algorithms in the first place, and it is very hard to understand how a content recommendation algorithm behaves by merely reading its code, without access to the data used to train it.
While sharing the source code does increase transparency, cracking the algorithm open presents a far more pressing concern: it can make the platform vulnerable to being gamed. Most complex algorithms operate as opaque "black boxes", and even the programmers who develop them often cannot explain why the algorithm makes particular decisions. Open sourcing the algorithm means it can be viewed and scrutinized by anyone: the decision-making process, objectives, and rules embedded in the model become public knowledge. By studying the algorithm carefully, an expert could reverse engineer the timeline logic and generate content guaranteed to rank higher and attract the greatest number of views. In other words, bad faith actors could rig what goes viral on Twitter.
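To make the gaming concern concrete, here is a minimal sketch of what "optimizing against a published ranking formula" could look like. Everything here is invented for illustration: the features, the weights, and the linear scoring model are hypothetical and bear no relation to Twitter's actual system.

```python
# Hypothetical illustration: if a platform's ranking formula were public,
# a bad-faith actor could tune content to maximize its score.
# The features and weights below are made up for this sketch.

def rank_score(post):
    """Toy linear ranking model with invented feature weights."""
    weights = {
        "trending_keywords": 3.0,   # reward posts stuffed with hot topics
        "media_attached": 1.5,      # reward images/video
        "post_length": -0.01,       # slightly penalize long posts
    }
    return sum(weights[f] * post.get(f, 0) for f in weights)

def optimize_post(candidate_drafts):
    """Simply pick whichever draft the known model scores highest."""
    return max(candidate_drafts, key=rank_score)

drafts = [
    {"trending_keywords": 0, "media_attached": 1, "post_length": 280},
    {"trending_keywords": 3, "media_attached": 1, "post_length": 120},
]
best = optimize_post(drafts)  # the keyword-stuffed draft wins
```

Once the scoring function is public knowledge, this kind of search over candidate posts requires no sophistication at all; real attackers could automate it at scale.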
Such manipulation on a platform the sheer size and scale of Twitter would give bad faith actors unprecedented reach, allowing them to disseminate fake news and propaganda more effectively. In today's attention economy, eyeballs mean everything, and well-targeted disinformation campaigns can easily influence public opinion. The "disinformation for hire" shadow industry is already booming, people's use of social media platforms correlates with how politically polarized they are, and politicians have already misused social media platforms to influence elections. Open sourcing the algorithm could make their job easier.
What the authorities have said
The newly enacted Digital Services Act in the European Union (EU) mandates content moderation for "dangerous misinformation", although such a law would likely be at odds with free speech rights in the United States. EU lawmakers have already warned Elon that Twitter will have to obey the bloc's rules or face severe penalties. Effective content moderation could stop dangerous misinformation from spreading on Twitter; however, Elon has made his stance on moderation clear.
Elon's two big visions for Twitter, "free speech absolutism" and "open source algorithms", could work in tandem to create Cambridge Analytica-like havoc on democracy. Since these two visions seem fundamentally at odds with each other, it will be interesting to see how Twitter handles the trade-offs. Twitter may react to such gaming by obfuscating its algorithm, i.e., concealing portions of it. Alternatively, it could create ensemble models, a family of algorithms that collectively vote on what your timeline looks like. In both cases, the algorithm becomes more complex and more opaque, defeating the "open source" vision of Twitter. Or Twitter could police its platform rigorously to censor such disinformation and manipulation, defeating its "free speech" promises. Either way, the decisions made by Twitter over the next few months will determine how we think about social media platforms and their accountability to the public.
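To illustrate why an ensemble is harder to game than a single published formula, here is a minimal sketch of several scoring models voting on a timeline. The individual models, features, and the median-rank voting rule are all hypothetical choices made for this example, not a description of anything Twitter has built.

```python
# Hypothetical sketch of ensemble ranking: several simple models each
# order the posts, and the final timeline is decided by their collective
# vote (median rank). No single formula fully determines the outcome,
# so reverse engineering any one model is not enough to game the feed.
import statistics

def model_recency(post):
    return -post["age_hours"]           # newer posts score higher

def model_engagement(post):
    return post["likes"] + 2 * post["replies"]

def model_relevance(post):
    return post["topic_match"] * 10     # how well the post fits your interests

MODELS = [model_recency, model_engagement, model_relevance]

def ensemble_rank(posts):
    """Order posts by the median of the rank each model assigns them."""
    ranks = {id(p): [] for p in posts}
    for model in MODELS:
        ordered = sorted(posts, key=model, reverse=True)
        for position, p in enumerate(ordered):
            ranks[id(p)].append(position)
    return sorted(posts, key=lambda p: statistics.median(ranks[id(p)]))

fresh_but_empty = {"age_hours": 1, "likes": 0, "replies": 0, "topic_match": 0}
older_but_popular = {"age_hours": 10, "likes": 100, "replies": 50, "topic_match": 1}
timeline = ensemble_rank([fresh_but_empty, older_but_popular])
```

The trade-off the article describes shows up directly in this sketch: adding voters makes the system harder to exploit but also harder to explain, which is exactly the opacity that an "open source" pledge was supposed to remove.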
Disclaimer: All views expressed here are the author’s own and do not reflect the beliefs of their organizations.
Vasundhara is an LLM student at the University of California, Berkeley. She was an IP attorney in India, with a special focus on trademarks, copyrights, and platform regulation law and policy. She is currently part of a team at AITruth writing a paper on corporate AI ethics best practices, soon to be published in Springer Nature's AI and Ethics journal. You can find her on Twitter or follow her posts on LinkedIn.
Indrajit is a Machine Learning Engineer who believes in best practices such as data governance, explainability, and fairness while building AI solutions. Currently, he works at Embark Trucks, an autonomous trucking company based in San Francisco. He is also pursuing a master’s in machine learning and robotics at Georgia Tech. You can find him on Twitter or follow his posts on LinkedIn.