Recent months have been a joy for sports fans around the world, as the cream of the athletic crop descended on Paris for this year’s Olympic Games.
But one athlete in particular found her achievements at the games tinged by a wave of unfounded and deeply unpleasant public attacks. Instead of taking some time out to relax and celebrate her gold medal win when the games ended, Algerian boxer Imane Khelif found herself launching a lawsuit against the prominent social media platform now known as X. In the lawsuit, Khelif claims X aided and abetted the vicious cyberbullying she experienced by facilitating the spread of false and harmful content about her.
This case raises serious questions about the extent to which social media platforms should be responsible for the content shared on their platforms. On one hand, free speech advocates argue that platforms should not be held liable for the actions of individual users. On the other hand, some believe that platforms have a responsibility to protect users from what appears to be an ever-expanding wave of harmful content.
Today, more than five billion people worldwide use social media—around 64% of the global population. Thanks to its widespread use and growing accessibility, social media has become increasingly influential in shaping public discourse. Platforms can amplify voices, spread misinformation, and mobilise people around various causes; something we’ve seen the worst possible example of in the UK in recent weeks.
This enormous influence has long provoked concerns about the potential for platforms to be misused to spread hate speech, disinformation, and other destructive or damaging content. But the balance between free speech and content moderation is a delicate one, and requires careful consideration of both legal frameworks and ethical guidelines.
In the UK, the legal framework surrounding free speech and content moderation is complex. The Human Rights Act protects the right to freedom of expression, butting up against laws such as the Defamation Act and the Communications Act that place limits on certain types of speech. As the influence of social media platforms continues to grow, we must continue to discuss the balance between free speech and content moderation—and where we draw the legal lines.
In this blog, we’ll examine the relationship between social media, free speech, and the law—and how both individuals and businesses can navigate this increasingly complex legal landscape.
Jump to section:
- The complex history of social media and free speech
- The dual nature of social media platforms
- Content moderation
- The algorithm on trial
- The UK legal framework in relation to social media
- How law firms can support clients
- How will UK law evolve?
The complex history of social media and free speech
Humanity has been arguing about freedom of expression for thousands of years, with free speech forming a cornerstone of the world’s first democracy in Athens in the 6th century BC. In the 1700s, early modern democracies began to catch up, with governments in France and the newly formed United States enshrining the right to free speech for their people.
Here in the UK, the evolution of free speech principles has long been intertwined with international human rights law development, with two major milestones in particular shaping the way we view freedom of expression.
Adopted in 1950, the European Convention on Human Rights enshrines the right to freedom of expression, stating that everyone has the right to “hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.”
Almost half a century later, the UK enacted The Human Rights Act 1998, requiring UK courts to interpret and apply domestic law in a way that was compatible with the ECHR, including the right to freedom of expression.
Alongside these conventions, there have also been several key legal cases that have moulded the development of free speech in the UK. In 1979, for example, The Sunday Times v United Kingdom established that the right to freedom of expression extends to the media, and that any restrictions on the press must be justified by a pressing social need.
R v Secretary of State for the Home Department (1991) considered the limits on the right to freedom of expression in the context of national security and held that restrictions on speech must be proportionate to the aim pursued. And that’s not to mention the countless libel and defamation cases which have helped to define the boundaries of free speech when it comes to the protection of reputation.
Free speech in the digital era
But despite the creation of landmark declarations like the 1948 Universal Declaration of Human Rights, debates and even violent conflicts over the freedom to dissent still rage. And the conversation around freedom of speech has only become more complicated since the emergence of a phenomenon that forever changed how the world communicates: social media.
The very concept of free speech has undergone some major shifts in the digital age, and while the core principle remains largely unchanged, how we practise and regulate free speech has evolved significantly.
Throughout history, free speech was primarily understood as a person’s right to express themselves without fear of government censorship or punishment. But the digital revolution has introduced new dimensions to this long-standing debate.
The internet, and social media in particular, has democratised communication, giving individuals a platform through which they can express their views on a global scale with relative ease.
When social media arrived, it gave people and their thoughts instant, widespread reach, the likes of which they’d never had before. With the click of a button on a portable device, ordinary people could put out a message to, in theory, the entire world instantaneously. Most people (thankfully) used this newfound power for innocuous purposes, sharing pictures of their lunch or their pets.
The rapid adoption of this digital stage led to a proliferation of diverse voices and perspectives—a shift that has proved both empowering and challenging.
Social media platforms have undoubtedly expanded the reach of free speech, connecting users with a global audience and putting new eyes on their ideas. These platforms also afford users anonymity, a feature which encourages individuals to express themselves more freely and without fear of consequences.
The flipside of this cloak of anonymity, however, is that some users are emboldened to share socially or morally questionable views, and to engage in online harassment and hate speech that pushes the boundaries of what we deem to be acceptable free speech.
Add in algorithms designed to promote content most likely to provoke engagement, and it’s easy to see how social media platforms have become a hotbed of complexity when it comes to freedom of expression.
A new kind of public square
Social media platforms like Facebook, X (formerly Twitter), Instagram and TikTok have become the modern-day public square, providing spaces for individuals to gather, exchange ideas, debate issues, and engage in discourse.
This shift from traditional physical public spaces to digital ones has massive implications for how we communicate, interact, and form opinions.
For starters, these platforms are highly accessible, allowing diverse individuals from all kinds of backgrounds and locations to engage in public conversation. This accessibility plays a huge role in diversifying the views we’re exposed to, enabling marginalised groups to have more of a voice.
This infinite public square also helps connect people, facilitating the formation of online communities based on shared interests, identities, or beliefs. Information can be disseminated quickly and without the influence of traditional gatekeepers like news organisations—information that can greatly influence public opinion and shape social sentiment.
But the openness, enlightenment, and connection that social media offers can be a double-edged sword, with the advantages of this new breed of public square also presenting many challenges.
The rapid, unchecked flow of information makes it difficult to decipher truth from fabrication, contributing to the dissemination of potentially dangerous misinformation and disinformation.
Online communities can quickly become echo chambers, where engagement-hungry, morally ambiguous algorithms spoon-feed users content that reinforces their existing views and opinions, commodifying and, if necessary, manipulating public discourse for profit.
And when users are continuously shown harmful content that reinforces odious viewpoints, social media platforms can be breeding grounds for online harassment and hate speech.
Recently, the UK watched this unpleasant mixture of false information and animosity play out in real time, when falsehoods about the perpetrator of a heinous attack on three children circulated online before spilling over into the streets. Despite being confirmed as false, this information spread rapidly online, with ‘protests’ quickly organised by far-right social media figures. What followed was days of country-wide violence, destruction, and terrorism—a wave of disorder that would have been far less likely, if possible at all, without social media.
Social media platforms have become essential components of the digital public sphere, and while they present valuable opportunities for greater participation and variety of voices, they also throw up problems that can have serious, tangible consequences.
Today, governments and legal professionals are wrestling with some big questions about free speech on the internet. How do we balance the right to free speech with the need to protect individuals from online harassment and hate speech? How can we develop international norms and standards for free speech in the digital age in the face of diverse legal and cultural contexts?
And what role should social media platforms play in moderating content and preventing the spread of misinformation and disinformation?
Facilitators and gatekeepers: The dual nature of social media platforms
Having witnessed the impact social media has had on society over the past decade or so, most people would probably agree that social media companies must bear some responsibility for the content shared on their platforms.
But outlining and enforcing this responsibility is complicated by the duality of social media’s role in today’s free speech landscape.
Social media platforms serve both as facilitators of free expression and as gatekeepers of content. They’re designed to be highly accessible digital soapboxes that rapidly disseminate and amplify information from all kinds of users, from regular citizens sharing cat pics to artists, activists, and independent journalists.
And yet, we also expect them to moderate what’s circulated through them, create restrictions on certain types of content, and fact-check information shared by users.
The twofold role of social media platforms as promoters of free speech and content concierges raises tough questions about the balance between free expression and platform responsibility. We know these platforms can be powerful tools for promoting democracy and civic engagement. We also know that they have the potential to be misused and abused.
So what are social media platforms doing to strike this fragile balance?
Content moderation and censorship in the digital age
Today, all major social media companies have policies in place to help moderate the content shared on their platforms. The big argument now is: are these policies fit for purpose? Are they legally sound? And do they sufficiently balance the need for free speech with the need to mitigate risk?
How social media is moderating its content
Since the very earliest days of social media, there’s been an awareness that not all content generated by users is suitable for public consumption. Even MySpace, the social media pioneer of the early 2000s, employed professional moderation staff. Since then, social media’s footprint in our lives has only grown, and so has the need for moderation.
Leading social platforms typically issue community guidelines to all users, setting out the rules and standards they’re expected to follow on the platform. When these guidelines aren’t adhered to, content moderation comes into play. Top social media platforms have a variety of content moderation policies in place to try to give users a safe and positive experience. These policies tend to cover a range of hot-button user safety issues, including:
- Hate speech and discrimination policies that prohibit the promotion or incitement of hate speech, discrimination, or violence based on factors like race, religion, ethnicity, nationality, sexual orientation, gender identity, disability, or age. Examples of these can be seen in documents like Facebook’s Community Standards, X’s Rules, and Instagram’s Community Guidelines.
- Harassment and bullying policies that aim to prevent harassment, bullying, and stalking, often using tools like reporting features and automated detection systems.
- Misinformation and disinformation policies that see some platforms partnering with fact-checking organisations to combat the spread of false or misleading information. On X, this task is largely delegated to users, who can label content as false or misleading and provide additional information or context.
- Child safety policies to protect children from harmful content, including child exploitation and online grooming.
- Violent and graphic content policies that mandate the removal of content that promotes or glorifies violence, including threats and incitement.
- Privacy protection policies to protect user privacy and prevent the sharing of personal information without consent.
Outlining these policies is barely half the battle; enforcing them is the real challenge. With such a huge task at hand, human content moderation is often outsourced to third-party companies that employ countless people to pore over social media content looking for anything that breaches these policies.
But with more than 300 million photos uploaded each day, and around 510,000 comments posted every single minute, social media platforms are turning to their own algorithms to help them identify and remove harmful content.
These algorithms can be trained to recognise patterns, phrases, and images associated with hate speech, harassment, and other types of violations. By using technology like natural language processing, image and video analysis, and user reporting, algorithms have become an invaluable tool in the battle against harmful content, but they’re far from infallible.
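To make this a little more concrete, here is a deliberately simplified sketch of what pattern-based flagging might look like in principle. Everything in it (the patterns, the function names, the review step) is hypothetical and illustrative only; real platforms combine trained machine-learning classifiers, image and video analysis, and human review rather than a hard-coded keyword list.

```python
import re

# Hypothetical, illustrative patterns only; real systems rely on trained
# classifiers rather than a fixed keyword list.
FLAGGED_PATTERNS = {
    "harassment": re.compile(r"\bkill\s+yourself\b", re.IGNORECASE),
    "spam": re.compile(r"\b(buy|sell)\s+followers\b", re.IGNORECASE),
}


def flag_post(text: str) -> list[str]:
    """Return the policy categories a post appears to breach (empty if none)."""
    return [category for category, pattern in FLAGGED_PATTERNS.items()
            if pattern.search(text)]


if __name__ == "__main__":
    post = "Anyone know where to buy followers cheap?"
    breaches = flag_post(post)
    if breaches:
        # A real platform would route the post to human review or remove it.
        print(f"Post held for review; possible breaches: {breaches}")
    else:
        print("Post published.")
```

Even a toy filter like this hints at why over-moderation happens: a crude pattern cannot tell the difference between abuse and, say, a news report quoting that abuse.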
Algorithms can overmoderate and mistakenly flag harmless content, frustrating guideline-abiding users, stifling free expression, and hampering the diversity of voices on their platform. Plus, no algorithm, no matter how clever, can be expected to successfully navigate the multitude of cultural norms and legal frameworks it faces when moderating content on a global scale.
Content moderation policies and the algorithms that enforce them play a crucial role in creating a safer online environment, but they’re by no means a silver bullet in the battle to balance user safety and the preservation of free speech.
Striking the balance
Social media platforms clearly need to balance the right to free expression with the need to prevent harmful content. It’s a complex undertaking, made all the more difficult by the multitude of stakeholders with varying perspectives on what constitutes “harmful” content and how it should be addressed.
Over-moderate, and platforms can stifle free expression, leading to concerns about censorship and the suppression of legitimate viewpoints. Under-moderate, and harmful content may be allowed to proliferate, resulting in the spread of hate speech, misinformation, and harassment.
Then there are the practical limitations of content moderation to consider. Technical solutions like automated content moderation systems are prone to errors, while human moderators may introduce their own biases into the process, leading to inconsistent decisions and potential discrimination.
Content moderation controversies
The challenges that social media platforms face in balancing safety and freedom are innumerable, and their efforts to overcome these challenges are often controversial.
On many occasions, attempts by social media companies to introduce consequences for breaching their content moderation policies have been met with controversy, media attention, and even legal push-back.
One of the most popular ways we’ve seen social media companies attempt to oust harmful, offensive, or misleading content from their platforms is by removing users’ posts or accounts.
Known as deplatforming, this simple measure is frequently met with concerns about censorship, and around the platform’s potential to influence discourse; especially when the user or content is related to politics. In some jurisdictions, removing content posted by political figures may be considered a violation of their right to free speech.
The most prominent example of deplatforming came in 2021, when then-U.S. President Donald Trump was banned from Twitter and suspended from Facebook after the January 6th insurrection. (Both accounts have since been reinstated: his X account following Elon Musk’s controversial takeover of the platform, and his Facebook account in 2023.)
Conspiracy theorist Alex Jones, far-right extremist group The Proud Boys, and former Ku Klux Klan leader David Duke have also been banned from a multitude of major platforms including YouTube, Facebook, and Twitter for spreading misinformation and inciting violence.
But while deplatforming can be effective in preventing the spread of harmful content, protecting public health, and holding individuals and organisations accountable for their actions, the practice has attracted plenty of criticism.
Detractors of the tactic say that deplatforming can be used as a form of censorship, limiting access to information and shutting down diverse viewpoints. Others have pointed out that deplatforming can set a dangerous precedent, leading to the removal of legitimate content or the targeting of marginalised groups. There’s also some debate about its efficacy, with many critics arguing that it may simply drive harmful content to less moderated platforms; something that became apparent during Alex Jones’ recent defamation trials.
Other methods of holding social media users accountable include demonetisation, which prevents users from earning revenue, usually by preventing adverts from appearing on their content. Demonetisation can have significant financial consequences for content creators, making it one of the most frequently challenged methods of punishing rule breakers. Demonetisation might even occur in response to actions taken outside of the platform—in 2023, YouTube demonetised Russell Brand’s account after he was accused of multiple sexual assaults.
Geo-blocking is often used to restrict access to certain content in specific geographic regions, usually to comply with local laws or cultural sensitivities. Again, this practice can raise concerns about censorship and discrimination.
Fact-checking and labelling is a tactic that’s come into practice over the past few years, largely in response to the major waves of false information that circulated around COVID-19 and the 2020 U.S. election. Research suggests that adding these ‘warning labels’ to content containing misinformation is broadly effective in reducing belief in online falsehoods. Legally, however, such labelling can be controversial, especially if the platform is seen as biased or overstepping its role. In some jurisdictions, such actions may be considered defamation or interference with the right to free speech.
The algorithm on trial
But trying to shut down harmful content isn’t the only way that social media companies can find themselves in legal trouble. In fact, they’re increasingly being called out for helping to promote it.
Just like social media users, algorithms are subject to laws on defamation, hate speech, and privacy, and their growing influence over what appears in our feeds is triggering some serious legal discussion.
In the early days, most social media platforms ranked content chronologically and used basic algorithms to help users connect with people they already knew. But in the 2010s, platforms like Instagram and Twitter began rolling out more advanced, engagement-driving algorithms designed to enhance the experience of their users, serve up content they were interested in, and ultimately, keep them scrolling for longer.
These algorithms use various factors to determine what content to show users, including user behaviour (like what they’ve watched or engaged with previously), relationships with other users, and the relevance, timeliness, and popularity of content elsewhere on the platform.
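As an illustration only, the sketch below shows how signals like these might be combined into a single ranking score. The signal names and weights are hypothetical assumptions invented for the example, not how any particular platform actually ranks content.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class Post:
    author_affinity: float  # 0-1: how often the viewer interacts with this author
    topic_interest: float   # 0-1: how much the viewer engages with this topic
    likes: int              # popularity elsewhere on the platform
    posted_at: datetime


def rank_score(post: Post, now: datetime) -> float:
    """Combine hypothetical engagement signals into one feed-ranking score."""
    hours_old = (now - post.posted_at).total_seconds() / 3600
    recency = 1 / (1 + hours_old)          # newer posts score higher
    popularity = post.likes ** 0.5 / 100   # diminishing returns on raw likes
    return (2.0 * post.author_affinity
            + 1.5 * post.topic_interest
            + 1.0 * popularity
            + 3.0 * recency)


if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    feed = [
        Post(0.9, 0.2, 120, now - timedelta(hours=1)),      # close friend, recent post
        Post(0.1, 0.8, 50_000, now - timedelta(hours=12)),  # viral post on a favourite topic
    ]
    feed.sort(key=lambda p: rank_score(p, now), reverse=True)
    for post in feed:
        print(round(rank_score(post, now), 2), post.likes, "likes")
```

Because a score like this rewards whatever keeps people engaging, content that provokes a strong reaction can outrank content from people we actually trust, which is exactly the amplification problem described next.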
However, this drive to prioritise content that’s most relevant and engaging to each individual can play on personal biases and lead to the amplification of extremist or harmful content.
In the most severe cases, this destructive content can further radicalise those with already hateful views.
In 2019, a white supremacist attacked a mosque in Christchurch, New Zealand while live-streaming the shooting on Facebook. The resulting investigation found that social media likely played a significant role in the gunman’s radicalisation, connecting him with like-minded individuals. Following the massacre, in which 51 people were murdered, the live-stream video circulated on various social media sites, drawing support from other far-right users and inspiring a number of copycat attacks. The original video was viewed 4,000 times before Facebook removed it, and social media companies struggled to contain the spread of duplicated versions.
Later that year, New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron jointly called for technology companies to put more effort into combating violent extremism.
And it’s not just violence that social media algorithms are beginning to be held responsible for. In early 2024, a group of New York City-based organisations launched a lawsuit against social media giants including TikTok, Instagram, Facebook, Snapchat, and YouTube.
Leaders from the city government, its education system, and its health services are seeking to hold the companies accountable for their role in creating what they claim is a nationwide youth mental health emergency.
The lawsuit alleges that these companies are deliberately designing their apps to get children and young people addicted to social media, including using algorithms to keep users on the platforms longer and encourage compulsive use. The litigation aims to force social media companies to change their behaviour, as well as recover the costs of addressing the mental health crisis.
Social media’s legal responsibility under UK law
Here in the UK, we’re also working on holding social media platforms accountable when the pendulum of content moderation swings too far out in either direction.
One of our most valuable tools in pursuing legal action against wayward social media companies is the Communications Act 2003.
The Act provides a useful framework for regulating electronic communications services in the UK. It implements EU rules on electronic communications, places obligations on service providers to take measures to prevent the misuse of their services, and requires them to have procedures in place to respond to notices of illegal content and take appropriate action.
The Communications Act 2003 brings up an important concept in the UK’s free speech landscape: intermediary liability. This term refers to the legal responsibility of online intermediaries (like social media apps and ISPs) for the content that is transmitted or stored on their networks. Intermediary liability aims to balance the need to protect users from harmful content with the desire to safeguard free speech, and promote the growth and development of the internet.
As well as ensuring accountability, the Communications Act also offers some benefits to social media platforms. If platforms comply with the requirements of the Communications Act, for example, they will have limited liability for user content. This limited liability is not absolute, however, and can be lost if platforms fail to take reasonable steps to prevent the misuse of their services.
The Defamation Act 2013 lays out some key provisions relevant to social media platforms too, stating that platforms can be held liable for defamatory content posted by users if they’ve published or authorised the publication of that content.
Social media platforms also have a variety of specific legal responsibilities according to other UK laws, including obligations around content moderation, notice and takedown procedures, copyright infringement, and privacy protection.
The UK legal framework: Protecting free speech while regulating platforms
Even with the most effective combination of clear guidelines, content moderation, and adherence to relevant laws, social media platforms are unlikely to ever construct a perfect solution.
Instead, striking a balance that facilitates safe and inclusive online communities while protecting the right to free expression will likely remain a work in progress, especially as social and legal goalposts keep on moving.
And while social media companies continue to work on achieving this crucial equilibrium, our legal system must work to protect the free speech of UK citizens.
So how are we doing that in today’s complex digital environment?
What free speech looks like under UK law
Free speech is a fundamental right in the UK, protected by both domestic and international law. We’ve touched on some of those laws already, including the Defamation Act 2013, Communications Act 2003, and the Human Rights Act 1998.
The Human Rights Act in particular has had a significant impact on the development of free speech principles in the UK. The Act incorporated the European Convention on Human Rights into UK law, requiring domestic courts to interpret legislation compatibly with Article 10 of the ECHR, which specifically protects the right to freedom of expression, and allowing them to declare legislation incompatible where that isn’t possible. This has helped protect individuals from restrictions on their right to freedom of expression and created a more nuanced and sophisticated understanding of free speech principles.
Crucially, though, even in 1950 the ECHR recognised that the right to free speech is not absolute, and can be subject to restrictions relating to “national security, public safety, public order, the prevention of crime, the protection of health or morals, or the protection of the rights and freedoms of others.”
Since the creation of the ECHR, additional laws have come into force in the UK that limit free speech in certain situations. The Public Order Act 1986, for example, contains a number of provisions that restrict free speech in certain circumstances, prohibiting behaviour that is threatening, abusive, or disorderly, and restricting the use of threatening language in public places.
Similarly, the Racial and Religious Hatred Act 2006 forbids the publication of material that incites racial or religious hatred, and bans the use of threatening or abusive language that is racially or religiously motivated.
The aforementioned Communications Act 2003 also includes provisions that specifically target the use of the internet to send offensive or indecent messages, and regulates the use of electronic communications to harass or annoy others.
What is the UK doing to regulate social media platforms?
Both the courts and the government continue to play a fundamental role in balancing the right to free speech with other important values like fairness and safety. But in the digital age where social media is king, that job has become a little more complicated.
Over the past few years, the UK government has been actively working to regulate social media platforms, making them safer and more accountable. Perhaps the biggest piece of legislation to come out of this effort so far is the Online Safety Bill, which aims to protect users from harmful content while preserving their right to free speech.
The Bill became law in October 2023 after years of debate by lawmakers.
What does the Online Safety Act say?
- Social media platforms have a duty of care to protect users from a wide range of harmful content, such as hate speech, incitement to violence, animal cruelty, illegal immigration, drug dealing, terrorism, self-harm and suicide content, and child sexual abuse material
- They must have robust systems in place to identify and remove harmful content promptly
- They must implement age verification, including enforced age limits and age-checking measures, to protect children from accessing harmful content
- They must publish transparency reports detailing the risks and dangers to children, their content moderation efforts, and the effectiveness of their systems—they must also allow bereaved parents to obtain information from their children’s accounts
- They must provide clear ways for parents and children to report problems online
Aiming to negotiate a balance between protecting users from harmful content and preserving their right to free speech, the Online Safety Act outlines several safeguards that social media platforms must take into account.
Firstly, the Act provides social media companies with definitions of what the UK government considers to be harmful content, ensuring that platforms have a clear understanding of what is prohibited. The Act also emphasises the importance of human oversight in content moderation decisions, and requires that any action taken is proportionate to the harm the content poses, to prevent over-moderation. Finally, the Act gives users the right to appeal content moderation decisions if they believe they’ve been unfairly treated.
The establishment of the Act has created several new offences, making it illegal to send threats of violence (including sexual violence) online, assist or encourage self-harm, send unsolicited sexual imagery online, and share “deepfake” pornography.
Social media platforms with a significant number of UK-based users face fines of up to £18 million or 10% of their global annual revenue (whichever is greater) if they fail to comply with the new rules.
The Online Safety Act: A sufficient solution?
Since its first iteration went public, the Online Safety Bill has faced criticism from some quarters. Concerns have been raised about the potential for censorship, the erosion of free speech rights, and government intrusion, while some critics have questioned its effectiveness in combatting misinformation.
The Act introduces criminal liability for social media executives who fail to comply with child safety rules or withhold information—a move which critics say may create an incentive to over-moderate online environments.
Opponents of the OSA are worried that this could lead to algorithms filtering out and limiting discussion around important social topics like racial justice and gun violence, or preventing young people from accessing resources related to LGBTQ+ topics.
Particularly divisive is Section 122, which requires messaging platforms to scan users’ messages for illegal material; a move that would, many argue, render end-to-end encrypted messaging services like WhatsApp and Signal effectively pointless and leave them potentially vulnerable to data theft.
James Baker, Campaigns Manager at the Open Rights Group, called the OSA “an overblown legislative mess that could seriously harm our security by removing privacy from internet users,” stating that the Act will also “undermine the freedom of expression of many people in the UK.”
Despite these admonitions, proponents of the Act maintain that it is necessary to protect users, particularly children, from harmful content and that the safeguards included in the Act will help to prevent any overreach.
The impact of the OSA will become clear over time, with the Government itself admitting that the Act may take years to make any discernible difference to our online experiences. Its success will hinge on its enforcement and how well it holds up against the legal challenges that will undoubtedly come once its rules begin to be enforced.
The role of the courts in regulating social media
Since the advent of social media, UK courts have played a pivotal role in shaping the legal landscape around free speech on the internet. Through a handful of landmark cases, the courts have established foundational principles and guided both social media platforms and individuals on the subject of free speech.
In McAlpine v Bercow (2013), Sally Bercow posted a tweet appearing to identify former Conservative Party treasurer Lord McAlpine as the subject of a BBC Newsnight broadcast that falsely linked a “leading Conservative politician” to sex abuse claims. The High Court ruled that the tweet was libellous, highlighting the potential for defamatory statements on social media to cause significant harm.
Another case in 2015, this time involving multiple defendants, highlighted the intersection of social media use and harassment law. In R v Lennox & Others (2015), several people were convicted under the Communications Act 2003 for sending abusive and threatening messages on social media to feminist campaigner Caroline Criado-Perez and MP Stella Creasy.
Widely known as the ‘Twitter joke trial’, Chambers v DPP (2012) focused on the case of Paul Chambers, who was convicted under the Communications Act 2003 for posting a joke about blowing up Robin Hood Airport on Twitter. Although the airport did not consider the tweet a credible threat, Chambers was found guilty of “sending a public electronic message that was grossly offensive or of an indecent, obscene or menacing character”. The conviction was later quashed on appeal by the High Court, which held that the tweet was not intended to be menacing; a big win for free speech campaigners.
The importance of judicial review
When it comes to creating and enforcing laws that govern free speech, judicial review plays an important role.
A critical legal process, judicial review allows the courts to scrutinise decisions made by public bodies. It gives individuals a mechanism to challenge decisions that affect their online expression, such as government rules on removing or restricting content, where they believe those decisions are unlawful, unfair, or violate their right to free speech.
In 2020, digital rights group Foxglove sought judicial review to challenge the Home Office’s use of a visa application algorithm. Foxglove claimed the algorithm was using biased data inputs, including information gathered from social media, and as a result was making decisions that discriminated against applicants of certain nationalities. The challenge argued that the algorithm violated principles of fairness and equality under the law. In the end, the Home Office conceded before a hearing could take place and scrapped the algorithm. Though it didn’t play out in full, this case showed how judicial review can be used to challenge governmental use of social media data on the grounds of legality, fairness, and human rights.
How law firms can support clients in a complex landscape
Clearly, the intersection of social media and free speech is a complicated one. Not only is it covered by a myriad of intricate laws, but the technology behind social media, how we use it, and even the way we view free speech is constantly developing.
It’s a tricky area of law, but with advice and support from legal experts, you can navigate this new and evolving landscape safely. Here are a few tips to keep in mind.
Advice for individuals
1. Be mindful of defamation laws
Avoid making false or unfounded statements about others, especially if those statements could harm someone’s reputation. Defamatory statements can lead to lawsuits for libel if they’re made in writing or another permanent form, such as a social media post. Before you post anything online, think about whether what you’re saying is factual and whether it could be interpreted as damaging. If you’re unsure, take care to phrase your opinions clearly as opinions rather than statements of fact. Or better yet, just don’t post it at all.
2. Avoid posting threats or abusive content
Don’t post anything that could be construed as threatening, harassing, or abusive towards others. This includes direct threats, incitement to violence, and messages that could be interpreted as grossly offensive or designed to cause harm; laws like the Communications Act 2003 make it illegal to send threatening or offensive messages online.
3. Respect privacy and intellectual property rights
Never share private information about others without their consent, and avoid posting content that infringes on others’ intellectual property, such as copyrighted images or videos. Sharing private information (known as doxxing) or intellectual property without permission can lead to legal actions for invasion of privacy or copyright infringement.
Advice for businesses
1. Adhere to advertising and marketing regulations
Do your due diligence to make sure that all promotional content, including social media posts, complies with the Advertising Standards Authority (ASA) guidelines. Be truthful in your advertising, avoid making misleading claims about your products or services, and clearly label paid promotions or sponsored content. Misleading advertisements can lead to sanctions from the ASA, damage to your brand’s reputation, and potential legal action.
2. Be cautious with user-generated content and engagement
Monitor and moderate content on your social media pages, including comments, shares, and retweets. Ensure that any user-generated content that you amplify or leave visible on your platforms does not contain defamatory, offensive, or unlawful material. Businesses can be held liable for content they share or endorse, even if it’s created by others, and failing to address harmful content on your accounts could result in defamation claims, breach of hate speech laws, or violations of other regulations.
3. Protect confidential information and respect data privacy laws
Avoid sharing any confidential business information or personal data about employees, customers, or partners without explicit consent. You should make sure your social media activities comply with the General Data Protection Regulation (GDPR), particularly when it comes to handling and processing personal data; unauthorised sharing of confidential or personal data can lead to serious legal consequences under GDPR, including hefty fines and damage to your business’s reputation.
The future of social media, free speech, and the law
Social media, free speech, and the laws that govern them are rapidly evolving. As the technology that underpins social media advances and societal expectations shift, we’ll no doubt continue to see significant developments in this area, as the law tries to keep up and keep citizens safe online.
We’re already seeing certain trends emerge on the social media regulation front. Governments worldwide are increasing their scrutiny of social media platforms, for example, because of growing concerns about misinformation, hate speech, and the potential for these platforms to influence elections.
There’s also a growing push for social media platforms to be held more accountable for the content that’s shared on their watch, with demands for greater transparency about content moderation practices and potential legal liabilities on the rise.
Due to the global interconnectedness of social media, international cooperation is becoming a vital factor in regulating social media too. This means governments and international organisations are working more closely to develop common standards and best practices to combat key issues like algorithmic bias, misinformation and disinformation, data privacy, and child safety.
Lawmakers are hard at work trying to make social media safer while protecting the free speech of its users, and will need all the help they can get to navigate the complex and wide-ranging issues that come with it. Luckily, there are a few new factors that could come into play to assist them.
Artificial Intelligence, for example, will play an increasingly important role in content moderation and the detection of harmful content as its abilities and accuracy evolve. Decentralised social media platforms may also gain popularity as a way to reduce the power of large tech companies. And new legal frameworks, such as the Online Safety Act and the EU’s Digital Services Act, will develop and equip legal professionals with new ways to address the unique challenges posed by social media platforms.
The evolution of free speech in the digital age
The advent of the digital age has dramatically altered the progression of free speech, with new challenges and opportunities arising that have impacted both the philosophy behind it and its practical application. As technology continues to advance, so too will the legal frameworks governing online communication; especially as users become more digitally literate and critical in their engagement with online content.
We may see a convergence of global regulations governing online platforms, with international bodies like the United Nations playing a more active role in holding platforms accountable through fines or even criminal penalties.
Ruling on cases of both individual and corporate accountability in free speech-related cases may also require courts to develop more nuanced understandings of free speech in the digital age, considering factors like the platform, audience, and potential harm. That’s in addition to expanding their remit to deal with new categories of online content, like deepfakes and algorithmic manipulation, which may require specific legal frameworks.
The future of free speech in the digital age will be driven by a complex interplay of legal developments, technological advancements, and societal shifts. But whatever direction this aeons-long journey takes next, the law must stay in step to protect free speech as a fundamental human right in the digital era.
How will UK law evolve to meet these challenges?
The UK government has been actively working to address the challenges posed by social media, and we can certainly expect further developments under the new Labour leadership.
Enforcing the OSA will be high on the government’s list of priorities, as Ofcom is already being criticised by leading children’s charities for allowing tech giants to utilise loopholes to avoid accountability.
The government is also reportedly considering amending the Act following the far-right riots that took place in August 2024, with leading figures like London Mayor Sadiq Khan calling the OSA “not fit for purpose.”
Also in the pipeline is a bill focused on regulating AI, though so far details are sparse. This bill could have an impact on social media users and free speech if restrictions on the use of AI are brought in.
What we can do to protect ourselves online
While those at the top strive to create laws that balance free speech and the restriction of harmful content, businesses and individuals alike must keep up to date with the changing landscape to protect themselves from legal difficulties while leveraging the opportunities that social media offers.
Ongoing education and awareness of the legalities of free speech and internet usage are critical, and we must make sure we’re well-informed and ready to adapt our online behaviour as new laws and regulations emerge.
Understanding the latest legal developments can help individuals avoid legal pitfalls, protect their rights, and feel empowered to advocate for themselves if their rights are violated.
By staying informed about legal developments, businesses can identify and mitigate potential legal risks associated with social media, ensure compliance with relevant laws and regulations, develop effective policies, and avoid reputational damage caused by legal issues. Here are a few sources you can use to stay informed:
- GOV.UK: https://www.gov.uk/
- Ofcom: https://www.gov.uk/government/organisations/ofcom
- Law Society of England and Wales: https://www.lawsociety.org.uk/en
- Law Society’s Digital Law Guide: https://www.lawsociety.org.uk/en
The ongoing battle for balance
The delicate balance between free speech and the regulation of social media platforms is a knotty issue. Free speech is a fundamental human right, but it is not absolute, and there is a growing need to balance the right to free expression against the right of others to be kept safe from all kinds of harm.
With social media platforms playing a massive role in how modern society connects and communicates, lawmakers must find an appropriate way to regulate these platforms so that our new public squares can be places where all users feel protected and able to speak their minds.
The quest to strike the right balance is ongoing, and from the High Court to legal practices like ours, the law will continue to support the pursuit of this vital equilibrium. Seeking out informed legal guidance can help you and your business navigate social media and all its complexities successfully—without giving up your obsession with TikTok baking videos.
If you’d like further guidance or legal support on this front, get in touch with the friendly team of experts at Coyle White Devine.