
Why the Online Safety Act won’t stop the kind of misinformation that has led to unrest


Byline Times is an independent, reader-funded investigative newspaper that stands outside the established press system and reports on “what the newspapers don’t say” – without fear or favour.

To support its work, subscribe to the monthly Byline Times print edition, packed with exclusive investigations, news and analysis.

The question of whether there are laws against the spread of disinformation online, which I discussed here last week, has now turned into a debate about the Online Safety Act (OSA).

Following the riots – sparked by the deaths of three young girls in Southport – there has been much talk about the misinformation and disinformation that underpinned them, and how quickly it spread unchecked on social media.

After the riots suddenly ended on August 7, London Mayor Sadiq Khan said: “I think the government has been very quick to recognise that there needs to be changes to the Online Safety Act. I think the government should very quickly look at whether it is fit for purpose. I think it is not fit for purpose.”

Prime Minister Sir Keir Starmer warned that social media was not a “lawless zone”. Photo: PA Images / Alamy

Cabinet Office Minister Nick Thomas-Symonds told Sky News that, while some aspects of the OSA have not yet come into force, the Government is “ready to make changes if necessary” and that “I think we will look at the existing legal framework for regulating platform providers”.

And Prime Minister Keir Starmer stressed that social media was “not a lawless zone” and warned: “After these riots, we need to look at social media more broadly.”

However, before moving on to reforming the OSA, it is crucial to understand what the system does.

In short, it imposes a series of obligations on online service providers that make them responsible for the safety of users of their platforms. Providers are legally required to develop and implement systems and processes that reduce the risk of their platforms being used to transmit illegal material. And when it does appear, they are legally required to remove it quickly. They must also take measures to protect children from certain types of legal but harmful material.


The key point here, however, is that for the most part the OSA does not create new categories of illegal content – it simply obliges companies to take action against material that is already illegal under the various laws listed in Schedules 5 to 7 to the Act. In certain cases, however, it does insert new provisions into existing laws, such as the Sexual Offences Act 2003.

But while there are laws against hate speech and incitement to violence, for example, there are almost no laws against the spread of misinformation and disinformation. The former is false or misleading content that those who spread it may not realise is false or misleading. The latter is false or misleading material spread with the intent to deceive, and often to cause harm of some kind.

So what do the laws say about disinformation and misinformation? Two things.


First, there is section 179(1) of the OSA, which reforms sections of the Malicious Communications Act 1988 and the Communications Act 2003 to create an offence of false communication, which occurs when someone sends a message containing information they know to be false with the intention of causing “substantial psychological or physical harm to a likely audience”.

Section 180 states that “a recognised news publisher cannot commit an offence” under this section, thereby completely exempting the online versions of British newspapers from liability.

The main problem here, however, is that in order to prosecute such an offence, it would have to be proven both that the accused knew the message was false when it was sent and that they intended to cause “significant psychological or physical harm” – and what is meant by this is unclear.

It is therefore very difficult to imagine how, for example, Bernie Spofforth could be successfully prosecuted under section 179. She is accused of making the first post online that falsely named the Southport attacker, adding at the time that, if the details were true, “all hell would break loose”.


But the details were not true, and by then her post had gone viral. She later explained: “I didn’t make this up. I first got this information from someone in Southport” – but by that point the damage had already been done. It should also be pointed out that Spofforth has never said she intended to cause harm in the first place.

The second relevant law is section 13 of the National Security Act 2023, which criminalises “endangering the safety or interests of the United Kingdom”, including by taking part in state-sponsored disinformation campaigns.

As Ofcom notes in its consultation on the OSA, such campaigns can “undermine confidence in the democratic process” – and, as we have seen in the UK over the past few years, disinformation of this kind is extremely worrying, whether it is state-sponsored or not.

Social media is particularly vulnerable to this type of activity because perpetrators can create fake profiles that can then be manipulated by bots and spread algorithmically.


This law is also unlikely to catch the kind of disinformation that caused such consternation during the unrest. Much of it was generated domestically, and even where it originated abroad, it would be extremely difficult to prove in court that it was the product of an actual state operation.

Furthermore, Ofcom tends to emphasise the harm that this type of disinformation causes to individuals – in line with the definition of harm in section 234 of the OSA, which refers to “physical and psychological harm”, as opposed to wider systemic harm affecting society as a whole.

The consultation states that “in most cases, the harms we examine primarily affect the individual affected”. And when discussing the harm caused by false communications, it states: “We anticipate that false communications are used to provoke a response from someone. This may be to encourage someone to undertake a particular activity, or to cause a person to experience psychological effects such as fear and anxiety.”

It is instructive to compare this with the EU’s Digital Services Act 2022, which states: “Manipulative techniques can negatively impact entire groups and amplify societal harm, for example by contributing to disinformation campaigns or discriminating against certain groups.” Areas to be considered are highlighted as “the potential negative impact of systemic risks on society and democracy, such as disinformation or manipulative and abusive activities.”


All this suggests that if the government is serious about curbing the kind of misinformation and disinformation that contributed to the unrest, it will need to go beyond the OSA and pass laws that can outlaw such content and then add it to the list of “priority offences” in Schedule 7 of the Act.

However, should a government attempt to do so, it will face the absolute determination of influential sections of the established press to keep out of the law any measures they believe could be used against the kind of journalism in which they specialise.

Even before its concerted attack on the 2019 Online Harms White Paper’s proposals to act against “disinformation” and “false or misleading information”, the News Media Association (NMA) had been on the warpath.

In 2017, the NMA used its evidence to the Digital, Culture, Media and Sport (DCMS) Select Committee to launch a fierce and highly personal attack on those who questioned the credibility of the journalism in some of its members’ newspapers.

A collage of British newspaper front pages on the subject of migration. Photo: Alamy

The NMA cited Stop Funding Hate, Hacked Off and IMPRESS, as well as individuals who work with them, as examples and complained that the term “fake news” was “used to attack real news, typically with the aim of intimidating the press, silencing dissent and shutting down debate”.

This was done by people “who long for the day when three national newspapers have to close their doors.” The NMA warned: “Although we take the phenomenon of fake news seriously, the branding of real news as ‘fake news’ is the greater threat to democracy at the moment.”

Without a doubt, this is the reaction the current Government will face should it attempt to legislate against misinformation and disinformation in a way that would affect what is published in newspapers such as the Telegraph, Mail, Sun and Express, whether online or offline.

Meanwhile, given Ofcom’s apparent unwillingness or inability to enforce its own Broadcasting Code in the case of GB News, one wonders how on earth it intends to enforce the Online Safety Act when faced with the legal firepower and fabulous wealth of Silicon Valley’s tech titans.
