The amended bill aims to safeguard freedom of expression whilst still protecting children and adult users in the online environment.

By Gail E. Crawford, Deborah J. Kirk, Alain Traill, and Victoria Wan

The Online Safety Bill (the Bill) was introduced by the UK government on 17 March 2022. The Bill aims to impose obligations on in-scope “user-to-user services” and “search engines” to implement adequate processes to protect users from illegal and harmful online content. Service providers are in scope if they are linked to the UK by either (i) having a significant number of UK users or targeting the UK market; or (ii) being accessible by individuals located in the UK and posing a material risk of significant harm to such users. For more information, read Latham & Watkins’ summary of the Bill as initially drafted here and of the previous amendments from September 2022 here.

The Bill returned to Parliament on 5 December 2022 with a series of major amendments to the previous draft, as detailed in the Written Ministerial Statement of 29 November 2022 and tabled here. The amendments respond to criticism from various stakeholders, including feedback that the obligations relating to “legal, but harmful content” are unclear and/or curtail free speech online, and that the obligations relating to children are insufficient to ensure children’s safety online.

What the New Amendments Cover

  • Legal, but harmful content: The previous draft of the Bill imposed duties of care on in-scope service providers to address both illegal content and legal, but harmful content. In response to stakeholder concerns about the impact of these rules on freedom of speech, the latest amendments replace the previous obligations to remove legal, but harmful content with a “Triple Shield” of protection:
    • Removal of illegal content: Service providers would be under a duty to remove illegal content from their platforms, but would not necessarily need to remove legal, but harmful content.
    • Removal of content in breach of terms and conditions: Service providers would also need to remove any content that breaches their terms and conditions. However, the Bill does not make clear how much latitude service providers would have in deciding which types of content constitute a breach of their terms and conditions.
    • Greater user control over content: Service providers would need to provide empowerment tools that allow adult users to decide what content they wish to engage with. Providers would need to specify what content relates to suicide, self-harm, eating disorders, hate based on discrimination, or other areas that could affect vulnerable users. The Bill contemplates tools including blocking content flagged by other users, warning screens, and human moderation. Category 1[1] service providers would still need to provide tools to block anonymous or unverified users, as per the previous draft of the Bill. These control tools aim to allow users to tailor their exposure to unsolicited content without suppressing freedom of expression.
  • Children’s safety online: The previous draft required service providers to carry out risk assessments of the dangers posed to children, but it did not place a duty on service providers to publish those risk assessments. The amendments would require Category 1[2] service providers to proactively publish their risk assessments of material that is illegal or harmful to children. They would also require all user-to-user services to take proportionate measures, relating to the design and operation of their service, to effectively tackle offences against children (e.g., by using age verification or another means of age assurance). Furthermore, if service providers specify a minimum age for users, they will also have to specify in their terms and conditions the measures used to enforce it. The amendments also name the Children’s Commissioner as a statutory consultee for Ofcom, so that the codes of practice reflect robust measures to protect children.
  • New offences: The amendments introduce offences for epilepsy trolling (offences in relation to flashing images), encouraging or assisting serious self-harm, and sharing people’s intimate images without consent. The offence of controlling or coercive behaviour, which was already in the draft Bill, has now been added to the list of priority offences in order to better protect women and girls online.
  • More transparency and accountability: Category 1 and 2[3] service providers would need to produce an annual transparency report for Ofcom relating to content that is harmful to children and priority content that is harmful to adults[4] by means of the service. As part of the transparency requirements, Ofcom can also require service providers to publish details of the enforcement action Ofcom takes against them.

Next Steps

Because the Bill has been substantially amended since the previous draft, the UK Parliament approved a motion to move the Bill back one step in the legislative process. Accordingly, the Bill returned to the “committee stage” in the House of Commons on 5 December 2022, so that certain clauses could return to the Public Bill Committee for further consideration and scrutiny.

The Bill will likely pass from the House of Commons to the House of Lords in early 2023. As the Bill was “carried over” from a previous parliamentary session, the government must complete the legislative process by the end of the current session (April 2023).

This post was prepared with the assistance of Alice Maresi in the London office of Latham & Watkins.

Endnotes

[1] The exact threshold conditions for Category 1, Category 2A, and Category 2B will be determined by Ofcom “as soon as reasonably practicable” after the first sections of the Bill come into force (see Section 83 of the Bill).

[2] As above.

[3] As above.

[4] “Priority content that is harmful to adults” is defined as content of a description designated in regulations made by the Secretary of State as priority content that is harmful to adults (see Section 55 of the Bill).