The proposal includes fines for non-compliance of up to the greater of £18 million or 10% of a provider’s annual global revenue.

By Gail Crawford, Rachael Astin, Alain Traill, Katie Henshall, and Amy Smyth

On 12 May 2021, the UK government published the Online Safety Bill (the Bill), which aims to establish a new regulatory regime to address illegal and harmful content online, including fines and other sanctions in the event of non-compliance. While further developments and guidance are expected, the proposed regime is likely to have significant implications for in-scope user-to-user services and search engines.

The Bill follows the publication of the Online Harms White Paper by the Home Office and the Department for Digital, Culture, Media & Sport in April 2019. An initial government response to the consultation was published in February 2020, and a full government response in December 2020. (For more information, see Latham’s blog posts on the White Paper launch, the government’s interim response, and the government’s full response.)

Organisations and Services in Scope

The new regulatory regime will apply to providers of regulated services, specifically:

  • User-to-user services: Internet services that allow users to upload, generate, or share user-generated content or otherwise to interact online, e.g., social media platforms, online marketplaces, and online forums
  • Search services: Services that allow users to search all or some parts of the internet

The Bill’s scope is subject to widely drawn exemptions for services deemed to pose a low risk of harm to users, or that are otherwise regulated, including email or text messaging-only services, internal business services (such as an organisation’s intranet), and services with only limited user-to-user functionalities. Content on news publishers’ websites is also excluded from the scope of the legislation.

The legislation is extra-territorial and will apply to regulated services with links to the UK. Such links are defined as either:

  • (i) having a significant number of users in the UK (“significant” is not defined in this context); or (ii) being targeted towards UK users; or
  • (i) being capable of being used by individuals in the UK; and (ii) giving rise to a material risk of significant harm to individuals in the UK arising from content on or via the service

Duty of Care

In line with the government’s response to the Online Harms White Paper, the Bill imposes a range of statutory duties of care on regulated services providers, broadly to protect users from illegal content generated and shared by other users. In relation to harmful content, there are additional safeguarding obligations for services “likely to be accessed” by children, and additional transparency requirements for services designated as Category 1 services (please see below). The proposed duty of care imposes requirements on providers, both in terms of processes they must implement and their moderation of specific content.

Illegal content is defined by reference to content that is, in fact, illegal, as well as content that the provider has “reasonable grounds to believe” is illegal (under UK law). Providers are required to take proportionate steps to mitigate and effectively manage the risk of harm caused by illegal content, and, additionally, to use proportionate systems and processes to minimise the presence of certain priority illegal content (to be defined in future regulation) and swiftly remove such content on notice.

Ofcom will identify Category 1 services at a later date, based on threshold conditions to be set out in secondary regulation (referencing the number of users, the functionality of the service, and the risk of harm from content); those regulations have not yet been released. To ensure a risk-based approach to applying the new regime, Category 1 services are subject to additional layers of duties, primarily risk assessment and transparency requirements in relation to content that is harmful to adults (these duties are less onerous than the safeguarding duties in relation to content harmful to children). Harmful content includes content that the provider has “reasonable grounds to believe” gives rise to a “material risk” of a “significant adverse physical or psychological impact on an [adult/child (as applicable)] of ordinary sensibilities”, and is expected to be further defined in future regulation.

The broad scope of regulated content means that regulated providers may be liable for content that is deemed illegal or harmful under the proposed legislation, but that is not in fact unlawful and does not give rise to liability for the publisher of the content. This potential to impose liability on platform providers, absent equivalent publisher liability for that content, marks a significant shift in risk allocation for user-to-user platforms.

In addition to duties to safeguard against illegal and harmful content, regulated services providers are also under parallel duties to have regard to freedom of speech and privacy rights, and, for providers of Category 1 services, the free expression of both journalistic content and “content of democratic importance”.

The proposed multiple layers of extensive duties of care, combined with the broad scope of regulated illegal and harmful content, are likely to prove onerous for regulated providers and may involve significant cost and resource implications for compliance.

Oversight and Enforcement

As anticipated, the Bill confers powers on Ofcom to regulate the regime, and requires Ofcom to identify Category 1 providers and to prepare codes of practice to assist regulated services providers in complying with their relevant duties of care.

Such codes of practice should be produced in line with the online safety objectives and should describe recommended steps to ensure compliance. Businesses must either comply with the steps recommended in the codes of practice or justify a departure from those steps.

Ofcom will have access to a range of sanctions, including: (i) imposing fines of up to £18 million or 10% of a provider’s annual global revenue, whichever is higher; (ii) seeking a court order to disrupt the activities of non-compliant providers (or to prevent access to their services altogether); or (iii) pursuing criminal action against named senior managers whose companies do not comply with Ofcom’s requests for information (although this provision will not take effect immediately).

As previously envisaged in the consultation responses, a super-complaints procedure will allow certain eligible entities to make a complaint to Ofcom (eligible entities will be defined in future regulation, but are expected to include consumer rights organisations and similar bodies). Although the Bill does not introduce new routes for individuals to bring claims directly against providers, the expanded regulatory regime and increased transparency requirements are likely to make it easier for individuals to pursue claims in practice. Over the long term, providers may therefore face a steadily increasing risk of damages claims alongside regulatory enforcement.

Next Steps

The Bill will now be subject to pre-legislative scrutiny by a joint committee of MPs and peers. A final version of the draft legislation is expected to be formally introduced to the UK Parliament later this year.

While many aspects of the regime and Ofcom’s codes of practice remain unclear, businesses should prepare for the introduction of the new regime now, bearing in mind the potential for significant operational impact. Aspects of the new regime, including content risk assessments, risk mitigation measures, and transparency, are also expected to apply in the EU pursuant to the EU’s draft Digital Services Act. Though similar in some areas, differences between the two proposed regimes do (and will) exist. Platform businesses operating in both the UK and the EU should therefore be alive to the requirements of preparing for, and complying with, both regimes in parallel.