The DSA has a broad scope and regulates many aspects of digital services.

By Gail E. Crawford, Jean-Luc Juhan, Susan Kempe-Mueller, Deborah J. Kirk, Lars Kjølbye, Elisabetta Righini, Sven Völcker, Ben Leigh, Victoria Wan, and Amy Smyth

The Digital Services Act (DSA) is a key part of the EU’s digital regulation strategy, which seeks to modernise legal frameworks and create a safer and more open digital environment.

The DSA entered into force on 16 November 2022.

The directives aim to assist claimants in proving causation and product defectiveness in claims involving complex AI systems, while creating legal certainty for providers.

By Deborah J. Kirk, Thomas Vogel, Grace E. Erskine, Ben Leigh, Alex Park, and Amy Smyth

On 28 September 2022, the European Commission issued two proposed directives to reform and clarify liability rules on artificial intelligence (AI):

  1. The Directive on Adapting Non-Contractual Civil Liability Rules to Artificial Intelligence (AI Liability Directive) introduces rules on evidence and causation to facilitate civil claims for damages in respect of harm to end users caused by AI systems.
  2. The Directive on Liability for Defective Products (Revised Product Liability Directive) seeks to repeal and replace the 1985 Product Liability Directive (Directive 85/374/EEC) with an updated framework to better reflect the digital economy. The Revised Product Liability Directive proposes to explicitly include AI products within the scope of its strict liability regime and to modify the burden of proof for establishing defectiveness of technically or scientifically complex products like AI systems.

The UK government and regulators have taken several steps to implement a 10-year strategy published last year outlining the government’s pro-innovation national approach to AI.

By Deborah J. Kirk, Laura Holden, Nara Yoo, and Amy Smyth

The UK Department for Digital, Culture, Media and Sport (DCMS) published its 10-year National AI Strategy for the regulation and promotion of artificial intelligence (AI) in the UK (Report). DCMS seeks to build “the most pro-innovation regulatory environment in the world”.

The proposed Regulation will be the first EU legal framework specifically focused on the rapidly accelerating landscape of AI.

By Deborah J. Kirk, Elisabetta Righini, Laura Holden, Luke Vaz, and Amy Smyth

The feedback period for the European Commission (EC) proposal for the Regulation of artificial intelligence (AI) (COM (2021)206) (proposed Regulation) closed on 6 August 2021, during which time 304 pieces of feedback were received, marking another milestone in pursuit of the first EU legal framework specifically focused on AI.

UK strives to “be a leader in AI technology” as it sets out its next steps for the regulation of artificial intelligence.

By Deborah J. Kirk, Laura Holden, and Victoria Wan

On 23 March 2021, the UK Intellectual Property Office (IPO) published the outcome of its consultation last year on artificial intelligence (AI) and intellectual property (IP) (the Response). The Response highlights the UK’s ambition to “be a leader in AI technology” by developing and adapting IP legislation in light of developments in AI technology, notwithstanding the fact that such developments may, post-Brexit, result in a divergence between the approach taken in the UK and the EU.

Since the UK’s withdrawal from the EU, the UK has clearly indicated that it will take its own path to encourage AI innovation while protecting IP rights to solidify its position as a leader in AI innovation. The UK has been active in its attempt to secure this position. The Response follows the AI Council’s publication of an AI Roadmap in January this year and the House of Lords Liaison Committee’s publication of AI in the UK: No Room for Complacency in December last year, two reports that recognise the importance of good governance and regulation for public trust while specifying that flexible regulation is critical. The Response is an example of one of the actions recommended by the House of Lords’ Liaison Committee for sector-specific regulators (such as the IPO) to identify gaps in regulation (such as IP legislation) to address issues raised by AI.

The long-awaited update to the e-Commerce Directive proposes new obligations for online platforms and changes to the ‘safe harbours’ from liability for infringing content.

By Jean-Luc Juhan, Deborah J. Kirk, Elisabetta Righini, Thies Deike, Grace E. Erskine, Alain Traill, and Amy Smyth

On 15 December 2020, the European Commission released a set of long-awaited proposals to create a safer and fairer digital space, including the Digital Services Act (DSA) and the Digital Markets Act (DMA).

As IT vendors grapple with the impacts and risks of COVID-19, how can customers manage exposure when contracting for new services?

By Alain Traill, Christian F. McDermott, and Andrew C. Moyle

COVID-19 has — temporarily or otherwise — disrupted the status quo. For IT vendors the situation is no different, with many being forced to dust off their contracts and seek relief under force majeure provisions. So, where does this leave customers that are seeking to enter into new IT service arrangements, but find themselves faced with pushback from increasingly risk-averse vendors (not just in terms of COVID-19, but also future COVID-19-type scenarios)?

The question of where risk should lie is becoming a new battleground in negotiations. This post focuses on the customer’s perspective, setting out some key steps that customers can take to help achieve a balanced contract.

Insights from Latham’s flagship event: Managing the risk and promise of digitisation in financial services

By Andrew Moyle, Nicola Higgs, Christian McDermott, and Kirsty Watkins

The financial services industry is leading the way in outsourcing, with contract values in excess of US$10.7 billion in 2018, causing regulators to focus more than ever on the associated risks. Guidelines on outsourcing arrangements from the European Banking Authority (EBA), which came into effect on 30 September 2019, expand the requirements on institutions in this area, while both the Financial Conduct Authority (FCA) and the Prudential Regulation Authority (PRA) are also increasing their outsourcing supervision and enforcement activity.

We discussed the new requirements for financial institutions to maintain a register of outsourcing arrangements, and adhere to more stringent risk assessment and due diligence requirements at our recent event entitled Balancing the Scales: Managing the Risk and Promise of Digitisation in Financial Services.

With the explosion of AI applications, private equity houses and their portfolio companies must understand where key opportunities lie.

By Tom Evans, Kem Ihenacho, David Walker, Laura Holden, Hector Sants, Claudia Sousa, Catherine Campbell, and Patricia Kelly


Artificial intelligence (AI) developments provide increasing opportunities for private equity, including deal sourcing and portfolio company analysis and enhancement, particularly in businesses that can adopt a customer subscription model or leverage big data opportunities. However, the adoption of AI technologies, and investments in new AI businesses, pose significant challenges. To ensure that time and capital are deployed productively, firms must understand the market space and usage for these tools, and the workings and accuracy of any underlying technology. How technology models and algorithms work, where underlying IP resides, and where data is stored are key. Although the use of AI is often discussed, it is far less often understood; amid the explosion of AI applications, PE houses and their portfolio companies need to understand which opportunities they are best placed to exploit.

A Tool to Secure Deal Opportunities and Drive Portfolio Company Growth 

According to a survey conducted by Intertrust, 90% of private equity firms expect AI to have a transformative impact on the industry. AI-backed data analytics are playing a growing role in analysing and identifying deals. QuantCube Technology, for example, provides in-depth data analysis, drawing on customer reviews and social media posts to develop predictive indicators of events, such as economic growth or price changes. There are now companies offering AI-driven technologies that claim to help source PE deals. While this presents a potentially compelling use of AI for investors, it remains to be seen whether these technologies will deliver results. 

UK publishes White Paper with hard-hitting regulatory proposals to tackle online harms.

By Alain Traill, Stuart Davis, Andrew Moyle, Deborah Kirk, and Gail Crawford

On 8 April 2019, the Home Office and the Department for Culture, Media and Sport (DCMS) published an “Online Harms White Paper”, proposing a new compliance and enforcement regime intended to combat online harms. The regime is designed to force online platforms to move away from self-regulation and sets out a legal framework to tackle users’ illegal and socially harmful activity. Although the regime appears to target larger social media platforms, the proposals technically extend to all organisations that provide online platforms allowing user interaction or user-generated content (not limited to social media companies or even ‘service providers’ in the traditional sense) and set out a potentially onerous and punitive compliance and enforcement regime for a broad set of online providers.