Elevate Magazine
December 20, 2024

UK Launches Strict Online Safety Law for Tech Firms


Photo source: Light Reading

On Monday, the United Kingdom's online safety regime officially came into force, introducing stricter oversight of harmful online content and exposing major tech companies such as Meta, Google, and TikTok to potential fines.

Ofcom, the UK’s media and telecommunications regulator, released its inaugural codes of practice and guidelines for tech firms. These documents set out the measures firms are expected to take against illegal activity on their platforms, including terrorism, hate speech, fraud, and child sexual abuse.

The Online Safety Act, which became law in October 2023, imposes “duties of care” on tech companies, making them accountable for harmful content shared on their platforms. Although the act passed more than a year ago, Monday marks the point at which its safety duties formally take effect.

Tech platforms have until March 16, 2025, to complete illegal harms risk assessments, effectively granting them a three-month window to align their platforms with the new regulations. Following this deadline, platforms must begin implementing measures to mitigate illegal harms risks, such as improved moderation, simplified reporting processes, and integrated safety tests.

Enforcement and Penalties

Under the Online Safety Act, Ofcom can impose fines of up to 10% of a company’s global annual revenue (or £18 million, whichever is greater) for rule violations. In cases of repeated breaches, individual senior managers could face imprisonment. For the most severe infractions, Ofcom may seek a court order to block UK access to a service or restrict its access to payment providers or advertisers.

The duties will encompass social media firms, search engines, messaging services, gaming and dating apps, as well as pornography and file-sharing sites. The first-edition code mandates that reporting and complaint functions must be more accessible and user-friendly. High-risk platforms will be required to employ hash-matching technology to detect and remove child sexual abuse material (CSAM).

Ofcom emphasised that the codes published on Monday are just the initial set, with plans to consult on additional codes in spring 2025. These future codes may include measures such as blocking accounts found to have shared CSAM and enabling the use of AI to tackle illegal harms.

“If platforms fail to step up, the regulator has my backing to use its full powers, including issuing fines and asking the courts to block access to sites,” said British Technology Minister Peter Kyle.