The United Kingdom’s landmark Online Safety Act officially came into effect on Monday, imposing strict regulations on social media platforms such as Meta’s Facebook and ByteDance’s TikTok to combat illegal activity and build user safety into the design of their services.
The legislation, enacted last year, introduces robust standards to protect children and ensure the removal of illegal content. Communications regulator Ofcom has published its first set of codes of practice, focusing on tackling illegal harms such as child sexual abuse material and content encouraging suicide.
Platforms have until March 16, 2025, to assess risks posed by illegal content to both children and adults. They are required to implement measures such as enhanced moderation systems, accessible reporting tools, and built-in safety mechanisms to mitigate these risks.
Ofcom Chief Executive Melanie Dawes stressed the importance of compliance, warning:
“We’ll be watching the industry closely to ensure firms meet the strict safety standards set under our first codes and guidance, with additional requirements to follow in the first half of next year.”
Under the new guidelines, platforms deemed high-risk must deploy tools like hash-matching and URL detection to identify and remove child sexual abuse material. They must also ensure their reporting and complaints systems are straightforward for users to access and navigate.
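In broad terms, hash-matching works by computing a digital fingerprint of each uploaded file and checking it against a database of fingerprints of known illegal material. The Python sketch below illustrates the idea only; the blocklist and digest values are hypothetical placeholders, and real deployments rely on perceptual hashing schemes such as PhotoDNA or PDQ, which can also catch re-encoded or slightly altered copies rather than exact byte-for-byte matches.

```python
import hashlib

# Hypothetical blocklist of known-illegal SHA-256 digests (placeholder value,
# not a real entry). Production systems use perceptual hashes instead, so that
# visually similar images match even after resizing or re-compression.
KNOWN_BAD_HASHES = {
    "0" * 64,  # placeholder digest for illustration
}

def is_known_illegal(data: bytes) -> bool:
    """Return True if the upload's SHA-256 digest appears in the blocklist."""
    digest = hashlib.sha256(data).hexdigest()
    return digest in KNOWN_BAD_HASHES

# Screen an upload before it goes live.
upload = b"example file contents"
if is_known_illegal(upload):
    print("Upload blocked and escalated for human review.")
else:
    print("No match against the known-hash database.")
```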
Failure to comply with the new regulations could result in severe penalties, including fines of up to £18 million ($22.3 million) or 10% of a company’s global annual revenue, whichever is greater. UK Technology Secretary Peter Kyle described the measures as a “significant change,” stating:
“If platforms fail to step up, the regulator has my full backing to use its powers, including issuing fines and blocking site access through the courts.”
The introduction of the Online Safety Act represents a major milestone in strengthening online protections, holding tech companies accountable, and setting a global precedent for digital safety standards.