On 26 October 2023, the Online Safety Act received Royal Assent and became law, with its duties taking effect in phases. The Act aims to enhance online safety by imposing new responsibilities on social media companies. It adopts a zero-tolerance approach to safeguarding children online, making platforms accountable for the content they host. Platforms will be required to promptly remove illegal content, prevent children from accessing harmful content, enforce age limits and age-checking measures, be transparent about the risks their services pose to children, and offer accessible reporting mechanisms for users who encounter problems online.

The Act also protects adult internet users: platforms must remove illegal content, the promises made in their terms and conditions become legally enforceable, and adults gain the option to filter out harmful content such as bullying. Further, the Act includes provisions to combat online fraud and violence against women and girls, making it easier to prosecute those who share intimate images without consent and criminalising the non-consensual sharing of deepfake content. Social media companies will also be required to remove content depicting animal cruelty, even where the activity takes place outside the UK but is visible to users in the country.

Non-compliant platforms could face fines of up to GBP 18 million or 10% of their global annual revenue, whichever is greater, and company executives could even face imprisonment. New communication offences covering cyberflashing, epilepsy-trolling and fake news will come into force on 31 January 2024.