
The Online Safety Bill – what does it mean for schools?

28 March 2023

Today’s children and young people are growing up in an increasingly complex world, living their lives seamlessly on and offline, which presents opportunities as well as risks for the education sector. 

The Online Safety Bill (the Bill) aims to make the internet a safer place for all users, including children (i.e. those under the age of 18), by prohibiting providers of user-to-user services (for example, social media platforms) from hosting illegal or harmful content.

Once passed, the legislation will mark a milestone in holding the biggest and most popular social media platforms to account, along with providers of search engines and, more generally, companies whose services host user-generated content such as images, videos, and comments.

The draft Bill is intended to have extraterritorial effect, meaning it will apply to any site accessed from the UK, regardless of where that site is based. It marks one of the biggest planned changes to date in the way online tech companies are held responsible for content that appears on their platforms.

Protection for children

The Bill primarily protects children by imposing a duty on user-to-user service providers to:

  • Remove harmful content, or ensure that it does not appear in the first place
  • Enforce age limits and age-checking measures
  • Ensure risks and dangers to children’s safety are more transparent, including publishing risk assessments
  • Assist parents by making it easier for them to monitor their children's online activities, and by providing clear and accessible ways to report problems

Any platform that is likely to be accessed by children will now have a duty of care, which means the providers must take steps to protect children and young people from accessing content that is illegal or harmful. Some content, while not illegal, may still be harmful or age-inappropriate for children.

Harmful content

Harmful content that platform providers will need to protect children from accessing is likely to include pornographic content, online abuse, cyberbullying, and online harassment, as well as content that does not meet a criminal threshold but which promotes or glorifies topics such as suicide, self-harm, or eating disorders.

Some have been critical of this, as what is defined as harmful can be subjective. Content that may be viewed as harmful can also be helpful to some individuals; for example, some young people use self-harm-related content to help them avoid engaging in the act.

Many social media platforms only allow users over the age of 13 on their platforms. However, according to Ofcom's 'Children and parents: media use and attitudes' report published in 2022, 33% of 5-7 year-olds and a staggering 60% of 8-11 year-olds said they had a social media profile. Organisations providing social media platforms will need to demonstrate that their age verification processes are robust enough to assess whether users are the appropriate age for the content.

Slippery slope or pertinent protection?

The Bill raises concerns regarding freedom of speech and privacy. It proposes measures to end anonymous browsing by requiring some online service providers to implement age-verification checks for users. This means that individuals will have to provide proof of their age and identity before being able to access certain content or services online. While this will help to protect children and young people from accessing inappropriate or harmful content, it could also have negative impacts on privacy and freedom of expression.

Implementation and enforcement 

All organisations in scope will need to tackle illegal content on their services and assess whether their services are likely to be accessed by children.

There are questions about how the Bill would be enforced and whether it would be effective in achieving its goals, given the vastness of the internet. As regulator, Ofcom is to be granted enforcement powers, including the power to impose substantial fines for breaches of up to £18 million or 10% of a provider's annual global turnover (whichever is greater).

It is also anticipated that Ofcom may request an order from the courts to restrict access to the relevant service being provided by the platform provider. Company executives who fail to cooperate with Ofcom’s requests could also face prosecution or imprisonment.

The Bill will also require that harmful content, where it is seen, is easy to report. Platform providers will also have a duty to report child sexual exploitation and abuse content to the National Crime Agency.

What you can do 

For schools, the safeguarding duties are likely to remain the same. The Bill does not remove responsibilities from schools or mean safeguarding children and young people online is put into the hands of the tech companies instead. 

We’ve seen the online safety section of Keeping Children Safe in Education grow year on year and when the Bill is passed, we are likely to see further expansion, if not separate guidance. 

You all know the importance of online safety. Staff are made aware of the risks, policies cover it, and through the curriculum schools teach children and young people about keeping themselves safe online and keeping their data safe, too. Filtering and monitoring systems then provide an additional layer of support and protection.

The Bill will not make these approaches redundant; schools will still need to take the lead with online safety, but the Bill should make the platforms take more responsibility too. That can only be a good thing.

We provide a range of support and resources relating to safeguarding to help you meet the requirements of the latest Keeping Children Safe in Education (KCSiE) guidance.

Find out more

Key contact

Dai Durbridge

Partner

dai.durbridge@brownejacobson.com

+44 (0)330 045 2105
