So, the Online Safety Bill is once again back under consideration and already looking like it's getting softer. The proposed dropping of the “legal but harmful” clause is another example of a focus on individual privacy winning out over monitoring and filtering in the interests of public, and child, safety.
Now I understand the challenge here of balancing individual privacy and public good. Individual privacy is enshrined in the principles of basic human rights, yet we want our governments, intelligence services, police and even schools to be able to monitor and filter content to keep people safe and to proactively identify potential threats to the lives and wellbeing of those under their care. These are opposing points on a continuum, and each step in one direction usually comes at the expense of the other. More privacy means less ability to monitor and filter in the interests of public good. More filtering and monitoring means less privacy and a greater risk of data being misused or leaked.
To me it is clear that there is a definite tendency towards individual privacy winning out in this argument. Apple quietly dropping its plan to monitor iCloud accounts for Child Sexual Abuse Material (CSAM), and now the UK government looking to remove the “legal but harmful” clause, are two good examples of how privacy is winning. I doubt this will change, at least for now, especially as more and more organisations face fines and public scrutiny over how they manage individuals' data. So what is the solution, particularly in relation to schools, where online safety is such a key issue?
I think the key here is in establishing very clearly the need for social media vendors to look after children using their platforms. Maybe the “legal but harmful” clause is inappropriate when applied across the general population, but surely we must be able to agree that we need to protect our children, and therefore to identify some of the materials which might be legal yet harmful to them. And it isn't just the content that is the issue, but the medium and the algorithms feeding the content. Is it right to categorise a child, when children are more impressionable, and then constantly feed them a specific type of content in order to keep them hooked on an app? Might this not shape their world view such that they see things as rather binary, rather than reflecting the more nuanced and complex nature of the real world and real life? Is it right to feed children almost constant streams of content, including potentially harmful content, or to provide contact with unknown individuals? We need to make the vendors consider the medium they are providing, along with their algorithms and the potential impact these have, rather than just pointing to the content as the issue which needs to be dealt with.
I will admit I saw problems with the Online Safety Bill from the outset, and even more so given it was first proposed as a draft in May 2021, over 18 months ago. In the technology world 18 months is a long time and a lot can happen, which highlights how legislation will always be playing catch-up. My original concerns, I will admit, were more on the technical side of things. Privacy points towards end-to-end encryption and other security solutions which then hamper monitoring and filtering, plus there is the challenge that social media vendors cross geographic jurisdictions, where different governments may have different motives and ethical standards for the monitoring they may require or request. Also, any weakening of security and privacy may in turn increase the likelihood of cyber criminals gaining access to data. So my concern was that, although the bill might be well meaning, it would be difficult or impossible to implement effectively.
That said, something needs to be in place, and I think this is the point we have now got to: we need to accept something imperfect as a starting point and then hopefully build from there. I would also note that the responsibility for online safety doesn't just belong to the centralised provider of social media and other services, or to the centralised government of the nation within which a user resides. When we talk of online safety and children, parents and guardians also have their part to play, as do school pastoral teams, form group tutors and teachers, friends and other members of a child's wider social and family circle. And maybe this focus on the Online Safety Bill as a single answer is actually having a negative impact, taking our eye off the need for a wider and collective effort to keep children safe.
I suspect the solution at this point is to get the Online Safety Bill into law. It's better than nothing, can add to the wider efforts required, and can hopefully be seen as a step in the right direction rather than an endpoint.