Walking the tightrope between online safety and privacy

What are governments and tech companies doing with regard to digital safeguarding, and how can we ensure that we achieve the right balance with other digital rights? At the RAID H2 conference in October 2022, Prof. Victoria Baines, IT Livery Company Professor of IT at Gresham College, opened the debate between Thomas Van der Valk, Privacy Policy Manager EMEA, Meta; Joanna Conway, Partner, Internet Regulation (Legal), Deloitte; Adv. Collen Weapond, Commissioner, Information Regulator (South Africa); and Andrew Carroll, Assistant Commissioner, Data Protection Commission (Ireland).

Prof. Victoria Baines, IT Livery Company Professor of IT, Gresham College: Thomas, what’s going on in terms of regulation and also in terms of what companies are doing to make the online space safer?

Thomas Van der Valk, Privacy Policy Manager EMEA, Meta: There’s a lot going on in terms of regulation around youth. Meta is investing a lot in this space to make sure that all users, but particularly youth, are protected on our services. If we want people to connect with each other across the globe, they need to feel safe doing so on our services. So we are heavily invested in getting that right, but that is definitely a challenge.

For young people, it is also incredibly important to be able to go online in safe environments in order to educate themselves, to develop themselves, and to access the resources or services they need in a responsible way. At the same time, we need to keep them safe. As with lots of things in life and in tech, nothing is straightforward, so there is no single perfect solution for striking the right balance between, for instance, safety and privacy.

That’s why we’ve been working on a multi-layered approach when it comes to safeguarding minors in particular. That involves age verification, parental supervision and an age-appropriate experience. We have a whole suite of tools and safeguards that do just that: think of privacy settings, safety tools aimed specifically at protecting minors, messaging controls, tools for people – including minors – to report violating content, anti-bullying tools, and also ad restrictions for minors, a requirement also included in the recently adopted Digital Services Act in the EU.

And we have invested a lot in educational resources for minors and their parents. This year, for instance, we launched Family Centre on Instagram, which provides tools for parental supervision as well as resources for teens and adults, to make sure the right tools are in place and to facilitate the conversation between teens and their parents and guardians.

And of course the Online Safety Bill in the UK and KOSA in the US are being discussed as well. Here too, we think this issue is so important that we support any measures that keep young people safe. Of course we also have our own insights on this, and we continue to work with regulators, policymakers and experts alike, and especially with teens and adults, because we need to solve these issues together; we can’t do that in isolation.

Prof. Victoria Baines: And Joey, what are you seeing happening from a regulatory perspective?

Joanna Conway, Partner, Internet Regulation (Legal), Deloitte: I completely agree with what Thomas has said about this being a multi-layered issue, and it does require different stakeholders to play their part. There is a range of measures that matter here, from laws and regulation to things like education, media literacy, the tools you put in place to empower your users, and parental controls – all the great things that Thomas talked about.

From a legal and regulatory perspective, what we’re seeing is a wave of regulation and laws coming in globally which touch on this issue of making the online space safer. A lot of that is principles-based regulation – and actually the GDPR and its approach to personal data is a good analogy. We’ve seen developments in that space over the years, we’ve learnt from them, and we’re now seeing similar regulation coming in but tackling other issues relevant to safety. Privacy is obviously one of those, but we’re also seeing regulation around online content, for instance, and around interoperability and security.

So what we’re seeing is really a new layer of obligations being placed predominantly on the platform companies. There will be laws, depending on the jurisdiction, which enable a user who has been harmed to take action against the user who harmed them.

And there is also, generally, the principle that the platform hosting the content is potentially liable for it, depending on the liability shields in place, et cetera. That has always played a part in online safety, but now, with these principles-based regulations coming in, we’re seeing obligations on the platforms themselves saying: you need to be responsible, you need to risk-assess, for instance, for illegal content, and then you need to take mitigating measures to prevent the harms. And that’s new, that’s different, and it’s an additional layer that goes towards making the online space safer.

Prof. Victoria Baines: Yes, we’ve moved on, haven’t we, from a world in which “notice and take-down” was deemed sufficient to one in which there is a more proactive role in creating safer environments, be that through content regulation or by ensuring a certain level of privacy. I’m going to take a couple of national perspectives now. Collen, you’re at the sharp end of operationalising online safety at a national level; I’d love to hear your perspective on what you think the key issues are now.

Adv. Collen Weapond, Commissioner, Information Regulator (South Africa): Thank you so much, and yes, I do agree with my fellow panel members. A UNICEF report found that 95% of children in South Africa have access to the internet, and that a third of them are at risk of experiencing violence, exploitation, et cetera. That underlines the kind of exposure and risk that children in South Africa face online.

What is very important is that we’ve got a Cybercrimes Act that came into force in 2020. It does not provide the full extent of protection specifically for children, but it more or less provides a broad, all-inclusive legal landscape. It has not yet been fully tested.

I also agree that various stakeholders have to play a role; but what I think is very important is that, at some stage, we need to analyse, through a group of stakeholders, the extent to which each stakeholder’s influence is required to ensure online safety.

Just by way of example, the South African Information Regulator had a discussion with TikTok; at that stage it was just a preliminary briefing of the Information Regulator on the type of measures they were putting in place to ensure the safety of children using TikTok. During the presentation, the chairperson and the commissioners raised various critiques, which TikTok has subsequently taken back to ensure that the measures they are putting in place are enhanced and responsive. As technology progresses, there will always be a gap, but monitoring becomes critical in this instance.

Prof. Victoria Baines: I think that’s a really interesting point: the needs of children and young people, or of users in different markets, may not be met by a one-size-fits-all provision of online safety, and going out and speaking to regulators in different countries means you can actually make platforms safer as a whole by incorporating some of those different needs and getting that tailored feedback from the markets themselves. So Andrew, if we’re thinking about some of the most heinous behaviours to which children might be subjected on the internet – things like child sexual exploitation and abuse, which all countries and platforms have outlawed – it’s one thing to operationalise that on its own, isn’t it, but how do we achieve a balance between privacy and data protection on the one hand, and ensuring that we deliver safety for children and young people on the other?

Andrew Carroll, Assistant Commissioner, Data Protection Commission (Ireland): It’s important to recognise first and foremost that there is a trade-off between individual privacy and safety, because in my experience this often isn’t acknowledged enough. The more absolutist an approach we adopt to upholding individual privacy rights in all circumstances, the less safe the most vulnerable users, such as children, will be, because by implication that entails respecting the privacy rights even of individuals suspected of creating and sharing child sexual abuse material (CSAM).

But on the other hand, it is an open question how much of our privacy and anonymity we are collectively willing to give up to keep children safe. I’m sure our audience will be aware of the various legislative proposals in the UK and the EU which would, among other things, create significant new monitoring obligations on providers of end-to-end encrypted messaging services. These have raised understandable anxieties about the long-term consequences for individual privacy, about whether this is the thin end of the wedge, and about the consequences for freedom of speech, freedom of information and other rights – because if you bring in these rules for detecting CSAM, maybe one day they will be expanded to fight other online harms, such as extremist content, content that incites violence, or content that encourages self-harm or suicide. So these are understandable concerns, and they illustrate the complexity of the rights-balancing exercises required for regulation in this area.

We also have to think of the risk of so-called false positives and the potentially extremely damaging consequences for individuals who are wrongly flagged as having shared CSAM on social media or other information society services. I’m sure everyone will have seen the recent article about the father whose account with a major online platform was wrongly suspended because he had shared pictures of his infant son with his doctor to try to get a medical concern diagnosed. This is just one example of the complexity of the issues we are all grappling with.

But from my perspective it’s clear that the status quo is not an option, because of the prevalence of this sort of harmful content; it’s a large-scale problem which can’t be ignored. I believe that in 2021 the US National Center for Missing and Exploited Children received about 30 million reports of online exploitation of children, almost all of which came from reporting by the large online service providers themselves.

So it’s clear that we can’t turn a blind eye to this; we need regulation in this area, and that regulation will involve difficult decisions and trade-offs. Data protection law in Europe – the GDPR – is clear that the right to data protection should be considered in relation to its function in society and balanced against other fundamental rights and freedoms; and that presumably includes the right of children to be free from all forms of physical and mental violence, as per Article 19 of the UN Convention on the Rights of the Child.

So in terms of how we manage these trade-offs, I think we need to embrace and encourage innovative solutions, including artificial intelligence, for detecting and removing CSAM from social media platforms. We need to diversify our toolkits, but we need to make sure that all of these tools are designed in full compliance with data protection law – with the principles relating to the processing of personal data, from lawfulness, fairness and transparency and purpose limitation through to security.

We also need transparency around these tools – that they are doing their job of detecting and removing CSAM – and transparency about how they are designed, so that we know the necessary rights-balancing exercises have been carried out and that any threats to individual rights and freedoms have been identified and mitigated as far as possible.

We also need better education and public awareness around these tools. In the European context, it is good to see legislation being tabled, with the European Commission’s regulatory proposal for detecting and monitoring CSAM unveiled earlier this year. It has now received a number of submissions, including an opinion from the European Data Protection Board, and it is for the law-making institutions of the European Union to further develop and refine those proposals, taking into account all the feedback they have received. That will bring much-needed clarity for everyone operating in the digital space as to where the line falls between these two objectives.

Follow the rest of the panel discussion here: https://www.raid.tech/raid-h2-2022-recordings