More on Apple’s “child protection” features

Regarding Apple’s upcoming feature to scan children’s Messages content for sexually explicit media, there’s a detail I had missed about how it works: the “phone will rat on you to your parents” behavior only kicks in when the child is under 12. For older children, the Messages app will just warn them in advance that the content is explicit; they can dismiss the warning, and nothing further happens.

That does indeed make me a bit less concerned about this feature, and that age cutoff feels pretty reasonable.

However, the bigger problem remains: Apple’s goal is to prevent child sexual assault, but the tool they’re deploying to do it is an image classifier that tries to detect sexually explicit material.

Even if you pretended for a second that this AI classifier is 100% accurate (and I promise you, it isn’t), it’s still the wrong tool for the job, because the classifier has no way of understanding the context behind an image.

I think Apple’s reasoning here is that with some on-device AI, it’s like your device has your back. In a world full of unsolicited dick pics, I can imagine this being a truly useful feature for minors and adults alike.

But for that to be useful, it needs to materially put the user in control of their device. It would be one thing if I were 17 and my phone warned me that an incoming message from my boyfriend might be sexually explicit, and I could tell the phone not to warn me about further messages from that contact because I want those images, while the warnings stayed on for other friends I’m not expecting nudes from. That’s entirely reasonable!

But when the feature is just turned on for me by my parents, and I get a message with something sexually explicit, whether I wanted it or not, it’s a little unsettling to realize that my phone’s been looking at these messages and analyzing them the whole time, even if I do trust that the phone’s not going to tell my parents. Even if I’m 100% confident it’s just internal AI on my phone doing the scan and it’s not telling anyone, it still leaves me with the feeling that I’m not truly having a private conversation with someone on Messages.

I think this feature could meaningfully be made configurable in a way that truly does feel like it’s got your back. For younger children, the defaults Apple has picked are reasonable.
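To make that concrete, here’s a rough sketch of the kind of per-contact control I mean: warnings on by default, with the user able to opt out for specific contacts. None of this is a real Apple API; the types and names are entirely made up to illustrate the idea.

```swift
import Foundation

// Hypothetical sketch, not an Apple API: the user's preference for a given contact.
enum WarningPreference: Equatable {
    case warn          // show the explicit-content warning (the default)
    case allowSilently // the user opted out of warnings for this contact
}

struct SensitiveContentSettings {
    var defaultPreference: WarningPreference = .warn
    var perContactOverrides: [String: WarningPreference] = [:] // keyed by contact ID

    // Fall back to the default unless the user has set an override for this contact.
    func shouldWarn(forContact contactID: String) -> Bool {
        let pref = perContactOverrides[contactID] ?? defaultPreference
        return pref == .warn
    }

    mutating func stopWarning(forContact contactID: String) {
        perContactOverrides[contactID] = .allowSilently
    }
}

// Example: warnings stay on globally, but are turned off for one trusted contact.
var settings = SensitiveContentSettings()
settings.stopWarning(forContact: "my-boyfriend")
print(settings.shouldWarn(forContact: "my-boyfriend"))   // false
print(settings.shouldWarn(forContact: "someone-else"))   // true
```

The point of the sketch is just that the default protects you, and every exception is one you explicitly chose; that’s what “the phone has your back” would actually feel like.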

But my larger concerns still remain. Well-intentioned or not, Apple has built out surveillance infrastructure, and they’re going to face pressure from less scrupulous governments to use it more abusively. Machine learning still isn’t amazing at what it does, and this will inevitably lead to some awkward situations when there is a false positive. And importantly, a nudity classifier doesn’t know the difference between consensual sharing of images between friends and someone trying to groom your child to be trafficked.

I said it before, and I’ll say it again: we expect our private conversations to be private when we’re in our homes talking to someone three feet away, and there’s no reason we should expect any less just because we’re communicating through a messaging app (especially after a pandemic forced so much of our communication online). Surveillance is surveillance, even when it’s just your own phone doing it locally and not telling anyone.
