Are Apple’s New Updates A Poisoned Fruit For Kids?
By Glen Pounder, COO, Child Rescue Coalition
Some of the upcoming changes announced by Apple are a welcome step forward from a company that has, arguably, previously shied away from proactively protecting children using its platforms and devices.
In 2020, Apple made a total of 265 reports to the National Center for Missing and Exploited Children (NCMEC). This is far lower than other online platforms, which report thousands and even millions of cases each year. Facebook, for example, made over 20 million reports to NCMEC in 2020.
With this new update, Apple has rightly been recognized, including by the Child Rescue Coalition, for its move towards the identification of known Child Sexual Abuse Material (CSAM) in its iCloud services. This move is overdue and, I believe, should be compulsory by law for ALL online companies, not just those that volunteer. We hope other companies that present the same concerns to parents, such as Amazon, will soon follow suit.
Another of Apple’s changes will use innovative technology to identify potentially harmful images received on a young child’s device. All very commendable, and useful for parents of young children. Providing a phone to our child can give us peace of mind if the device is used sensibly, for example so our children can contact us in an emergency.
However, this apparent move towards more child protection must not be counterbalanced by a conscious decision, in the same iOS update, to sacrifice safety on the altar of privacy, nor by a choice to ignore child sexual abuse material based on an arbitrary threshold which, to me, makes no sense in real-world scenarios.
APPLE UPDATE 1
First, the good news. What is it and why do I like it?
For a child between the ages of 0 and 12, parents will receive an optional, free, built-in upgrade. They will be notified if their child receives, and then goes on to open, a potentially harmful image. The child is first given a piece of advice and asked if they want to proceed, which helps with the education of our children in the online world. If they choose to go on and open the image, they are told that their parents will be notified. The technology is complex, but it runs entirely on the device itself; it protects privacy by not notifying Apple, so the matter remains between the parent and the child.
For a child between the ages of 13 and 17, the first part of the technology provides the child with the same warning, but does NOT notify the parent.
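For readers who like to see the logic laid out, here is a rough sketch, in code form, of the flow as I read Apple’s public description. It is my own illustration, not Apple’s implementation or API, and the age bands and behaviour simply mirror the prose above.

```python
# Conceptual sketch only -- my reading of Apple's published description of the
# Messages safety feature, not Apple's actual code. Age bands are assumptions
# taken from the article text above.

def handle_flagged_image(child_age: int, child_chooses_to_view: bool) -> dict:
    """Model the warning/notification flow for an image flagged on-device."""
    outcome = {"child_warned": True, "parent_notified": False, "apple_notified": False}

    if not child_chooses_to_view:
        return outcome  # the child heeds the warning; nothing further happens

    if child_age <= 12:
        # Younger children: choosing to view the image notifies the parent.
        outcome["parent_notified"] = True
    # Ages 13-17: the warning is shown, but parents are not notified.
    # In every case the check runs on the device, so Apple itself learns nothing.
    return outcome


if __name__ == "__main__":
    print(handle_flagged_image(child_age=9, child_chooses_to_view=True))
    # {'child_warned': True, 'parent_notified': True, 'apple_notified': False}
    print(handle_flagged_image(child_age=15, child_chooses_to_view=True))
    # {'child_warned': True, 'parent_notified': False, 'apple_notified': False}
```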
There are competing arguments here around the child’s right to privacy and a parent’s right to protect their children. I can see reasonable points to both sides of this debate. What is right for one 13-year-old may be inappropriate for another less mature or knowledgeable 13-year-old.
When we dig a little further into what some of the other Apple updates mean in real-world scenarios, the alarm bells begin to sound.
APPLE UPDATE 2
Apple will review images being uploaded to iCloud to check whether they are known Child Sexual Abuse Material (CSAM).
So, what’s the problem?
At first glance, this is a very positive step in the right direction. The bad news is that Apple will only review an account once there are 30 matched files classified as CSAM.
By setting this unusually high threshold, Apple has deliberately put itself in a position where it never has to review such cases, even when a child abuser holds 29 files of horrific, already confirmed CSAM: recordings of the sexual abuse, and often the violent rape, of children. Those offenders, suspected criminals, will never be reported to NCMEC, and those reports will never be passed on to law enforcement to investigate whether a criminal offense has taken place. We may never know whether that suspect is abusing children today, or will go on abusing them for many years to come.
Let’s compare this “threshold” to other areas where a crime may have occurred.
How many times are human fingerprint matches “false positives”?
One study found a false positive rate of around 0.1% (about 1 in 1,000).
For a real-world example of what can happen, consider Brandon Mayfield, a completely innocent American who was arrested after an error, a false positive, in a fingerprint analysis. He stood accused of being involved in a terrorist attack in Madrid. Fingerprint false positives are most often associated with human error somewhere in the process. Thankfully, after two weeks in custody, the error was corrected and Mr. Mayfield was set free.
How about mistakes in Human DNA matching?
This is more complicated, and mistakes are most often caused, unsurprisingly, by human error – as opposed to a true “false positive”.
When there is a “near match” with human DNA, this may not be a false positive in the way it might first appear, because a partial match can point to a close relative of the person whose DNA was actually recovered rather than to the person themselves.
What do I mean by that? Here’s the best kind of example, and the reason ancestry checks are an interesting choice of gift for loved ones.
My wife buys me an ancestry kit so I can research my family background. It’s easy enough: I do the mouth swab, send off my sample, and eagerly wait to see what mix of ancestries makes up my (very personal) DNA. But what if my brother is a long-wanted suspect whose DNA has been recovered at the scene of several murders? So far, he’s never been found as a match because his DNA has never been submitted to a database.
Now, what if the suspect’s DNA is run through the same database as an “ancestry” service and they find a near match for a sibling, me? BINGO – my serial killer brother is caught!
This is the scenario which played out for the Golden State Killer, who would arguably never have been caught without DNA comparison evidence.
Most people are aware of fingerprint and human DNA evidence “catching the bad guy” because we watch CSI or we read about it in the news when a criminal is convicted or sometimes thankfully set free because of DNA evidence. Neither human fingerprints nor DNA on their own should convict anyone of a crime; too many mistakes are possible – human error!
Equally, and despite the hyperbole of many media reports on this issue, NOBODY is suggesting that Apple needs to decide the guilt or innocence of a suspect – that is the job of a criminal justice system built up over many hundreds of years.
The “digital fingerprint” or “hash” of an image is many times more accurate than either fingerprints or human DNA. For PhotoDNA, the technology industry standard, the chance of a false positive is around one in 10 billion, which written out with all the zeroes looks like this:
1:10,000,000,000
You’d have many times more chance of winning a major lottery than of the match being incorrect. Unsurprisingly, unlike fingerprints and human DNA, I’ve not heard of even one case anywhere in the world where anyone has been arrested because of a false positive involving this technology.
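To make the idea of matching a “digital fingerprint” concrete, here is a minimal sketch of the logic. It is my own illustration, not PhotoDNA or Apple’s code: real systems use perceptual hashes designed to survive resizing and re-compression, whereas the cryptographic hash below is only a stand-in, and the hash list is invented.

```python
# Minimal sketch of "digital fingerprint" matching. Real systems (PhotoDNA,
# Apple's NeuralHash) use perceptual hashes; SHA-256 is a stand-in used here
# only to show the matching logic.
import hashlib

# Hypothetical list of fingerprints of already-confirmed CSAM, as would be
# supplied by a clearinghouse such as NCMEC (the value below is made up).
KNOWN_CSAM_HASHES = {
    "9f2c-example-placeholder",
}

def fingerprint(image_bytes: bytes) -> str:
    """Compute a fingerprint for an image (stand-in for a perceptual hash)."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_csam(image_bytes: bytes) -> bool:
    """True only if the image's fingerprint matches a confirmed-CSAM fingerprint."""
    return fingerprint(image_bytes) in KNOWN_CSAM_HASHES
```

The key point is that nothing here “looks at” a photo’s content: the system can only flag files whose fingerprints already appear on a list of previously confirmed material.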
I believe no such false arrest has ever happened because there are other steps that take place before police can even begin to think about an arrest. This is something I’ve not seen mentioned by those pleading with Apple not to do this “scanning”.
From the many millions of reports that other technology companies have voluntarily made to NCMEC, law enforcement’s subsequent investigations have resulted in many thousands of arrests and, crucially, many thousands of children being saved directly because of this important work.
With such a tiny chance of a false positive, it seems Apple could easily check whether the machine got it wrong. Remember, any “false positive” in the Apple scenario would only result in an Apple employee looking at something that isn’t CSAM. They wouldn’t make a report to NCMEC, and nobody would know anything about the mistake. Indeed, the hash creating the “false positive” in question could be scrubbed from the system to avoid future errors.
But Apple has decided that it doesn’t want to be notified until there is a 1 in a trillion chance of a false positive. With all twelve zeroes, that probability looks like this:
1:1,000,000,000,000
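For readers who want to sanity-check that arithmetic, here is a rough back-of-the-envelope sketch. It assumes matches on different files are independent and borrows the published PhotoDNA figure as the per-match false-positive rate; Apple’s own matching technology is different and its threshold analysis has not been published in this detail, so treat the numbers as purely illustrative.

```python
# Back-of-the-envelope arithmetic only. Assumptions: (a) false matches on
# different files are independent, and (b) the published PhotoDNA figure of
# roughly 1 in 10 billion is a fair per-match false-positive rate. Apple's own
# system and threshold analysis are not public in this detail.

PER_MATCH_FALSE_POSITIVE = 1 / 10_000_000_000   # ~1 in 10 billion per matched file
APPLE_TARGET = 1 / 1_000_000_000_000            # the 1-in-a-trillion figure above

for threshold in (1, 2, 3, 30):
    chance_all_coincidence = PER_MATCH_FALSE_POSITIVE ** threshold
    verdict = "below" if chance_all_coincidence < APPLE_TARGET else "above"
    print(f"{threshold:>2} matched file(s): chance of pure coincidence "
          f"~ {chance_all_coincidence:.0e} ({verdict} the 1-in-a-trillion target)")
```

Under these assumptions, even two or three independent matches are already far rarer than one in a trillion, which is what makes a 30-file threshold look so conservative in real-world terms.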
Let me ask the question another way, in a real-life scenario:
If your 5-year-old child’s teacher, pediatrician, or coach has 2, 9, or 29 child sexual abuse files in their iCloud, would you want Apple to send a report to NCMEC for onward investigation by trained law enforcement officers, or would you rather Apple wait for that user’s CSAM collection to reach 30 files?
We know of many cases, from law enforcement use of Child Rescue Coalition technology, where offenders had only one or two CSAM files – that we knew of. Much like Apple, we have no way of detecting CSAM that has not yet been classified, only files which have already been seen and confirmed as child sexual abuse material – the digital recording of a terrible crime.
I always explain to people: “What we know is the minimum amount of suspected crime; it still requires great police work and a criminal justice system to complete a case.” Subsequent law enforcement investigations have often revealed the suspects to be actual hands-on abusers of children. One particularly disturbing case involved an 18-month-old baby being abused by her father. That one single file led to a very important arrest and the safeguarding of a defenseless infant.
US law requires technology companies to report to NCMEC as soon as reasonably possible after obtaining actual knowledge of a suspected offense. In my opinion, “actual knowledge” in this context is reached at a far lower threshold than 30 matched files, or odds of 1 in a trillion.
APPLE UPDATE 3
From Apple’s website:
“When browsing with Safari, Private Relay ensures all traffic leaving a user’s device is encrypted, so no one between the user and the website they are visiting can access and read it, not even Apple or the user’s network provider. All the user’s requests are then sent through two separate internet relays. This (sic) first assigns the user an anonymous IP address that maps to their region but not their actual location. The second decrypts the web address they want to visit and forwards them to their destination. This separation of information protects the user’s privacy because no single entity can identify both who a user is and which sites they visit.”
Read the Apple updates here.
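To picture the “separation of information” Apple describes, here is a small conceptual sketch of who can see what in such a two-hop relay. It is my own illustration of the quoted design, not Apple’s implementation, and the party names and fields are mine.

```python
# Conceptual model of the two-hop relay described in Apple's quote above.
# Not Apple's implementation; the labels and fields are illustrative only.

PRIVATE_RELAY_VIEW = {
    "user's network provider": {"sees_user_ip": True,  "sees_destination_site": False},
    "first relay (Apple)":     {"sees_user_ip": True,  "sees_destination_site": False},
    "second relay (partner)":  {"sees_user_ip": False, "sees_destination_site": True},
    "destination website":     {"sees_user_ip": False, "sees_destination_site": True},
}

def can_link_user_to_site(party: str) -> bool:
    """A party can tie a person to a website only if it sees both pieces."""
    view = PRIVATE_RELAY_VIEW[party]
    return view["sees_user_ip"] and view["sees_destination_site"]

if __name__ == "__main__":
    for party in PRIVATE_RELAY_VIEW:
        print(f"{party}: can link user to site -> {can_link_user_to_site(party)}")
    # Every answer is False, which is exactly the property the next section
    # questions from an investigative point of view.
```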
So, what’s the problem?
Apple is designing in a mechanism that automatically masks a user’s IP address and makes it IMPOSSIBLE for Apple, the internet service provider, or anybody else to trace the user, even when a judge agrees that they must be identified. This change means harmful predators will be automatically hidden and completely undetectable.
It’s not safety by design, it’s cloaking by design.
WHAT DOES THIS MEAN FOR YOU?
Remember, law enforcement is NOT watching you. They have neither the time, the resources, nor the interest in knowing which websites you visit or why. Companies want that data so they can target advertising at you.
Investigators must justify, via extensive legal processes and with built-in oversight, any intrusion into your privacy. They must have reasonable grounds to suspect crime is taking place, and the intrusion must be proportionate to that suspected crime. If you’re involved in child sexual abuse, you give up the right to privacy. But perhaps, very soon, not any longer.
Justice is often described as being “blind”. We all want and demand impartial and fair justice for all. Unfortunately, our law enforcement agencies may soon be blind too, and unable to resolve a suspect’s IP address to their true location or to their identity.
Was law enforcement or those charged with investigating child sexual abuse consulted about these fundamental changes? Sadly, no.
Only elected governments should be able to decide policy on the safety of their citizens. Currently, we are leaving these decisions to boardrooms and to those whose focus is, understandably, more on share price and profitability. That’s wrong in my opinion, and soon it could be too late.
Please Apple, think this through! Children deserve respect and privacy too. They certainly don’t deserve to have images of their abuse distributed, or to be re-victimized, with no thought given to the safety of other children.