
Apple, child pornography controls and privacy risks

by admin

Apple announced two new child protection software features that will be activated later this year on iOS 15, iPadOS 15, watchOS 8 and macOS Monterey.
As part of a plan to extend protections for minors, Apple’s operating systems will be able to detect the sending of sexually explicit material in Messages and to check for child pornography content in a user’s photo library. All of this, according to Apple, will happen with respect for privacy, thanks to advanced verification systems based on artificial intelligence. The decision, however, has drawn vehement criticism from activists and researchers, who consider the new features a dangerous tool that authoritarian governments will be able to exploit for far less noble purposes.

Messages and pornographic content
One of the two new features announced by Apple concerns the Messages app. Starting with the next system updates, parents will be able to activate a function on their underage children’s iPhones that detects the exchange of pornographic material via the messaging app.

“When this type of content is received, the photo will appear blurred and the child will be warned, offered helpful resources on the subject, and reassured that it is okay not to view the photo,” Apple explains. “As an additional precaution, the child can also be told that, for their own safety, their parents will receive a message if they do view the photo.”

The feature will be part of the parental controls on iPhone, iPad and Mac and can be turned on or off by parents. Apple says the content check is performed by a machine learning algorithm running directly on the device, without any data or content being sent to Apple’s servers.
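
To make the described flow more concrete, here is a minimal sketch of how an on-device check of this kind might be structured. It is purely illustrative: Apple has not published the Messages classifier or its APIs, so the function names, data structure and threshold below are invented stand-ins.

```python
# Illustrative sketch only: a hypothetical on-device check for received photos.
# classify_image() stands in for Apple's unpublished ML model; the 0.9 threshold
# and all names are invented. Nothing in this flow leaves the device.

from dataclasses import dataclass


@dataclass
class ParentalControls:
    enabled: bool = False                 # parents switch the feature on or off
    notify_parents_on_view: bool = False  # optional extra step described by Apple


def classify_image(image_bytes: bytes) -> float:
    """Stand-in for an on-device model returning an 'explicitness' score in [0, 1]."""
    return 0.0  # placeholder: a real model would run locally here


def handle_received_photo(image_bytes: bytes, controls: ParentalControls) -> dict:
    """Decide, entirely on the device, how to present a received photo."""
    if not controls.enabled or classify_image(image_bytes) < 0.9:
        return {"blurred": False, "child_warned": False}

    # Flagged: blur the preview, warn the child and offer resources.
    # Parents would only be notified if the child then chooses to view the photo.
    return {"blurred": True, "child_warned": True}
```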

New levels of control
The intention is clear: to give parents new control tools to prevent young children from being exposed to content unsuitable for their age, or, worse, to prevent the iPhone’s secure messaging system from being used by sexual predators to reach them.

As the Electronic Frontier Foundation points out, however, Apple’s choice irreparably undermines the marketing narrative that describes Messages as one of the safest and most private messaging platforms on the market.

“Since detecting a sexually explicit image will use machine learning on the device to analyze message content, Apple will no longer be able to honestly say that Messages uses end-to-end encryption (the kind in which only the sender and the recipient can know the content of a message, ed.),” the EFF writes. “Apple can argue that scanning before or after a message is encrypted or decrypted keeps the promise of end-to-end encryption intact, but that would be a semantic ploy to cover a huge shift in the company’s stance on strong encryption.”

Photo controls against child pornography
Of the two software protections Apple announced, however, it is the other one that has caused the most discussion. Also starting with the next software updates, the Cupertino company will be able to scan photos uploaded to iCloud Photos to identify child pornography content. Apple claims to have developed a system capable of detecting problematic content without compromising user privacy.
The system will not scan photos in the cloud; instead, it will check all photos on the device by comparing them against databases of known child pornography content provided by the National Center for Missing and Exploited Children (NCMEC) and other American child protection bodies. The matching is not done on the images themselves but on their “hashes”, alphanumeric codes derived from the content of each file, which Apple further processes starting from the authorities’ databases.
“Before the image is uploaded to iCloud Photos, a matching process is performed on the device to verify that the image matches the known hashes,” Apple explains.
A cryptographic technique called “private set intersection” allows this to happen without any match being revealed. When a match is found, the device generates a code and uploads it to iCloud along with the photo. Apple then uses a further technique to examine only those accounts that exceed a certain threshold of these alert codes. In that case, and only then, the flagged content is checked manually by a human reviewer; if the match is confirmed, the user’s account is blocked and a report is sent to NCMEC.
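
To make the matching-and-threshold logic described above more tangible, here is a rough, purely illustrative sketch. It is not Apple’s implementation: it uses an ordinary SHA-256 digest in place of Apple’s perceptual NeuralHash (so only byte-identical files would match), it omits the private set intersection and the threshold cryptography entirely, and the database entry, threshold value and function names are all invented.

```python
# Purely illustrative: hash matching against a known-content database plus a
# human-review threshold. The real system relies on a perceptual hash and on
# cryptographic protocols (private set intersection, threshold schemes) that
# are omitted here.

import hashlib

# Stand-in for the hashed database derived from NCMEC and similar bodies.
KNOWN_HASHES = {
    hashlib.sha256(b"placeholder-known-image").hexdigest(),  # made-up entry
}

REVIEW_THRESHOLD = 30  # arbitrary value chosen for the sketch


def image_hash(image_bytes: bytes) -> str:
    """Stand-in for a perceptual hash; SHA-256 only matches identical bytes."""
    return hashlib.sha256(image_bytes).hexdigest()


def account_needs_review(library: list[bytes]) -> bool:
    """True only if the number of matches crosses the human-review threshold."""
    matches = sum(1 for img in library if image_hash(img) in KNOWN_HASHES)
    return matches >= REVIEW_THRESHOLD
```

A perceptual hash matters here because it lets resized or slightly edited copies of a known image still match, something a cryptographic hash like SHA-256 cannot do.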

A ready-made weapon for authoritarian governments
Apple assures that at this point a user can still appeal if they believe their account has been blocked by mistake. Moreover, Cupertino says, thanks to these technologies the risk of mistakenly “flagging” an account is less than one in a billion billion. These clarifications are not enough to reassure critics or dispel doubts about a technology that could be misused. For now the new features will only be available in the United States, but nothing prevents authoritarian countries such as China or Saudi Arabia, where Apple operates in compliance with local laws, from imposing the same systems to find out who shares photos of the Tiananmen Square protests or LGBTQ content, respectively. If Apple can recognize child pornography images from a list of hashes, there is no reason the system should not work with any other photo database.

The very existence and deployment of this technology is therefore all the confirmation an authoritarian government needs to force Apple to verify user content on iPhones and iPads. At that point the Cupertino company would have only two options: refuse to let the technology be used for surveillance, at the risk of losing access to an entire market, or comply with the demands of the authoritarian government of the day and become a monitor of any content deemed subversive.

The problem of false positives also remains unsolved. Apple’s reassurance that the technology has a one-in-a-billion-billion chance of failure is an empty claim if it is not backed by complete transparency about the technology used. The fact that the machine learning algorithm used to check image matches is neither public nor independently verifiable is a problem. There are too many known cases of false positives caused by the misuse of AI-based monitoring and verification technologies for Apple to get away with a simple “trust us” here.

The reason for these choices
The reasons behind Apple’s choice to introduce these new features are unclear. Above all, it is not clear why the technology company that more than any other has made privacy one of its values and marketing pillars has decided to implement such controversial solutions. Defending minors from sexual abuse is a commendable and necessary undertaking, but it is also, all too often, a convenient pretext for authoritarian governments introducing liberty-restricting regulations and laws. It is therefore not a good sign that Apple ties a revision of its privacy positions to this kind of issue.

The impression is that Apple has partially given in to growing pressure from those who would like to find a way to weaken encrypted communication systems. It is a temptation common not only to authoritarian governments but also to many Western democracies, including European countries. These new child protection features have all the air of a compromise: instead of a head-on confrontation with the anti-privacy and anti-encryption demands of governments and institutions, Apple has yielded on its own terms, building control systems that do not completely compromise the security and privacy of its platform. The alternative risked stricter regulation that could have compromised the very nature of Apple’s secure ecosystem.

It is the only logical explanation for a decision that, however you look at it, undermines Apple’s public image as a champion of data privacy. In Cupertino, we are sure, they are perfectly aware of the reputational risk attached to this decision. How is it possible, after all, that the same Apple that strenuously opposed building a backdoor that would make iPhones less secure, to the point of an institutional clash with the FBI, now willingly agrees to implement an anti-child-pornography system that in practice amounts to a backdoor in disguise? With this position, Apple is admitting that there is a limit to individual privacy, and that there are crimes for which one can contemplate weakening the cryptographic protections of an otherwise very effective system. The fact that a cryptographic system can defend civil rights activists and journalists just as it can hide the crimes of drug dealers and pedophiles has always been the profound contradiction of a radical stance in defense of privacy and encryption in private communications.

The very difficult question is this: to catch a pedophile, is it right to jeopardize the privacy of millions of other people’s data, making pervasive control by governments and authorities possible? So far Apple’s answer had been no, so much so that it refused to unlock the iPhone of a known terrorist, but perhaps Cupertino has changed its mind. It is an understandable stance, in keeping with societal changes and pressure from governments, but one that will inevitably make all of Apple’s other pro-privacy claims, statements and software features less credible. It will also make Tim Cook and the other Apple executives a little less credible the next time they try to tell us that, in their view, the privacy of data and communications is an inalienable and fundamental human right.
