It’s now more than a year since Apple announced plans for three new child safety features, including a system for detecting known child sexual abuse material (CSAM) images stored in iCloud Photos, a Communication Safety option for the Messages app that can blur sexually explicit photos, and child exploitation resources for Siri. The latter two features are available now, but Apple remains silent about its plans for the CSAM detection feature.
Apple initially said that CSAM detection would be implemented in an update to iOS 15 and iPadOS 15 by the end of 2021, but the company ultimately postponed the feature based on “feedback from customers, advocacy groups, researchers, and others.”
In September 2021, Apple posted the following update on its child safety page:
Previously we announced plans for features intended to help protect children from predators who use communication tools to recruit and exploit them, and to help limit the spread of Child Sexual Abuse Material. Based on feedback from customers, advocacy groups, researchers, and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.
In December 2021, Apple removed the above update and all references to its CSAM detection plans from its child safety page, but an Apple spokesperson told The Verge that Apple’s plans for the feature had not changed. To the best of our knowledge, however, Apple has not publicly commented on the plans since that time.
We’ve reached out to Apple to ask if this feature is still planned. Apple did not immediately respond to a request for comment.
Apple did move forward with its child safety features for the Messages app and Siri with the release of iOS 15.2 and other software updates in December 2021, and it expanded the Messages app feature to Australia, Canada, New Zealand, and the UK with iOS 15.5 and other software updates released in May 2022.
Apple said its CSAM detection system was “designed with user privacy in mind.” The system would perform “on-device matching using a database of known CSAM image hashes” from child safety organizations, which Apple would “transform into an unreadable set of hashes that is securely stored on users’ devices.”
Apple planned to report iCloud accounts containing known CSAM images to the National Center for Missing and Exploited Children (NCMEC), a non-profit organization that works in collaboration with US law enforcement agencies. Apple said there would be a “threshold” ensuring “less than a one in one trillion chance per year” of an account being incorrectly flagged by the system, plus manual review of flagged accounts by a human.
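The threshold-based design described above can be illustrated with a minimal sketch: an account is only surfaced for human review once the number of on-device hash matches crosses a threshold. The hash values, function names, and threshold below are hypothetical placeholders, not Apple’s actual NeuralHash system or its real parameters.

```python
# Illustrative sketch only: hashes, names, and the threshold are made up
# for demonstration and do not reflect Apple's actual implementation.

KNOWN_HASHES = {"a1b2", "c3d4", "e5f6"}  # stand-in for the hashed CSAM database
MATCH_THRESHOLD = 30  # hypothetical number of matches before flagging


def count_matches(photo_hashes):
    """Count how many of a user's photo hashes appear in the known-hash set."""
    return sum(1 for h in photo_hashes if h in KNOWN_HASHES)


def should_flag_for_review(photo_hashes, threshold=MATCH_THRESHOLD):
    """Surface the account for human review only once matches reach the threshold."""
    return count_matches(photo_hashes) >= threshold
```

The point of the threshold is that isolated matches (including false positives from hash collisions) never trigger a report on their own; only an accumulation of matches, followed by human review, would lead to one.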
Apple’s plans were criticized by a wide range of individuals and organizations, including security researchers, the Electronic Frontier Foundation (EFF), politicians, policy groups, university researchers, and even some Apple employees.
Some critics argued that Apple’s child safety features could create a “backdoor” into devices that governments or law enforcement agencies could use to surveil users. Another concern was false positives, including the possibility of someone intentionally adding CSAM imagery to another person’s iCloud account to get their account flagged.
Note: Due to the political or social nature of the discussion on this topic, the discussion thread is located in our political news forum. All forum members and site visitors are welcome to read and follow the thread, but posting is limited to forum members with at least 100 posts.