Apple has clarified how its recently introduced child safety features work, following a huge public backlash. The company announced a new tool with two features designed to protect children from child sexual abuse material. The first feature, built into Messages, can identify sexually explicit images sent to or from a child's account and, for children aged 12 and younger, notify the parents. It warns the child before such content is sent or received.
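As a rough illustration of the decision flow described above, the Swift sketch below combines a classifier's verdict with the child's age and parental settings. The types and function names are hypothetical stand-ins, not Apple's actual API.

```swift
import Foundation

// Hypothetical types for illustration only; these are not Apple's APIs.
struct ChildAccount {
    let age: Int
    let parentalAlertsEnabled: Bool
}

enum MessageAction {
    case deliverNormally
    case blurAndWarnChild          // hide the image behind a warning
    case blurWarnAndNotifyParent   // additionally alert the parent
}

// Decide what to do with an incoming image that an on-device
// classifier has flagged as sexually explicit.
func handleFlaggedImage(for child: ChildAccount, isExplicit: Bool) -> MessageAction {
    guard isExplicit else { return .deliverNormally }
    // Parental notification applies only to children aged 12 or younger
    // whose parents have enabled alerts.
    if child.age <= 12 && child.parentalAlertsEnabled {
        return .blurWarnAndNotifyParent
    }
    return .blurAndWarnChild
}

// Example: a 10-year-old with alerts enabled receives a flagged image.
let action = handleFlaggedImage(for: ChildAccount(age: 10, parentalAlertsEnabled: true),
                                isExplicit: true)
print(action) // blurWarnAndNotifyParent
```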
The second feature, CSAM (Child Sexual Abuse Material) detection, scans images as they are uploaded to iCloud. If Apple detects such material in a user's uploads, it notifies the authorities.
Apple clarified how this works:
Apple has safeguards in place so that scanning stops if no sexual abuse images are detected. The list of banned images is provided by the National Center for Missing and Exploited Children and by other organizations that deal with child abuse.
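To illustrate the idea of matching uploads against a list of known banned images, here is a minimal Swift sketch that uses an ordinary SHA-256 digest as a stand-in. Apple's actual system relies on a perceptual hash (NeuralHash) and cryptographic matching techniques that are not reproduced here; the database loader, hash set, and file path below are placeholders.

```swift
import Foundation
import CryptoKit

// Placeholder for the banned-hash database. In Apple's system the list is
// supplied by NCMEC and other child-safety organizations and shipped inside
// the operating system; here it is simply an empty set for illustration.
func loadBannedHashDatabase() -> Set<String> {
    return []
}

let knownCSAMHashes = loadBannedHashDatabase()

// Hash an image's raw bytes. A plain SHA-256 digest stands in for the
// perceptual hash that the real system uses.
func hashString(of imageData: Data) -> String {
    SHA256.hash(data: imageData).map { String(format: "%02x", $0) }.joined()
}

// Returns true if the image about to be uploaded to iCloud matches an
// entry in the banned-hash database; only matches would ever be escalated.
func matchesKnownCSAM(_ imageData: Data) -> Bool {
    knownCSAMHashes.contains(hashString(of: imageData))
}

// Example: check an image loaded from disk before upload.
if let data = FileManager.default.contents(atPath: "photo-to-upload.jpg") {
    print(matchesKnownCSAM(data) ? "match: flag for review" : "no match: upload normally")
}
```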
These features do not compromise users' privacy and safety, as what they scan is clearly defined. Governments have pressed Apple to add non-CSAM images and to deploy government-mandated changes to the list, but Apple has refused those demands, saying user privacy is its priority. The features scan only images; Apple does not scan the text of messages for sexually explicit material.
The Messages feature does not share information with Apple or law enforcement; the company's tools focus specifically on sexually explicit images. Similar technologies have been built for related problems elsewhere, for example repurposed to create databases of terrorist content that companies can access in order to block it. Apple says its approach to security differs from other companies', so users can rely on these features to help stop CSAM.