Apple plans iPhone scanning for child porn – security researchers alarmed

With iPadOS 15 and iOS 15, Apple plans to introduce a new function that scans the photo libraries of iPad and iPhone users for child pornographic material ("Child Sexual Abuse Material", CSAM for short) and then reports the content to the responsible authorities. Security researchers fear that the function goes too far and could in future also be applied to encrypted content on the devices; another newly announced feature appears to confirm the latter assumption.

Apple detailed the planned on-device scanning in a white paper released on Thursday night. The company emphasizes that CSAM detection is designed so that "user privacy remains in view". But this is exactly what draws criticism: instead of scanning content in the cloud, i.e. on its own servers, as Google, Twitter, Facebook or Microsoft do, Apple wants to perform the check directly on the user's device.

The files are matched against a database of known hash values of CSAM content collected by the US non-profit National Center for Missing and Exploited Children (NCMEC). However, Apple uses its own method called "NeuralHash", which, according to the white paper, is able to find not only exact matches but also "nearly identical" images that differ from the original in size or compression quality.
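
Apple has not published NeuralHash itself, so as a rough illustration of how perceptual hashing tolerates resizing and re-compression, the following Python sketch uses a simple average hash and a Hamming-distance comparison. The function names and the match threshold are illustrative assumptions, not Apple's algorithm.

```python
# Illustrative perceptual hashing, NOT Apple's NeuralHash.
# Requires Pillow: pip install Pillow
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    """Downscale to hash_size x hash_size grayscale and set one bit per
    pixel depending on whether it lies above the mean brightness."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of bits in which two hashes differ."""
    return bin(a ^ b).count("1")

def matches_known_hash(candidate: int, known_hashes: set, max_distance: int = 5) -> bool:
    """A small Hamming distance marks images as 'nearly identical',
    e.g. the same picture after resizing or re-compression."""
    return any(hamming_distance(candidate, h) <= max_distance for h in known_hashes)
```

A real deployment compares against millions of database hashes and would use an indexed nearest-neighbour lookup rather than a linear scan.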

The NeuralHash check takes place before the upload to iCloud. This raises the question of where the picture comes from; usually, only new pictures that have just been taken with the iPhone are uploaded. For underage users, Apple even wants to scan their iMessage content (see below).


If there is a hit in the local matching, the image is provided with a cryptographic marker (a "safety voucher") and uploaded along with some metadata. Apple does not initially want to decrypt these vouchers itself: only when a certain "threshold of known CSAM material" is exceeded is the company notified and allowed to decrypt the safety vouchers. Apple does not say exactly how the threshold is set.
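
Apple's technical documentation describes this mechanism as a form of threshold secret sharing: the decryption material is split so that the server learns nothing until enough matching vouchers have arrived. The Python sketch below uses textbook Shamir secret sharing to illustrate that threshold property; it is a simplified stand-in with made-up parameters, not Apple's actual safety-voucher construction, which additionally relies on private set intersection.

```python
# Simplified Shamir secret sharing to illustrate the threshold idea:
# nothing can be reconstructed from fewer than `threshold` shares.
# A teaching sketch, not Apple's construction.
import random

PRIME = 2**127 - 1  # a prime large enough for a demo secret

def make_shares(secret: int, threshold: int, count: int):
    """Split `secret` into `count` shares; any `threshold` of them
    suffice to reconstruct it, fewer reveal nothing."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, count + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# Only once enough "matching" vouchers have arrived can the key be
# reconstructed and the flagged content reviewed.
shares = make_shares(secret=42, threshold=10, count=30)
assert reconstruct(shares[:10]) == 42  # threshold reached
```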

The threshold is intended to keep the probability of incorrectly classifying a customer as a possessor of child abuse imagery "extremely low". In addition, Apple manually reviews every hit the system produces to confirm that it is a genuine match. Only then is the account blocked and a report sent to NCMEC, which in turn informs the authorities. "If a user feels that their account has been unjustifiably flagged, they can file a complaint to have it released again."
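
Apple publishes neither the per-image false-match rate nor the threshold itself, so the following back-of-the-envelope calculation uses purely hypothetical numbers. It only illustrates why requiring several independent matches drives the probability of wrongly flagging an account down so sharply.

```python
# Hypothetical numbers; Apple discloses neither the per-image
# false-match rate nor the threshold value.
from math import exp, lgamma, log

def log_binom_pmf(k: int, n: int, p: float) -> float:
    """log( C(n,k) * p^k * (1-p)^(n-k) ), computed in log space to
    avoid overflow for large n."""
    log_c = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return log_c + k * log(p) + (n - k) * log(1 - p)

def prob_false_flag(p: float, n: int, t: int, terms: int = 50) -> float:
    """Probability of at least t independent false matches among n
    images; for tiny p the terms shrink so fast that summing a few
    dozen of them past the threshold is sufficient."""
    upper = min(n, t + terms)
    return sum(exp(log_binom_pmf(k, n, p)) for k in range(t, upper + 1))

# Assumed example: a 1-in-a-million false-match rate per image,
# a library of 10,000 photos, and a threshold of 10 matches.
print(prob_false_flag(p=1e-6, n=10_000, t=10))  # roughly 3e-27
```

Even with these invented figures, the account-level error probability collapses once a handful of independent matches are required, which is presumably what Apple means by "extremely low".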

Matthew Green, a cryptography professor at Johns Hopkins University in Baltimore who drew early attention to Apple's plans on Twitter, fears that Apple's infrastructure could later be misused, for example to scan end-to-end encrypted content on the device such as iMessages. "Apple wouldn't build something so complex and complicated if they didn't have other uses for it," he told Mac & i.

"They already have perfectly functioning scanning systems on the server side." He also sees problems with false hits, because it is known that hash collisions can be provoked with the help of AI systems, even with images that have nothing to do with child abuse. New forms of attack and blackmail attempts would then also be conceivable.

In fact, Green's fears about scanning encrypted content appear to be justified, although for now only for children's Apple ID accounts. In addition to the new CSAM scanning function, the iPhone maker is announcing new "extended protective measures for children". These include scanning images locally in iMessage "to warn kids and their parents" when the little ones are "receiving or sending sexually explicit photos."

The system initially displays such images blurred; a warning then appears together with "helpful resources" that are meant to help in cases of abuse, for example. If the child actually views the picture or sends it anyway, the parents receive an automatic notification. The function can be activated for iCloud family accounts and is also set to be part of macOS 12, aka Monterey.

NSA whistleblower Edward Snowden commented on Apple's new feature on Twitter, saying the company wants to "modify the iPhone so that it constantly scans for banned goods". Ross Anderson, a professor of security engineering at Cambridge, considers Apple's plan a "terrible idea": this, he said, is how distributed mass surveillance "of our phones and laptops" begins.

Green also fears that the technology could fall into the wrong hands, and he does not accept Apple's privacy-protection argument: "It's like driving an electric car to a place where you light a big fire, and then saying you are driving in a climate-friendly way."


(bsc)
