
Is there a price too high for children's safety? Our comment on Apple's controversial new feature

There is a noisy scandal around privacy and Apple going on right now (yes, again). A new feature called CSAM detection, aimed at preventing child abuse, has divided the world into alarmists and optimists (most of whom, by wild coincidence, work at Apple).

We at AdGuard are thinking hard about this development: the safety of minors is important, but the potential for privacy abuse here is so great that some way to protect against it must emerge. And maybe we can help.

UPD (September 3, 2021):

Apple is taking a step back on its CSAM detection initiative. The massive public backlash that followed the announcement of Apple's controversial child protection program resulted in a statement from the tech giant, saying that it's going to:
"...take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features".
No doubt we will hear more about CSAM detection later, but right now it feels good to see public opinion have some real power for once.

A basic overview

And no, before you ask — it wasn't a typo: scam is scam, but CSAM stands for Child Sexual Abuse Material. "We want to protect children from predators who use communication tools to recruit and exploit them", says Apple in a voluminous PDF document (obviously this format was chosen as the most convenient for the reader).

Such a good intention; who could be against protecting little children from a cruel fate? So what is the outrage about?

It's about the way they are going to do it. Here is what it looks like: Apple will scan the photos on every device belonging to its ecosystem (iPhone, iPad, MacBook, and so on) and check them for signs of child abuse, then report to the police if any such signs are found.

Well, they will not be viewing your photos the way we do, turning the paper pages of a real photo album or scrolling through a virtual one. Here is how they'll do it:

  • They take a set of images collected and validated to be CSAM by child safety organizations. So it's manual work at this stage.

  • They turn this bunch of images into hashes. A hash is a string of symbols that describes what is on a picture, and it stays the same even if the picture gets altered, cropped, resized, and so on.

  • They upload to your device a collection of hashes for the pictures in the CSAM database. This database is hardcoded into the OS image, so it will always be present on your device. The original photos are never stored there, but still: who wants to carry around a database of CSAM hashes?

  • They compute hashes for all photos that are to be uploaded to Apple iCloud.

  • And they compare each computed hash to the database of CSAM hashes. The comparison happens on the device (a minimal sketch of this flow follows the list).

  • If any matches are found, Apple reviews the account manually. If the reviewer confirms this is actually a photo from the CSAM base, Apple reports the case to NCMEC (National Center for Missing and Exploited Children) backed by the US Congress. It works in collaboration with law enforcement agencies across the United States.
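
To make the flow above more concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the real system uses Apple's NeuralHash (a perceptual hash produced by a neural network) and blinded, encrypted hash tables rather than a plain set lookup, and all names here are invented.

```python
import hashlib

# Hypothetical stand-in for NeuralHash. A real perceptual hash is designed
# to survive cropping and resizing; SHA-256 is used here only to keep the
# sketch self-contained and runnable.
def image_hash(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

# The hash collection shipped inside the OS image (assumed format: opaque
# strings; no original photos are ever present on the device).
CSAM_HASHES = {"3f8a9c1d", "9b2c41e7"}

def check_before_upload(image_bytes: bytes) -> bool:
    """Runs on the device for every photo queued for upload to iCloud."""
    return image_hash(image_bytes) in CSAM_HASHES

print(check_before_upload(b"holiday photo"))  # False: no match, upload as usual
```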

What can possibly go wrong?

Robots can make mistakes

The output of any machine-learning recognition algorithm is a probabilistic characteristic ("the photo most likely or likely contains the following"). AI is never 100% sure. There is always a certain share of false positives; in our case, these are situations when a photo is considered to belong to the CSAM collection by mistake.

There might be other flaws in the work of recognition algorithms. For example, neural networks often suffer from overfitting. When they learn, they analyze a set of data to pick up patterns and correlations, then look for those patterns in other data. Overfitting happens when the training dataset is too small or too uniform, so the network latches onto the wrong patterns. And there you are: your AI can't tell a chihuahua from a muffin.

Users have already found pairs of different images with the same hash values. These are called naturally occurring NeuralHash collisions. And this is exactly what worries us.

Image source: Roboflow
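
A toy example makes the collision problem easy to see. The snippet below uses a deliberately crude "average hash" (NeuralHash is a neural network, not this), but the principle is the same: a hash that tolerates image edits necessarily maps many different inputs to the same value.

```python
# Toy perceptual hash: one bit per pixel, set if the pixel is brighter
# than the image's mean brightness.
def average_hash(pixels):
    mean = sum(pixels) / len(pixels)
    return tuple(p > mean for p in pixels)

img_a = [10, 200, 30, 220]  # two visibly different 2x2 "images"
img_b = [90, 160, 80, 250]

assert img_a != img_b
assert average_hash(img_a) == average_hash(img_b)  # same hash: a collision
print(average_hash(img_a))  # (False, True, False, True)
```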

Okay, Apple has thought of that and is going to involve people in questionable cases. But who are those people, whom do they report to, what are they responsible for, and how pure are their intentions?

People have made mistakes

Back in 2019 it was revealed that Facebook had been paying hundreds of outside contractors to transcribe clips of audio from users of its services. "The work has rattled the contract employees, who are not told where the audio was recorded or how it was obtained — only to transcribe it. They're hearing Facebook users' conversations, sometimes with vulgar content, but do not know why Facebook needs them transcribed", Bloomberg reported.

Actually, it's common practice; Facebook got criticized just for organizing the work so chaotically. In that same August 2019, Google, Amazon, and Apple allowed people to opt out of human review of voice assistants' recordings. Yes, that's right: random people listen to what other people ask their Siris and Alexas for, and then report it to Apple and Amazon respectively.

It was also found that voice assistants get mistakenly triggered more often than you'd think, by more than 1,000 words that sound like their names or commands. Among those is "election" for Alexa, for example. Do you want an Amazon contractor listening to what you say about elections while you think no outsiders are near?

Dive deeper: how it works and what the odds are

So when exactly does the moment come when Apple's human moderators start looking at your photos, instead of just robots comparing hashes?

Here is where the previously mentioned probability comes in.

An image's hash is compared to CSAM hashes. The result of this comparison is stored in a so-called safety voucher, a set of data "that encodes the match result along with additional encrypted data about the image", Apple says.

What could this additional data be? Well, just the image itself, as simple and creepy as that. Apple's official documentation on CSAM detection mentions a "visual derivative": a picture that shows the picture. A thumbnail. Your photo in somewhat lower quality. That's how it's formulated; nobody knows what exactly it will look like.

This voucher is uploaded to iCloud Photos along with the image.

Using another technology called threshold secret sharing, the system ensures the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content. The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.
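
Apple doesn't spell out its exact construction, but threshold secret sharing itself is a well-known cryptographic technique. Below is a minimal sketch of the classic Shamir scheme in Python: a secret (think of the key that decrypts voucher contents) is split into shares, any number of shares below the threshold reveals nothing, and reaching the threshold reconstructs it. This illustrates the general technique only, not Apple's implementation.

```python
import random

P = 2**127 - 1  # a large prime; all arithmetic happens in the field GF(P)

def split(secret: int, shares: int, threshold: int):
    """Hide `secret` in a random polynomial of degree threshold - 1."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, shares + 1)]

def reconstruct(points):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(secret=42, shares=10, threshold=3)
assert reconstruct(shares[:3]) == 42  # threshold met: secret recovered
assert reconstruct(shares[:2]) != 42  # below threshold: no information
```

In Apple's description, each positive match contributes, in effect, one more share, so the voucher contents only become readable to Apple once an account crosses the threshold.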

As for the numbers: a one-in-a-trillion chance of losing looks like a good bet, does it not?

Note that this refers to flagging an account. Flagging comes after a moderator reviews a photo and confirms that it matches one from the database. The probability of a false-positive hash match for one given photo is several orders of magnitude higher.

So the chance of a mistaken match is roughly one in a billion, and every chance is one photo. How many photos do people upload to iCloud? We can estimate that: knowing, for example, that "over 340 million photos are being uploaded on Facebook every day" in 2021, a billion photos are uploaded there in about three days, and while there are more Facebook users than Apple users, the order of magnitude is comparable.
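
To see why scale matters, here is that arithmetic as a tiny Python check. The per-photo error rate is an assumption (Apple has published only the per-account figure), and the upload rate is Facebook's, quoted above, used purely for scale:

```python
# Back-of-the-envelope check; both inputs are illustrative assumptions.
p_photo = 1e-9           # assumed per-photo false-match probability
photos_per_day = 340e6   # Facebook's upload rate, for scale

false_matches_per_year = p_photo * photos_per_day * 365
print(f"{false_matches_per_year:.0f} false matches per year")  # ≈ 124
```

Even a one-in-a-billion error rate stops looking negligible once you multiply it by a photo stream of this size.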

But first, the number of uploaded photos keeps growing fast, and second, if there is a billion-to-one chance of being eaten by a shark, you still don't want to be the "lucky" one. Because it is very serious: you get suspected not of smoking pot in the office restroom, but of child sexual abuse.

By the way, possessing images from the CSAM database is a crime in the US and several other countries, and Apple would never want to be held responsible for complicity or involvement. So it's a one-way trip: once Apple has started looking for such images, it will only develop more and more sophisticated ways to do it.

Why should people be worried about CSAM detection?

Let's sum up; the list won't be short.

  1. Possible mistakes of algorithms, with devastating consequences for lives and careers.
  2. Software bugs. Don't confuse these with point one: at the current state of technological advancement it is considered normal for robots to make mistakes. Actually, bugs are normal too; there is no software without them. But the price of a mistake varies, and bugs that lead to personal data leaks are usually among the most "expensive" ones.
  3. No transparency of the system (Apple is notorious for its reluctance to disclose how its technology works). Your only option is to trust that Apple's intentions are good and that it values your privacy highly enough to protect it.
  4. Lack of faith. Why should we trust Apple after all their (and others') privacy flaws and crimes?
  5. Possible extrapolation of the technology to analyzing and detecting other types of data. Under the umbrella of child protection, companies can be handed plenty of new opportunities to dig into your information.
  6. Abuse possibilities. Could an enemy or a hacker plant on your iPhone a picture that matches one from a certain set (how convenient that there is a ready-made collection)?

The photo on the right was artificially modified to have the same NeuralHash as the photo on the left. Image source: Roboflow

This explains why we are pondering ways to give users control over the way Apple analyzes their photos. We ran a couple of polls on our social network accounts, and the absolute majority of subscribers (about 85%) would like to be able to block CSAM scanning. It's hard to believe that all these people plan to abuse children; they just see the risks.

We are considering preventing the safety voucher from being uploaded to iCloud, thus blocking CSAM detection within AdGuard DNS. How can it be done? That depends on the way CSAM detection is implemented, and until we understand it in detail, we can promise nothing in particular.

Who knows what this database could turn into if Apple starts cooperating with third parties? The database goes in, the voucher goes out. Either process could be obstructed, but right now we are not ready to claim which solution is better or whether it can be easily incorporated into AdGuard DNS. Research and testing are required.

Otherwise, the only way would be to block iCloud access. That's quite radical to do for all AdGuard DNS users, but we could make it optional. The question arises: why not just disable iCloud on your device? And you know what, with how things are going, we actually recommend considering this.
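
For illustration, DNS-level blocking would come down to a few rules in the adblock-style syntax AdGuard DNS already understands. The hostnames below are assumptions; the actual endpoints involved in iCloud Photos uploads would have to be confirmed by research before anything like this ships:

```
! Hypothetical rules; real iCloud Photos endpoints still need verifying
||icloud-content.com^
||photos.icloud.com^
```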
