
The Problem with Perceptual Hashes

Apple just announced that they will use “perceptual hashing” to detect illegal photos on iPhones. I have some experience with this technology that I’d like to share.

At my company, we use “perceptual hashes” to find copies of an image where each copy has been slightly altered. This is in the context of stock photography, where each stock agency (e.g. Getty Images, Adobe Stock, Shutterstock) adds their own watermark or image file ID, sharpens the image, or alters the colours slightly, for example by adding contrast. For our customers, i.e. the photographers who upload their photos to these agencies and sell them through those platforms, we need to find all copies of their photos across those agencies. We use perceptual hashes to find them.

Most publicly documented perceptual hash algorithms work by reducing the image data to a bit vector (the “hash”), which is then compared against a database of hashes, typically using a distance metric. In many cases this is the so-called Hamming Distance: the number of bits by which the two hashes differ. The image data reduction usually involves resizing the image, discarding its colour information, and filtering in the frequency domain (often using DCT or wavelets); in some cases (supposedly the proprietary PhotoDNA algorithm) a histogram of these values is calculated, and in Apple’s case a convolutional neural network reduces the image information to a floating-point vector, which is then quantized into the final hash bits.
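
To make this concrete, here is a minimal sketch of the simplest of these algorithms, Average Hash, in Go. This is not our production code and certainly not Apple’s NeuralHash; the nearest-neighbour downsampling and the file name are placeholders. It only illustrates the pipeline described above: reduce the image to a small bit vector, then compare hashes by Hamming Distance.

```go
package main

import (
	"fmt"
	"image"
	"image/color"
	_ "image/jpeg" // register the JPEG decoder for image.Decode
	"log"
	"math/bits"
	"os"
)

// averageHash reduces an image to a 64-bit perceptual hash: sample it
// down to 8x8 grayscale values and set one bit per pixel depending on
// whether that pixel is brighter than the mean.
func averageHash(img image.Image) uint64 {
	b := img.Bounds()
	var gray [64]float64
	var sum float64

	// Naive nearest-neighbour downsampling to 8x8. Real implementations
	// use proper area averaging, or a DCT over a larger tile (pHash).
	for y := 0; y < 8; y++ {
		for x := 0; x < 8; x++ {
			px := b.Min.X + x*b.Dx()/8
			py := b.Min.Y + y*b.Dy()/8
			c := color.GrayModel.Convert(img.At(px, py)).(color.Gray)
			gray[y*8+x] = float64(c.Y)
			sum += float64(c.Y)
		}
	}
	mean := sum / 64

	var hash uint64
	for i, v := range gray {
		if v > mean {
			hash |= 1 << uint(i)
		}
	}
	return hash
}

// hammingDistance is the number of bits by which two hashes differ.
func hammingDistance(a, b uint64) int {
	return bits.OnesCount64(a ^ b)
}

func main() {
	f, err := os.Open("photo.jpg") // placeholder file name
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	img, _, err := image.Decode(f)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("hash: %016x\n", averageHash(img))
}
```

Note that everything interesting happens in the reduction step; the comparison itself is just an XOR and a popcount.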

For our product, I evaluated Average Hash, pHash, dHash, and the algorithm described in the paper Fast Multiresolution Image Querying, which is also used in the imgSeek software. We currently have to match approximately 100 million photos, and we found that the latter worked best, i.e. it produced the fewest false positives and the fewest false negatives. (I published a Golang implementation here.)
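
As an aside, the matching side can be sketched just as briefly (reusing the hammingDistance helper from the sketch above). This brute-force version is illustrative only; at 100 million photos a linear scan per query is far too slow and you need some form of index, but the decision it makes, accepting the closest stored hash if it is within some threshold, is the same one any such system has to make:

```go
// bestMatch returns the index of the stored hash closest to query,
// or -1 if none is within maxDist bits. A linear scan is fine for a
// toy example; large collections need an index (e.g. a BK-tree or
// multi-index hashing) to avoid touching every hash.
func bestMatch(query uint64, stored []uint64, maxDist int) int {
	best, bestDist := -1, maxDist+1
	for i, h := range stored {
		if d := hammingDistance(query, h); d < bestDist {
			best, bestDist = i, d
		}
	}
	return best
}
```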

So What’s the Problem?

None of these algorithms are watertight. Let me give you an example of a false positive. We try to find a copy of the following image:

Photo by Addictive Stock

The best the algorithm could find was this image:

Photo by unkreatives

It shouldn’t come as a surprise that these algorithms will fail sometimes. But in the context of 100 million photos, they do fail quite often. And they don’t fail in acceptable ways: it’s easy to see that the general composition of the two images is similar (a beige background with dark accents towards the top right), but their content is completely different. The collisions produced by other hashing algorithms look different, often in unexpected ways, but collisions exist for all of them.

When we deal with perceptual hashes, there is no fixed threshold for any distance metric that cleanly separates the false positives from the false negatives. In the example above, we could apply a stricter threshold, but this would cause actual copies of many photos to go undetected: simple modifications such as watermarks or slight colour corrections would lead the algorithm to conclude that “these two images are not the same”.

Especially when a Hamming Distance is used to compare hashes, there is not a lot of room to fine-tune the threshold. Let’s assume we’re dealing with 256-bit hashes: the Hamming Distances at which two images can still be considered the same are likely within just a few bits; a Hamming Distance of, say, 16 already indicates a lot of differences. In the context of millions of pictures, moving the threshold up or down by just one bit has a huge effect on the number of false positives or false negatives generated.
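
A rough back-of-the-envelope calculation shows why. The number of 256-bit hashes that a threshold of d treats as “the same image” is the sum of C(256, k) for k = 0 to d, and around d = 16 each one-bit increase of the threshold enlarges that acceptance region by roughly a factor of 14. Here is a small, purely illustrative Go calculation (it says nothing about how real photos are distributed in hash space, only how quickly the accepted region expands):

```go
package main

import (
	"fmt"
	"math/big"
)

// ballSize returns how many 256-bit hashes lie within Hamming
// Distance d of a fixed hash: the sum of C(256, k) for k = 0..d.
func ballSize(d int64) *big.Int {
	total := new(big.Int)
	for k := int64(0); k <= d; k++ {
		total.Add(total, new(big.Int).Binomial(256, k))
	}
	return total
}

func main() {
	for _, d := range []int64{14, 15, 16, 17} {
		fmt.Printf("threshold %2d accepts %s hashes\n", d, ballSize(d))
	}
}
```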

How Does This Affect Apple’s New Technology?

Even at a Hamming Distance threshold of 0, that is, when both hashes are identical, I don’t see how Apple can avoid tons of collisions, given the large number of pictures taken every year (1.4 trillion in 2021; break this down by iPhone market share and country, and the number for US iPhone users will still be extremely big).

According to Apple, a low number of positives (false or not) will not cause an account to be flagged. But again, at these volumes, I believe you will still get too many accounts with multiple photos flagged as false positives. (Apple puts that probability at “1 in 1 trillion”, but it is unclear how they arrived at such an estimate.) These cases will be manually reviewed; that is, according to Apple, an Apple employee will then look at your (flagged) pictures. Does Apple have the manpower in place to iron out their algorithm’s shortcomings? We can only hope so.

Conclusion

Perceptual hashes are messy. The simple fact that image data is reduced to a small number of bits leads to collisions and therefore to false positives. When such algorithms are used to detect criminal activity, especially at Apple’s scale, many innocent people may end up facing serious problems.

My company’s customers are slightly inconvenienced by the failures of perceptual hashes (we have a UI in place that lets them make manual corrections). But when it comes to CSAM detection and its failure potential, that’s a whole different ball game. Needless to say, I’m quite worried about this.