The SafeToNet Foundation's Safeguarding podcasts

Digital Crime Scenes: Digital DNA with Professor Hany Farid


Episode notes

In this Safeguarding Podcast with Hany Farid, Professor at the University of California, Berkeley: PhotoDNA, what it is and how it works, what PhotoDNA doesn't do, what Hashes are and whether they work in an End-to-End Encrypted world, whether Apple's NeuralHash child safety proposal is the incipient slippery slope many claim it to be, Apple's Secret Sharing Threshold and why that's a problem, and "WhatsApp's hypocrisy".
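
Neither PhotoDNA nor NeuralHash is public, so as a rough illustration of the hashing and threshold ideas discussed in the episode, here is a minimal Python sketch that stands in an open "difference hash" (dHash) for either proprietary algorithm. The Pillow library and the match tolerance are assumptions made for the example; the threshold of 30 reflects Apple's publicly stated figure, and nothing here is Apple's or Microsoft's actual code.

# Toy sketch only: PhotoDNA and NeuralHash are proprietary, so this uses a simple
# open "difference hash" (dHash) to show the idea: a perceptual hash changes very
# little when an image is resized or re-compressed, so known images can be matched
# by comparing hashes rather than pixels.
from PIL import Image  # assumes the Pillow library is installed

def dhash(image_path: str, hash_size: int = 8) -> int:
    # Shrink to (hash_size+1) x hash_size greyscale and compare neighbouring pixels.
    img = Image.open(image_path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits  # a 64-bit perceptual hash

def hamming_distance(a: int, b: int) -> int:
    # Number of differing bits; a small distance means "probably the same image".
    return bin(a ^ b).count("1")

MATCH_TOLERANCE = 5    # invented value, not PhotoDNA's or NeuralHash's
REPORT_THRESHOLD = 30  # Apple's publicly stated figure of roughly 30 matches

def is_known_image(image_path: str, known_hashes: list[int]) -> bool:
    h = dhash(image_path)
    return any(hamming_distance(h, k) <= MATCH_TOLERANCE for k in known_hashes)

def should_report(match_count: int) -> bool:
    # The "threshold" idea: a single chance match never flags an account;
    # only a run of matches past the threshold does.
    return match_count >= REPORT_THRESHOLD

Why that threshold is itself a problem is one of the questions Professor Farid addresses in the episode.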

Links to other relevant content:

The Good, the Bad and the Ugly of Apple's Curate's Egg:

https://safetonetfoundation.org/2021/08/12/apples-curates-egg/

CSI Apple: The Omnibus Edition:

https://safetonetfoundation.org/2021/08/26/csi-apple-the-omnibus-edition/

You've Already Agreed to Apple's CSAM Detection but you just didn't know it:

https://safetonetfoundation.org/2021/08/18/youve-already-agreed-to-apples-csam-detection-but-you-just-didnt-know-it/

Safeguarding Podcast with Glen Pounder, COO of Child Rescue Coalition:

https://safetonetfoundation.org/2021/08/28/safeguarding-podcast-jane-with-glen-pounder-coo-child-rescue-coalition/

Apple's notice on Expanded Protections for Children:

https://www.apple.com/child-safety/

WhatsApp's website on on-device scanning for contraband content:

WhatsApp automatically performs checks to determine if a link is suspicious. To protect your privacy, these checks take place entirely on your device, and because of end-to-end encryption, WhatsApp can’t see the content of your messages.

https://faq.whatsapp.com/android/security-and-privacy/suspicious-links/?lang=en
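
WhatsApp doesn't publish the heuristics behind those checks, but the sketch below shows the kind of test that can run entirely on the device with no server round-trip, for example flagging a link whose host name mixes Unicode scripts, a common look-alike trick. The function name and the rule are assumptions for illustration, not WhatsApp's implementation.

# Illustration of an entirely on-device check (no network call). The specific rule
# here -- flagging host names that mix Latin and non-Latin letters, a common
# look-alike-domain trick -- is an assumption, not WhatsApp's published logic.
import unicodedata
from urllib.parse import urlparse

def looks_suspicious(url: str) -> bool:
    host = urlparse(url).hostname or ""
    scripts = set()
    for ch in host:
        if ch.isalpha():
            name = unicodedata.name(ch, "")
            scripts.add("LATIN" if name.startswith("LATIN") else "OTHER")
    return len(scripts) > 1  # mixed scripts: likely a deceptive link

print(looks_suspicious("https://example.com"))   # False
print(looks_suspicious("https://exаmple.com"))   # True -- the 'а' is Cyrillic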

WhatsApp’s website on CSAM detection:

Our detection methods include the use of advanced automated technology, including photo- and video-matching technology, to proactively scan unencrypted information such as profile and group photos and user reports for known CEI. We have additional technology to detect new, unknown CEI within this unencrypted information. We also use machine learning classifiers to both scan text surfaces, such as user profiles and group descriptions, and evaluate group information and behavior for suspected CEI sharing.

Using these techniques, WhatsApp bans more than 300,000 accounts per month for suspected CEI sharing.

https://faq.whatsapp.com/general/how-whatsapp-helps-fight-child-exploitation/?lang=en
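
As a rough sketch of the pipeline that passage describes, matching unencrypted surfaces such as profile and group photos against a list of known-CEI hashes and actioning accounts that match, the snippet below uses exact SHA-256 hashes purely to keep the example short; in reality the photo- and video-matching is perceptual (PhotoDNA-style) and the hash list is supplied by bodies such as NCMEC rather than assembled locally.

# Rough sketch of the pipeline the FAQ describes: hash the unencrypted surfaces
# (profile and group photos), compare against known-CEI hashes, and action accounts
# that match. Exact SHA-256 matching is used only to keep the example short; real
# photo-matching (PhotoDNA-style) is perceptual, and the hash list comes from
# bodies such as NCMEC rather than being built locally.
import hashlib

known_cei_hashes: set[str] = set()  # supplied by NCMEC/industry partners in practice

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def account_matches(unencrypted_photos: list[bytes]) -> bool:
    # True if any unencrypted surface matches a known hash -> ban and report,
    # the mechanism behind the 300,000-plus account bans per month quoted above.
    return any(sha256_hex(photo) in known_cei_hashes for photo in unencrypted_photos)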