WhatsApp “end-to-end encrypted” messages aren’t that private after all


The security of Facebook’s popular messaging app leaves some pretty important devils in the details.

Yesterday, the independent newsroom ProPublica published an in-depth piece examining the privacy claims of the popular WhatsApp messaging platform. As is widely known, the service offers “end-to-end encryption,” which most users take to mean that Facebook, WhatsApp’s owner since 2014, can neither read messages itself nor forward them to law enforcement.

That claim is belied by the simple fact that Facebook employs around 1,000 WhatsApp moderators whose entire job is – you guessed it – reviewing WhatsApp messages that have been flagged as “improper.”

End-to-end encryption – but what is an “end”?

This excerpt from WhatsApp’s security and privacy page seems easy to misinterpret.

The hole in WhatsApp’s end-to-end encryption is simple: the recipient of any WhatsApp message can flag it. Once flagged, the message is copied on the recipient’s device and sent as a separate message to Facebook for review.

Messages are typically flagged – and reviewed – for the same reasons they would be on Facebook itself, including claims of fraud, spam, child pornography, and other illegal activity. When a recipient flags a WhatsApp message for review, that message is bundled with the four most recent prior messages in the same thread and sent as attachments to a ticket in WhatsApp’s review system.
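A minimal sketch of that reporting flow, assuming hypothetical type and field names – WhatsApp’s actual client protocol and ticket schema are not public:

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    timestamp: float
    plaintext: str  # already decrypted on the recipient's device

@dataclass
class AbuseReport:
    reported: Message
    context: list   # up to four most recent prior messages in the thread
    reason: str     # e.g. "spam", "fraud"

def build_report(thread: list, flagged_index: int, reason: str) -> AbuseReport:
    # Bundle the flagged message with up to four preceding messages.
    # No encryption is "broken" here: the recipient's device already
    # holds the plaintext and simply re-sends it to Facebook.
    context = thread[max(0, flagged_index - 4):flagged_index]
    return AbuseReport(thread[flagged_index], context, reason)
```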

While there is nothing to indicate that Facebook currently collects user messages without manual intervention by the recipient, it’s worth pointing out that there is no technical reason it could not do so. The security of “end-to-end” encryption depends on the endpoints themselves – and in the case of a mobile messaging application, that includes both the application and its users.


For example, an “end-to-end” encrypted messaging platform could perform automated, AI-based content scanning of all messages on the device itself, then forward automatically flagged messages to the platform’s cloud for further action. Ultimately, privacy-focused users must rely on policies and platform trust as heavily as they do on technological bullet points.
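To make the point concrete, here is a sketch of what such hypothetical client-side scanning could look like. Nothing below describes anything WhatsApp is known to do with message content today; `classify` and `upload` are stand-ins for illustration only:

```python
def classify(plaintext: str) -> float:
    """Stand-in for an on-device ML model returning an abuse score."""
    raise NotImplementedError  # assumption: some local classifier exists

def on_message_decrypted(plaintext: str, upload) -> None:
    # Runs after normal end-to-end decryption, entirely on the endpoint,
    # so nothing about the wire protocol or its encryption changes.
    if classify(plaintext) > 0.9:   # arbitrary threshold for illustration
        upload(plaintext)           # ships decrypted content to the cloud
```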

Content moderation under a different name

When a review ticket arrives in WhatsApp’s system, it is fed automatically into a “reactive” queue for human contract workers to assess. AI algorithms also feed tickets into “proactive” queues that process unencrypted metadata – including group names and profile images, phone numbers, device fingerprints, linked Facebook and Instagram accounts, and more.

Human WhatsApp reviewers process both types of queue – reactive and proactive – for reported and/or suspected policy violations. Reviewers have only three options for any given ticket: ignore it, place the user account on “watch,” or ban the user account entirely. (According to ProPublica, Facebook uses the reviewers’ limited set of actions as justification for saying that reviewers do not “moderate content” on the platform.)
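A minimal sketch of that three-outcome triage, with hypothetical names – the real review tooling is not public:

```python
from enum import Enum

class Verdict(Enum):
    IGNORE = "ignore"   # close the ticket, no action taken
    WATCH = "watch"     # place the account under monitoring
    BAN = "ban"         # disable the account entirely

def triage(account, verdict: Verdict) -> None:
    # Note what's absent: no verdict deletes or edits a message, which
    # is the basis for Facebook's claim that reviewers don't "moderate
    # content" on the platform.
    if verdict is Verdict.WATCH:
        account.monitored = True
    elif verdict is Verdict.BAN:
        account.disabled = True
```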

Although WhatsApp moderators – excuse us, reviewers – have fewer options than their counterparts at Facebook or Instagram do, they face similar challenges and obstacles. Accenture, the company Facebook contracts with for moderation and review, hires workers who speak a variety of languages – but not all languages. When messages arrive in a language moderators don’t speak, they must rely on Facebook’s automatic language-translation tools.

“In the three years I was there, it was always terrible,” one moderator told ProPublica. Facebook’s translation tool offers little to no guidance on slang or local context – no surprise, given that the tool frequently struggles even to identify the source language. A shaving company selling razors might be mislabeled as “selling weapons,” while a bra maker could be tagged as a “sexually oriented business.”


WhatsApp’s moderation standards can be as confusing as its automated translation tools – for example, decisions about child pornography may require comparing hip bones and pubic hair on a naked person to a medical index chart, and decisions about political violence may require guessing whether an apparently severed head in a video is real or fake.

Unsurprisingly, some WhatsApp users also turn the flagging system itself into a weapon against other users. One moderator told ProPublica that “we had a couple of months where AI was banning groups left and right” because users in Brazil and Mexico would change the name of a messaging group to something problematic and then report the message. “At the worst of it,” the moderator recalled, “we were probably getting tens of thousands of those. They found some words the algorithm didn’t like.”

Unencrypted metadata

Although WhatsApp’s “end-to-end” encryption of message contents can only be subverted by the sender or recipient devices themselves, a wealth of metadata associated with those messages is visible to Facebook – and to law enforcement or anyone else Facebook chooses to share it with – without any such caveat.
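A sketch of the distinction being drawn here: the message body travels as opaque ciphertext, while routing metadata is necessarily visible to the server. The field names are illustrative, not WhatsApp’s:

```python
from dataclasses import dataclass

@dataclass
class WireMessage:
    # Visible to Facebook in transit (and producible for law enforcement):
    sender_id: str
    recipient_id: str
    timestamp: float
    device_fingerprint: str
    # Opaque to Facebook without cooperation from an endpoint:
    ciphertext: bytes

def server_view(msg: WireMessage) -> dict:
    """Everything the server can log without touching the ciphertext."""
    return {
        "from": msg.sender_id,
        "to": msg.recipient_id,
        "at": msg.timestamp,
        "device": msg.device_fingerprint,
    }
```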

ProPublica found more than a dozen Justice Department requests for WhatsApp metadata since 2017. These requests are known as “pen register orders,” terminology dating from requests for connection metadata on landline telephone accounts. ProPublica correctly points out that this is an unknown fraction of the total requests made in that time period, since many such orders – and their results – are sealed by the courts.

Because the orders and their results are frequently sealed, it is also difficult to say exactly what metadata the company has turned over. Facebook refers to this data as “prospective message pairs” (PMPs) – nomenclature given to ProPublica anonymously, which we were able to confirm in the announcement of a January 2020 training course for employees of Brazil’s Ministry of Justice.

While we don’t know exactly what metadata is contained in these PMPs, we do know that it is highly valuable to law enforcement. In one particularly high-profile 2018 case, whistleblower and former Treasury Department official Natalie Edwards was convicted of leaking confidential banking reports to BuzzFeed via WhatsApp, which she incorrectly believed to be “safe.”

FBI Special Agent Emily Eckstut detailed that Edwards exchanged “approximately 70 messages” with a BuzzFeed reporter “between 12:33 PM and 12:54 PM” the day after the article was published; the data helped secure a conviction and a six-month jail sentence for conspiracy.
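As an illustration of why bare metadata suffices for that kind of finding, here is the sort of counting an investigator could do over pen-register-style records. All of the data below is made up; real logs would come from Facebook’s servers:

```python
from datetime import datetime

# Hypothetical records of the form (sender, recipient, timestamp);
# no message content is needed or present.
log = [
    ("edwards", "reporter", datetime(2018, 10, 18, 12, 33)),
    ("reporter", "edwards", datetime(2018, 10, 18, 12, 35)),
    # ... one row per message ...
]

start = datetime(2018, 10, 18, 12, 33)
end = datetime(2018, 10, 18, 12, 54)
count = sum(
    1 for sender, recipient, ts in log
    if {sender, recipient} == {"edwards", "reporter"} and start <= ts <= end
)
print(f"{count} messages exchanged in the window")
```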
