After Years of Controversy, the EU’s Chat Control Nears Its Final Hurdle: What to Know

by Christoph Schmon, Thorin Klosowski
Electronic Frontier Foundation
Published on 12/3/2025

After a years-long battle over the European Commission’s “Chat Control” plan, which would mandate mass scanning and other encryption-breaking measures, the Council of the EU, representing EU member states, has at last agreed on a position. The good news is that the most controversial part, the forced scanning of encrypted messages, is out. The bad news is there’s more to it than that.

Chat Control has gone through several iterations since it was first introduced: the European Parliament backed a position that protects fundamental rights, while the Council of the EU spent many months pursuing an intrusive, law-enforcement-focused approach. Many of the Council’s proposals earlier this year required the scanning and detection of illicit content on all services, including private messaging apps such as WhatsApp and Signal. That requirement would fundamentally break end-to-end encryption.

Thanks to the tireless efforts of digital rights groups, including European Digital Rights (EDRi), we won a significant improvement: the position the Council agreed on removes the requirement forcing providers to scan messages on their services. It also comes with strong language protecting encryption, which is good news for users.

But here’s the rub: first, the Council’s position allows for “voluntary” detection, under which tech platforms can scan personal messages that aren’t end-to-end encrypted. Unlike in the U.S., which has no comprehensive federal privacy law, voluntary scanning is not, strictly speaking, legal in the EU; it has been possible only through a derogation set to expire in 2026. It is unclear how this will play out over time, but we are concerned that this approach will lead to private mass scanning of non-encrypted services and might limit the sorts of secure communication and storage services big providers offer. With limited transparency and oversight, it will be difficult to know how services approach this sort of detection.

With mandatory detection orders off the table, the Council has embraced another worrying system to protect children online: risk mitigation. Providers will have to take “all reasonable mitigation measures” to reduce risks on their services, including age verification and age assessment measures. We have written about the perils of age verification schemes and about recent developments in the EU, where regulators are increasingly turning to age verification to reduce online harms.

If secure messaging platforms like Signal or WhatsApp are required to implement age verification, it would fundamentally reshape what it means to use these services privately. Encrypted communication tools should be available to everyone, everywhere, regardless of age, freely and without any requirement to prove one’s identity. Age verification has already started to creep in as a mandatory risk mitigation measure under the EU’s Digital Services Act in certain situations; it could become a de facto requirement under the Chat Control proposal if the wording is left broad enough for regulators to treat it as a baseline.

Likewise, the Council’s position lists “voluntary activities” as a potential risk mitigation measure. Pull that thread and you’re left with a contradiction: an activity is no longer voluntary once it forms part of a formal risk management obligation. Courts might read its mention in a risk assessment as an optional measure available to providers that do not offer encrypted communication channels, but that reading is far from certain, and the current language will, at a minimum, nudge non-encrypted services toward voluntary scanning if they don’t want to invest in alternative mitigation options. It is largely up to providers to choose how to mitigate risks, but up to enforcers to decide what counts as effective. Again, we’re concerned about how this will play out in practice.

For the same reason, clear and unambiguous language is needed to stop authorities from reading “allowing encryption” in a hostile way that still expects service providers to implement client-side scanning. We welcome the text’s clear assurance that encryption cannot be weakened or bypassed, including through any requirement to grant access to protected data, but even greater clarity would come from an explicit statement that client-side scanning cannot coexist with encryption.
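To make that incompatibility concrete, here is a deliberately minimal Python sketch of how client-side scanning works in principle. Everything in it is an illustrative stand-in, not any real proposal’s or app’s design: the hash blocklist, the `report_to_authority` placeholder, and the toy XOR “cipher” (a stand-in for a real protocol like Signal’s) are all assumptions for the sake of the example. The point it demonstrates is that the scan runs on the device before encryption ever happens, so the plaintext is inspected and can be reported even though the cipher itself is never touched.

```python
import hashlib

# Hypothetical blocklist of content hashes. Deployed proposals have used
# perceptual hashes and server-supplied databases; a plain SHA-256 set
# keeps this sketch self-contained.
BLOCKLIST = {hashlib.sha256(b"example of flagged content").hexdigest()}

def report_to_authority(plaintext: bytes) -> None:
    # Placeholder: in a deployed system, the match (and often the content
    # itself) would be forwarded off-device for review.
    print("match reported:", plaintext[:32])

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Toy repeating-key XOR, used only so the example runs without
    # third-party dependencies. Never use this for real encryption.
    keystream = (key * (len(plaintext) // len(key) + 1))[: len(plaintext)]
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

def send_message(plaintext: bytes, key: bytes) -> bytes:
    # The crux: the scan inspects the plaintext on the sender's device
    # *before* encryption, i.e., exactly the data that end-to-end
    # encryption is supposed to keep between the two endpoints. The
    # ciphertext below is unbroken, yet confidentiality is already gone.
    if hashlib.sha256(plaintext).hexdigest() in BLOCKLIST:
        report_to_authority(plaintext)
    return encrypt(plaintext, key)

if __name__ == "__main__":
    ciphertext = send_message(b"example of flagged content", b"secret-key")
    print("ciphertext sent:", ciphertext.hex())
```

However the hashing or matching is implemented, the structure is the same: the encryption is “allowed” to remain, while the promise it makes to users is bypassed a step earlier in the pipeline. That is why an explicit statement on client-side scanning matters.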

As this regulation approaches its final “trilogue” negotiations, we urge EU lawmakers to work toward a final text that fully protects users’ right to private communication and avoids intrusive age-verification mandates and risk-benchmark systems that lead to surveillance in practice.