The European Union (EU) may soon require software publishers to disclose unpatched vulnerabilities to government agencies within 24 hours of learning they are being actively exploited. But many IT security professionals want this new rule, set out in Article 11 of the EU’s Cyber Resilience Act (CRA), to be reconsidered.
The rule requires vendors to disclose a known, actively exploited vulnerability within one day of learning about it, regardless of patch status. Some security professionals see the potential for governments to abuse the disclosure requirement for intelligence or surveillance purposes.
In an open letter signed by 50 prominent cybersecurity professionals across industry and academia, among them representatives from Arm, Google, and Trend Micro, the signatories argue that the 24-hour window is not enough time, and that it would open the door for adversaries to jump on the vulnerabilities before organizations have had a chance to fix the issues.
“While we appreciate the CRA’s aim to enhance cybersecurity in Europe and beyond, we believe that the current provisions on vulnerability disclosure are counterproductive and will create new threats that undermine the security of digital products and the individuals who use them,” the letter states.
Gopi Ramamoorthy, senior director of security and GRC at Symmetry Systems, says there is no disagreement about the urgency of patching the vulnerabilities. The concerns center on publicizing the vulnerabilities before updates are available, which leaves organizations at risk of attack and unable to do anything to prevent it.
“Publishing the vulnerability information before patching has raised concerns that it may enable further exploitation of the unpatched systems or devices and put private companies, and citizens, at further risk,” Ramamoorthy says.
Prioritize Patching Over Surveillance
Callie Guenther, senior manager of cyber threat research at Critical Start, says the intent behind the CRA is commendable, but it’s vital to consider the broader implications and potential unintended consequences of governments having access to vulnerability information before updates are available.
“Governments have a legitimate interest in ensuring national security,” she says. “However, using vulnerabilities for intelligence or offensive capabilities can leave citizens and infrastructure exposed to threats.”
A balance must be struck wherein governments prioritize patching and protecting systems over exploiting vulnerabilities, Guenther says, proposing some alternative approaches for vulnerability disclosure, starting with tiered disclosure.
“Depending on the severity and impact of a vulnerability, varying time frames for disclosure can be set,” she says. “Critical vulnerabilities may have a shorter window, while less severe issues could be given more time.”
A second alternative Guenther describes is preliminary notification: vendors would be given a brief grace period after an initial notification before the detailed vulnerability is disclosed to a wider audience.
A third way focuses on coordinated vulnerability disclosure, which encourages a system where researchers, vendors, and governments work together to assess, patch, and disclose vulnerabilities responsibly.
Any rule must include explicit clauses to prohibit the misuse of disclosed vulnerabilities for surveillance or offensive purposes, she adds.
“Additionally, only select personnel with adequate clearance and training should have access to the database, reducing the risk of leaks or misuse,” Guenther says. “Even with explicit clauses and restrictions, there are numerous challenges and risks that can arise.”
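Guenther’s tiered-disclosure idea can be sketched in a few lines of code. Note that the specific windows below are illustrative assumptions, not anything proposed in the CRA (which mandates a flat 24-hour window) or in the letter; the severity cutoffs follow the standard CVSS v3.x qualitative ratings.

```python
from datetime import datetime, timedelta

# Hypothetical disclosure windows for a tiered scheme. These tiers and
# durations are illustrative assumptions only; the CRA itself specifies
# a single 24-hour window regardless of severity.
DISCLOSURE_WINDOWS = {
    "critical": timedelta(hours=24),
    "high": timedelta(days=7),
    "medium": timedelta(days=30),
    "low": timedelta(days=90),
}

def severity_from_cvss(score: float) -> str:
    """Map a CVSS v3.x base score to its standard qualitative rating."""
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"

def disclosure_deadline(discovered: datetime, cvss_score: float) -> datetime:
    """Latest time by which a vendor would have to disclose under the tiers."""
    return discovered + DISCLOSURE_WINDOWS[severity_from_cvss(cvss_score)]

# Example: a high-severity flaw (CVSS 8.1) found Oct 1 would be due Oct 8.
print(disclosure_deadline(datetime(2023, 10, 1), 8.1))
```

The point of the sketch is that severity, not a single clock, would drive the deadline, which is exactly the trade-off Guenther describes: critical flaws still get a short window, while less severe issues buy vendors patching time.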
When, How, and How Much to Disclose
Responsible vulnerability disclosure has traditionally followed a thoughtful process that lets organizations and security researchers understand the risks and develop patches before exposing the vulnerability to potential threat actors, notes John A. Smith, CEO at Conversant Group.
“While the CRA may not require deep details about the vulnerability, the fact that one is now known to be present is enough to get threat actors probing, testing, and working to find an active exploit,” he cautions.
From Smith’s perspective, the vulnerability should also not be reported to any individual government or the EU; that requirement will reduce consumer confidence and damage commerce due to nation-state spying risks.
“Disclosure is important — absolutely. But we must weigh the pros and cons of when, how, and how much detail is provided during research and discovery to mitigate risk,” he says.
An alternative to this “arguably knee-jerk approach,” Smith says, is to require software companies to acknowledge reported vulnerabilities within a specified but expedited time frame, and then require them to report back on progress to the discovering entity regularly, ultimately providing a public fix within a maximum of 90 days.
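Smith’s alternative timeline can also be expressed as a simple schedule. Only the 90-day public-fix maximum comes from his proposal; the two-day acknowledgment window and the 14-day progress cadence below are hypothetical values chosen for illustration.

```python
from datetime import date, timedelta

# Only FIX_DEADLINE (90 days) comes from Smith's proposal; the other two
# values are illustrative assumptions standing in for "expedited" and
# "regular" intervals, which he does not specify.
ACK_WINDOW = timedelta(days=2)
PROGRESS_INTERVAL = timedelta(days=14)
FIX_DEADLINE = timedelta(days=90)

def disclosure_schedule(reported: date) -> dict:
    """Milestones a vendor would owe the discovering entity."""
    fix_due = reported + FIX_DEADLINE
    checkpoints = []
    t = reported + PROGRESS_INTERVAL
    while t < fix_due:  # regular progress reports until the fix is due
        checkpoints.append(t)
        t += PROGRESS_INTERVAL
    return {
        "acknowledge_by": reported + ACK_WINDOW,
        "progress_updates": checkpoints,
        "public_fix_by": fix_due,
    }

# Example: a report filed Oct 1, 2023 yields an acknowledgment deadline,
# a series of biweekly check-ins, and a public fix due 90 days later.
schedule = disclosure_schedule(date(2023, 10, 1))
```

Unlike the CRA’s 24-hour rule, nothing in this schedule becomes public until the fix ships, which is the core of Smith’s objection to early government notification.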
Guidelines on how to receive and disclose vulnerability information, as well as techniques and policy considerations for reporting, are already outlined in ISO/IEC 29147.
Impacts Beyond EU
The US has an opportunity to observe, learn, and subsequently develop well-informed cybersecurity policies, as well as proactively prepare for any potential ramifications if Europe moves forward too quickly, Guenther adds.
“For US companies, this development is of paramount importance,” she says. “Many American corporations operate on a global scale, and regulatory shifts in the EU could influence their global operations.”
The ripple effect of the EU’s regulatory decisions, as evidenced by the General Data Protection Regulation’s influence on the California Consumer Privacy Act and other US privacy laws, suggests that European decisions could presage similar regulatory considerations in the US, she points out.
“Any vulnerability disclosed in haste due to EU regulations doesn’t confine its risks to Europe,” Guenther cautions. “US systems employing the same software would also be exposed.”