
Created using Black Forest Labs Flux Fast 1.1
In a Nutshell:
Zero-day vulnerabilities (security flaws that are unknown or unpatched by software vendors) are fuelling some of the highest-stakes debates in cybersecurity today. Recent examples like the SharePoint server zero-day vulnerability highlight the tension: should bugs be immediately disclosed to the public to protect the many, or kept secret for research, corporate, or national security reasons? As zero-days grow increasingly valuable and dangerous, how we decide to handle them will shape the future of digital trust, safety, and warfare.
Who (is impacted by zero-day disclosure decisions)?
At one end of the spectrum are hackers (both ethical and criminal) who actively hunt for these flaws. On the other, the tech giants – Microsoft, Apple, Google, Oracle – that develop the software in which these vulnerabilities are embedded.
Sitting in the crossfire are operators of critical infrastructure: hospitals, utilities, energy- and transportation networks. For them, disclosure timing can spell the difference between business-as-usual and catastrophic shutdown or even loss of life.
Then there are government agencies, who seem to have developed a love/hate relationship with zero-days, stockpiling them for defence and intelligence purposes while simultaneously encouraging transparent and responsible disclosure for everyone else.
Finally, there are the businesses who rely on the aforementioned vendors, individuals whose data is processed by these businesses, researchers, and the ever-present crowd of legal advisors and cyber insurers trying to keep up.
What (is at stake)?
Current disclosure practices vary widely, ranging from coordinated disclosure (i.e., waiting for a vendor patch before making a vulnerability public) to government stockpiling of zero-days, to outright black-market sales. The question of how zero-days should be handled is not just about patching code or racking up bounty checks – it comes down to striking a delicate balance between the conflicting interests of all stakeholders.
Gone are the days when disclosure debates were confined to cybersecurity mailing lists. For example, the United States’ Vulnerabilities Equities Process (VEP) made waves by trying to arbitrate what the government must disclose vs. what it can quietly keep. Across the Atlantic, GDPR and other EU regulations require prompt data breach notifications to respective authorities. Meanwhile, China, Russia, and a handful of other states have been fingered for cultivating zero-day stockpiles.
Big Tech has its own spin: Google’s Project Zero and Microsoft’s bug bounty platforms set the tone for coordination between government authorities, vendors, and ethical hackers, but even here, timing, tone, and follow-through vary widely. Despite international standards promoting Coordinated Vulnerability Disclosure (CVD), policy consistency is still uncommon, resulting in a patchwork of overlapping, often contradictory rules and guidance.
Immediate disclosure can put defenders on the front foot (after all, how are security teams supposed to patch a vulnerability if they are not aware that it exists?). The problem, though, is that disclosure is also likely to tip off attackers, raising the risk of mass exploitation. Keep things under wraps, however, and trust can be eroded when flaws are eventually outed by journalists or adversaries. Add to this the growing financial incentives around vulnerabilities (bug bounties or exploit resale on the black market) and policy makers are left to navigate a truly complex ethical and legal environment.
As the zero-day market heats up and AI tools turbocharge both attackers and defenders, organisations are left scrambling to manage risk, protect reputations, and keep up with increasingly sophisticated adversaries.
When (do these decisions come into play)?
Timing is everything in the zero-day lifecycle, adding a further layer of complexity:
- Discovery stage: When a researcher or agency uncovers a zero-day, key questions arise – disclose, delay, sell, retain, or weaponize?
- Active exploitation: If an exploit is observed ‘in the wild’, thousands or potentially millions of users are at risk – speed becomes imperative, disclosure urgency spikes, and measured debate is rendered untenable.
- Legislative & standards debates: As attacks spike, governments worldwide are considering new rules for disclosure, especially for critical infrastructure and public agencies where the stakes are highest.
- Patch and mitigation cycles: As vendors rush to keep up with escalating discovery and exploitation, many organisations have opted to test the effectiveness of their disclosure policies on a regular (often monthly) basis.
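The "often monthly" cadence mentioned above tracks vendor patch cycles such as Microsoft's Patch Tuesday, which falls on the second Tuesday of each month. As a toy illustration (the function name is ours, not part of any vendor tooling), this minimal sketch computes the next such date:

```python
from datetime import date, timedelta

def next_patch_tuesday(today: date) -> date:
    """Return the next 'second Tuesday of the month' on or after `today` -
    the cadence Microsoft uses for its regular security updates."""
    year, month = today.year, today.month
    while True:
        first = date(year, month, 1)
        # Days from the 1st to the first Tuesday (weekday 1), then add a week.
        offset = (1 - first.weekday()) % 7
        second_tuesday = first + timedelta(days=offset + 7)
        if second_tuesday >= today:
            return second_tuesday
        month += 1
        if month > 12:
            month, year = 1, year + 1

print(next_patch_tuesday(date(2025, 1, 1)))  # 2025-01-14
```

A security team could use a helper like this to schedule testing of its own patch and disclosure processes against the vendor release rhythm.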
The Debate, Tensions, and Trade-offs
There are three major approaches to disclosure – responsible, full, and coordinated – each bringing its own pros and cons. Responsible disclosure is the industry darling: it buys vendors time before the big reveal, hopefully keeping attackers in the dark. Full disclosure, on the other hand, throws it all out there, prioritising transparency and forcing quick action, sometimes at the expense of user safety. Coordinated disclosure tries to strike a balance – managed handoffs between researchers, vendors, and authorities – though it can quickly devolve into a communications juggling act.
- Responsible Disclosure: Most security professionals advocate for notifying vendors privately and giving them time to fix the vulnerability before any public revelation. This aims to protect end users and minimise opportunities for malicious exploitation. However, it also results in the lowest levels of urgency among the three approaches to disclosure – if a vendor knows that the vulnerability is private and will not be disclosed publicly until it has been patched, there may be a temptation to prioritise other, ‘more pressing’ investments.
- Full Disclosure: Some experts argue that the public has a right to know immediately when they are at risk, further arguing that publicly exposing vulnerabilities as soon as they are discovered pressures vendors to act swiftly. Critics, however, warn that this could enable attackers to exploit the flaw, which they may otherwise not have been aware of, before a fix is issued.
- Coordinated Disclosure: As a hybrid approach, coordinated disclosure advocates push for proactive collaboration between vendors, Computer Emergency Response Teams (CERTs), and other stakeholders (e.g., researchers and/or government authorities) to manage risk and ensure that patches are ready prior to public awareness. The challenge here lies in timing and communication – as is often the case, the more parties are involved the longer coordination tends to take, resulting in extended user- and vendor-exposure.
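To make the timing trade-offs between the three approaches concrete, here is a toy sketch of how a researcher might track a publication deadline under each. The 90-day and 7-day figures mirror Google Project Zero's well-known defaults; the other windows, the function name, and the policy labels are illustrative assumptions, not an industry standard:

```python
from datetime import date, timedelta

# Hypothetical embargo windows per policy. The 90-day coordinated window
# mirrors Google Project Zero's default; the others are illustrative.
EMBARGO_DAYS = {"full": 0, "coordinated": 90, "responsible": 120}

def disclosure_deadline(reported: date, policy: str,
                        exploited_in_wild: bool = False) -> date:
    """Date on which vulnerability details may be published.

    Active in-the-wild exploitation collapses the window sharply
    (Project Zero, for example, drops to 7 days in that case).
    """
    days = 7 if exploited_in_wild else EMBARGO_DAYS[policy]
    return reported + timedelta(days=days)
```

For instance, a bug reported on 1 March 2025 under the coordinated policy would be publishable on 30 May 2025 – unless exploitation is spotted in the wild, which pulls the date forward to 8 March.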
National Security vs. Public Safety
Governments and intelligence agencies have an interest in keeping some zero-days undisclosed in the name of national security or cyber operations. But what if an adversary independently stumbles onto the same bug? The delayed fix (which may have come earlier if vendors were made aware of the vulnerability) could mean open season for criminals and spies alike, thereby endangering the public. The ethical tightrope is real, and public safety doesn’t always come out on top.
Ethics and the Black Market
Selling zero-days can be lucrative, especially on the darker corners of the internet. While bug bounty programs offer a legitimate (though admittedly less lucrative) route, not every white-hat gets the recognition or reward they feel they deserve. Sales to brokers and third parties with unclear end users dwell in a legal and ethical haze, pushing some toward shady deals despite best intentions.
Transparency and Accountability
The industry holds up documentation and clear communication with vendors as best practices for minimising harm and protecting end users. At the same time, however, the media is prone to putting individual recognition and publicity above the collective good and community safety. As a result, publicity-driven leaks are still all too common.
Why (SHOULD zero-days be disclosed promptly)?
- User Safety: The longer a vulnerability remains weaponizable, the higher the risk. Early disclosure enables patch development, shortening the window of exposure and protecting users.
- Trust and Transparency: Open communication to the public of risks and fixes builds trust in digital tools, vendors, and institutions while empowering users to protect themselves.
- Criminal Prevention: Early warning can stop abuse before it starts (provided that a patch can be deployed quickly), preventing criminals and antagonistic governments from quietly stockpiling or exploiting vulnerabilities.
- Industry Learning: Open disclosure leads to better, industry-wide defensive practices as vendors and security teams can learn from each other’s mistakes.
- Market Pressures: Researchers are likely to favour responsible or coordinated disclosure of the zero-days they uncover so long as they are adequately rewarded (e.g. via bug bounty programs) and feel protected from retaliation.
Why (SHOULDN’T all zero-days be disclosed immediately)?
- National Security/Intelligence: Intelligence communities use zero-days for strategic advantage, and stockpiling can be vital for law enforcement or intelligence gathering, including thwarting adversary threats.
- Risk of Premature Exploitation: Public disclosure without a patch could be akin to handing a loaded gun to bad actors, risking mass exploitation if attackers move faster than defenders.
- Complex Vendor Readiness: Not every bug is quick to fix, and rushed disclosure may do more harm than good. Some vulnerabilities require extensive collaboration to fix, and immediate disclosure may undermine effective mitigation.
- Unintended Impact: Not all vulnerabilities are equal – blanket disclosure policies may overwhelm vendors or sow unnecessary panic.
- Market Pressures: The high value of zero-days can drive researchers to black markets, especially if they feel unrewarded (e.g. absence of bug-bounties) or unprotected. Many jurisdictions (e.g. the U.S., Germany, U.K., Malta) don’t clearly differentiate between ethical and malicious hacking, resulting in potential prosecution even for well-intentioned researchers and leaving them with little incentive to pursue responsible or coordinated disclosure.
Conclusion
The zero-day disclosure debate remains as unresolved as it is essential. Balancing transparency, public protection, researcher motivation, and national priorities is a difficult, moving target. No single policy can reconcile all interests – but that doesn’t mean clarity, coordination, and coherence are impossible. What is needed now is honest collaboration and alignment between industry, governments, and the research community. In the meantime, organisations should brace for an era where zero-days are part of everyday security calculus and where disclosure practices remain in flux.
find out more:
- The ultimate guide to zero-day vulnerabilities: How threat intelligence prevents attacks in 2025
- Are governments using zero-day exploits?
- Zero-day vulnerabilities are powerful cyber weapons: Use them or patch them?
- Zero Day Initiative
Interested in what mandated Zero Day Disclosure would mean for your company?
Joshua Bucheli (cyberunity AG) looks forward to hearing from you!
stay tuned for more – keep an eye out for our next cyberbyte issue!