A quiet marketplace now sets the tempo of digital conflict by turning obscure software flaws into time-sensitive leverage, where a single zero-click iPhone chain can fetch millions precisely because it offers covert access before a patch closes the door and the advantage disappears. That window, not the bug itself, has become the core commodity, and it explains why polished exploit frameworks are treated like perishable goods whose value decays with every telemetry blip and every vendor hotfix. Against that clock, brokers court researchers with multimillion-dollar offers, stitch multiple weaknesses into turnkey packages, and deliver service-level assurances that resemble defense contracting more than hacker lore. The modern trade also lives in legal half-light: firms tout export-compliance checklists and allied-only sales until sanctions, lawsuits, or investigative reporting force a reckoning. Leakage then does the rest. Once tools are deployed, they tend to surface, splinter, and reappear inside criminal campaigns—from crypto-themed watering holes to mobile infostealers—collapsing the boundary between state secrets and street-level crimeware. Recent episodes, including sweeping iOS chains tied to Coruna and DarkSword, showed how fast elite tradecraft can move down-market when code leaks or targets change.
How the Market Operates
Dual-Track Supply and Demand
The zero-day economy today runs on a dual-track pipeline. In the primary channel, brokers act as system integrators: they acquire raw bugs, engineer them for reliability, and chain components such as sandbox escapes and kernel primitives into stable infection paths tested against real devices and carrier configurations. Deliverables often include command-and-control scaffolding, telemetry suppression, and update support scoped to OS point releases. Buyers—predominantly intelligence and military agencies—seek exclusivity windows measured not just in months, but in operational milestones: a campaign phase, a targeted investigation, or a geopolitical event. The value proposition is certainty under pressure. If an operator needs a one-tap or zero-click iOS compromise that survives a reboot and resists crash logging, the broker’s premium covers more than code; it covers predictable performance in the wild.
Running in parallel is the secondary world, where those same techniques—once fielded—bleed outward. Capture occurs through incident response, honeypots, or endpoint detections that snag payloads and partial chains. Reverse engineers then reconstruct primitives, swap out brittle components, and repurpose them against softer targets. This drift is not aberrant; it is structural. The more a chain is used, the more likely its components are to be observed on endpoints or in network artifacts, and the more likely variants are to surface in “lookalike” campaigns. In this secondary market, exclusivity collapses into imitation, and the business model shifts from surgical access to volume. Crimeware groups fold pieces into traffic distribution systems and malvertising kits. The original buyer’s investment, focused on stealth and one-off access, effectively subsidizes the broader ecosystem’s learning curve.
Players and the Smartphone Prize
Three roles keep this machine running. Independent or boutique-team researchers find vulnerabilities with fuzzers, symbolic execution, and careful audit of attack surfaces like WebKit, IPC mechanisms, and baseband parsers. Brokers act as capital allocators and product teams: they front cash, enforce nondisclosure with aggressive NDAs, and transform a proof-of-concept into a resilient exploit stack. Government and state-adjacent customers then buy the result as a capability, expecting support matrices that cover OS versions, hardware generations, and language locales. The smartphone sits at the center because it compresses intelligence value into a single telemetry node: location history, sensors, end-to-end encrypted chats (post-compromise plaintext), authentication tokens, and photos. A well-tuned zero-click chain against iOS or Android delivers access more discreetly than a human source and at far lower diplomatic risk.
Why phones, and why now? Patch cadence and platform hardening paradoxically increase prices while concentrating demand. Apple’s Lockdown Mode, pointer authentication codes (PAC), mitigations like BlastDoor, and rapid out-of-band patches narrowed attack corridors but did not close them. As a result, brokers offer higher premiums for chains that thread all those needles without crashing SpringBoard or triggering analytics. On the Android side, fragmentation complicates life for both defenders and buyers, leading to per-vendor exploits or tactics targeting OEM-specific layers like Samsung’s proprietary services or baseband stacks from Qualcomm and MediaTek. Across both platforms, “zero-click” remains the crown jewel because it removes the weakest operational link: human interaction. If nothing needs to be tapped or installed, the chain not only works more reliably in the wild, it also leaves fewer user-facing artifacts that might spark a support ticket—or a forensic report.
Law, Legitimacy, and Sanctions
Compliance Claims Versus Public Fallout
Legality in this space has long been a moving target framed as export compliance bolstered by customer vetting. That narrative came under strain when U.S. sanctions in February targeted Operation Zero, a Russian-linked broker operating under Matrica, alleging sales to non-NATO services and the weaponization of stolen, government-built exploits. Authorities cited a supply path that included an Australian freelancer accused of lifting eight zero-days from a U.S. defense contractor and selling them for cryptocurrency over multiple years. The case marked a shift from cleanup after the fact to visible deterrence meant to warn would-be suppliers and to stigmatize buyers. It also highlighted the fragility of claims that “compliance” equals legitimacy. The paperwork stack mattered less once the tools were tied to activity against U.S. interests.
History reinforced the point that reputational gravity governs enforcement. Sanctions had previously landed on NSO Group after Pegasus was linked to surveillance of journalists and activists, and on Predator’s makers, Intellexa and Cytrox, once reporting traced campaigns to political targets. The pattern remained: the red line is crossed when Western political fallout becomes intolerable. Between those flare-ups, many firms operated in a tolerated twilight—until headlines forced a new line-drawing exercise. In this environment, legal posture was a contingent status, not a shield. A broker might sell through licensed channels and court allied clients for years, but one provable misuse could retroactively redefine the entire portfolio as illicit in the eyes of regulators and payment rails. Insurance underwriters and banks then become de facto enforcers, cutting off services and accelerating a firm’s collapse.
The “White Zone” Is Conditional
Some players have managed to portray themselves as the industry’s “white zone,” citing jurisdictional discipline and traditional arms-trade controls. Crowdfense used public-facing acquisition programs with headline budgets, formalized export screening, and a customer roster it described as aligned with Five Eyes and partner law enforcement. The firm’s pitch mirrored defense procurement: negotiated exclusivity, quality assurance, post-sale support, and explicit no-resale clauses. Researchers valued not only the payouts but also payment certainty, legal cover, and the ability to stay anonymous outside a narrow NDA circle. In short, it looked like a legitimate supply chain, not a backroom deal. Yet that legitimacy hinged on outcomes, not inputs. If a campaign traced to one of its tools targeted domestic political figures or reporters in allied nations, the “white zone” label would evaporate overnight.
That conditional status shaped how brokers managed risk. Some embedded compliance triggers into contracts—auto-termination upon credible misuse, jurisdictional walls, post-deployment audits—while others steered away from buyers with opaque oversight. However, technical controls travel poorly once a capability ships. Code obfuscation and watermarking can deter casual resale, but they cannot prevent a buyer from running the tool in ways that create collateral damage or from losing it to a capture operation. Firms therefore competed on narrative as much as on engineering: who you sell to, how you vet, and how you respond when a scandal erupts. Meanwhile, watchdog groups, platform security teams, and investigative journalists acted as informal regulators. Their reports often served as the first draft of an indictment, setting the stage for policy action that reclassified a vendor’s conduct after the fact.
Economics and Incentives
Price Escalation and Shrinking Shelf Life
Money follows time. Because Apple and Google now ship rapid security responses and telemetry improvements that shorten the discovery-to-patch arc, brokers feel pressure to outbid rivals for chains that will almost certainly decay faster than in prior years. Zerodium once defined the market with price lists that topped out around $2.5 million for elite iOS chains, a figure that now looks conservative next to offers from entrants willing to post up to $7 million. Those numbers reflect more than headline inflation. They price in the cost of failure—chains that stop working mid-campaign when a point release lands—and the engineering overhead of keeping capabilities viable across modem firmwares, device families, and regional builds. Reliability has turned into a managed service, with SLAs, bugfix windows, and refresh clauses that nudge prices higher still.
The shelf-life squeeze also changes how exclusivity is sold. Buyers demand narrow, time-bound exclusives coupled with refresh options when vendors release new mitigations. Brokers sometimes structure deals as subscriptions: pay for a rolling capability against a platform slice—say, iOS 17.0–17.2.1—and receive maintenance and retooling if a WebKit fix or a PAC-hardening change breaks the chain. That reduces buyer risk but shifts more uncertainty to the supplier, who must either hold reserves to acquire fresh bugs or run an internal research track to generate replacements. Competition reinforces the cycle. A newcomer will occasionally overpay to win flagship chains and signal market strength, forcing incumbents to raise caps or risk researcher defection. As those bids climb, the distance between opaque-market rates and corporate bug bounties grows into a canyon.
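The decay-and-refresh economics described above can be made concrete with a toy valuation model. Every number below (base price, half-life, salvage fraction) is a hypothetical assumption chosen for illustration, not observed market data, and the function is a sketch rather than any broker's actual pricing method.

```python
# Toy model: a chain's resale value decays exponentially with exposure
# time and collapses to a salvage fraction once a vendor patch lands.
# All parameters are hypothetical, for illustration only.

def chain_value(base_price, days_elapsed, half_life_days,
                patched=False, salvage_fraction=0.05):
    """Residual value of an exploit chain after `days_elapsed` days."""
    value = base_price * 0.5 ** (days_elapsed / half_life_days)
    if patched:
        # Post-patch, only derivative and research value remains.
        value *= salvage_fraction
    return value

# A hypothetical $7M zero-click chain with a 90-day half-life:
month_in = chain_value(7_000_000, 30, 90)
post_patch = chain_value(7_000_000, 30, 90, patched=True)
print(round(month_in), round(post_patch))
```

The same shape explains why exclusivity gets sold as a subscription: a buyer paying for a rolling platform slice is effectively buying insurance against the `patched` branch, which shifts the refresh risk back onto the supplier.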
Why Responsible Disclosure Struggles to Compete
Against that backdrop, responsible disclosure programs are a hard sell. Google paid about $17 million across all products in the last cycle—real money, but a fraction of what a single broker might allocate to acquire two or three top-tier mobile chains. Even record-setting submissions under Android’s VRP, such as the $605,000 award for a complex chain in 2022, lag behind current opaque-market valuations for functionally similar access. The Zero Day Initiative offers a middle path, buying bugs and then notifying vendors on fixed timelines while rewarding researchers through a points system, Pwn2Own purses that can reach seven figures in rare aggregate scenarios, and community prestige. For some, that combination of legal safety and public credit offsets the lost upside. For many others, especially independents outside well-funded labs, the math favors silence and sale.
Incentives shape outcomes. When a researcher finds a kernel memory corruption primitive with a reliable path to code execution on the latest iPhone, the choice is no longer between a few thousand dollars and a hackathon trophy. It is a choice between life-changing income and a professional gold star. That gulf invites gray behavior even within disclosure: late reports that raise eyebrows, vague advisories that minimize details to preserve resale value, or selective sharing within private circles. It also nudges talent away from corporate security roles and toward freelance hunting funded by opaque buyers. The result is a market that rewards secrecy with cash and punishes transparency with relative austerity. This is not a moral judgment; it is an observation about price signals. Unless public programs narrow the gap—with tiered rewards matching operational value or with grants that stabilize researcher income—responsible disclosure will continue to leave high-end supply on the table.
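The price signal in question reduces to back-of-envelope arithmetic. The two headline figures come from this section; the 50% haircut applied to the opaque side for legal and payment risk is a purely hypothetical assumption.

```python
# Expected-value comparison of disclosure vs. an opaque-market sale,
# reusing figures cited above; the risk discount is hypothetical.

vrp_record = 605_000        # record Android VRP award (2022)
broker_bid = 7_000_000      # top-end posted offer for an elite mobile chain

# Apply a steep haircut to the opaque side for legal exposure,
# payment risk, and the chance the chain dies before delivery.
risk_discount = 0.5         # hypothetical assumption
expected_opaque = broker_bid * risk_discount

gap = expected_opaque / vrp_record
print(f"risk-adjusted, the opaque sale still pays {gap:.1f}x the bounty")
```

Even under an aggressive discount, the multiple stays well above what a tiered bounty increase could plausibly close, which is the structural point the section makes about price signals.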
Leakage, Crimeware, and Systemic Risk
Coruna and DarkSword: From State-Grade to Street-Level
Leakage is not a bug of the system; it is the system’s expected endpoint. Coruna made the case bluntly this year when Google’s team documented a framework with 23 exploits, including five complete iOS zero-day chains spanning versions from 13.0 to 17.2.1. Lineage analysis suggested ties to Operation Triangulation-era techniques and hinted at source code that passed through a defense contractor before being laundered via brokers. Coruna first surfaced in the hands of UNC6353 (Star Blizzard) for targeted espionage, but it did not stay there. Within weeks, China-based UNC6691 repurposed delivery into finance- and crypto-themed watering holes, where victims’ Safari sessions silently ran exploit chains that dropped the PLASMAGRID stealer. The payload combed device data, session tokens, keychain items, and browser-stored wallet extensions—evidence of a rapid pivot from national-security targeting to monetizable theft.
DarkSword followed a similar arc with higher stakes. Initially deployed as a zero-click iPhone framework that combined a remote code execution entry point with a sandbox escape and a kernel-level privilege escalation, it delivered full device access without any user interaction. Attribution pointed first to UNC6353 and then to copycats who integrated custom infostealers branded GHOSTBLADE, GHOSTKNIFE, and GHOSTSABER, each tuned to rip financial artifacts from apps and mobile browsers. The inflection point arrived when DarkSword’s codebase leaked on GitHub after its developer reportedly faced financial distress. Overnight, a capability that once required deep pockets and political connections became available to anyone able to compile and host the infrastructure. Enterprises scrambled to harden MDM policies, mobile vendors accelerated mitigations, and incident responders braced for a wave of derivative campaigns that swapped in new lure pages and rebranded payloads.
Bugs, Backdoors, and Open Systems
Debate over intent intensified as these cases piled up. Certain signals tend to raise suspicion: nonstandard cryptographic constructions that invite niche attacks; unusually complex logic in security-critical paths where simpler designs would have sufficed; or opaque changes introduced through dependencies in a supply chain that make review harder. Yet intent is notoriously difficult to prove at scale. Large codebases inevitably harbor mistakes, and sophisticated backdoors are designed to look like them. Vendors can patch, issue CVEs, and credibly claim error, while detractors point to patterns that seem too convenient for coincidence. The ambiguity is functional. It grants plausible deniability to those who would seed an intentional weakness and forces defenders to evaluate risk on behavior and impact, not motive. This is where openness is often treated as a hedge: more eyes should mean fewer bugs.
Even that hedge has limits. In April, researcher Loïc Morel highlighted an off-by-one behavior in Bitcoin’s difficulty adjustment: when recalibrating every 2016 blocks, the algorithm omits the prior window’s final timestamp, effectively assessing a 2015-block span. Under extreme conditions—say, a dominant miner capable of manipulating inter-block timing—this quirk could enable a time-warp dynamic, slashing difficulty and accelerating block production until network watchdogs react. The episode reminded observers that open review and battle testing do not guarantee immunity to subtle edge cases with systemic implications. It also carried a lesson for buyers and builders of exploit capabilities: complexity is both the attacker’s canvas and the defender’s blind spot. Stronger controls—reproducible builds, minimal trusted code in critical paths, and adversarial review that treats “unlikely” as “eventually”—remain the practical response, even if they raise costs and slow shipping timelines. In this market, those frictions are cheap compared with the price of a leaked chain or a protocol-level shock to financial infrastructure.
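The off-by-one can be sketched in a few lines. This is an illustrative toy, not consensus code from Bitcoin Core: the ten-minute spacing, 2016-block window, and fourfold clamp match the protocol’s published parameters, but the function name and the warp scenario are hypothetical.

```python
# Toy sketch of Bitcoin's difficulty retarget and its off-by-one.
# Illustrative only -- not consensus code.

TARGET_SPACING = 600      # ten minutes per block, in seconds
WINDOW = 2016             # blocks per difficulty window

def retarget_factor(timestamps):
    """Multiplier applied to the target after one window.

    `timestamps` holds the window's block timestamps, oldest first.
    Elapsed time is taken between the first and last block of the SAME
    window, which spans only 2015 inter-block gaps; the prior window's
    final timestamp is never consulted (the off-by-one)."""
    assert len(timestamps) == WINDOW
    actual = timestamps[-1] - timestamps[0]      # 2015 intervals measured
    expected = TARGET_SPACING * WINDOW           # 2016 intervals expected
    # The protocol clamps the adjustment to a factor of 4 either way.
    return max(0.25, min(4.0, actual / expected))

# Perfectly spaced blocks: the factor is 2015/2016, not exactly 1.0.
ideal = [i * TARGET_SPACING for i in range(WINDOW)]
print(retarget_factor(ideal))                    # ~0.9995

# Time-warp flavor: because windows do not overlap, a dominant miner can
# push a window's last timestamp far forward; the jump back at the next
# window's first block is never measured, so the target rises (difficulty
# falls) up to the clamp.
warped = list(ideal)
warped[-1] += 8 * 7 * 24 * 3600                  # eight weeks ahead
print(retarget_factor(warped))                   # clamps at 4.0
```

The second print is the “extreme conditions” case from the paragraph above: repeated across windows, each retarget can cut difficulty toward the clamp while honest wall-clock time barely advances.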