Not much is known about the security researcher behind the moniker “Chaotic Eclipse” who published a working exploit called “BlueHammer”. He did not provide any explanation along with the exploit he posted; instead, he wrote: “You geniuses can figure it out yourselves.” In his only post on the subject, he also gives a sarcastic, mocking shout-out to the leadership of the MSRC (Microsoft Security Response Center), and specifically to its head Tom Gallagher, “for making this possible”. The exploit itself, whose name is presumably a reference to the infamous “EternalBlue” exploit, appears to work “not 100% reliably, but well enough,” as Will Dormann, himself a well-known and sought-after security researcher, found out and posted on Mastodon.
But why were the vulnerability and the corresponding exploit made public in the first place? I have a few assumptions and educated guesses about that.
To understand them, we first need to take a look at how vulnerability reports are processed.
Big wheels keep on turning...slowly
Anyone who reports a security vulnerability to Microsoft, for example, cannot automatically expect an immediate patch or that a disclosure will follow within a few days. After submission and review, it can take between 40 and 200 days for a patch to be released. In other words, it can take up to nearly seven months after the initial report for a vulnerability to actually be fixed.
Numerous friction points arise along the path to a patch: from the acceptance of the report (or lack thereof), to the assessment of its criticality, to the time it takes until a fix is available. Handling all of this requires a great deal of tact, and even on questions of detail, opinions sometimes differ significantly. For example, a particular function within a program may be theoretically vulnerable, but the developer may have already anticipated this and blocked the attack elsewhere (for example, through separate checks or input validation). In that case, no exploitable vulnerability exists. The final verdict in such a case: “Working as intended, won’t fix.”
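A minimal sketch may make the “working as intended” scenario more concrete. All names and logic here are invented for illustration; this is not code from any real product or report:

```python
# Hypothetical example: a function that looks vulnerable in isolation,
# but whose only caller validates input first.

def render_banner(width: int) -> str:
    # Viewed on its own, this looks like a denial-of-service primitive:
    # a huge 'width' would allocate an arbitrarily large string.
    return "=" * width

MAX_WIDTH = 120  # assumed limit for this sketch

def handle_request(raw_width: str) -> str:
    # The attack is blocked here, upstream of the "vulnerable" function,
    # so the flaw a scanner might flag is not reachable in practice.
    width = int(raw_width)
    if not 0 <= width <= MAX_WIDTH:
        raise ValueError("width out of range")
    return render_banner(width)
```

A report that only analyzes `render_banner` in isolation would look plausible, yet the vendor, seeing the validation in `handle_request`, could reasonably close it as “won’t fix”.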
Criticality is a frequent point of contention: while the reporter considers a vulnerability highly critical, the vendor does not necessarily share this assessment, since certain prerequisites must be met for an attack to succeed. These prerequisites are not automatically fulfilled and may require additional exploits or elevated user privileges.
Filtering and sorting
With the rise of AI technologies and their suitability as tools for analyzing program code, the number of vulnerability reports has increased dramatically for many vendors. While there may indeed be useful and legitimate vulnerabilities among the reports, the main problem is the sheer volume of submissions.
The incoming reports are often well formatted and sound convincing at first glance, but upon closer inspection it turns out that no exploitable vulnerability exists. This verification, however, must be carried out by a human, either unaided or with the help of a properly configured AI-supported analysis environment whose results still need to be interpreted correctly. That takes time and resources, even with AI assistance. On its own, AI cannot provide reliable assessments, as my colleague Karsten Hahn explained in his blog article.
So we are dealing with a flood of reports that are easy to create but require significant effort to review and categorize.

Victims of restructuring
Letting vulnerability reports go through a screening process first makes perfect sense. From the Mastodon post mentioned above, it can be inferred that Microsoft in particular has recently introduced an additional “filter layer” intended to pre-sort incoming reports and weed out insubstantial submissions. However, Microsoft cannot assign highly specialized developers to this task; positions have even been cut. According to Dormann, collaboration with the MSRC has therefore become significantly more difficult than in the past.
It can therefore be assumed that the first person to see a vulnerability report is not an expert in malware.
At this point, the aforementioned “video proof” may come into play. Apparently, this has only recently been required when submitting a report.
From a technical standpoint, a video is not strictly necessary to demonstrate the effectiveness of an exploit, but it does make it more tangible—even for non-experts.
“Pics or it didn’t happen”
This could be the first reason for requiring video proof: it allows less experienced reviewers to see whether a report actually contains potentially valuable information. If a video is included, a submitted report gains perceived quality from Microsoft’s (and other similarly acting vendors’) perspective—even though it does not add real technical value.
Of course, a video can also be faked or manipulated. However, that would not be in the interest of a security researcher who wants to report a legitimate, real vulnerability. For such a person, creating a video is essentially an unnecessary step involving additional effort (and potential frustration due to the extra time required). It is understandable that someone becomes frustrated when a legitimate report is rejected solely due to the lack of a video—especially when they are providing their findings pro bono and are not seeking a bug bounty.
On the other hand, an AI agent tasked with finding and automatically reporting as many vulnerabilities as possible would be overwhelmed by this requirement. And even if generating a (fake) screen recording were possible, the costs would be disproportionately high—especially given the risk that the report might still be rejected at a later stage.
Frustration and bias
From the perspective of someone who wants to prevent harm by reporting a vulnerability—and who may not even be interested in a bug bounty—the requirement for video evidence forces them to jump through an additional, unnecessary hoop. This creates maximum frustration for people like the discoverer of BlueHammer, to the point where they give up and simply publish their findings immediately on their own.
Was this an impulsive act driven by frustration, meant to force a reaction from Microsoft? Maybe. Maybe not. We don't really know at this point, as the available information is scarce.
In any case, the “BlueHammer” case is a signal to vendors to closely examine their reporting and review processes to avoid throwing the baby out with the bathwater.