Bruce Schneier is a genuine cybersecurity legend. The author of Applied Cryptography, coiner of the term “security theater,” and long-time blogger wrote a short piece about USB sticks back in 2007. I was reminded of that post while walking the aisles at this year’s RSA Conference.
Schneier’s “A Security Market for Lemons” shows, with references to Nobel Prize-winning research, that in cybersecurity it’s not the best product that wins. The nature of our industry is such that weak solutions routinely get purchased over stronger alternatives. This unfortunate dynamic dominates the SIEM space, where buyers rarely measure and compare threat detection performance, even for one of the largest items in their budget. That needs to change for the sake of the industry and everyone’s security posture.
A Security Market for Lemons, Explained
In his post on the “market for lemons” phenomenon, Schneier asks, “Why are there so many bad security products out there?” That question is as relevant today as ever. The recent breaches at UnitedHealth and First American Financial Corp, and the disruption they caused, must have involved several layers of failed protection. Why are industry-leading solutions failing our defenders?
Schneier explains that the issue is not just that cybersecurity is a challenging problem. The cybersecurity market suffers from an acute case of “information asymmetry.” In cryptography, threat detection, and much of the cyber industry, the vendor knows much about its product’s effectiveness that the buyer doesn’t. The analogy is to a used car salesman who knows he’s selling a “lemon” model where the transmission tends to fall out after a few hundred miles on the highway. He knows the defect, but the poor soul who just stepped onto the lot does not.
This imbalance is more dangerous than it seems. The American economist George Akerlof showed in a 1970 paper what happens when buyers can’t spot the lemons, work that later earned him a Nobel Prize. Since good products are inherently more costly than lousy ones, they’re priced out by the seemingly identical lemons. A vicious cycle of “adverse selection” emerges; eventually, the whole market is lemons.
The market for lemons theory comes from one of the most cited and downloaded economics papers of all time, and it applies to many areas of our lives. But cybersecurity effectiveness is uniquely hard to judge. There are no trusted performance benchmarks for comparing competing products, and the test scores in advertisements look quite fishy when compared as a group. If a high school class averaged 99% on an exam, that test was probably left in the copy machine the day before.
It speaks volumes that the SIEM space hasn’t seen a serious attempt at comparative effectiveness analysis. In his article, Bruce Schneier describes the effect that the market for lemons dynamic has on the competitive landscape:
“In the late 1980s and early 1990s, there were more than a hundred competing firewall products. The few that ‘won’ weren’t the most secure firewalls; they were the ones that were easy to set up, easy to use and didn’t annoy users too much.”
When buyers can’t accurately assess the value or effectiveness of a product in advance, the market opens up for lemons. Other conditions that drive lemon markets include an incentive for sellers to pass off a low-quality product as a higher-quality one, limitations on data sharing between market participants, buyer pessimism, and lack of regulation and warranties. Each of these conditions is present in cybersecurity, especially in the SIEM market.
Spotting the Lemons
I often encounter “adverse selection” when speaking with security leaders evaluating SIEM solutions. Their selection criteria revolve around feature availability (“Does the product do UBA? Is there a management API?”) and content quantity (“How many integrations? How many rules?”). Both appeal to decision-makers because they’re easy to check before purchasing. Unfortunately, the link between these selection criteria and the product’s effectiveness at threat detection is weak at best.
The perils of buying detections by the pound were laid out in my post on The Detection Responsibility Handshake. Regarding feature availability, missing features can be a dealbreaker if, for example, your plan is to detect threats across multiple log repositories in two or more clouds (it’s pretty cool how Anvilogic does that). However, security leaders should distinguish between feature availability and product effectiveness. An outcome-based approach means testing performance across the people, processes, and technology involved in the SIEM deployment, and threat detection performance in particular.
Take, for example, this recent exchange between an irate financial services SOC director and one of my colleagues. The SOC had engaged a third-party red team to drill an attack on the cloud environment. As usually happens, the red team achieved their objectives and captured the metaphorical flag. The SOC director was mad because his team had received no alerts during the exercise, not even a warning that might have given them a chance to fight back. What the hexadecimal happened?!
Detection failure can result from a range of causes, which can be identified in a breach postmortem or through testing during the SIEM selection process. Obviously, it’s better to know about these issues proactively. They include the following (a short sketch of how a postmortem might record these checks follows the list):
Visibility failure: Was something (e.g., license or capacity limitation) preventing the relevant log data from being collected?
Rule failure: Was the logic needed to surface an event of interest missing or disabled?
Bandwidth failure: Was the alert buried in a queue where the SOC could not keep up?
Triage failure: Was the alert too vague or lacking the context needed for proper escalation?
Retention failure: Was the data needed for detection and response no longer accessible when relevant intelligence was received or the investigation began?
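Here is a minimal sketch, in Python, of how a team might record these checks during a postmortem and surface the candidate root causes. The category names mirror the list above; everything else (the field names, the sample questions and findings) is hypothetical and not tied to any particular SIEM or vendor tooling.

```python
from dataclasses import dataclass
from enum import Enum


class FailureMode(Enum):
    """The failure categories from the list above."""
    VISIBILITY = "visibility"   # relevant log data never collected
    RULE = "rule"               # detection logic missing or disabled
    BANDWIDTH = "bandwidth"     # alert buried in a queue the SOC can't keep up with
    TRIAGE = "triage"           # alert too vague, lacking context, or suppressed
    RETENTION = "retention"     # data aged out before the investigation began


@dataclass
class FailureCheck:
    """One postmortem question and its answer for a given exercise."""
    mode: FailureMode
    question: str
    passed: bool   # True = no failure found for this mode
    notes: str = ""


def root_causes(checks: list[FailureCheck]) -> list[FailureCheck]:
    """Return the checks that failed, i.e. the candidate root causes."""
    return [c for c in checks if not c.passed]


if __name__ == "__main__":
    # Hypothetical postmortem entries for a red team exercise.
    postmortem = [
        FailureCheck(FailureMode.VISIBILITY, "Were the relevant logs collected?", True),
        FailureCheck(FailureMode.RULE, "Did a rule fire on the observed TTPs?", True),
        FailureCheck(FailureMode.TRIAGE, "Did an actionable alert reach an analyst?", False,
                     "Notifications suppressed upstream."),
    ]
    for cause in root_causes(postmortem):
        print(f"{cause.mode.value} failure: {cause.notes or cause.question}")
```

The value of writing the checklist down, even this simply, is that every exercise produces the same structured answers and the gaps become comparable over time.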
When the SOC director and my colleague reviewed what happened during the red team exercise, the root cause surprised them. The log events capturing the attacker’s tracks had been received. These events triggered rules designed to flag the TTPs used in the exercise, and alerts were issued. So why didn’t the SOC know about the attack?
It turned out that the alert notifications were dropped at the SOC’s outsourced service provider. The MSSP didn’t pass the alerts along because they had applied overly broad allowlists that automatically suppressed them. In this case, it was a triage failure, but any of the causes described above could have been responsible.
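To make that failure mode concrete, here is a minimal sketch of how an overly broad allowlist at the notification layer can silently swallow a real alert. The rule format, field names, and sample alerts are all hypothetical; the point is only that one coarse suppression entry can drop the alert that mattered.

```python
# Hypothetical alert-notification filter with an overly broad allowlist entry.
ALLOWLIST = [
    # Intended to mute noisy service-account activity, but it matches far too much:
    {"field": "source", "contains": "cloud-api"},
]


def is_suppressed(alert: dict) -> bool:
    """Return True if any allowlist entry matches the alert."""
    return any(entry["contains"] in str(alert.get(entry["field"], ""))
               for entry in ALLOWLIST)


def forward_alerts(alerts: list[dict]) -> list[dict]:
    """Pass along only the alerts that survive suppression."""
    return [a for a in alerts if not is_suppressed(a)]


if __name__ == "__main__":
    alerts = [
        {"rule": "Suspicious role assumption", "source": "cloud-api-prod"},
        {"rule": "Impossible travel login", "source": "idp"},
    ]
    # The first alert, tied to the red team's activity, never reaches the SOC.
    print(forward_alerts(alerts))
```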
What had started as a frustrating experience for the SOC director turned into an opportunity to examine the performance of their SIEM and its dependencies. This level of testing is crucial for measuring performance and identifying shortcomings.
Let’s Talk About SIEM Performance
The outcome of a drill like the one described above should include a performance score that quantifies the SOC's success at spotting attacks. Performance metrics will depend on the solution, its coverage for the environment, its fit for the team, and many other factors specific to the vendor and the organization doing the testing. That’s why we may never see standard scores or reliable third-party comparisons in the SIEM space. But that shouldn’t discourage you from testing SIEM performance upfront and repeatedly after deployment.
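As a starting point, here is a minimal sketch of how a drill’s results could be turned into two simple numbers: a detection rate across the emulated techniques and a median time to alert. The data structure, field names, and sample technique IDs are illustrative assumptions, not a standard scoring scheme.

```python
from dataclasses import dataclass
from statistics import median
from typing import Optional


@dataclass
class TechniqueResult:
    """Outcome for one emulated technique in a drill. Field names are illustrative."""
    technique: str                     # e.g. an ATT&CK technique ID
    alerted: bool                      # did an actionable alert reach the SOC?
    minutes_to_alert: Optional[float]  # None if no alert was received


def detection_rate(results: list[TechniqueResult]) -> float:
    """Fraction of emulated techniques that produced an actionable alert."""
    return sum(r.alerted for r in results) / len(results)


def median_time_to_alert(results: list[TechniqueResult]) -> Optional[float]:
    """Median minutes from execution to alert, over the techniques that alerted."""
    times = [r.minutes_to_alert for r in results
             if r.alerted and r.minutes_to_alert is not None]
    return median(times) if times else None


if __name__ == "__main__":
    # Hypothetical results from a single drill.
    drill = [
        TechniqueResult("T1078", alerted=True, minutes_to_alert=12.0),
        TechniqueResult("T1530", alerted=False, minutes_to_alert=None),
        TechniqueResult("T1098", alerted=True, minutes_to_alert=45.0),
    ]
    print(f"Detection rate: {detection_rate(drill):.0%}")
    print(f"Median time to alert: {median_time_to_alert(drill)} minutes")
```

Tracking even these two numbers across repeated drills gives you a trend line you can hold a vendor, and your own program, accountable to.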
Armed with an outcome-based evaluation, you shrink the vendor’s information advantage. You can start spotting the lemons through red team exercises, adversary emulation tooling, or tabletop simulations. You might find that you are relying on a lemon today or that the next-next-gen solution with the great conference swag is actually a lemon. A committed movement towards SIEM testing will level the playing field and encourage the development of products and features that make a difference in the fight against threat actors.