The Inside Scoop on Insider Risk
Avoid Insider Risk Project Failure with Mencken’s Law of Threat Detection
Ask an experienced CISO which initiatives have had the least success in making it from plan to production, and you’re likely to hear about Insider Risk or Insider Threat Detection. There are good reasons to invest in stopping disgruntled or dishonest employees from misusing their access… it’s just hard to do. Meanwhile, the allure of machine learning for solving the problem drives organizations down the path to project failure time and again. In this post, I’ll introduce Mencken’s Law of Threat Detection to explain why this keeps happening and how you can swing the odds in your favor.
Insider Threats Remain a Top Challenge
H. L. Mencken was a famously pessimistic American journalist and critic. A hundred years ago, he wrote classic gems like "No one ever went broke underestimating the intelligence of the American public" and "Bachelors know more about women than married men; if they didn’t they’d be married too." His relationship pessimism might be misguided, but it’s right on the money for insider threats.
How serious is the problem? In 2023, nearly 3 out of 4 organizations reported feeling moderately to extremely vulnerable to insider threats. Rightfully so, says the report, with over 30% of data breaches attributed to insider attacks. Another study found that these attacks have increased by around 50% over the past two years. And the financial impact of an insider-related breach averages $17 million in North America.
A well-documented insider incident hit Tesla in late 2018, when an employee, Guangzhi Cao, stole thousands of sensitive files before joining a Chinese competitor. The case was settled in April 2021, with Cao agreeing to pay Tesla an undisclosed amount and to cooperate in the investigation. Notably, the security investigation began only after Cao resigned from his role; it was not triggered by any activity related to the data theft.
Insider threats pose a unique challenge to security teams because they skip many of the initial steps in the cyber kill chain. They know the environment, they’re provided some level of authorized access, and they don’t need to get directions from an external Command & Control (C2). They can go straight to taking Actions on Objectives.

Threat detection depends on breaking the cyber kill chain before the threat actor can complete their objectives. Detection engineers consider ways to spot recon activity, malware execution, and other signs of an attack. A threat that can jump to the last stage in the chain is at a distinct advantage.
As a result, many security teams become receptive to alternative approaches. User Behavior Analytics (UBA), involving machine learning and anomaly detection, is closely associated with combatting insider threats. The idea is that the algorithms will learn what “good” looks like, and anything else will be flagged as likely malicious. For large enterprises, this becomes the basis of a data science project for the insider risk program. Smaller SOCs buy and deploy off-the-shelf solutions for the same goal.
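To make the idea concrete, here is a minimal sketch of UBA-style per-user baselining with a simple z-score outlier test. The event source, field names, and threshold are illustrative assumptions for this post, not how any particular product implements it:

```python
# Toy UBA-style baseline: learn each user's "normal" daily activity volume,
# then flag days that deviate sharply from that user's own history.
# Field names and the z-score threshold are illustrative assumptions.
from statistics import mean, stdev

def build_baselines(history):
    """history: {user: [daily event counts]} -> {user: (mean, stdev)}"""
    return {
        user: (mean(counts), stdev(counts) if len(counts) > 1 else 0.0)
        for user, counts in history.items()
    }

def is_anomalous(baselines, user, todays_count, z_threshold=3.0):
    """True when today's count sits far outside the user's own baseline."""
    mu, sigma = baselines.get(user, (0.0, 0.0))
    if sigma == 0.0:  # unseen user or flat history: treat any growth as unusual
        return todays_count > mu
    return (todays_count - mu) / sigma > z_threshold

history = {"alice": [12, 8, 15, 10, 9], "bob": [200, 180, 220, 210, 190]}
baselines = build_baselines(history)
print(is_anomalous(baselines, "alice", 400))  # True: far above alice's baseline
print(is_anomalous(baselines, "bob", 230))    # False: within bob's normal range
```

Even in this toy form, the fragility is visible: a baseline must be built and kept current for every user, and the threshold determines whether analysts drown in alerts or miss the threat entirely.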
Mencken’s Law of Threat Detection
From large organizations with teams of data scientists to smaller ones buying commercial UBA products, everyone seems to be struggling to pull off their insider threat initiatives. I’ve spoken with security leaders over the years at enterprises ranging from large banks to travel tech. Their insider risk initiatives always seem to be in science project mode, with production launch just a few quarters beyond the horizon. Commercial UBA products have disappointed all but the blissfully unaware. The result is that most CISOs have not achieved an effective insider threat detection program.
Could a pessimistic approach hold the key to more successful outcomes? I propose we turn to that old downer H. L. Mencken. Considering all the failed data science projects and UBA deployments for insider risk, he might inspire a new guiding principle for addressing this challenge.
Mencken’s Law of Threat Detection:
Never rely on machine learning to detect a threat you haven’t modeled.
At first glance, this is a depressing rule. Why should machine learning success depend on the SOC’s threat modeling? I previously wrote about the challenge of implementing cybersecurity anomaly detection and of maintaining a consistent relationship between what is merely unusual and what is actually a threat. Lacework, for example, attempted to eliminate detection rules with machine learning but was itself eliminated as a private entity when Fortinet eventually acquired it at a fraction of the $8.3 billion valuation it once held.
The same challenges apply to insider threat detection, where there are no training datasets for insider threat activity, and “unusual” things happen every day across an even wider range of systems and exfiltration paths.
While machine learning may be able to detect some cases of insider TTPs, it is wildly optimistic to assume that it would detect something we haven’t explicitly set out to catch. There are too many potential points of failure, from access to the relevant log data, to successful baselining, to tuning to a degree of accuracy where a SOC analyst would review the resulting alert. How would a pessimist like Mencken go about detecting insider threats?
Pessimistic or Realistic?
Mencken’s Law points us to a more limited but potentially successful path to insider threat detection. Something very interesting happens when we start with threat modeling early in the insider risk initiative. For a primer on threat modeling and how it supports detection engineering in general, check out The Detection Responsibility Handshake.
Insider threat modeling is critical because it often reveals a surprising path to detection. It turns out that many insider threat techniques can be effectively detected without machine learning. And when a detection avoids the need to train and maintain an ML baseline for each employee, the required effort drops dramatically. The odds of overall project success improve accordingly.
For example, take the previously mentioned scenario from Tesla, in which an insider copied 300,000 files to his personal iCloud account. In evaluating this TTP, we can analyze historical file transfer records and chart the occurrence of high-volume transfers. If a transfer of over 1,000 files is rare in the environment, then that can serve as a fixed threshold in a static detection. There would be little additional value in benchmarking individual employees and alerting when users transfer an “unusually high” number of files. Both approaches would have spotted the insider threat that evaded detection at Tesla.
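As a rough sketch of that static detection, assume file-transfer logs have already been normalized into records with a user, destination, and file count; the field names and the 1,000-file threshold below are assumptions drawn from the threat model, not a vendor schema:

```python
# Threat-informed static detection: alert on any single high-volume transfer,
# with no per-employee baseline required. Field names and the threshold are
# illustrative assumptions.
from collections import Counter

FILE_COUNT_THRESHOLD = 1_000

def transfer_volume_histogram(transfers):
    """Chart how common high-volume transfers are before fixing a threshold."""
    buckets = Counter()
    for t in transfers:
        buckets[10 ** len(str(t["file_count"]))] += 1  # bucket by order of magnitude
    return dict(sorted(buckets.items()))

def detect_bulk_transfers(transfers, threshold=FILE_COUNT_THRESHOLD):
    """Static rule: any transfer above the threshold raises an alert."""
    return [t for t in transfers if t["file_count"] > threshold]

transfers = [
    {"user": "jdoe", "destination": "personal-icloud", "file_count": 300_000},
    {"user": "asmith", "destination": "corp-sharepoint", "file_count": 40},
    {"user": "bchen", "destination": "corp-git", "file_count": 750},
]
print(transfer_volume_histogram(transfers))  # {100: 1, 1000: 1, 1000000: 1}
print(detect_bulk_transfers(transfers))      # only the 300,000-file transfer fires
```

The point is not that 1,000 is the right number everywhere; it is that the threshold comes from charting the organization's own transfer history rather than from training a model for each employee.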
In other cases, threat modeling would reveal detection strategies where a single event achieves a similar outcome without building and maintaining an ML black box. For example, a team might model an insider performing data exfiltration via email forwarding. UBA-style profiling of anomalous forwarding activity could be one way to improve detection coverage for this TTP. However, alternative techniques, such as alerting when a user creates a mailbox-level email forwarding rule or comparing destination email addresses to the sender’s name, could catch many instances of this threat.
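Both alternatives reduce to single-event logic. The sketch below assumes mail audit events have already been parsed into simple records with operation, actor, and forwarding-destination fields; real sources such as the Microsoft 365 unified audit log use different names and structure:

```python
# Single-event detections for the email-forwarding exfiltration TTP.
# Field names and the corporate domain are illustrative assumptions.
CORPORATE_DOMAIN = "example.com"

def forwarding_rule_created(event):
    """Fire whenever a user creates a mailbox-level forwarding rule."""
    return event.get("operation") == "inbox_forwarding_rule_created"

def forwards_to_external_address(event, corporate_domain=CORPORATE_DOMAIN):
    """Fire when the forwarding destination sits outside the corporate domain."""
    destination = event.get("forward_to", "")
    return bool(destination) and not destination.endswith("@" + corporate_domain)

event = {
    "operation": "inbox_forwarding_rule_created",
    "actor": "jdoe@example.com",
    "forward_to": "j.doe.backup@personal-mail.example",
}
print(forwarding_rule_created(event))        # True
print(forwards_to_external_address(event))   # True
```

Neither rule needs a baseline, and both fire the moment the forwarding rule appears rather than waiting for exfiltration volume to look "unusually high."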
A guiding principle for mature detection engineering teams is that coverage is never absolute, and tradeoffs always exist. Time spent building and maintaining one detection is time not spent on another. Threat modeling allows SOC teams to evaluate options for insider threat detection and consider the full range of effective alternatives, including high-quality detection libraries, threat-informed static detections, and multi-dimensional correlations.
Many insider threat scenarios can be addressed without the effort and uncertainty of ML-based user behavior analytics. Mencken himself once said, "All men are frauds. The only difference between them is that some admit it. I myself deny it." After years of disappointing products and scuttled data science projects, insider threat detection remains an enduring challenge for security operations teams. Rather than betting on another ML black box, security leaders should consider making incremental progress toward insider risk reduction through investments in threat modeling and detection engineering.