Adversary Emulation vs. Bad Copycats

Tim MalcomVetter
3 min read · Sep 21, 2020

Previously, I discussed adversary emulation vs. simulation and introduced an approach to make emulation more appealing: false flags. Today, I want to discuss what happens when you take emulation too far, but first, a comparative story.

You may be familiar with the Zodiac Killer and its references in pop culture, such as the note pictured below:

Zodiac Killer (https://en.wikipedia.org/wiki/Zodiac_Killer)

The Zodiac Killer case is one that inspired copycat murderers, like this one. One of the ways these copycats were differentiated from the original criminal is variations in TTPs, specifically those that were never publicly disclosed. The copycats get 98% of the details correct, but that last 2% just feels off. As the investigators dig in, it turns out those differences are typically in areas that were never publicized, so the copycats never had the chance to learn them; they just guessed wrong.

How does this relate to simulating adversaries in the cyber world? Well, it turns out it’s exactly the same…

Recently, my team set out to emulate (actually, false flag) a specific adversary group, replicating a specific attack chain and set of TTPs they had used just a couple of months prior. We went all out on this one, more than normal. We acquired certain items identical to the ones used by this adversary, as published in a handful of open source reports on the activity. Our lures were very similar, even worded the same. Our sock puppet identities were even the same. Our payloads and weaponization were very similar. There were just a couple of minor details that were off in the delivery, details that my team did not and could not know from the public intelligence, because they were only available through private intelligence sources. We were about 98% identical, but that last 2% just didn’t quite fit. We had to make guesses and color outside the lines.

If your organization does not have a mature threat intelligence program, these details won’t matter. But if it does, you’ll find that something about your attack feels off. In our case, that led to a series of questions about why this activity was so similar to the original, yet different, and the only plausible explanation was: it is probably our red team. The exercise was still useful overall, but it probably could have been more useful if red team attribution had been prevented, or at least delayed until much later in the exercise.

During our collaborative debrief, we all agreed that if the red team had pursued an 80% identical set of TTPs, and deliberately aimed for about 20% variation, then the attack would have felt like the original adversary group — whom we were emulating — had just decided to iterate and evolve their TTPs a bit, as adversaries frequently do for a variety of reasons.

So I propose to you a new spin on the 80/20 rule: never emulate an adversary group with higher accuracy than approximately 80%, in case your blue team opponent knows something about that adversary that you do not. Leave a 20% variation factor.
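To make that concrete, here is a minimal Python sketch of how a red team might plan that variation factor. Everything in it is a hypothetical illustration: the plan_emulation helper, the ATT&CK technique IDs, and the variant mappings are assumptions made for the example, not tooling from this story.

```python
import random

# Minimal sketch of the 80/20 emulation rule: replicate ~80% of an
# adversary's attributed TTPs verbatim and deliberately vary the rest.
# All names and mappings below are hypothetical illustrations.

def plan_emulation(attributed_ttps, variants, variation=0.2, seed=None):
    """Split TTPs into those replicated verbatim and those swapped for a
    plausible variant, targeting roughly `variation` fraction swapped."""
    rng = random.Random(seed)
    n_vary = max(1, round(len(attributed_ttps) * variation))
    # Only TTPs with a known plausible substitute can be varied.
    swappable = [t for t in attributed_ttps if t in variants]
    to_vary = set(rng.sample(swappable, min(n_vary, len(swappable))))
    kept, swapped = [], []
    for ttp in attributed_ttps:
        if ttp in to_vary:
            swapped.append((ttp, rng.choice(variants[ttp])))
        else:
            kept.append(ttp)
    return kept, swapped

# Example: keep most of the chain, but vary delivery and persistence.
attributed = ["T1566.001", "T1204.002", "T1059.001", "T1547.001", "T1071.001"]
variants = {
    "T1566.001": ["T1566.002"],  # spearphishing attachment -> link
    "T1547.001": ["T1053.005"],  # registry run key -> scheduled task
}
kept, swapped = plan_emulation(attributed, variants, seed=7)
print("Replicate verbatim:", kept)
print("Deliberately vary:", swapped)
```

The point is not the code itself but the planning discipline: decide up front which TTPs to replicate exactly and which to swap for believable substitutes, so the overall tradecraft still reads as the same actor evolving rather than as a red team guessing at details it cannot know.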

The threat intel community will probably point to this story as evidence of the value of private intelligence. They’re not wrong.

And yes, I still prefer adversary simulation to emulation, for reasons such as this. However, we have many tools in the toolbox, and each has a time and place for its usefulness. Use all of the tools.
