Choose Your Own Red Team Adventure: Execution Guardrails

Tim MalcomVetter
May 15, 2019 · 4 min read

This is a continuation of a Choose Your Own Red Team Adventure series. If you don’t know how you got here, start at the beginning. Otherwise, continue reading …

You are careful. You always want to know where your phishes land. You really don’t want to have a tangle with the business end of the Computer Fraud and Abuse Act (CFAA). You watched a conference talk once about wrapping payloads with execution guardrails — basically just a conditional check to ensure the host is running on the right domain. It’s been your preference ever since.
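
In spirit, such a guardrail is little more than a conditional wrapped around the payload. A minimal sketch in Python (the function name `guardrail_passes` and the placeholder payload are illustrative, not from any particular tool):

```python
import os

# The target domain learned ahead of time (here, via the recon phish).
TARGET_DOMAIN = "CORP"

def guardrail_passes(env, target=TARGET_DOMAIN):
    """Return True only when the host's joined domain matches the target.

    On a domain-joined Windows host, the USERDOMAIN environment variable
    holds the NetBIOS domain name; elsewhere it is absent, so the check
    fails closed.
    """
    return env.get("USERDOMAIN", "").upper() == target

if guardrail_passes(os.environ):
    print("payload would execute here")  # placeholder for the real payload
# Otherwise: do nothing and leave as little trace as possible.
```

Real implementations vary (some check via Windows APIs rather than environment variables, some derive a decryption key from the domain name so the payload cannot even be analyzed off-target), but the control flow is the same.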

The problem is, you can’t predict exactly what the domain is before you get there, so before sending the real phish you sent in a really simple “recon phish” — basically a phish with no malicious payload. It grabbed the User ID, Domain, and a few small details about the host and leaked them to a web service you control.
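
The data a recon phish collects can be this small. A hedged sketch of the collection step (the field names and `collect_recon` helper are illustrative; the real lure would POST this JSON to an attacker-controlled web service, whose endpoint is omitted here):

```python
import platform
import socket

def collect_recon(env):
    """Gather the minimal, non-destructive details the recon phish leaks."""
    return {
        "user": env.get("USERNAME", ""),      # logged-on user, e.g. "Bob"
        "domain": env.get("USERDOMAIN", ""),  # joined domain, e.g. "CORP"
        "hostname": socket.gethostname(),
        "os": platform.platform(),
    }

# The lure would serialize collect_recon(os.environ) as JSON and send it
# out; because nothing here is obviously malicious, it tends to pass
# through email security stacks unflagged.
```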

You sent it yesterday, and just got the results back this morning.

The user is Bob and the domain is CORP. You prepared your REAL payload with the guardrail to only execute if the domain == CORP. Then you sent in the phish, which is now beaconing home to your command and control.

You start looking around on this new host. It seems empty. No real files or signs of life. It doesn’t seem to be able to query the domain it’s joined to, nor can it see any other hosts in its subnet. The user isn’t an admin. You don’t see anything of value, but you keep poking at this host for a day or two.

Eventually, you resend your phish to a different person, also with the guardrails for the CORP domain. About half a day later, you get another callback. This time the user is Alice, but her machine is also basically lifeless and empty.

What’s going on? This repeats three or so times, and then you’re out of time.

GAME OVER.

Post Analysis

Your “recon phish” sailed through the email security stack without issue, because it was only leaking a few details and not executing anything obviously malicious. Your second (malicious) phish, however, flagged on a YARA rule looking for common techniques to identify the domain combined with the presence of one of many strings containing identities of the company. The security operations team put that YARA rule into place a couple years ago after they experienced a similar attack from a different Red Team.
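
A rule of that shape might look something like the following sketch. The specific strings, rule name, and company identifiers are all hypothetical; the point is the *combination*: domain-discovery indicators plus the organization's own names.

```yara
// Hypothetical sketch of the defenders' rule: fire when a sample pairs a
// common domain-discovery technique with one of the company's identifiers.
rule Targeted_Guardrail_Phish
{
    strings:
        $api1 = "GetComputerNameEx" ascii wide
        $api2 = "USERDOMAIN" ascii wide
        $org1 = "CORP" ascii wide
        $org2 = "corp.example.com" ascii wide nocase
    condition:
        1 of ($api*) and 1 of ($org*)
}
```

Either half alone is noisy — plenty of benign software queries the domain, and the company name appears in legitimate documents — but together they describe a payload built specifically for this one network.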

Guardrails like checking for a specific domain name strongly indicate a very targeted attack, the work of either an Advanced Persistent Threat (APT) or a Red Team. In this case, intel analysts quickly identified that the domains and IP address ranges used by your phishes are not actively used in any other known attacks, which more likely indicates a Red Team than an APT group — though it’s not a guarantee.

Incident Responders decided to take the opportunity to learn more about the threat actor who is actively targeting them, so they opened the phish payload inside a custom sandbox environment designed to look similar to the corporate network. This is where you played for several days — which explains why you saw no real data or other hosts, while they recorded your TTPs and tried to discover your intent. You never had a shell on a real host.

While watching you, they observed you deploy Cobalt Strike as a secondary implant. That typically indicates a Red Team, but real threat actors use it too, so they analyzed deeper. The responders captured the EXE file you dropped to the host and extracted the malleable C2 profile from it. From that artifact, they observed that this is a licensed copy of Cobalt Strike, which more likely indicates a Red Team or somebody moonlighting with licensed tools (APT groups tend to overwrite the license details to obscure how they source the software). The responders also extracted your Cobalt Strike encryption key. The third-party threat intel team working with the enterprise tracks thousands of common C2 servers, Cobalt Strike among them, and actively extracts their encryption keys as a way of linking together the instances it discovers.
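
The linking step is just grouping by a shared artifact. A minimal sketch, assuming the intel team has already reduced each observed server to a record containing a key fingerprint (the record layout and `cluster_by_key` helper are illustrative):

```python
from collections import defaultdict

def cluster_by_key(servers):
    """Group observed C2 servers by the encryption-key fingerprint
    extracted from their configs; servers sharing one licensed copy of
    the framework fall into the same cluster."""
    clusters = defaultdict(list)
    for server in servers:
        clusters[server["key_fingerprint"]].append(server["host"])
    return dict(clusters)
```

One shared fingerprint across several VPS providers and domains is exactly what let the responders attribute all seven of your team's servers to a single operator.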

In this case, they found the 7 instances of Cobalt Strike your firm has deployed across 3 common virtual private server (VPS) providers — all with the same encryption key. Since your Red Team engagement was contracted explicitly by a high-ranking executive within the company who chose not to inform anyone in the security team, the incident responders had no choice but to treat this as a legitimate hostile threat actor. They extracted the DNS domains from the malleable C2 profiles from each instance and submitted abuse requests to both the DNS registrars and the VPS providers. The domains were all registered to a single registrar using a single account.

You don’t know it yet, but you’re locked out of your DNS registrar account and the domains have been seized, which is a real bummer, because the other 6 servers are in use by other members of your team on engagements at other organizations — they just lost their shells elsewhere, too.

Two of the three VPS providers have zero tolerance for terms-of-service violations, and they suspended your accounts immediately — paying customer or not.

Defenders can be hackers, too, and they definitely just hacked you.

THE END
