Choose Your Own Red Team Adventure: What’s Running via the Windows API

Tim MalcomVetter
7 min read · May 15, 2019


This is a continuation of a Choose Your Own Red Team Adventure series. If you don’t know how you got here, start at the beginning. Otherwise, continue reading …

You know all about detections that come from command line commands: each one spawns a new process, and spawning processes is expensive for an adversary. Every new process risks being noisy, especially with its command line arguments. Yes, you're aware of techniques to start a benign process in suspended mode and then replace the actual target of that process after it is already in the process tree. It's a neat trick to get around EDR products that only watch process trees, but some EDR products hook the same Windows API calls you need to create the process in suspended mode so they can notice the suspend flag, and, well, that's too risky for you.
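Why the suspend flag is so visible can be sketched in a few lines. This is a hedged illustration, not any vendor's real logic: `CREATE_SUSPENDED` is the documented Win32 constant (0x00000004) passed in `dwCreationFlags` to `CreateProcessW`, and `hooked_create_process` is a hypothetical stand-in for a user-mode hook wrapping that call.

```python
# Hedged illustration, not real EDR code: CREATE_SUSPENDED is the
# documented Win32 constant passed in dwCreationFlags to CreateProcessW.
CREATE_SUSPENDED = 0x00000004

def hooked_create_process(image_path, creation_flags):
    """Hypothetical stand-in for a user-mode hook around CreateProcessW.

    A real product would combine this signal with the parent process,
    image reputation, and later memory writes before alerting."""
    if creation_flags & CREATE_SUSPENDED:
        return f"ALERT: {image_path} created suspended"
    return "allowed"

print(hooked_create_process(r"C:\Windows\System32\notepad.exe",
                            CREATE_SUSPENDED))
# → ALERT: C:\Windows\System32\notepad.exe created suspended
```

Because the hook sits on the same call the attacker must make, there is no way to request the suspended process without handing the defender that bit.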

So the first thing your implant does is use the Windows API to enumerate what processes are running, what services are installed, even what drivers are present on the system — all with very vanilla Windows API calls that countless legitimate business applications use for legitimate purposes, so you're confident it will never get flagged.

Your implant doesn’t spawn a new process to do that. It doesn’t even inject code into itself to do it — it’s not safe to try any of that until you know what’s running, so your implant brings that functionality with it — all part of its initial stage. “Smart” you say to yourself.

In just a few seconds, your implant dumps a list of all running processes, and starts comparing them to a list of known security products. Three immediately stand out. Then it goes back for drivers for completeness, and does the same — no additional security products, just the three. You’re mostly familiar with their defensive feature sets. You feel confident.
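A minimal sketch of that comparison step, assuming the process list has already been pulled via Windows API calls such as `CreateToolhelp32Snapshot`/`Process32Next` (e.g. through `ctypes` on Windows). The `flag_security_products` helper and the tiny product list are illustrative, though the image names themselves are real examples:

```python
# Illustrative triage logic only: on Windows the `running` list would come
# from API calls such as CreateToolhelp32Snapshot/Process32Next (via
# ctypes); the product list below is a tiny made-up sample, though the
# image names themselves are real examples.
KNOWN_SECURITY_PRODUCTS = {
    "msmpeng.exe",          # Windows Defender engine
    "csfalconservice.exe",  # CrowdStrike Falcon service
    "sysmon64.exe",         # Sysmon
}

def flag_security_products(running):
    """Return running image names that match the known-product list."""
    return sorted(p for p in running if p.lower() in KNOWN_SECURITY_PRODUCTS)

procs = ["explorer.exe", "MsMpEng.exe", "Sysmon64.exe", "chrome.exe"]
print(flag_security_products(procs))  # → ['MsMpEng.exe', 'Sysmon64.exe']
```

The same matching can be pointed at service and driver names, which is why the implant goes back for drivers "for completeness."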

You begin to slowly and methodically review the rest of the host to see what’s present. “No need to rush these things and make a mistake,” you say to yourself.

Precisely 93 minutes after the initial callback from the first host, a second implant calls back. It must have the same egress IP address, because you set a filter on your C2 server preventing any other IP addresses from communicating with your server. This is curious — you only sent one phish. Maybe Bob opened it twice? You begin to triage this host and see it has a different hostname. You task the implant to enumerate, via the Windows API, all of the running processes, services, and drivers. All of the results come back curiously empty. "How's that possible?" you wonder.

Then both implants' callbacks are suddenly late. You have excellent jitter and randomness, so it couldn't be a network-based detection, could it? The implants are both definitely dead. It's a shame, too, because you are really curious about the second one.
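Jitter here just means each callback delay is drawn from a window around a base interval, which also bounds how late a live implant can legitimately be. A small sketch with hypothetical `next_sleep` and `is_late` helpers:

```python
import random

# Hedged sketch with hypothetical helpers: each callback delay is drawn
# uniformly from [base*(1-j), base*(1+j)], so anything later than
# base*(1+j) is overdue and the implant can be presumed dead.
def next_sleep(base_seconds, jitter, rng=random.random):
    """Pick the next callback delay within +/- jitter of the base."""
    return base_seconds * (1 + jitter * (2 * rng() - 1))

def is_late(elapsed, base_seconds, jitter):
    """True once a callback exceeds the maximum jittered interval."""
    return elapsed > base_seconds * (1 + jitter)

# With a 60-minute base and 30% jitter, anything past 78 minutes is overdue.
print(is_late(80 * 60, 60 * 60, 0.3))  # → True
```

The same arithmetic cuts both ways: the operator uses it to call an implant dead, and a defender watching beacon timing uses it to model the interval.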

You send two more phishes to two more employees. The engineer on your team who built the payloads put tracing into their execution. Within 30 minutes, you can see requests to a telemetry server indicating the payloads were delivered and the attachments opened, but no callbacks were made. You begin to suspect your callback domain has been burned, so you swap to a second payload that uses a second domain. Same results.

Now, you’re wondering what else is burned. You spend the next several days trying to isolate, one at a time, all of the variables that may be getting your payload detected. You eventually run out of time.

GAME OVER

Post Analysis

You were spot on about your philosophy — spawning processes is expensive for the attacker, especially until you understand what is running on the host and what normal looks like. But you forgot one detail: your malicious document is spawning your payload. It’s a separate process. Yes, using the Windows API to enumerate what is running is the right thing to do, but doing it after you already spawned a process is like giving antibiotics to a cadaver. You’ve already made an expensive choice, and in this case, it got you caught.

All new processes and their arguments are captured by the EDR product, logged centrally, and out-of-the-box analytics pop rare and unusual processes to the Security Operations analysts via a nice SIEM integration. They quickly triage and pass the tickets to the appropriate team.
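The core of such a rarity analytic can be sketched simply. The `rare_processes` helper is a hypothetical reduction of what a real product layers with reputation, signer, and prevalence data:

```python
# Illustrative "rare process" analytic; a real product layers reputation,
# signer, and prevalence data on top. Events are (hostname, image_name)
# pairs as collected fleet-wide by the EDR; rare_processes is hypothetical.
def rare_processes(events, threshold=2):
    """Return image names seen on fewer than `threshold` distinct hosts."""
    hosts_per_image = {}
    for host, image in events:
        hosts_per_image.setdefault(image, set()).add(host)
    return sorted(image for image, hosts in hosts_per_image.items()
                  if len(hosts) < threshold)

events = [("pc1", "chrome.exe"), ("pc2", "chrome.exe"),
          ("pc1", "implant.exe")]
print(rare_processes(events))  # → ['implant.exe']
```

A one-off payload spawned by a malicious document is, by construction, the rarest image name in the fleet, which is exactly what popped the ticket.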

In this case, they snagged a copy of the executable from the EDR product's cache and quickly scanned it locally with their own anti-virus (as a general OPSEC rule, they don't upload samples to the cloud). When that resulted in no detection, they took a SHA-256 hash of the file and queried VirusTotal — no hits, so VT had never seen this file before. The ticket was passed to the malware reverse engineers for review.
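The hashing step is standard: a chunked SHA-256 over the file, so large samples never have to fit in memory, followed by a lookup of the hex digest against a service like VirusTotal. A sketch (the `sha256_of` helper is illustrative):

```python
import hashlib

# Sketch of the analyst's hashing step (sha256_of is an illustrative
# helper): hash the file in 64 KiB chunks so large samples never have to
# fit in memory, then use the hex digest for a VirusTotal hash lookup.
def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()
```

Note that querying VT by hash, rather than uploading the file, keeps the sample private — consistent with the team's OPSEC rule.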

It was approximately 39 minutes after the initial execution by the time the malware reverse engineer took the ticket off the top of the queue. Not bad, really. She quickly recognized the artifact was a packed PE file. Rather than reverse it statically, which would take too long, she decided to "detonate" it in their sandbox environment. It immediately fired off a connection to your C2 domain, which she noted in the ticket. She asked a partner to check web logs for any other signs of hosts talking to that domain — just the one that prompted the unusual-process ticket. The malware didn't appear to be doing much, so she captured a PCAP of the traffic and opened it in Wireshark. The C2 server returned 404 — page not found — for each request. Remembering a case from one of her peers a year ago, she quickly fired up curl from a Linux host that uses the corporate egress IP address, making sure to set the same user agent string, headers, and so on. She got a different result: a 200 OK response. Her hypothesis was correct — the sandbox host they use for malware analysis had a private Internet connection, and this malware was targeted at their corporate network.

She spun up a different sandbox system — this time a full Windows system with special modifications that keep malware chained down so it cannot make changes to the operating system, but on the corporate network's egress IP address. It was 93 minutes after the initial host executed the unusual process when she executed the malware. The malware sprang to life. A quick look at the PCAP showed traffic very similar to the PCAP pulled from the original production endpoint device. No other domains or IP addresses were involved in the communication, so she escalated to the incident response team.

The incident response team quickly reviewed the case history and blocked the domain at the network’s edge and verified the connections dropped.

The malware analyst quickly deployed a rule to block execution of all files with the same SHA-256 hash. The rule deployed just in time: approximately 12 and 18 minutes later, respectively, two more phishes containing the same executable attempted to run and were blocked by the EDR product. She reviewed the PCAP some more, crafting a quick signature for the network traffic in case a variant with a slightly different hash landed in a future phish — very likely, given the binary was packed, making it easy to tweak a minor detail; a single bit of difference and the hash wouldn't match. After a peer review, the signature was deployed to all sensors throughout the enterprise.
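Her reasoning about packed binaries is easy to demonstrate: flip a single bit anywhere in the file and the SHA-256 changes completely, so a hash blocklist misses even a trivially repacked variant (hence the network signature):

```python
import hashlib

# Why hash blocklists are brittle for packed binaries: a one-bit change
# anywhere in the file produces a completely different SHA-256, so only
# the exact original sample is blocked. The bytes below are a stand-in
# for a PE file, not real malware.
original = b"MZ" + b"\x00" * 62
variant = bytearray(original)
variant[-1] ^= 0x01  # flip a single bit in the last byte

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(bytes(variant)).hexdigest()
print(h1 == h2)  # → False
```

A behavioral or network signature keys on what the sample does rather than its exact bytes, which is why it survives a repack when the hash rule does not.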

Meanwhile, the SMTP domain that sent the email was blocked. A YARA rule for the novel execution technique was created and shared with other organizations in an industry threat-intel sharing group. Within 24 hours, more than 100 organizations had the YARA rule deployed. One of them was another consulting client going through a similar exercise. As is their process when they receive new YARA rules from the community, they ran a cursory scan against mail that had arrived in the past 72 hours to see if the new technique was being used in their environment. It was. They detected the initial access vector of your teammate — who was much further along towards his objectives at that company and had two forms of persistence deployed already. Within a day, they made up lost ground and contained your teammate before he could exfiltrate the objective data. They chalked it up as a win for threat intelligence.

Within 96 hours, a malware analyst at a large security company wrote a blog about the execution technique, which was now burned forever. It now only works at companies with really immature security programs, but so do about a hundred other techniques.

All in all, Spaceley's Sprockets was proud of their performance and glad to have the challenge — it is not every day that you see custom malware using new techniques. The exercise justified the company's investment in the EDR products, their malware analysts, and their threat intel sharing program. The defenders definitely had higher blood pressure during the event, and during the debrief they discussed a few opportunities to shorten their response times.

You didn’t hit their objectives, but that’s OK. Your client went from being a tough target to an even tougher one.

THE END
