Challenges In Post-Exploitation Workflows
August 3, 2023

Will Schroeder

Posts By SpecterOps Team Members

In our previous post, we talked about the problem of structured data in the post-exploitation community. We touched on the existing relationship between our tools and data and covered some of the domain-specific challenges that come with offensive data collection. We ended with the question “If all of our offensive tools produced and worked with structured data, what would be possible?” This post shifts gears a bit and demonstrates some of our current challenges in post-exploitation workflows, some of which could be helped with structured data.

Tl;dr for those who don't want to read the whole post: there are lots of challenges surrounding red teaming, and our new project Nemesis, which we're releasing at BlackHat Arsenal next week, can help with some of them! Join us Wednesday August 9, 2023 from 14:30–16:00 at Arsenal, Station 2 or Thursday August 10, 2023 from 14:00–15:00 at the SpecterOps BlackHat booth to learn more.

First, however, let’s provide a bit more background.

Anyone currently operating knows the times they are a-changin' for offensive operations, and many actions have gotten more difficult than they once were. In general, to get the same information we've traditionally gathered from hosts, we see a few main approaches:

  1. Find ways to neutralize the endpoint detection and response (EDR) or the specific telemetry-collecting sensor that's flagging our behavior; this ranges from token-stomping to anti-malware scan interface (AMSI)/event tracing for Windows (ETW) unhooking to any other method that allows us to run the same tools on-host as we have in the past
  2. Obfuscate our existing tools (statically or, in some cases, behaviorally) in order to avoid AV/EDR detection
  3. Rewrite our tools into other, generally less detectable forms; rewriting various post-exploitation functionality into Beacon Object Files (BOFs), like TrustedSec's awesome Situational Awareness BOF toolkit, is a great example
  4. Pull the data in an unprocessed form and process it off the target host; a theoretical example would be writing a small custom Lightweight Directory Access Protocol (LDAP) client to dump all of an Active Directory (AD) domain's data and transform it offline into a format compatible with BloodHound, rather than running the (often statically and behaviorally signatured) SharpHound binary (see the sketch after this list). Tunneling tools through a SOCKS proxy for remote host collection also falls into this category
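As a minimal sketch of that fourth approach, assuming the Python ldap3 library and a generic low-privilege domain account, the snippet below pulls raw user objects over LDAP and dumps them to JSON for offline transformation. The hostname, credentials, attribute list, and output path are hypothetical placeholders, and this is an illustration of the "collect raw, process offline" idea rather than a drop-in SharpHound replacement.

```python
# Minimal sketch: dump raw AD user objects over LDAP for offline processing.
# Hostname, credentials, and base DN below are hypothetical placeholders.
import json
from ldap3 import Server, Connection, NTLM, SUBTREE, ALL

server = Server("dc01.corp.local", get_info=ALL)
conn = Connection(server, user="CORP\\lowpriv", password="Password123!",
                  authentication=NTLM, auto_bind=True)

# Pull a few attributes for every user object; transformation into a
# BloodHound-compatible format would happen later, off-target.
conn.search(search_base="DC=corp,DC=local",
            search_filter="(&(objectCategory=person)(objectClass=user))",
            search_scope=SUBTREE,
            attributes=["sAMAccountName", "memberOf", "servicePrincipalName",
                        "lastLogonTimestamp", "userAccountControl"])

records = [entry.entry_attributes_as_dict for entry in conn.entries]
with open("raw_users.json", "w") as f:
    json.dump(records, f, default=str, indent=2)
```

The important part is that nothing BloodHound-specific runs on or near the target; all of the parsing and graph construction can happen later on attacker infrastructure.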

Our thesis is that we should, in general, be moving toward the fourth approach, using the lightest touch we can on a host where possible. However, this almost always involves more manual operator processing. After all, there's a reason we all built these fancy on-host tools to only give us the information we want (think SharpUp's targeted output). As we mentioned in our previous post, output from many offensive tools is currently structured for human analysis rather than for computers, meaning we now have an operational gap. Some of the examples in this post will show how better tool interoperability/data structure would help ease some of this data correlation.
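To make that gap concrete, here's a small hedged illustration of the same hypothetical privilege escalation finding emitted two ways: as the human-oriented text many tools print today, and as a structured record a downstream pipeline could actually consume (the field names are invented for illustration, not a proposed standard).

```python
import json

# Hypothetical finding; field names are invented for illustration only.
finding = {
    "finding_type": "modifiable_service_binary",
    "host": "WORKSTATION01",
    "service_name": "AcmeUpdater",
    "binary_path": "C:\\Program Files\\Acme\\updater.exe",
    "principal_with_write": "BUILTIN\\Users",
    "collected_at": "2023-08-03T01:19:12Z",
}

# What many tools print today (easy for humans, painful for pipelines):
print(f"[+] Modifiable service binary: {finding['service_name']} "
      f"({finding['binary_path']}) writable by {finding['principal_with_write']}")

# What a data pipeline would rather ingest:
print(json.dumps(finding))
```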

Other examples will hopefully make the case for automating various other workflow components. We often end up downloading a fair number of files on an engagement, from Excel spreadsheets found on a compromised system's desktop, to source code repos, custom-written .NET assemblies, and more. A lot of this file triage has tended to be highly manual. Yes, there are some tools that can help with specific types of triage; however, they're often built for a very specific purpose, and most need to be run on-host.

Also, shoutout to Jackson T. for their work in this area, specifically around OODA loops! Their post series on “Operators, EDR Sensors, and OODA Loops” is a great read for additional information in this subject area.

https://twitter.com/Jackson_T/status/1395047347722330113

Note: We’re going to do our best to be agnostic towards tool specifics or command and control (C2) frameworks, except to highlight what the current general state is for some of these problems. As always, we’re not claiming complete 100% coverage of everything (i.e., we’re sure that we missed some things). This is just our perspective and an honest faith effort to explore some of these limitations and future possibilities.

So let’s get to some of our common workflow challenges (keyword some, not all : )

On almost every engagement we end up mining file shares, or other document stores like Confluence, for information relevant to our specific objective(s). This is where a lot of us cut our teeth in red teaming: poring through document after document searching for a clue to the next step in the attack chain.

An Actual Picture of Me After My First Red Team

Let’s assume we’re searching documents from a target system/document store through our favorite C2 and agent. These are the likely steps we’d need to take:

  1. Task your agent to download a potentially “interesting” document
  2. Transfer the document from the C2 controller to an attacker system
  3. Optional for Office documents: move the document to a virtual machine (VM) that’s not Internet connected, just in case the document contains canaries
  4. Ensure you have the appropriate software (OpenOffice, etc.) installed to view whatever format the document is
  5. Open the document and either read or skim depending on the length
  6. If the search is targeted, search for specific terms/names
  7. Optional: examine the document’s metadata, if appropriate
  8. Record the document as triaged somewhere if operating as part of a team so work isn't duplicated
  9. Record appropriate notes in some store about the document or remember enough detail for future analysis
  10. Repeat approximately 10,000 times
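As referenced above, here's a rough sketch of what automating a few of those steps (keyword search and basic metadata extraction) could look like against a folder of already-downloaded files. It only handles .docx (which is just a zip of XML), uses the Python standard library, and the folder name and keyword list are hypothetical; real triage would need to cover many more formats.

```python
# Rough sketch: keyword-search and basic metadata for downloaded .docx files.
# Folder name and keywords are hypothetical; real triage covers far more formats.
import pathlib
import re
import zipfile
import xml.etree.ElementTree as ET

KEYWORDS = ["password", "vpn", "confidential", "project zeus"]  # hypothetical terms
LOOT_DIR = pathlib.Path("downloads")

def docx_text(path: pathlib.Path) -> str:
    """Extract visible text from a .docx (a zip containing word/document.xml)."""
    with zipfile.ZipFile(path) as z:
        xml_bytes = z.read("word/document.xml")
    # Strip XML tags rather than walking the schema; good enough for keyword triage.
    return re.sub(r"<[^>]+>", " ", xml_bytes.decode("utf-8", errors="ignore"))

def docx_metadata(path: pathlib.Path) -> dict:
    """Pull author/modified fields from docProps/core.xml if present."""
    meta = {}
    with zipfile.ZipFile(path) as z:
        if "docProps/core.xml" in z.namelist():
            root = ET.fromstring(z.read("docProps/core.xml"))
            for child in root:
                tag = child.tag.split("}")[-1]  # drop the XML namespace
                meta[tag] = child.text
    return meta

for doc in LOOT_DIR.glob("*.docx"):
    text = docx_text(doc).lower()
    hits = [kw for kw in KEYWORDS if kw in text]
    if hits:
        print(f"[+] {doc.name}: keywords {hits}, metadata {docx_metadata(doc)}")
```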

A similar scenario is pulling documents from internal resources like SharePoint. This is most often done by proxying browser traffic through a SOCKS proxy in our agent and browsing/downloading files one by one. In this case, most of the steps are the same, though we do need to manually record the site we retrieved each file from for logging.

But wait! The situation is even more nuanced and complicated than what we described above! Whether a file or bit of information is “interesting” varies based on the phase of the engagement and evolves as you spend more time in the target network and gain additional information regarding its structure, jargon, etc. That is, a piece of information may seem inconsequential at the beginning of an engagement but may become vitally important weeks in. This isn’t hard to imagine; finding a document full of project codewords on a target’s desktop two days into an engagement might be written off until an additional Confluence page breaking down each project’s name and purpose is found the following week.

With dozens to hundreds (or more) documents and pieces of information being downloaded over the course of an engagement, it’s a challenge to a) sort through everything, and even more of a challenge to b) remember relevant details days to weeks later as you gain more context. However, this is one of the skills that separates really great offensive operators from decent ones: the ability to sort through and synthesize huge amounts of information derived from a network in a way that lets you find flaws and continue to move towards your objective.

There’s another wrinkle as well: every operator will look at a file in a different way based on their own level and type of experience. As operators perform this type of triage over the years, patterns emerge in data that they knowingly or unknowingly internalize and reuse in future operations. While aspects of this can be taught, people will internalize different patterns based on the data they’ve encountered.

Experienced Operators When They Pull up a Fresh File Share to Triage

As a more concrete example, imagine finding a PowerShell script with a database connection string. One operator may validate the credentials and conclude the database seems unimportant, storing basic IT inventory information. Another operator may dive deeper into the data source and realize that the data can be used to correlate user locations for building attack paths, and that there's a SQL Server link they could crawl to a more privileged database. Another situation we've had before: while triaging a host, a particular third-party application showed up that most people ignored. One operator remembered from a previous engagement that this software was used for some type of IT management task, so they downloaded the file, pulled it apart in dnSpy, and realized it stored credentials for a privileged account in the registry. A quick decryption script later, and the team had the breakthrough they needed.

So the TL;DR of all of this? File triage is something that's very commonly done, but it's usually highly manual and honed over years of operating. While everyone has their own workflows and tools for dealing with this type of challenge, there doesn't currently seem to be one workflow-or-tool-to-rule-them-all, at least on the post-exploitation side of the house.

One very common step in many attack chains is privilege escalation, be it local or network based. Local privilege escalation usually involves either running an automated tool that flags common misconfigurations (e.g., PowerUp/SharpUp) or taking a heuristic-based approach for more "bespoke" misconfigurations. This is something we teach in more depth in our Red Team Operations and Vulnerability Research for Operators courses, but we'll cover an overview of the process here.

One of the common ways we tend to escalate privileges is through insecure non-default (i.e., non-Microsoft) software installed on a host, including both in-house custom software and third-party software. This usually involves finding a program running with some type of elevated privileges whose execution we can influence through attacker-controlled data inputs, ranging from permissive named pipes, to memory sections, to other inter-process communication (IPC) mechanisms.

In order to do this, we generally need to gather these types of data sources (this list is not guaranteed to be complete; a small collection sketch for the service listing item follows the list):

  • Process listings, including integrity levels and backing file paths
  • Service listings, including security descriptor definition language (SDDL) information and backing file paths
  • Named pipe listings, including (if possible) originating processes and SDDLs
  • Open TCP/UDP ports and their originating processes
  • Drivers currently installed, including backing file paths/associated SDDLs
  • SDDLs for any relevant/linked file paths and their containing folders
  • Existing handles with any overly permissive access rights
  • File types for any relevant/linked file paths — script type, binary type (.NET/etc.)
  • Recent temporary files and SDDL information (including ownership) if possible
  • Various registry settings, including environment variables
  • Any relevant downloaded scripts/binaries linked to the previously mentioned data sources
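As a small example of the kind of raw collection the service listing bullet implies, the sketch below (Windows-only, standard library) walks the services registry hive to grab each service's backing binary path and start type. It deliberately stops short of SDDLs, processes, pipes, and handles, which need additional APIs; the point is collecting one slice of the data raw so it can be correlated off-host.

```python
# Minimal sketch (Windows-only): enumerate services and their backing binaries
# straight from the registry, as one slice of the raw data sources listed above.
import winreg

SERVICES_KEY = r"SYSTEM\CurrentControlSet\Services"

def enum_services():
    services = []
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SERVICES_KEY) as root:
        subkey_count, _, _ = winreg.QueryInfoKey(root)
        for i in range(subkey_count):
            name = winreg.EnumKey(root, i)
            try:
                with winreg.OpenKey(root, name) as svc:
                    image_path, _ = winreg.QueryValueEx(svc, "ImagePath")
                    start, _ = winreg.QueryValueEx(svc, "Start")
            except OSError:
                continue  # not every subkey has these values
            services.append({"name": name, "image_path": image_path, "start": start})
    return services

if __name__ == "__main__":
    for svc in enum_services():
        # Non-Microsoft paths are usually the interesting ones to correlate further.
        if "system32" not in svc["image_path"].lower():
            print(svc)
```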

We don’t have the time to analyze every binary and script on a system, so we need to make choices based on the data we’ve gathered. The next stage of analysis usually involves manually analyzing binaries/scripts we’ve downloaded with the other sets of information in order to contextualize everything and prioritize where we should spend our efforts.

Custom .NET programs (third-party or internally developed) are a great example of what we tend to go after: they're easily "decompiled" to source through something like dnSpy, effectively turning their analysis into a source code audit instead of a binary reversing challenge. It helps that vulnerability classes like .NET deserialization and remoting abuse often make our job easier.
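Since managed binaries bubble to the top of that triage list, here's a hedged sketch of flagging .NET assemblies among downloaded PEs by checking whether the CLR (COM descriptor) data directory is populated. It assumes the third-party pefile package is available, and the folder name is hypothetical.

```python
# Sketch: flag downloaded PEs that are .NET assemblies (candidates for dnSpy).
# Requires the third-party "pefile" package; folder name is hypothetical.
import pathlib
import pefile

CLR_DIR = pefile.DIRECTORY_ENTRY["IMAGE_DIRECTORY_ENTRY_COM_DESCRIPTOR"]

def is_dotnet(path: pathlib.Path) -> bool:
    try:
        pe = pefile.PE(str(path), fast_load=True)
    except pefile.PEFormatError:
        return False  # not a PE at all
    clr = pe.OPTIONAL_HEADER.DATA_DIRECTORY[CLR_DIR]
    return clr.VirtualAddress != 0 and clr.Size != 0

for binary in pathlib.Path("downloads").glob("*.exe"):
    if is_dotnet(binary):
        print(f"[+] .NET assembly worth decompiling: {binary.name}")
```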

I Am so Good at Reversing

As you can probably guess, many of these data sources come to us in different forms depending on the tools and C2 used. The entire process, if done on a live engagement (as opposed to a "golden image" type analysis where we can install whatever analysis tools we want), almost always involves wrangling bits of information into a fairly manual analysis workflow that can vary operator by operator.

This is definitely a process that could be improved by a) structured tool output to help with the data correlation, and b) some type of automatic file analysis to help find low-hanging fruit.

We’re big champions of offensive data protection application programming interface (DPAPI) abuse. We teach it in detail in our Adversary Tactics: Red Team Operations training course, we’ve blogged about it before, and our tool SharpDPAPI (part of GhostPack) is still under continuing development. Sidenote: shoutout to Kiblyn11 for recently integrating rpc masterkey retrieval into SharpDPAPI! Offensive DPAPI abuse has been one of the big game changers for SpecterOps over the last few years, as it provides an alternative way to retrieve sensitive data, like cookies or even user credentials in some cases, without having to be elevated or touching local security authority subsystem service (LSASS).

One of the most useful things regarding DPAPI abuse is the domain DPAPI backup key. This key can be retrieved remotely from domain controllers (DCs) with domain administrator (or equivalent) privileges and can decrypt any domain user's DPAPI masterkey, which itself is used to protect the sensitive data we want. This decryption can be done offline, as the entire process is just crypto, so our recommendation for operators is to always download all DPAPI masterkeys and associated blobs we can access for future decryption, in case we later retrieve the backup key.

But how does this actually look in practice?

The main DPAPI-specific abuse tools we use are Mimikatz, SharpDPAPI, and Impacket (with the occasional BOF thrown in); which one we choose depends on the target environment, but they all do mostly the same things with regard to DPAPI. Some parts of our workflow require code execution on the host, specifically retrieving a domain DPAPI backup key remotely, using CryptUnprotectData for specific blobs, or retrieving a user's masterkey via MS-BKRP. However, other components, such as decrypting masterkeys given a backup key, or decrypting blobs given a decrypted masterkey, can all be done offline on an attacker host.

Sidenote: yes, we know there are other tools that can perform DPAPI abuse, but we’re just highlighting the main tools we usually use. The list isn’t meant to be exhaustive etc. etc. etc.

Here lies the tradeoff. Users can, and almost always do, have multiple DPAPI masterkeys, and there are a large number of target data sources protected with DPAPI (in fact, we don't even know them all!). If we don't want to run a packaged-up tool that decrypts and correlates everything on the host itself, and instead want all of the associated files downloaded, we need to build out scripts that instruct our C2 to download dozens (or more) of files per user we're triaging. Here's how that process might look:

  1. Trigger a custom-written DPAPI data collection script in your C2 agent that queues up the downloads of a dozen+ files (masterkeys + target data sources)
  2. Download all of the files to your attacker system from your C2
  3. Either:
    – Use Mimikatz/SharpDPAPI or piped-in Impacket with the BackupKey Remote Protocol (MS-BKRP) to retrieve the plaintext of every masterkey file for every user whose context we control (i.e., a stolen token, etc.); we need to manually get execution in the context of every logged-in user we control on the system, and if not using SharpDPAPI, we need to manually specify each masterkey file path one at a time
    – Wait until we get domain admin (or equivalent) privileges so we can retrieve the domain DPAPI backup key using Mimikatz/SharpDPAPI/Impacket
  4. If we retrieved the domain backup key, we will usually manually move all downloaded masterkeys to a single folder and use SharpDPAPI to decrypt all masterkeys in the folder using the backup key; this can be done with other tools, but SharpDPAPI makes this specific workflow a bit easier
  5. At this point, we have a large number of decrypted GUID:MASTERKEY values; we'll usually use these keys as a lookup table to triage individual DPAPI-protected files with known structures (credential/vault/Chrome state files, etc.), as sketched after this list
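As mentioned in step 5, here's a rough sketch of that offline lookup-table triage. DPAPI blobs reference the GUID of the masterkey that protects them at a fixed offset after the well-known provider GUID, so given a file of decrypted {GUID}:KEYHEX values (roughly the format SharpDPAPI-style output uses; the exact parsing here is our assumption), we can flag which downloaded blobs are already decryptable. Folder and file names are hypothetical.

```python
# Rough sketch: given decrypted {GUID}:KEYHEX masterkeys, flag which downloaded
# DPAPI blobs/files are already decryptable. Offsets follow the documented DPAPI
# blob layout; file and folder names are hypothetical.
import pathlib
import uuid

# The well-known DPAPI provider GUID that prefixes every blob.
PROVIDER_GUID = uuid.UUID("df9d8cd0-1501-11d1-8c7a-00c04fc297eb").bytes_le

def load_masterkeys(path: str) -> dict:
    """Parse lines like '{guid}:keyhex' into a lookup table."""
    keys = {}
    for line in pathlib.Path(path).read_text().splitlines():
        if ":" in line:
            guid, keyhex = line.strip().split(":", 1)
            keys[guid.strip("{}").lower()] = keyhex
    return keys

def masterkey_guid_for_blob(data: bytes):
    """Find the masterkey GUID referenced by the first DPAPI blob in the data."""
    idx = data.find(PROVIDER_GUID)
    if idx == -1:
        return None
    # providerGUID (16 bytes) + dwMasterKeyVersion (4 bytes), then the masterkey GUID (16 bytes).
    mk_bytes = data[idx + 20 : idx + 36]
    if len(mk_bytes) != 16:
        return None
    return str(uuid.UUID(bytes_le=mk_bytes)).lower()

masterkeys = load_masterkeys("decrypted_masterkeys.txt")
for blob in pathlib.Path("dpapi_loot").rglob("*"):
    if not blob.is_file():
        continue
    guid = masterkey_guid_for_blob(blob.read_bytes())
    if guid and guid in masterkeys:
        print(f"[+] {blob} decryptable with masterkey {guid}")
```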

This process is non-trivial. We have to remember to download all DPAPI masterkeys and associated target data files, and then deal with a fairly manual triage process at a later point. Yes, this process can be automated with things like SharpDPAPI or dploot, either locally or remotely against a system you have administrative access to, but the reason we go through this process is that, if we don't yet have the backup key, we want to be able to retroactively decrypt data we've already retrieved.

Once the appropriate keys are decrypted, DPAPI gives us great backwards and forwards decryption of target blobs, but as we've seen, there are a lot of moving pieces and a lot of steps that need to be done manually.

Obligatory Meme for This Section

By now, anyone reading this article has probably guessed that we didn't write it just to complain about our current problems. We've been thinking about these types of things for years and have had a number of different ideas for solutions. Around a year ago, we finally started working on our proposed answer: a system we call Nemesis.

At a high level, Nemesis is an offensive data enrichment pipeline. It exposes an abstracted API that any tool can post to (we're including connectors for several popular C2 platforms) and performs a number of processing and enrichment steps for a variety of data types. Beyond performing automations that help with workflow issues like the ones described in this article, Nemesis lets us centralize all operational data for an engagement in structured, semi-structured, and unstructured forms. Nemesis will help enable the emerging discipline of offensive data analytics, provide structured data for future data science/machine learning (ML) opportunities, provide a method for data-driven operator assistance, and open up possibilities we haven't imagined yet.

If you’re as interested in these types of things as we are, come check us out Wednesday August 9, 2023 from 14:30–16:00 at BlackHat Arsenal Station 2 or Thursday August 10, 2023 from 14:00–15:00 at the SpecterOps BlackHat booth to learn more! We also have a number of posts coming out in the next few weeks that give more details about Nemesis from its architecture, to collection mechanisms, specific enrichments and automations, its analytic engine, and more.

We’ve spent a lot of blood, sweat, and tears into building this project and we’re very excited to share it with everyone. It will be released open-source at https://www.github.com/SpecterOps/Nemesis just before our BlackHat Arsenal presentation.

