Credential dumping refers to the systematic extraction of usernames, passwords, and other sensitive information from operating systems by malicious actors. This technique poses a significant threat, particularly within Linux environments, where successful credential extraction can lead to unauthorised access to user accounts, services, and network resources. Understanding the mechanisms of credential storage in Linux, the vulnerabilities that may expose these credentials, and the strategies for safeguarding them are essential for enhancing system security.
This blog aims to elucidate the mechanisms by which Linux systems manage user credentials, the prevalent techniques employed by malicious actors to compromise these credentials, and the strategies for mitigating such security threats.
In a comprehensive document authored by David Howells, the intricacies of credential management within the Linux operating system are meticulously detailed. This document elucidates the foundational role of credentials in enforcing security protocols and delineates how various system objects, including files and processes, adhere to established security frameworks.
Objects and Credentials in Linux Security
Linux operating systems utilise a robust framework for managing security contexts through the concept of “credentials” associated with various entities, such as processes and files. These credentials are integral to defining the system’s security posture and are composed of several key components that collectively determine the capabilities and permissions of tasks within the environment.
Components of Linux Credentials
Subject and Access Controls
Linux employs two primary access control models: Discretionary Access Control (DAC) and Mandatory Access Control (MAC).
Discretionary Access Control (DAC) is implemented through user identifiers (UIDs) and group identifiers (GIDs). Under this model, resource owners have the authority to set permissions on their resources, allowing them to dictate who can access or modify their files.
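The DAC model can be seen directly in how an owner sets mode bits on a file they control; a minimal Python sketch:

```python
import os
import stat
import tempfile

# Under DAC, the file's owner decides who may access it by setting mode bits.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o600)  # owner read/write; no access for group or others
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600
os.remove(path)
```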
Mandatory Access Control (MAC), on the other hand, enforces access policies that are centrally managed and cannot be altered by individual users. This is achieved through systems such as Security-Enhanced Linux (SELinux), AppArmor, and Smack. These MAC systems utilise labels and rules to provide fine-grained control over access permissions.
Role of Labels and Rules
In MAC systems, security labels are assigned to both subjects and objects. These labels serve as identifiers that dictate the level of access each subject has to various objects within the system. For instance:
SELinux employs a label-based approach where security contexts are assigned to files and processes. This allows for detailed policy definitions that specify what actions subjects can perform on objects based on their security labels.
AppArmor, in contrast, utilises a path-based model where profiles define the allowed interactions for specific applications with the filesystem. This approach simplifies the management of security policies while still providing robust protection.
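For example, SELinux file labels can be inspected programmatically through the security.selinux extended attribute (the helper name below is ours, not a standard API; on systems without SELinux the attribute is simply absent):

```python
import os

def selinux_label(path: str):
    """Return a file's SELinux security context, or None when the
    system does not label files."""
    try:
        # SELinux exposes the context via the security.selinux extended attribute
        return os.getxattr(path, "security.selinux").rstrip(b"\x00").decode()
    except OSError:
        return None  # SELinux not in use, or attribute unavailable

print(selinux_label("/etc/passwd"))
```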
Subjective vs. Objective Qualifications
The distinction between subjective and objective qualifications is crucial in understanding how credentials are applied within these access control frameworks:
Subjective credentials refer to those associated with tasks or processes (subjects). These credentials govern what actions a subject can undertake during its execution.
Objective credentials, however, pertain to the attributes of objects such as files. When accessing these objects, the system relies on objective criteria, namely, the established permissions defined by DAC or enforced by MAC policies.
Task Credential Management: Each task has a cred structure in task_struct. Alterations to a task’s credentials require a copy-modify-replace process to maintain immutability and use RCU (Read-Copy-Update) for safe access by other tasks.
Altering and Managing Credentials: A task may alter its own credentials but cannot alter those of another task. Kernel functions such as prepare_creds() and commit_creds() handle these changes, and RCU locking is required when reading another task's credentials.
Open File Credentials: When a file is opened, the task’s credentials at that moment are attached to the file structure, ensuring consistent access permissions.
Overriding VFS Credentials: Under certain scenarios (like core dumps), it’s possible to override the default credential handling in the Virtual File System (VFS) to specify which credentials to apply to actions.
This system of credentials offers Linux a good level of security control, guaranteeing that permissions correspond to user-based and policy-based requirements across the system.
If you want more detail, you can read the full document in the kernel source tree (Documentation/security/credentials.rst).
Credentials in Linux systems come in various forms, each serving a unique purpose:
User Credentials: usernames and passwords (the latter usually stored as hashes) used to authenticate system administrators, application users, and other regular system users.
Service Credentials: Services such as a database, web server, or API use these to validate requests between applications.
SSH Keys: public-private key pairs that enable secure terminal login to systems without a password, suitable for programs that require non-interactive, secure authentication.
Environment Variables: temporary storage locations in the system environment that hold data such as API tokens, database credentials, or other sensitive information.
Every credential type is also accompanied by certain security considerations because some may be accidentally leaked in files, scripts, or logs.
The /etc/shadow file is a critical component of Linux security architecture, designed to securely store hashed passwords and related metadata. This file is accessible exclusively by the root user or processes running with superuser privileges, effectively minimising the risk of unauthorised access. By restricting access to this file, Linux systems enhance their defences against potential intrusions and malicious activities aimed at compromising user credentials.
The /etc/shadow file consists of one line per user account, with nine colon-separated fields:

Login name: the account's username.
Encrypted password: the password hash, prefixed with an identifier for the hashing scheme (e.g., $6$ for SHA-512).
Last change: the date (in days since the epoch) the password was last changed.
Minimum and maximum age: how soon a password may, and must, be changed.
Warning period: how many days before expiry the user is warned.
Inactivity period and expiration date: when the account is disabled.
Reserved: an unused field kept for future use.

This format supports robust system-wide security policies by enforcing password strength requirements and managing reset intervals effectively.
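A minimal sketch of parsing one such line in Python (field names follow shadow(5); the sample entry and helper names are invented for illustration):

```python
# Field layout per shadow(5); the parser and sample entry are illustrative.
FIELDS = ["login", "password", "lastchange", "min", "max",
          "warn", "inactive", "expire", "reserved"]

# Common crypt(3) prefixes identify the hashing scheme in the password field.
PREFIXES = {"$1$": "MD5", "$2b$": "bcrypt", "$5$": "SHA-256",
            "$6$": "SHA-512", "$y$": "yescrypt"}

def parse_shadow_line(line: str) -> dict:
    entry = dict(zip(FIELDS, line.rstrip("\n").split(":")))
    entry["algorithm"] = next((name for prefix, name in PREFIXES.items()
                               if entry["password"].startswith(prefix)),
                              "unknown or locked")
    return entry

entry = parse_shadow_line("alice:$6$examplesalt$examplehash:19700:0:99999:7:::")
print(entry["login"], entry["algorithm"])  # alice SHA-512
```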
Common Vulnerabilities
Despite its security advantages, the /etc/shadow file is not impervious to threats. Attackers may exploit privilege escalation vulnerabilities or other weaknesses to gain access to this file. If successful, they can extract hashed passwords and attempt to crack them offline using various methods, including brute force attacks or pre-computed hash tables (rainbow tables).
To mitigate these risks, it is essential to utilise strong hashing algorithms such as bcrypt or SHA-512. These algorithms significantly increase the complexity of cracking attempts, making it more challenging for attackers to reverse-engineer passwords from their hashes. Regularly updating hashing algorithms and enforcing stringent password policies are critical strategies in safeguarding against unauthorised access.
In summary, while the /etc/shadow file plays a vital role in enhancing Linux system security through controlled access and structured data storage, it remains a target for attackers. Therefore, continuous vigilance and proactive security measures are necessary to protect sensitive user credentials effectively.
Overview of PAM
Programs that grant users access to a system use authentication to verify a user's identity (that is, to establish that a user is who they claim to be).
Historically, each program had its own way of authenticating users. In Linux, many programs are configured to use a centralised authentication mechanism called Pluggable Authentication Modules (PAM).
PAM uses a pluggable, modular architecture, which affords the system administrator a great deal of flexibility in setting authentication policies for the system, and which makes PAM useful to both developers and administrators.
Pluggable Authentication Modules (PAM) is a critical component in Linux systems, responsible for managing user authentication across various services. One of its functionalities includes the caching of credentials in memory during an active user session. This caching mechanism significantly enhances the user experience by allowing access to resources without repeated reauthentication, which is particularly useful in environments where network connectivity may be intermittent.
Despite its advantages, credential caching poses significant security risks if not managed properly. If PAM’s memory cache is not configured to clear immediately after use, sensitive information such as usernames and passwords may remain accessible in system memory. This vulnerability can be exploited by attackers or unauthorised internal users with administrative rights who gain access to memory dumps or can inspect system memory.
Configuration Files
The Pluggable Authentication Modules (PAM) framework is essential for managing authentication in Linux systems. This modular approach allows administrators to customise authentication processes for various services efficiently. Key configuration files within the PAM system include /etc/pam.d/common-auth and /etc/pam.d/common-session, which play crucial roles in controlling authentication methods and managing user sessions, respectively.
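As an illustration, a Debian-style /etc/pam.d/common-auth stack might look like the following (module names, ordering, and arguments vary by distribution; this is a sketch, not a drop-in file):

```
auth    [success=1 default=ignore]      pam_unix.so nullok
auth    requisite                       pam_deny.so
auth    required                        pam_permit.so
```

Here pam_unix.so checks the password against /etc/shadow; on success, the success=1 action skips the pam_deny.so line so the stack ends in pam_permit.so.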
Misconfigured PAM files can lead to severe security vulnerabilities. For example, placing a permissive module such as pam_permit.so as sufficient in an auth stack allows logins without a valid password, while removing a required module can silently disable an entire security check.
Common Applications
Many tools, such as the AWS Command Line Interface (CLI) and Git, store user credentials in plaintext format within user directories. For instance, AWS credentials are typically found in ~/.aws/credentials, while Git credentials may reside in ~/.git-credentials. These files often lack adequate encryption or access restrictions, posing a significant security risk. In contrast, modern web browsers like Chrome and Firefox store credentials—including login data and session tokens—in their respective directories, such as ~/.config/google-chrome and ~/.mozilla/firefox. Although these credentials are encrypted, the encryption is generally tied to the local user account. Consequently, if an attacker gains local access to the user’s profile, they may successfully decrypt and retrieve these stored credentials.
Configuration and Storage Locations
Credential storage locations are typically found within the user’s home directory. If permissions are not configured correctly, these files can be easily accessed by unauthorized users. Implementing restrictive permissions is crucial to prevent unauthorised access to saved credentials, thereby enhancing the overall security posture.
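A quick audit of the locations mentioned above can be sketched in Python (the candidate path list is illustrative):

```python
import os
import stat

# Well-known credential file locations to audit (list is illustrative).
CANDIDATES = ["~/.aws/credentials", "~/.git-credentials", "~/.ssh/id_rsa"]

def world_readable(path: str) -> bool:
    """True if group or other users can read the file."""
    mode = os.stat(os.path.expanduser(path)).st_mode
    return bool(mode & (stat.S_IRGRP | stat.S_IROTH))

for candidate in CANDIDATES:
    try:
        if world_readable(candidate):
            print(f"WARNING: {candidate} is readable by other users")
    except FileNotFoundError:
        pass  # file not present on this machine
```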
The security implications of misconfigured permissions or inadequate encryption can be severe, as sensitive access details may be exposed. High-risk passwords should not be stored in plaintext or easily accessible formats. Instead, they should be secured using encrypted filesystems or specialised secret management tools designed for secure credential storage.
To mitigate risks associated with browser-stored credentials, organisations should prioritise robust detection strategies that focus on credential access. This includes monitoring for suspicious activities that may indicate credential harvesting attempts, particularly from browsers where many users store sensitive information.
Purpose of NSS
The Name Service Switch (NSS) is a critical component in Linux that facilitates the operating system’s ability to retrieve and switch between various information sources, such as user and group information, hostname mapping, and more. By leveraging NSS, system administrators can configure the system to source information according to specific environmental requirements. This may involve querying local file databases (e.g., /etc/passwd, /etc/shadow) or network databases (e.g., LDAP, NIS, Active Directory). The flexibility provided by NSS is particularly beneficial in large or networked environments where centralised user management and data retrieval are essential for operational efficiency.
Configuration Files
The configuration for NSS is primarily managed through the /etc/nsswitch.conf file, which defines the order and sources for data retrieval. This file enumerates various databases (e.g., passwd, shadow, group) alongside one or more sources from which to obtain that information. The sources can include files (local databases such as /etc/passwd), ldap, nis, sss (the SSSD daemon), dns (for host lookups), and compat.
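The nsswitch.conf format is simple enough to parse with a few lines of Python (the snippet below uses a sample string rather than reading the real file):

```python
# Parse an nsswitch.conf-style snippet (sample content, not read from disk).
SAMPLE = """\
# /etc/nsswitch.conf (excerpt)
passwd: files sss
shadow: files
hosts:  files dns
"""

def parse_nsswitch(text: str) -> dict:
    conf = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        database, _, sources = line.partition(":")
        conf[database.strip()] = sources.split()
    return conf

print(parse_nsswitch(SAMPLE)["passwd"])  # ['files', 'sss']
```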
Potential Attack Surfaces
If external data sources aren’t secured, they may allow unauthorised users to retrieve sensitive information. Using TLS for LDAP and limiting the scope of external sources enhances security.
LUKS, or Linux Unified Key Setup, is a sophisticated on-disk format designed for encrypting volumes, primarily within Linux environments. This specification, developed by Clemens Fruhwirth in 2004, serves as a standard for disk encryption, providing robust security features suitable for various applications, including portable storage devices like USB drives.
LUKS employs a unique architecture where metadata is stored at the beginning of the encrypted volume. This metadata includes essential parameters such as:
This metadata structure allows users to avoid memorising complex parameters, facilitating ease of use when deploying LUKS on devices like USB memory sticks.
A key feature of LUKS is its use of a master key, which is encrypted using a hash derived from the user’s passphrase. This design enables multiple passphrases to unlock the same encrypted volume, allowing users to change their passphrases without needing to re-encrypt the entire disk. Each passphrase is associated with a unique key slot that contains a 256-bit salt used in the hashing process. When a user enters a passphrase, LUKS combines it with each salt, hashes the result, and checks against stored values to derive the master key.
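The key-slot mechanism can be sketched conceptually (this illustrates the idea only: real LUKS wraps the master key with proper encryption and anti-forensic splitting, not the bare XOR used here):

```python
import hashlib
import os

# Conceptual key-slot sketch: one master key, wrapped once per passphrase.
master_key = os.urandom(32)

def make_slot(passphrase: str) -> dict:
    salt = os.urandom(32)  # per-slot 256-bit salt, as in LUKS1
    kek = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    # Stand-in for real encryption of the master key with the derived key
    wrapped = bytes(a ^ b for a, b in zip(master_key, kek))
    return {"salt": salt, "wrapped": wrapped}

def open_slot(slot: dict, passphrase: str) -> bytes:
    kek = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), slot["salt"], 100_000)
    return bytes(a ^ b for a, b in zip(slot["wrapped"], kek))

# Two passphrases unlock the same volume; either can be changed independently
# without re-encrypting the data.
slots = [make_slot("correct horse"), make_slot("battery staple")]
print(open_slot(slots[0], "correct horse") == master_key)  # True
print(open_slot(slots[1], "battery staple") == master_key)  # True
```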
LUKS has evolved over time, with two primary versions: LUKS1 and LUKS2. LUKS2 introduces several enhancements over its predecessor, including a redundant metadata header for corruption resilience, support for the memory-hard Argon2 key derivation function, and a token concept for integrating external key sources.
LUKS is particularly well-suited for encrypting entire block devices, making it an ideal choice for protecting sensitive data on mobile devices and removable storage media. Its ability to encrypt arbitrary data formats allows it to be utilised effectively for swap partitions and databases that require secure storage solutions.
Usage of Environment Variables
Environment variables are essential components in modern software development, commonly utilised to store sensitive information such as API tokens, database credentials, and configuration secrets. They provide a flexible way to manage configuration settings across different environments (development, testing, production) without hardcoding sensitive data directly into the source code. However, the convenience of environment variables comes with significant risks, particularly when processes and users can access these variables without appropriate safeguards.
Risks and Best Practices
While environment variables offer a straightforward solution for managing configuration data, they inherently lack fine-grained access control. This deficiency can lead to potential exposure of sensitive information if not properly secured. For instance, if a malicious actor gains access to a system where environment variables are accessible, they could retrieve critical secrets that can compromise the entire application or infrastructure. Furthermore, environment variables can be inadvertently logged or exposed through error messages, increasing the risk of data leaks.
To mitigate the risks associated with environment variables, organisations should consider adopting purpose-built secrets management solutions. These provide security features that environment variables lack: centralised access control, audit logging of every secret access, automatic credential rotation, and encryption of secrets at rest.
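A quick heuristic sweep of the current environment shows how easily such variables are enumerated (the name patterns and the planted demo variable are assumptions for illustration):

```python
import os
import re

# Variable-name patterns that commonly indicate secrets (illustrative list).
SECRET_PATTERN = re.compile(r"TOKEN|SECRET|PASSWORD|API_KEY", re.IGNORECASE)

def suspicious_env_vars(environ=os.environ):
    """Return the names of environment variables that look like secrets."""
    return sorted(name for name in environ if SECRET_PATTERN.search(name))

os.environ["DEMO_API_KEY"] = "not-a-real-key"  # planted for the demo
print(suspicious_env_vars())
```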
In Linux systems that are integrated into enterprise identity infrastructures such as Active Directory, tools like SSSD and Quest Authentication Services (VAS/One Identity) are used to provide seamless authentication via centralized directories. One of their core capabilities is credential caching, which allows a user to authenticate using previously validated credentials even if the machine becomes disconnected from the domain controller. This enables persistent access for mobile or remote systems.
Usage and Functionality
When a user logs in successfully to a system that uses SSSD or Quest, the authentication service caches the user’s credentials locally in an encrypted or obfuscated format. This cache is then referenced in subsequent login attempts, especially when the system is offline.
Both tools store cached password verifiers (hashes), Kerberos tickets, and user and group identity information. SSSD, for example, keeps its cache in databases under /var/lib/sss/db/.
Best Practices for Secure Credential Caching
While credential caching improves usability, it introduces potential risks if not properly secured. Best practices include limiting how long offline logins remain valid (for SSSD, the offline_credentials_expiration option in sssd.conf), restricting filesystem permissions on the cache files, and monitoring for unexpected offline authentications.
Overview of LDAP
The Lightweight Directory Access Protocol (LDAP) is a robust, vendor-neutral application protocol designed for accessing and managing directory services over an Internet Protocol (IP) network. LDAP provides a structured and hierarchical framework for storing, retrieving, adding, updating, and deleting user, group, and resource information, making it essential for identity and access management within organisations. Its design facilitates efficient data retrieval and management through a tree-like structure known as the Directory Information Tree (DIT), which organises data intuitively for both users and applications.
Key Features of LDAP
Credential Storage
LDAP serves as a secure repository for user identity information, allowing only authorised applications to access authentication details. This security is paramount; however, misconfigurations—such as the absence of encryption or inadequate access controls—can expose credential information, rendering the system vulnerable to security breaches. Implementing stringent access controls and encryption protocols is critical to maintaining the integrity of stored credentials.
Common Misconfigurations
LDAP Architecture
Understanding the architecture of LDAP is crucial for leveraging its capabilities effectively. The architecture consists of several models: the information model (entries and their attributes), the naming model (the DIT and distinguished names), the functional model (the operations clients may perform), and the security model (authentication and access control).
LDAP Operations
LDAP supports a variety of operations that facilitate interaction with the directory service: bind (authenticate), search, compare, add, modify, delete, and unbind.
Core Concepts of Kerberos
Kerberos is a sophisticated network authentication protocol that enables secure user access to multiple services without the need for repeated password entries. Developed at MIT, it employs a unique ticket-based system that enhances security by eliminating the transmission of passwords over the network.
Kerberos operates through several key entities: the client (a user or service principal), the Key Distribution Centre (KDC), which combines an Authentication Server (AS) and a Ticket Granting Server (TGS), and the application servers that accept Kerberos tickets.
Ticket Granting Tickets (TGT) in Kerberos Authentication
In the realm of network security, Ticket Granting Tickets (TGT) play a pivotal role within the Kerberos authentication framework. Upon initial authentication, users are issued a TGT by the Key Distribution Centre (KDC), which serves as a credential to request access to various services without the need for repeated credential input. This mechanism not only enhances user convenience but also bolsters security by minimising the frequency with which sensitive credentials are transmitted over the network.
TGTs are typically stored in secure locations, such as /tmp/krb5cc_* on Unix systems, facilitating quick access to services. Users can specify the location of their TGT cache using the KRB5CCNAME environment variable. For an in-depth understanding, resources such as "Kerberos Tickets on Linux Red Teams" by Mandiant provide valuable insights into TGT management and usage in practical scenarios.
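The cache-resolution logic described above can be sketched as follows (the /tmp/krb5cc_&lt;uid&gt; fallback mirrors a common MIT krb5 default; some distributions use kernel keyrings or /run/user caches instead, so treat the fallback path as an assumption):

```python
import os

def default_ccache() -> str:
    """Resolve the Kerberos credential cache: KRB5CCNAME wins,
    otherwise fall back to the conventional /tmp/krb5cc_<uid> path."""
    return os.environ.get("KRB5CCNAME", f"/tmp/krb5cc_{os.getuid()}")

print(default_ccache())
```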
As organisations increasingly prioritise cybersecurity, transitioning from the outdated RC4 encryption to the more robust AES (Advanced Encryption Standard) for Kerberos authentication has become essential. This shift not only enhances security but also mitigates the risk of kerberoasting attacks, where attackers exploit weak encryption methods to extract service tickets and crack passwords offline.
Kerberoasting is a technique used by attackers to exploit service accounts in Active Directory (AD) that utilize weak encryption types, particularly RC4. The inherent weaknesses in RC4 make it susceptible to various attacks, allowing malicious actors to obtain service tickets and attempt to crack the associated passwords. By transitioning to AES, organisations can significantly reduce their exposure to these vulnerabilities.
A typical migration from RC4 to AES proceeds through the following steps:

Assess current encryption settings: inventory which accounts and services still negotiate RC4, for example by reviewing the msDS-SupportedEncryptionTypes attribute on service accounts.
Modify Active Directory settings: enable AES128/AES256 on the affected user and computer accounts.
Group Policy configuration: set "Network security: Configure encryption types allowed for Kerberos" to permit only AES types.
Monitor Kerberos ticket requests: review ticket-granting events (event ID 4769 on domain controllers) for tickets still issued with RC4 (encryption type 0x17).
Registry key adjustments: where necessary, align the SupportedEncryptionTypes registry value on member systems with the new policy.
Testing and validation: confirm that all services continue to authenticate once RC4 is disabled.
Gradual rollout: phase the change across the estate to catch legacy systems that support only RC4.
Documentation and training: record the new baseline and brief administrators on the change.
Overview of the /proc
The /proc filesystem serves as a vital virtual interface within Unix-like operating systems, providing comprehensive insights into the state of running processes. It functions as a dynamic representation of kernel data, offering real-time information on various aspects such as process environment variables, memory utilisation, open file descriptors, and command-line arguments. Each active process is represented by a unique directory within /proc, identified by its Process ID (PID), formatted as /proc/<PID>. This hierarchical structure enables system administrators and applications to access process-specific data efficiently, facilitating resource management and performance monitoring. Consequently, the /proc filesystem is an indispensable tool for system diagnostics and introspection, acting as a control centre for kernel interactions and process management.
Security Concerns
Despite its utility, the /proc filesystem presents significant security challenges. Sensitive information may be exposed to unauthorised users if proper permissions are not enforced. The detailed data contained within /proc—such as environment variables, memory maps, open files, and command-line arguments—can inadvertently reveal critical credentials like API keys, tokens, and passwords stored in environment variables. Notably, files located under /proc/<PID>/, such as environ and cmdline, can disclose these sensitive details. This vulnerability makes the /proc filesystem a potential target for attackers seeking to extract confidential information from running processes.
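For instance, a process's initial environment can be recovered from /proc/&lt;PID&gt;/environ, where entries are NUL-separated KEY=VALUE pairs (a sketch; reading another user's process requires matching privileges, so the demo targets our own PID):

```python
import os

def read_environ(pid) -> dict:
    """Parse /proc/<pid>/environ: NUL-separated KEY=VALUE pairs
    captured at process start-up."""
    with open(f"/proc/{pid}/environ", "rb") as f:
        raw = f.read()
    pairs = (item.partition(b"=") for item in raw.split(b"\x00") if item)
    return {k.decode(errors="replace"): v.decode(errors="replace")
            for k, _, v in pairs}

env = read_environ(os.getpid())  # our own process
print(len(env), "environment variables visible")
```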
Credential dumping is a significant cybersecurity threat where attackers extract authentication credentials from operating systems. This technique is essential for lateral movement within networks and can lead to severe breaches if not adequately mitigated. Below are key methods employed in credential dumping, particularly on Linux systems.
Attacker Perspective: Discovery and Exploitation
From an attacker’s standpoint, cached credentials present an attractive post-exploitation target—especially in environments where network authentication is unavailable.
Discovery Techniques
Abuse Techniques
Persistence
Attackers can tamper with the cache to extend the lifetime of cached credentials, insert rogue entries that permit offline logins, or retain access after a legitimate password change.
With elevated privileges, attackers can access the /etc/shadow file, which securely stores hashed passwords for all system users. By extracting these hashes, adversaries can utilise sophisticated cracking tools such as John the Ripper or Hashcat to perform offline attacks. These attacks allow them to guess passwords without alerting the system administrators, employing various techniques, including:
Brute-force attacks: Systematically trying every possible password combination.
Dictionary attacks: Using a pre-defined list of likely passwords.
Rainbow tables: Precomputed tables for reversing cryptographic hash functions.
These methods are particularly effective against weak or commonly used passwords, making it crucial for organisations to enforce strong password policies and regular audits of their user accounts.
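The guess-hash-compare loop behind these attacks can be sketched as follows (plain SHA-512 is used purely for illustration; real shadow hashes use the crypt(3) $6$ scheme, which John the Ripper and Hashcat implement, and the sample password and wordlist are invented):

```python
import hashlib

# Simplified offline dictionary attack against a salted hash.
def digest(password: str, salt: str) -> str:
    return hashlib.sha512((salt + password).encode()).hexdigest()

salt = "examplesalt"
stolen_hash = digest("sunshine", salt)  # stands in for a dumped hash

wordlist = ["123456", "password", "letmein", "sunshine", "qwerty"]
cracked = next((word for word in wordlist
                if digest(word, salt) == stolen_hash), None)
print(cracked)  # sunshine
```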
Attackers can exploit files within the /proc filesystem, such as /proc/<PID>/mem and /proc/<PID>/environ, to read sensitive information, including command-line arguments and environment variables. By leveraging debugging tools like gdb or ptrace, they can attach to running processes and dump their memory. This technique can expose sensitive data temporarily held in RAM, such as plaintext passwords.
One notable tool used in this context is MimiPenguin, which specifically targets clear-text credentials stored in memory. It operates by dumping a process’s memory and scanning for lines that may contain clear-text credentials. MimiPenguin employs a statistical approach to assess the likelihood of each word being a valid credential by comparing hashes found in /etc/shadow, memory contents, and utilising regex searches. When potential matches are identified, it outputs them directly to the standard output stream.
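The underlying primitive, scanning a process's readable memory for byte patterns, can be sketched against our own process (a simplified illustration, not MimiPenguin's actual implementation):

```python
import re

def find_in_own_memory(needle: bytes) -> bool:
    """Scan this process's readable memory regions for a byte pattern,
    using /proc/self/maps to find regions and /proc/self/mem to read them."""
    with open("/proc/self/maps") as maps, \
         open("/proc/self/mem", "rb", buffering=0) as mem:
        for line in maps:
            m = re.match(r"([0-9a-f]+)-([0-9a-f]+)\s+r", line)
            if not m:
                continue  # skip regions without read permission
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            if end - start > 64 * 1024 * 1024:
                continue  # keep the sketch fast on very large mappings
            try:
                mem.seek(start)
                if needle in mem.read(end - start):
                    return True
            except (OSError, ValueError, OverflowError):
                continue  # some special regions (e.g. [vvar]) refuse reads
    return False

marker = b"fake-plaintext-credential-123"
print(find_in_own_memory(marker))  # the literal itself lives in our heap
```

Scanning another process's memory the same way requires ptrace-level privileges over that process, which is why these attacks typically follow privilege escalation.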
The compromise of SSH keys and browser credentials represents a significant threat to system security. When attackers gain access to a user’s home directory, they can extract SSH keys stored in the ~/.ssh/ directory. These keys allow unauthorised access to other systems without the need for password authentication, facilitating lateral movement within networks. Attackers can utilise these stolen keys to impersonate users and perform actions that could lead to severe data breaches or system compromises.
In addition to SSH keys, attackers often target browser directories such as ~/.mozilla for Firefox or ~/.config/google-chrome for Chrome. These directories may contain sensitive information, including credentials, cookies, and session tokens. By accessing these files without permission, attackers can hijack active sessions or extract login credentials for various web applications. This capability underscores the importance of securing browser data as part of a comprehensive credential dumping strategy.
Kerberos tickets, especially Ticket Granting Tickets (TGTs), are often cached locally to support offline authentication or reduce authentication overhead. These cached tickets can be harvested and replayed by an attacker to impersonate a legitimate user without knowing their password.
Discovery Techniques
Run klist to view active tickets.
Check /tmp/krb5cc_* or /var/tmp/krb5cc_* for cached ticket files.
Inspect /etc/krb5.conf for cache-related settings.

From an attacker’s perspective, environment variables can be a rich source of sensitive data, especially in environments where credentials are injected into processes for application or automation purposes.
Discovery Techniques
Use printenv or env, or read /proc/<PID>/environ and replace its null bytes with newlines.

To mitigate the risks associated with SSH key and browser credential extraction, organisations should implement robust security measures: protect private keys with passphrases, enforce restrictive permissions on ~/.ssh and browser profile directories, prefer OS keychains or dedicated secret stores over plaintext files, and monitor access to credential files.
By adopting these strategies, organisations can significantly reduce their vulnerability to attacks that exploit SSH keys and browser credentials.