Introducing httpXplorer: Simplifying httpX URL Management and Analysis
July 4, 2023 · infosecwriteups.com

httpXplorer is a web-based application designed for efficient management and analysis of results from ProjectDiscovery's httpx tool. It allows users to upload the httpx JSON output file; analyze URLs, status codes, web technologies, and other information; sort URLs by status code; and focus on specific subdomains.

Abid Ahmad

InfoSec Write-ups

httpXplorer — httpX Local Database

Hey there! I’m excited to introduce you to httpXplorer, a powerful web-based application designed to streamline URL management and analysis. httpXplorer is here to make your life easier. In this article, I’ll walk you through an overview of the application, its key features, how to use it, the benefits it offers, and the technologies used in its development.

Overview:
httpXplorer is a web-based application designed for managing and analyzing URLs obtained from ProjectDiscovery's httpx tool. It enables users to upload httpx JSON output files, analyze status codes and web technologies, and explore related information. The application provides sorting functionality for URLs based on their status codes, allowing users to prioritize and focus on specific areas of interest.

Whether you're a penetration tester or bug hunter, httpXplorer provides valuable insights into your target URLs, helping you identify potential issues and optimize your workflow. Its user-friendly interface and intuitive features enable seamless organization, sorting, and extraction of information from your URL data. Whether you have a large collection of URLs or need to make informed decisions based on the data, httpXplorer simplifies the process, enhances your productivity, and gives you a comprehensive understanding of your target.

With httpXplorer, you have a centralized platform to manage and analyze your URL data, saving you time and effort. The application’s powerful features and intuitive interface make it easy for both beginners and experienced professionals to navigate and extract valuable insights from their URL datasets. Whether you are performing security assessments, bug hunting, or optimizing web applications, httpXplorer is your go-to tool for efficient URL management and analysis.

Key Features:

1. Upload and Analysis: Easily upload httpx JSON output files containing URLs and perform comprehensive analysis on their status codes, web technologies, and other relevant information.

2. URL Sorting: Sort the URLs based on their status codes in ascending or descending order.

3. Filter and Copy URLs: Filter and copy a range of URLs by specifying a start and end index, making it convenient to share or export specific sets of URLs.

4. Intuitive User Interface: Enjoy a user-friendly interface that enables smooth navigation, making it effortless to explore and interact with your URL data.

5. Efficient Data Management: Benefit from a well-organized centralized database system that ensures optimal performance and reliability in handling large volumes of data.

6. Flexible Configuration: Customize the application's database name and adapt it to your specific target domain and requirements.

7. Data Update: Easily refresh a target's data. When you upload the latest JSON file, any fields that changed in the new output (status codes, technologies, CDN names, and hosts) are updated by replacing the old data.

How to Use httpXplorer:

1. Install and Launch httpXplorer by following the instructions in the README file on the project’s GitHub repository.

2. Upload your httpx JSON output file containing the URLs you want to analyze. Before uploading, make sure the JSON is valid. By default, httpx's JSON output is not a valid single JSON document because it is missing the enclosing brackets.

So, first fix the file with JSON Fixer, then upload the fixed JSON file to httpXplorer.
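If you prefer to fix the file yourself, the repair is straightforward: httpx writes one JSON object per line, so wrapping those lines in an array produces a valid JSON document. Here is a minimal sketch (the function name is my own, not part of httpXplorer):

```python
import json

def jsonl_to_json_array(in_path, out_path):
    """Wrap newline-delimited httpx output into a valid JSON array."""
    entries = []
    with open(in_path) as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                entries.append(json.loads(line))
    with open(out_path, "w") as f:
        json.dump(entries, f, indent=2)
```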

Upload fixed JSON file

3. Explore the comprehensive analysis results, including status codes, web technologies, and other relevant information.

4. Utilize the sorting options to prioritize specific URLs based on their status codes in ascending or descending order.

Sorting URLs/subdomains based on status codes in asc or desc order

5. Filter and copy URLs by index number. In the example below, I first sorted the URLs in descending order and found that the URLs/subdomains at indexes 1–6 returned 404. To copy those six URLs, enter the numbers in Start Index and End Index, then click Copy.

Sorting desc to visualize 404 subdomains, then copy specific ranges of URLs
Export or paste copied URLs in a file
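The sort-then-slice behaviour can be sketched in a few lines of plain Python. The sample records below are invented for illustration, and the Start/End indexes are treated as 1-based, matching the UI:

```python
# hypothetical records standing in for rows from the httpXplorer database
records = [
    {"url": "https://a.example.com", "status_code": 404},
    {"url": "https://b.example.com", "status_code": 200},
    {"url": "https://c.example.com", "status_code": 404},
]

# sort by status code in descending order, as in the UI
desc = sorted(records, key=lambda r: r["status_code"], reverse=True)

# copy the URLs at 1-based indexes start..end (inclusive)
start, end = 1, 2
selected = [r["url"] for r in desc[start - 1 : end]]
```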

6. If you upload another JSON output file, only the unique data will be stored. If any previous record has changed, the database is updated by replacing the old record with the new data.

7. You can separate databases by target. Just change the database name (e.g. tesla.db) in 'SQLALCHEMY_DATABASE_URI'.

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///tesla.db'
db = SQLAlchemy(app)

# OR,

app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///yahoo.db'

# LIKE THIS

All databases based on specific target

You can find all the databases in the “instance” folder of the application.

By default, the application displays only the STATUS CODE, URL, TECHNOLOGY, CDN, and HOST columns.

If you want to remove a column or include other information, just make a few small changes in "app.py". If you read the code carefully, you will see how to modify it for your specific purpose.

# HERE IS THE EXAMPLE

class Data(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    url = db.Column(db.String(200))
    status_code = db.Column(db.Integer)
    tech = db.Column(db.String(200))
    cdn_name = db.Column(db.String(200))
    host = db.Column(db.String(200))

    # ....

    def __init__(self, url, status_code, tech, cdn_name, host):
        self.url = url
        self.status_code = status_code
        self.tech = tech
        self.cdn_name = cdn_name
        self.host = host

# ....

with app.app_context():
    for entry in data:
        url = entry.get('url', 'NULL')
        status_code = entry.get('status_code', 0)
        tech = entry.get('tech', 'NULL')
        cdn_name = entry.get('cdn_name', 'NULL')
        host = entry.get('host', 'NULL')

        # ....

        if existing_data:
            existing_data.status_code = status_code
            existing_data.tech = tech
            existing_data.cdn_name = cdn_name
            existing_data.host = host
        else:
            new_data = Data(url=url, status_code=status_code, tech=tech, cdn_name=cdn_name, host=host)
            db.session.add(new_data)

# ....
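The update-or-insert logic can also be sketched in plain Python, with a dict keyed by URL standing in for the SQLite table. This is a simplification for illustration, not httpXplorer's actual code:

```python
def upsert(db, entries):
    """Replace existing records or add new ones, keyed by URL."""
    for entry in entries:
        url = entry.get("url", "NULL")
        # replaces the old record if the URL already exists
        db[url] = {
            "status_code": entry.get("status_code", 0),
            "tech": entry.get("tech", "NULL"),
            "cdn_name": entry.get("cdn_name", "NULL"),
            "host": entry.get("host", "NULL"),
        }
    return db

# first upload
db = upsert({}, [{"url": "https://a.example.com", "status_code": 200}])
# second upload: the status code changed, so the record is replaced
db = upsert(db, [{"url": "https://a.example.com", "status_code": 404}])
```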

Technologies Used:

httpXplorer leverages a range of technologies to deliver its powerful features:

  • Front-end: As the front-end of the httpXplorer application, I have created an intuitive and user-friendly web interface using HTML, CSS, and JavaScript. The front-end is responsible for presenting the web pages to the users and handling user interactions. In this system, the front-end primarily consists of the ‘index.html’ file, which displays the table of URLs and provides features like uploading JSON data, sorting URLs, and copying selected URLs to the clipboard. I have utilized the Tailwind CSS framework to style the interface and make it visually appealing. Additionally, I have included some JavaScript code to enable dynamic functionality, such as selecting a range of URLs and copying them.
  • Back-end: As the back-end of the httpXplorer, I have utilized the Flask framework, a Python web framework, to handle server-side operations and manage the communication between the front-end and the database. The back-end primarily consists of the ‘app.py’ file, which defines the Flask application and its routes. When a user uploads a JSON file through the front-end, I receive the file on the back-end and parse its data using the json library. I then store the extracted data in a SQLite database using SQLAlchemy, an Object-Relational Mapping (ORM) library for Python. The back-end also retrieves the stored URLs from the database, performs sorting based on user preferences, and passes the sorted URLs to the front-end for display.
  • Database: For storing and managing the URLs and their associated data, I have integrated a SQLite database into the httpXplorer system. SQLite is a lightweight, serverless database engine that is easy to set up and use. I have defined a ‘Data’ model in the back-end using SQLAlchemy, which represents the structure of the database table. Each URL entry in the table corresponds to a row containing attributes such as the URL itself, status code, technology, CDN name, and host. Whenever a user uploads a JSON file, I parse the data and either update existing entries or create new entries in the database. This allows for efficient storage and retrieval of the URLs as per user requests.
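For readers unfamiliar with the schema, the table described above can be illustrated with the standard-library sqlite3 module. The column names follow the Data model shown earlier; the sample row is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # in-memory stand-in for e.g. tesla.db
conn.execute(
    """CREATE TABLE data (
           id INTEGER PRIMARY KEY,
           url TEXT,
           status_code INTEGER,
           tech TEXT,
           cdn_name TEXT,
           host TEXT)"""
)
conn.execute(
    "INSERT INTO data (url, status_code, tech, cdn_name, host) VALUES (?, ?, ?, ?, ?)",
    ("https://a.example.com", 200, "nginx", "cloudflare", "1.2.3.4"),
)
# retrieve URLs sorted by status code, as the back-end does for display
rows = conn.execute(
    "SELECT url, status_code FROM data ORDER BY status_code DESC"
).fetchall()
```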

Benefits of Using httpXplorer:
1. Efficient Workflow: httpXplorer simplifies the process of managing and analyzing URLs, allowing you to focus on extracting valuable insights and making informed decisions.

2. Time and Effort Savings: By automating the analysis of URL data, httpXplorer eliminates manual and repetitive tasks, saving you time and effort.

3. Actionable Insights: Gain valuable insights into your target URLs, enabling you to identify potential issues, vulnerabilities, or optimization opportunities.

4. User-Friendly Interface: The intuitive interface of httpXplorer ensures a smooth and enjoyable user experience, making it accessible to users of all skill levels.

Wrapping Up:

httpXplorer is your go-to solution for efficient URL management and analysis. Take your URL analysis to the next level with httpXplorer and unlock the full potential of your ProjectDiscovery httpx results. Simplify your workflow, enhance your decision-making process, and stay ahead in the world of web application testing and security. Try httpXplorer today and experience the convenience, efficiency, and effectiveness it brings to your URL management and analysis tasks.

Thanks for reading! If you have any questions or feedback, feel free to reach out.

Github repo link:

https://github.com/Abid-Ahmad/httpXplorer

If you find the GitHub repository helpful and would like to show your appreciation, you can consider giving it a star.

