Postman Secret Scanning: A Practical Guide to Finding Exposed APIs
2026-4-19 03:16:53 · Source: infosecwriteups.com

Dzianis Skliar

One public Postman workspace exposed the full chatbot infrastructure of a government service — endpoints, authentication flows, live tokens. No exploit needed. Just a search query.


Cover illustration for Postman Secret Scanning: A Practical Guide to Finding Exposed APIs

What this guide covers: how Postman public search indexes workspaces and where the exposure surface actually lives; a three-phase methodology for building your own queries, attributing anonymous workspaces, and validating operational status; five search queries targeting identity providers and cloud infrastructure, each with a real-world finding; a framework for analyzing what a matching workspace reveals; and two case studies demonstrating vendor exposure and direct credential exposure in government infrastructure.

Postman is an API development platform. Developers use it to build, test, and document APIs — collections of requests that hit real endpoints, with real authentication, against real backend services. It’s where API work actually happens before it becomes production code.

Over the past few years, Postman has added collaboration features, including the ability for teams to share workspaces. Workspaces and collections can be public.

The tool evolved from a local testing utility into a collaboration platform used by millions of developers every day, and this is where the security problem starts.

Postman’s main purpose is to make API work convenient. The tool has built-in functionality to store environments, authentication patterns, reusable tokens, and example requests that actually execute. Every feature that makes API development faster — environment variables, pre-request scripts, saved authorisation, collection-level auth — also creates a place where credentials live, not as a bug, but as a feature that makes developers’ work easier.

When a workspace becomes public, all of that material becomes indexed and searchable. Not just the code, as on GitHub, but the full operational context of how an API works: which endpoints it hits, how it authenticates, what headers it sends, which services it integrates with, and sometimes the credentials themselves.

This shifts how secret scanning in Postman should work.

On GitHub, secret scanning is pattern-based. Tools look for string patterns that match known token formats, such as AWS keys, Stripe tokens, GitHub PATs, and private keys. If a developer commits a secret, the scanner catches the string. That approach works because the attack surface is the string itself: a leaked credential is a leaked credential regardless of what surrounds it.

Postman is different. A workspace is not a file with secrets scattered inside. It’s a structured representation of how an API works — with endpoints, authentication flows, environment variables, pre-request scripts, and example requests all linked together by design. The value isn’t in isolated strings. The value is in the relationships between them. APIs are the backbone of modern applications, and Postman is where that backbone gets documented, tested, and — too often — published.

Generic keyword searches (“password”, “api_key”, “secret”) will find some things. They catch the obvious cases, where a developer typed a credential directly into a variable name or a request body, but they miss the real value of Postman as a reconnaissance source.

What Postman actually exposes is the architecture of API development. A collection with login.microsoftonline.com in the auth flow reveals which tenant a company uses, what scopes they request, and often which internal services consume those tokens - the same Microsoft Graph API attack surface that defenders routinely underestimate. A collection hitting sts.amazonaws.com shows how the organisation structures its federated access and cross-account roles. A collection with an atlassian.net URL exposes a real Atlassian instance and its API token patterns. These are service URLs that act as hidden gateways into the attack surface, and Postman packages them alongside the authentication context needed to use them.

None of these is a “secret” in the classical sense. But each one reveals the shape of how a system works — the kind of information that would take an attacker weeks of reconnaissance to build, now available through a search query. In one case, a single Postman workspace was enough to go from public documentation to Power BI data exfiltration — not through pattern-matched secrets, but through the architectural exposure the collection revealed.

To find what matters in Postman, the queries need to match the shape of API work, not the shape of leaked strings. That means searching for platforms, authentication endpoints, and integration patterns — the artefacts that API collections are built around. Credentials come with the territory: almost every collection contains them in one form or another — client IDs and secrets, bearer tokens, API keys in environment variables, OAuth flows with real values. What makes Postman valuable as a reconnaissance source isn’t credentials versus architecture — it’s that both appear together, in the same place, with the operational context needed to use them.

Why Postman Search Works


Comparison diagram: GitHub uses pattern-based scanning to find isolated leaked credentials while Postman context-based scanning exposes credentials alongside endpoints and authentication flows as a full attack roadmap

Postman offers public search via the API Network — a built-in discovery layer that indexes workspaces, collections, and requests with public visibility. The search bar accepts keywords and returns matching results across every public workspace on the platform. Recognising this is the first step toward preventing unintended exposure.

What the search actually indexes: workspace names, collection names, folder structure, request names, URLs, and descriptions — that’s the visible surface. Everything else — authorisation headers, environment variables, pre-request scripts, test code, example responses — becomes accessible the moment a result is clicked. What gets indexed is only the doorway; what lies behind it is the actual exposure.

This creates an important asymmetry. The search query matches on the surface layer, but the attacker reads the full depth once a workspace is opened. A query for a specific platform URL doesn’t just find collections that mention the platform. It opens the door to every request, every environment, and every hardcoded value in the collection.
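The asymmetry can be made concrete. Below is a minimal sketch of the doorway-versus-depth split; the field names are illustrative, not Postman's actual export schema:

```python
# Sketch of the search asymmetry: which fields of a (hypothetical)
# collection export are matched by public search, versus what becomes
# readable once a result is opened. Field names are illustrative.

INDEXED = {"workspace_name", "collection_name", "request_name", "url", "description"}

def exposure_surface(collection: dict) -> dict:
    """Split a collection's fields into the searchable doorway and
    the material exposed once the workspace is opened."""
    doorway = {k: v for k, v in collection.items() if k in INDEXED}
    depth = {k: v for k, v in collection.items() if k not in INDEXED}
    return {"indexed": doorway, "exposed_on_open": depth}

example = {
    "collection_name": "Payments API",
    "url": "https://login.microsoftonline.com/{{tenant}}/oauth2/v2.0/token",
    "auth_header": "Bearer placeholder-token",    # never indexed, fully readable
    "environment": {"client_secret": "placeholder"},  # never indexed, fully readable
}

surface = exposure_surface(example)
```

The query only has to match one line of `indexed`; everything in `exposed_on_open` comes along for free.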

Developers often assume a distinction between a workspace being publicly accessible and one that appears in search indexes. Postman treats these as the same by design: public means discoverable. Teams that assume “shared with the link” is different from “publicly indexed” end up exposing data they never intended to share; understanding that the two are identical in Postman is what lets them control data exposure.

How to Approach the Search

The five queries in this guide are worked examples. The methodology behind them is reusable. Knowing where those queries came from, how to attribute what they surface, and how to validate what was found is what turns a one-time read of this article into a repeatable workflow.


Three-phase methodology pipeline: Build queries from platform documentation and service URLs, attribute workspaces through tenant subdomains and email domains, and validate operational status through response timestamps and token lifetime

Building the query list

Postman search queries don’t come from guessing. They come from API reconnaissance — the same process that produces every other reconnaissance artefact. The pipeline is straightforward:

Start with the platform. Identify a target — an identity provider, a cloud service, a SaaS vendor, an internal API. Read its documentation. Note the authentication endpoints, the API base URLs, the token issuance paths, and any region-specific subdomains. These are service URLs — the hidden gateways in an attack surface — and the process of cataloguing them produces candidates for Postman search.

Test each URL as a query. A useful Postman query returns results, but not too many. login.microsoftonline.com returns thousands of public workspaces - usable as a starting point. oauth2/v2.0/token returns millions - too generic to be useful. *.preprod.oktapreview.com returns a narrow set - high signal. The calibration comes from running candidates and observing what surfaces. Queries that return 10 to 200 results are typically the productive range: specific enough that every result is worth reading, broad enough to surface variety.
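The calibration step described above is mechanical enough to script. A sketch, assuming you have recorded result counts from manual searches (the counts below are illustrative, not measured):

```python
def classify_query(query: str, result_count: int,
                   lo: int = 10, hi: int = 200) -> str:
    """Triage a candidate Postman search query by result volume.
    10-200 results is the productive range described in the text."""
    if result_count == 0:
        return "dead"          # endpoint never appears in public workspaces
    if result_count < lo:
        return "narrow"        # very high signal, but check for false precision
    if result_count <= hi:
        return "productive"    # every result is worth reading
    return "too_generic"       # refine with a subdomain or path segment

# Illustrative counts for three candidate queries:
candidates = {
    "oauth2/v2.0/token": 1_500_000,
    "*.preprod.oktapreview.com": 35,
    "acme-internal-api.example.com": 0,
}
triage = {q: classify_query(q, n) for q, n in candidates.items()}
```

Queries classed as `too_generic` are still useful as starting points: appending a tenant subdomain or a platform-specific path usually pulls them into the productive range.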

Not every service URL becomes a good query. Endpoints with high operational volume often don’t appear in public workspaces because they’re wrapped in authentication that makes them less useful as standalone examples. The queries that work are the ones that represent integration work — token endpoints, admin APIs, federation endpoints, platform-specific paths. These are what developers document in Postman, which is why they surface in Postman search.

From finding to attribution

A matching workspace is anonymous by default. Workspace names are often generic — Test, API Integration, New Workspace. Collection names are sometimes more revealing but frequently internal shorthand. Attribution - determining which organisation a workspace belongs to - is a separate analytical step, and it's the step that turns a finding from curiosity into an actionable artefact.

Several signals help attribute a workspace:

Tenant subdomains. The most direct signal. A URL like acme-corp.preprod.oktapreview.com or acmecorp.my.salesforce.com names the organisation in the subdomain itself. Microsoft tenant IDs don't reveal names directly, but they can be resolved to tenant names through public endpoints like login.microsoftonline.com/{tenant-id}/.well-known/openid-configuration - which returns the tenant's primary domain.

Email domains in example responses. Saved example responses frequently contain real user records captured during testing, and the email addresses in those records directly identify the organisation. When email domains use .org.uk, .gov.au, or similar country-specific TLDs, they further narrow the organisation type.

Internal hostnames in environment variables. Variables like {{internal_api_url}}, {{backend_host}}, or {{admin_dashboard}} often contain real internal hostnames. A value like api.internal.acme.com names the organisation and reveals their internal naming conventions.

Headers in request examples. Origin, Referer, and custom headers like X-Tenant-ID, X-Customer-Name preserve attribution data from when the request was last executed. These are often overlooked by developers because they aren't part of the request body or URL.

Resource identifiers. Azure resource UUIDs, AWS account numbers, GCP project IDs are unique to an organisation’s cloud infrastructure. Resource UUIDs can sometimes be resolved to tenant names through Microsoft Graph or similar metadata APIs, and they can be searched in other contexts — GitHub, Stack Overflow posts, support forum mentions — to confirm attribution.

Workspace metadata. The workspace’s owner account, its public profile (if any), the creation timestamps, and the patterns of activity can narrow attribution further. A workspace created by a user account with a corporate email in the display name, or linked to a team with a named profile, is self-attributing.

Attribution doesn’t always produce a definitive answer. Sometimes a workspace reveals enough context to strongly suggest an organisation without confirming it. That ambiguity is part of the work. For reporting purposes, attribution confidence matters: a finding reported as “belongs to Acme Corp” needs stronger evidence than a finding described as “appears consistent with a financial services organisation of enterprise scale”.
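The subdomain and email-domain signals lend themselves to automation. A minimal sketch that scans a raw workspace export with regular expressions; the patterns cover only a few of the platforms discussed here, and the example dump is fabricated:

```python
import re

# Tenant-bearing hostnames for a handful of the platforms covered below.
TENANT_HOSTS = re.compile(
    r"\b([a-z0-9-]+)\.(?:preprod\.oktapreview\.com|oktapreview\.com|"
    r"my\.salesforce\.com|auth0\.com)\b", re.I)
EMAIL_DOMAINS = re.compile(r"[\w.+-]+@([\w-]+\.[\w.-]+)")

def attribution_signals(raw: str) -> dict:
    """Pull tenant subdomains and email domains out of a workspace dump."""
    return {
        "tenant_subdomains": sorted(set(TENANT_HOSTS.findall(raw))),
        "email_domains": sorted(set(EMAIL_DOMAINS.findall(raw))),
    }

# Fabricated workspace export fragment:
dump = """
POST https://acme-corp.preprod.oktapreview.com/oauth2/default/v1/token
example response: {"email": "j.smith@acme-corp.com"}
"""
signals = attribution_signals(dump)
```

Regex extraction only produces candidates; the confidence-weighting step described above still has to be done by a human reading the workspace.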

Validating operational status

A workspace can be anywhere on the spectrum from active to abandoned. A finding against an abandoned workspace is still reconnaissance-valuable — the architecture, the integration patterns, the attribution — but the severity profile is different from a workspace whose credentials still issue live tokens.

Several signals indicate operational status:

Example response timestamps. If example responses contain created_at, updated_at, issued_at, or similar timestamps, those timestamps reveal when the request was last executed. Timestamps from the past week or month indicate active use. Timestamps from two years ago suggest an abandoned collection.

Token expires_in and iat claims. An OAuth token response with expires_in: 86400 and an iat claim from yesterday means a fresh 24-hour token was just issued. Tokens issued against test endpoints recently indicate the endpoints are still live and the credentials are still valid.

Workspace activity patterns. Postman surfaces last-updated dates on workspaces and collections. A workspace updated this week is actively maintained. One that hasn’t been touched since 2022 is likely abandoned — though abandoned workspaces can still leak material that remains valid, because credentials outlive the collections that document them.

Fork and clone counts. Popular public workspaces get forked. High fork counts on an API collection indicate it’s being used as a reference by many developers, which correlates with active maintenance.

Validation matters because it shapes what a finding means. A live, operational workspace with fresh tokens is a different severity profile from an archived workspace that documents historical infrastructure. Both can be meaningful findings, but for different reasons and with different follow-up actions.
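The iat/expires_in check needs no library, because a JWT's payload is just base64url-encoded JSON. A sketch, using a hand-built token rather than a real credential:

```python
import base64, json, time

def jwt_claims(token: str) -> dict:
    """Decode a JWT's payload WITHOUT verifying the signature --
    enough to read iat/exp for freshness triage, nothing more."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)   # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

def is_fresh(claims, now=None):
    """A token whose exp is still in the future indicates live issuance."""
    now = time.time() if now is None else now
    return claims.get("exp", 0) > now

# Hand-built example token (header.payload.signature, signature faked):
body = base64.urlsafe_b64encode(
    json.dumps({"iat": 1700000000, "exp": 1700086400}).encode()
).rstrip(b"=").decode()
fake = f"eyJhbGciOiJub25lIn0.{body}.sig"

claims = jwt_claims(fake)
```

An `exp - iat` of 86400 is the 24-hour lifetime pattern that recurs in the Okta example later in this guide; an `iat` within the last day or week is the strongest single signal that a workspace is operational.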

Category 1: Identity Providers

Identity providers are the spine of modern authentication. Almost every API call in an enterprise environment starts with a token issued by Microsoft Entra ID, Okta, Salesforce, Auth0, or a peer identity provider. Developers working with these platforms inevitably end up with Postman collections that document OAuth flows, token exchanges, and refresh mechanisms — because that’s where authentication complexity gets worked out, and Postman is where developers work it out.

The queries in this category don’t target credentials directly. They target the identity provider’s authentication endpoints — URLs that appear in OAuth flows, token requests, and SAML configurations. When a collection references login.microsoftonline.com in its requests, it is almost certainly handling Microsoft authentication and almost certainly contains the tenant ID, client ID, and, often, the client secret or refresh token needed to replay the flow.

4 queries, 4 platforms, one pattern: find the auth endpoint, and the operational context around it comes with the result.

login.microsoftonline.com

login.microsoftonline.com is the authentication endpoint for Microsoft Entra ID (formerly Azure AD) — the identity layer behind Microsoft 365, Azure, Dynamics, Power Platform, and the entire Microsoft enterprise ecosystem. Any application that integrates with Microsoft services — from a simple OAuth login to a complex Graph API pipeline — talks to this endpoint to obtain tokens.

That makes login.microsoftonline.com one of the highest-value search queries in Postman. If an organisation uses Microsoft 365 — and most enterprises do — their developer team almost certainly has at least one Postman collection that references this endpoint.

What you’ll find in matching collections

Tenant IDs — the GUID that identifies the organisation’s Entra ID tenant. Combined with other data, the tenant ID reveals which company a collection belongs to, even if workspace names are generic

Client IDs — the GUID of the registered application within the tenant. Usually hardcoded in URLs or request bodies

Client secrets — the application’s secret credential. Frequently found in environment variables, collection-level auth configurations, or directly in request bodies for client credentials grant flows

Refresh tokens — long-lived tokens obtained during OAuth flows, often captured in example responses or saved in environment variables for convenience during development

Scopes — the permissions the application requests. Common values include Graph API scopes (Mail.Read, User.Read.All, Sites.FullControl.All), Azure Management scopes (https://management.azure.com/.default), and custom API scopes tied to internal services

What to look for in results

OAuth flow type — client credentials grant, authorisation code, refresh token grant, device code. Each type exposes different material. Client credentials grants almost always include a client secret directly in the request. Refresh token flows expose long-lived tokens that may still be valid

Environment labels — UAT, dev, staging, prod. Non-production environments often contain credentials that were never rotated because “it’s just dev” — but those credentials frequently point to production-adjacent services

Scope breadth — wide scopes like .default or scopes spanning multiple resource types indicate high-value application registrations. Narrow, specific scopes suggest the collection was built for a single integration

Token lifetimes: Microsoft access tokens are short-lived (1-hour default), but refresh tokens can last up to 90 days. If a refresh token appears in an environment variable with a recent timestamp, it may still be usable

Real-world impact

A Postman collection exposing Microsoft Entra ID authentication material doesn’t just give an attacker a credential — it gives them the full path to use it. The collection documents which endpoint to call, which scopes to request, which tenant to target, and which application identity to impersonate. With a valid client ID and client secret for a client credentials grant flow, an attacker can obtain access tokens for the same scopes the legitimate application uses, without any user interaction. Depending on the scopes, this can mean reading mailboxes, accessing SharePoint sites, modifying Azure resources, or pivoting across the Microsoft ecosystem.
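The flow an exposed collection documents can be sketched in a few lines. The endpoint shape and parameter names follow Microsoft's documented OAuth2 client credentials grant; the GUIDs and secret are placeholders, and nothing is sent here — the sketch only constructs the request:

```python
from urllib.parse import urlencode

def build_token_request(tenant_id: str, client_id: str, client_secret: str,
                        scope: str = "https://graph.microsoft.com/.default"):
    """Assemble the v2.0 client credentials token request as (url, body)."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urlencode({
        "grant_type": "client_credentials",  # no user interaction needed
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    })
    return url, body

url, body = build_token_request(
    "00000000-0000-0000-0000-000000000000",  # tenant ID (placeholder)
    "11111111-1111-1111-1111-111111111111",  # client ID (placeholder)
    "placeholder-secret",
)
# POSTing `body` to `url` with real values returns an access_token for the
# requested scopes -- which is exactly what a public collection hands over.
```

Every value in that request is exactly what the matching collections expose: the tenant ID in the URL, the client ID and secret in the body, and the scope that determines the blast radius.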

Example

During a secret scanning session, a query for login.microsoftonline.com surfaced a public Postman workspace belonging to an FMCG enterprise with approximately 59,000 employees. The collection documented a complete Microsoft Graph integration implemented using the client credentials grant flow.


Working Microsoft Entra ID client credentials flow in a public Postman workspace

Everything the query is supposed to reveal was there:

· Tenant ID embedded in the token endpoint URL

· Client ID hardcoded in request parameters

· Client secret stored in the collection’s environment variables

· Grant type set to client_credentials, with Graph API scopes requested at .default

The request was not a template. It was a working example — the collection had been used, tested, and made public. A single request reproduced the flow: POST to the token endpoint, receive a valid access token, 200 OK. From there, the Graph API opened up. User directory enumeration returned the full employee list with job titles. Group membership queries exposed the organisational structure. Teams messages became readable through the messaging endpoints.

None of this required an exploit. The collection was the exploit — a documented, tested, working path from a public URL to internal enterprise data.

The full attack chain from this finding — the OAuth flows that produce Graph tokens, which Graph endpoints get weaponised, and why client credentials flow is the most dangerous — is covered in Microsoft Graph API Attack Surface: OAuth Flows, Abused Endpoints, and What Defenders Miss.

oktapreview.com

oktapreview.com is the default hostname for Okta preview tenants - the sandbox environments Okta provides for development, testing, and integration work. Every organisation with an Okta subscription gets at least one preview tenant alongside their production tenant, and many create multiple previews for different projects or teams.

Preview tenants matter because developers treat them differently. A production Okta tenant is guarded — SSO flows are reviewed, API tokens are rotated, and admin access is logged. A preview tenant is where experimentation happens. Credentials get hardcoded. Tokens live in environment variables. Nobody thinks of it as production. But preview tenants often mirror production configurations, integrate with the same upstream applications, and occasionally share credentials with their production counterparts.

That makes oktapreview.com a query that surfaces not just test data, but operational patterns an organisation uses in production - SAML assertions, SCIM provisioning flows, OAuth integrations, and admin API usage.

What you’ll find in matching collections

Okta API tokens — SSWS tokens used for admin API access. Often stored in Authorization: SSWS {token} headers within request configurations

Tenant subdomains — the organisation’s preview tenant identifier (e.g. acme.oktapreview.com), which reveals the company and provides an entry point for further reconnaissance

OAuth application configurations — client IDs, client secrets, redirect URIs, and scopes for apps integrated with Okta as an identity provider

SAML metadata — SSO setup details including ACS URLs, entity IDs, and certificate fingerprints

SCIM endpoints — user provisioning and deprovisioning flows, including tokens and target application URLs

Admin API usage — calls to /api/v1/users, /api/v1/groups, /api/v1/apps, which reveal how the organisation manages identity at scale

What to look for in results

Token type — SSWS tokens (API tokens) grant admin-level access and don’t expire by default. OAuth access tokens are shorter-lived but still valuable if recently captured

Scope of API calls — collections hitting /api/v1/users and /api/v1/groups indicate admin API access. Calls to /api/v1/apps expose application registrations including SAML and OIDC configurations

Cross-tenant references — preview tenant collections sometimes include calls to production tenants (*.okta.com) for comparison testing. These calls may carry production credentials alongside preview ones

Custom authorisation servers — references to /oauth2/{authServerId}/ indicate custom OAuth setups with their own scopes and tokens, often tied to internal applications

Real-world impact

An exposed Okta API token gives an attacker administrative control over an identity layer that typically sits in front of dozens or hundreds of applications. With a valid SSWS token, an attacker can enumerate users, create new accounts, assign group memberships, modify application assignments, or pivot into any SSO-connected application. Even when the token belongs to a preview tenant, the information it reveals about the organisation’s identity architecture — which apps are integrated, how SCIM is configured, which custom OAuth servers exist — translates directly to production reconnaissance. Preview tenants are not isolated environments in practice; they are scale models of production, and they leak the same blueprint.
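The enumeration this enables is nothing more than HTTP GETs with one header; the `Authorization: SSWS {token}` scheme is Okta's documented API token format. A sketch with a placeholder tenant and token — the requests are only constructed, never sent:

```python
def okta_admin_request(tenant: str, path: str, ssws_token: str):
    """Build an Okta admin API call as (url, headers)."""
    return (
        f"https://{tenant}/api/v1/{path}",
        {"Authorization": f"SSWS {ssws_token}", "Accept": "application/json"},
    )

# The three enumeration calls that map an identity layer end to end:
recon = [okta_admin_request("acme.oktapreview.com", p, "00placeholder")
         for p in ("users", "groups", "apps")]
```

With a live SSWS token, those three paths return the user directory, the group structure, and every SAML/OIDC application registration in the tenant — the identity blueprint described above.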

Example

A query for oktapreview.com surfaced a public Postman workspace belonging to a multinational vehicle rental enterprise. The workspace was labelled for automation use, and inside it, a collection titled 210_Offline > Prerequisite > OKTA token preprod contained a working token request against the organisation's Okta preview tenant.


Live Okta preprod token request returning a 24-hour bearer token

Everything the query is supposed to reveal was there:

Subdomain identifying the preview tenant (*.preprod.oktapreview.com), tying the workspace directly to the company

OAuth token endpoint path (/oauth2/default/v1/token)

Client ID and client secret in the request body

Grant type set to client_credentials

The request body was not a placeholder. It was populated with real values, and sending the request returned 200 OK: a valid bearer token with expires_in: 86400 - a 24-hour access token, freshly issued on execution.

The collection structure itself was additional context. The folder was named “Prerequisite,” meaning this request was the setup step for everything else in the collection — subsequent requests reused the token obtained here to perform automated actions against the preview environment. An attacker reading the collection learns not only that a token can be issued but also what the token will be used for next.

Preview tenants are often positioned as isolated from production. In practice, preview tenants mirror production configurations, integrate with similar upstream applications, and, in automation scenarios like this one, are wired into real pipelines. A 24-hour token issued against a preprod Okta instance is not theoretical exposure. It is operational access.

test.salesforce.com

test.salesforce.com is the authentication endpoint for Salesforce sandbox environments. Every Salesforce org comes with the ability to create one or more sandboxes - isolated copies of the production org used for development, testing, training, and integration work. Sandboxes have their own usernames, passwords, and authentication flow, but their configuration, data model, and integration patterns are designed to mirror production.

Like Okta preview tenants, Salesforce sandboxes suffer from the psychology of “it’s just test data”. Developers hardcode credentials. Security reviewers skip sandboxes because production is what matters. Integration tokens sit in environment variables long after the integration work is done. But sandboxes integrate with the same upstream systems as production — marketing automation, ERP, CRM-adjacent tools — and the tokens that work against sandboxes frequently reveal connected app configurations that exist identically in production.

What makes test.salesforce.com particularly valuable as a query is the richness of the Salesforce integration ecosystem. A Salesforce OAuth token doesn't just unlock Salesforce itself. It unlocks the web of connected apps, custom objects, Apex classes, and API integrations an organisation has built around it.

What you’ll find in matching collections

OAuth client credentials — consumer key (client ID) and consumer secret for connected apps, typically in token request bodies

Username-password flow credentials — combinations of username, password, and security token used in the resource owner password grant flow, which Salesforce still supports for programmatic access

Refresh tokens — long-lived tokens for connected apps with the refresh_token OAuth scope

Session IDs — captured from successful authentication flows, usable directly against the REST API

Custom domain indicators — My Domain URLs (https://{company}--{sandbox}.sandbox.my.salesforce.com) that reveal the organisation's sandbox naming conventions and often the parent production domain

Connected app configurations — scopes, callback URLs, and OAuth policies for third-party integrations like marketing tools, data enrichment services, and internal custom apps

What to look for in results

Authentication flow — username-password flows are a red flag. They expose the username, password, security token, consumer key, and consumer secret all in the same request. Web server OAuth flows expose less directly, but the connected app credentials are still present

API version in endpoint paths, such as /services/data/v58.0/. Newer versions indicate active, maintained integrations. Old API versions may still work, but suggest neglected collections

Custom object references — collections calling /services/data/v{N}/sobjects/{CustomObject__c} reveal internal business entities and data model details that aid reconnaissance beyond credential abuse

SOQL and SOSL queries — embedded in request bodies or URL parameters. These expose which tables an integration reads and what business logic depends on it

Bulk API usage — calls to /services/async/{version}/job indicate large-scale data operations. Credentials with Bulk API access can extract or modify records at scale

Real-world impact

A Salesforce sandbox token exposes both data and architecture. With a valid session ID or OAuth token, an attacker can query the full data model of the sandbox, which is almost always a structural copy of production, including custom objects, field definitions, picklist values, and relationship hierarchies. That reveals internal business logic, customer segmentation schemes, pricing structures, and integration points with other systems. When sandbox credentials include connected app credentials, the same consumer key and consumer secret often work against production connected apps — because organisations register their connected apps once and reuse them across environments. The sandbox becomes a rehearsal space for production attacks, complete with a working blueprint of the target.

Example

A query for test.salesforce.com surfaced a public Postman workspace belonging to a major home improvement retailer. Inside, a POST request to https://test.salesforce.com/services/oauth2/token used Salesforce's resource owner password grant flow - the same flow the section above flagged as the worst case for credential exposure.


Five Salesforce credentials populated in a single request body

Every field the flow requires was populated in the request body:

client_id - the connected app's consumer key

client_secret - the connected app's consumer secret

username - a user account in the sandbox, with a suffix indicating it was a UAT sandbox copied from production

password - the account's password

grant_type - set to password

Five credentials in a single request. No separate environment, no variable reference, no placeholder — every value populated directly into the form data. A reader could copy the request into their own Postman, hit Send, and receive a Salesforce session token without any further setup.

The username suffix (uatcopy) is its own signal. Salesforce sandboxes are often created by copying production configurations, data, integrations, and connected apps. When a sandbox is named "UAT copy," it is explicit documentation that this environment mirrors the production structure. The connected app consumer key and consumer secret visible in this request are highly likely to exist in production with the same values, because connected apps are typically registered once and reused across environments.

This is what the Salesforce sandbox exposure pattern looks like in its complete form: a single request that issues working sandbox credentials, combined with credentials that often work against production.
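The "registered once, reused across environments" pattern is also what makes this exposure directly testable: the same password-grant body can be pointed at test.salesforce.com and at login.salesforce.com. A sketch with placeholder credentials — nothing is sent, only the body and endpoints are built:

```python
from urllib.parse import urlencode

SANDBOX = "https://test.salesforce.com/services/oauth2/token"
PRODUCTION = "https://login.salesforce.com/services/oauth2/token"

def password_grant_body(client_id, client_secret, username, password):
    """The five-credential body the exposed request contained.
    (Salesforce appends the user's security token to the password
    when the source IP is not trusted.)"""
    return urlencode({
        "grant_type": "password",
        "client_id": client_id,
        "client_secret": client_secret,
        "username": username,
        "password": password,
    })

body = password_grant_body("placeholder-consumer-key", "placeholder-secret",
                           "integration@example.com.uatcopy", "placeholder")
# Same body, two endpoints: a sandbox username carries a suffix
# (user@example.com.uatcopy) that the production username would drop --
# but the connected app's client_id/client_secret are often identical.
```

That endpoint swap is the entire pivot: strip the sandbox suffix from the username, keep the connected app credentials, and the rehearsal becomes the attack.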

auth0.com

auth0.com is the default hostname for Auth0 tenants - the identity-as-a-service platform owned by Okta since 2021, but operated as a separate product with its own developer ecosystem. Where Okta is positioned for enterprise workforce identity, Auth0 is positioned for customer identity and developer integration. The platform handles user authentication for SaaS applications, mobile apps, single-page apps, and API backends, and its API is designed to be called programmatically at every stage of the authentication lifecycle.


That developer-first positioning is exactly what makes auth0.com a high-value query. Auth0 tenants are configured, extended, and operated through API calls - user management, role assignments, rule execution, database connections, and custom actions. Every configuration task has a corresponding API endpoint, and every endpoint is a candidate for a Postman request. When a team is building or maintaining an Auth0 integration, there is almost always a Postman collection documenting the flows.

What you’ll find in matching collections

Management API tokens — JWTs issued for the Auth0 Management API, which administers the entire tenant. These are the highest-value credentials in an Auth0 collection

Application client credentials — client IDs and client secrets for applications registered in the tenant, used in machine-to-machine authentication flows

Tenant domain indicators — the tenant subdomain ({company}.auth0.com or {company}.{region}.auth0.com), which identifies the organisation and provides the base URL for direct API access

Custom domain configurations — references to customer-branded authentication domains (auth.{company}.com) that proxy to the Auth0 tenant

Connection configurations — database connections, social connections, enterprise connections (SAML, LDAP, Active Directory), and the credentials used to connect to external identity stores

Rules and Actions code — Auth0 allows custom JavaScript to execute during authentication flows. Collections that interact with the Rules or Actions APIs sometimes include the code itself, revealing business logic and security controls

What to look for in results

Management API scope breadth — Management API tokens carry scopes that define what they can do. A token with read:users update:users create:users delete:users scopes grants full user management. Tokens with read:clients update:clients can modify application configurations. The broader the scope, the higher the impact

Machine-to-machine application tokens — requests to /oauth/token with grant_type=client_credentials and an audience of https://{tenant}.auth0.com/api/v2/ are the canonical pattern for obtaining Management API tokens. If this request is populated with working values, the collection itself is an access mechanism, not just a reference

User export and search endpoints — calls to /api/v2/users or /api/v2/jobs/users-exports indicate the collection can enumerate or bulk-extract the user directory. These are the endpoints that turn a credential into a data breach

Rule and Action references — collections that touch /api/v2/rules or /api/v2/actions often expose the custom authentication logic a tenant uses. This reveals security controls, MFA exceptions, and any custom authorisation checks the organisation has built
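Scope breadth is readable without calling Auth0 at all: a Management API token is a JWT, and its payload decodes without signature verification. The sketch below is a minimal, assumption-laden illustration — the token here is synthetic, and the "dangerous scopes" set simply restates the scopes flagged above.

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode a JWT's payload without verifying the signature --
    enough to read the scopes a Management API token carries."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def risky_scopes(claims: dict) -> list[str]:
    """Flag the high-impact Management API scopes named above."""
    dangerous = {"read:users", "update:users", "create:users", "delete:users",
                 "read:clients", "update:clients"}
    return sorted(dangerous & set(claims.get("scope", "").split()))

# A synthetic token for illustration -- real ones come from the collection.
header = base64.urlsafe_b64encode(b'{"alg":"RS256"}').rstrip(b"=").decode()
body = base64.urlsafe_b64encode(json.dumps(
    {"aud": "https://tenant.eu.auth0.com/api/v2/",
     "scope": "read:users update:users delete:users"}).encode()).rstrip(b"=").decode()
token = f"{header}.{body}.signature"

print(risky_scopes(jwt_claims(token)))
```

The broader the list this returns, the higher the impact of the finding.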

Real-world impact

An Auth0 Management API token with broad scopes is tenant-level administrative access to an identity platform that often sits in front of customer-facing applications. With such a token, an attacker can enumerate and export the full user base of the customer application — email addresses, metadata, login history — and modify user records, reset passwords, assign roles, or create new admin accounts. When the collection exposes machine-to-machine application credentials rather than an already-issued token, the attacker can mint fresh Management API tokens on demand, making the access persistent as long as the application registration exists. For SaaS companies that use Auth0 to authenticate their customers, an exposed Management API credential is not an identity compromise — it is a customer data compromise, reachable through the same API the legitimate operations team uses.

Example

A query for auth0.com surfaced a public Postman workspace named New_User_Updation containing collections for user management operations against an Auth0 development tenant. The tenant domain followed the pattern {tenant}-dev.eu.auth0.com, identifying it as a European deployment with development-labeled infrastructure.


Machine-to-machine token request for Auth0 Management API


User directory enumerated via the issued Management API token

The workspace’s collection structure told the complete story before any request was executed. One collection, Test_User_Reset_Password_Dev, contained three chained requests:

POST Access_Token - obtains a Management API token using client credentials

GET Get user by email - queries a user record from the Auth0 directory

PATCH Update Password - modifies the password on an existing account

The Access_Token request was a textbook machine-to-machine flow. The body, in raw JSON, contained:

client_id - the application registration's identifier

client_secret - its secret credential

audience: https://{tenant}-dev.eu.auth0.com/api/v2/ - the Management API audience

grant_type: client_credentials

Sending the request returned 200 OK with an access token scoped to read:users update:users delete:users create:users - full user lifecycle control. Token lifetime was 14400 seconds, a fresh 4-hour window on each execution.

With that token, the Management API opened. A single call to GET /api/v2/users returned a paginated list of real user records - email addresses, nicknames, display names, identity provider metadata, login timestamps, and custom user_metadata fields tracking internal flags like SoftDeleted. The users' email addresses resolved to a .org.uk domain, consistent with a development tenant populated with real organisational data. Timestamps on the records showed active use through late 2025, confirming the tenant was live.

The complete chain from public Postman URL to the enumerated user directory took three requests. Everything needed was in the workspace — the credentials to mint the token, the endpoint to call with it, and the scopes that authorised the call. The Update Password request in the same collection indicated the credentials’ reach extended beyond enumeration: the same token that reads users can modify their passwords. An attacker reading this collection does not learn only how to query the directory. They learn how to take over accounts in it.

Category 2: Cloud Infrastructure

Where identity providers sit in front of applications, cloud infrastructure sits underneath them. Cloud infrastructure credentials in Postman collections are less common than identity provider credentials — organisations generally treat AWS and Azure access more carefully than OAuth tokens — but when they appear, the impact scales with the blast radius of the platform itself. This category covers one query, targeted at the junction point where cloud authentication most often surfaces in API work: AWS STS.

sts.amazonaws.com

sts.amazonaws.com is the endpoint for AWS Security Token Service - the component of AWS IAM that issues temporary credentials. STS is how federated identity enters AWS: SAML assertions, OIDC tokens, and cross-account AssumeRole requests all resolve through STS into short-lived access keys and session tokens. Applications that integrate with AWS, CI/CD pipelines that deploy to AWS, and cross-account automation flows all end up talking to STS.

Unlike long-term IAM access keys, STS credentials are temporary — session tokens typically last 15 minutes to 12 hours depending on configuration. That short lifetime is often read as “less dangerous than permanent keys,” which is true for the credential itself but misses the point of what a Postman collection exposing STS usage actually reveals. The collection documents the mechanism of access — which role gets assumed, which trust relationship exists, which external identity provider is federated, which account is the target. The mechanism is persistent even when individual tokens are not.

What you’ll find in matching collections

Temporary credentials in example responses — AccessKeyId, SecretAccessKey, SessionToken combinations captured from prior successful AssumeRole or GetSessionToken calls. These may still be valid if the collection was used recently

Role ARNs — the full Amazon Resource Name of the IAM role being assumed (arn:aws:iam::{account-id}:role/{role-name}). This reveals the AWS account number, naming conventions, and the role's purpose

External IDs — the shared secret used to prevent confused deputy attacks in cross-account role assumption. When hardcoded in request bodies, external IDs are effectively authentication material

SAML assertions and OIDC tokens — the input to AssumeRoleWithSAML and AssumeRoleWithWebIdentity calls. These reveal the federated identity provider an organisation uses and, occasionally, live assertions captured during testing

Trust policy hints — through sts:SourceIdentity parameters, session tags, and session names, collections often reveal the patterns trust policies are configured to accept

Cross-account architecture — collections that chain multiple AssumeRole calls reveal the organisation's AWS account topology: hub accounts, workload accounts, shared services accounts, and the paths between them

What to look for in results

STS action type — AssumeRole indicates standard cross-account or service role access. AssumeRoleWithSAML indicates enterprise federated identity. AssumeRoleWithWebIdentity indicates OIDC federation, often from Kubernetes service accounts (IRSA) or CI/CD providers. Each reveals a different access pattern

Duration requests — DurationSeconds parameters higher than 3600 (one hour) indicate roles configured for extended session duration, typically up to 43200 (12 hours). Longer sessions mean captured credentials stay valid longer

Policy ARNs in requests — AssumeRole calls can include PolicyArns or inline Policy parameters that scope down the session. When these are missing, the session inherits the role's full permissions

Account number patterns — AWS account IDs in role ARNs follow no public pattern by design, but collections sometimes reference multiple accounts with visible relationships (prod-123456789012, dev-234567890123, shared-345678901234 naming conventions in adjacent requests reveal the full account topology)

Real-world impact

An STS-related finding in Postman exposes the federation and trust architecture of an organisation’s AWS environment. Even when individual temporary credentials have expired, the role ARNs, trust relationships, and federation patterns are themselves valuable reconnaissance — they tell an attacker which accounts exist, which roles are assumable, which identity providers are trusted, and which external IDs are required. When collections include working input credentials for the AssumeRole call - hardcoded access keys for the calling principal, or captured SAML assertions - the finding escalates from reconnaissance to direct access. A successful AssumeRole call issues a session that operates within the target role's permission boundary, which for administrative roles can mean full control of the account. STS findings are rarely about one credential. They are about the map of how access moves between accounts, and who holds the keys at each junction.

Example

A query for sts.amazonaws.com surfaced a public Postman workspace containing a single request that executed a complete AWS STS AssumeRole call. The endpoint was https://sts.amazonaws.com/, configured as a GET request with AWS Signature authentication - Postman's native mechanism for signing requests with IAM credentials.


Complete AssumeRole call with hardcoded IAM credentials and live session response

The request’s authorisation configuration carried the two credential values that make an STS finding operational rather than aspirational, alongside the signing configuration they require:

AccessKey - a long-term IAM access key ID for the calling principal

SecretKey - the corresponding secret access key

AWS Region: us-east-1

Service Name: sts

Both credentials were hardcoded into the collection’s authorisation settings, stored permanently with the request. A reader opening the collection saw the full input material needed to execute the call.

Sending the request returned 200 OK with an XML AssumeRoleResponse containing a complete set of temporary session credentials:

AccessKeyId - the session's access key

SecretAccessKey - its secret

SessionToken - the session identifier

Expiration - a timestamp showing the session was valid for hours from the moment of execution

The Arn in the response (arn:aws:sts::{account-id}:...) confirmed which AWS account the session was issued against and which role had been assumed. The expiration timestamp was recent - the request was being executed in live testing, not sitting as a stale artefact.
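Extracting those fields from an AssumeRoleResponse is a few lines of standard-library XML parsing. The response below is a synthetic, truncated sample shaped like the real STS payload (the `https://sts.amazonaws.com/doc/2011-06-15/` namespace is STS's documented one); it stands in for the live response the workspace exposed.

```python
import xml.etree.ElementTree as ET

# STS responses use this XML namespace.
NS = "{https://sts.amazonaws.com/doc/2011-06-15/}"

def parse_assume_role(xml_text: str) -> dict:
    """Pull the temporary session credentials out of an AssumeRoleResponse."""
    root = ET.fromstring(xml_text)
    creds = root.find(f".//{NS}Credentials")
    return {child.tag.removeprefix(NS): child.text for child in creds}

# Synthetic sample, shaped like a real (truncated) AssumeRoleResponse.
sample = """<AssumeRoleResponse xmlns="https://sts.amazonaws.com/doc/2011-06-15/">
  <AssumeRoleResult>
    <Credentials>
      <AccessKeyId>ASIAEXAMPLE</AccessKeyId>
      <SecretAccessKey>wJalrEXAMPLE</SecretAccessKey>
      <SessionToken>AQoDYXdzEXAMPLE</SessionToken>
      <Expiration>2025-11-01T12:00:00Z</Expiration>
    </Credentials>
  </AssumeRoleResult>
</AssumeRoleResponse>"""

session = parse_assume_role(sample)
print(session["AccessKeyId"], session["Expiration"])
```

The four extracted fields are a complete, ready-to-use AWS credential set until the Expiration timestamp passes.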

The chain this collection documents is short and complete. The hardcoded long-term credentials are used to authenticate to STS. STS issues temporary session credentials scoped to the assumed role. An attacker reading the collection does not need to find a role to assume, work out a trust relationship, or guess an External ID. The collection answers all of those questions by executing them. Every time the request is sent, a fresh session is minted inside the target AWS account.

STS session credentials themselves are temporary — the exposed response credentials would expire within hours. But the input credentials in the request authorisation are long-term IAM keys. Those do not expire automatically. As long as the calling principal’s access keys remain active and the trust policy on the target role remains unchanged, the collection is a reusable access mechanism. Revoking the session token does nothing. The reset requires rotating the IAM access keys and reviewing the role’s trust policy — neither of which occurs unless the finding is discovered and acted on.

How to Analyse What You Find

A matching search result is an entry point, not a finding. The value comes from how the workspace is read after it opens. This section covers the analytical workflow — where to look, what to read, and how to assemble an attacker’s operational picture from a public Postman workspace.


Grid of six surfaces to analyze in a public Postman workspace: workspace structure, environments, scripts, example responses, auth headers, and cross-collection correlation

Start with the workspace, not the request

The first page that opens when a Postman search result is clicked is usually a specific request. Ignore it for a moment. Navigate up to the workspace level and read the structure.

Workspace names, collection names, folder hierarchy, and request naming conventions carry signal the individual request does not. A workspace named Acme-PROD-Integrations tells more than any single request inside it. Folder structures like Auth > Tokens > Refresh reveal how the team thinks about the API. Request names like Get User — ADMIN OVERRIDE or Delete All — DANGER mark themselves as the operations that matter. The workspace structure is the table of contents of the API being documented, and attackers read tables of contents first.

Read environments separately

Postman environments are a distinct artefact from collections and deserve dedicated analysis. An environment is a named set of variables — key-value pairs that requests reference using {{variable_name}} syntax. Environments are indexed by Postman search, and their contents are visible to anyone who opens the environment in a public workspace.

Environments are where credentials hide in plain sight. A request body might reference {{api_token}} with no visible credentials. The actual token sits in the environment variable, fully populated. Check every environment in the workspace. Check each variable's "Current Value" and "Initial Value" columns - sometimes only one is populated, and sometimes they contain different values (a sanitised initial value and a real current value, for example).

Workspaces frequently have multiple environments labelled Dev, UAT, Staging, Production. Each one is its own credential set. A careful read of one environment is a start; reading all of them is the full surface.
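Reading all of an environment's variables is mechanical enough to script. The sketch below assumes the standard shape of a Postman environment JSON export (a `values` array of key/value pairs) and flags anything populated with a non-placeholder value — a rough heuristic, not Postman's own scanner.

```python
import json
import re

# Rough placeholder detector: unresolved {{refs}}, <angle-bracket> stubs,
# "your_..." prompts, and obvious dummy strings.
PLACEHOLDER = re.compile(r"^(\{\{.*\}\}|<.*>|your[-_ ]?\w*|changeme|x{3,})$",
                         re.IGNORECASE)

def populated_values(env_json: str) -> list[tuple[str, str]]:
    """Return (key, value) pairs from an environment export whose values
    look populated rather than placeholder."""
    env = json.loads(env_json)
    hits = []
    for var in env.get("values", []):
        value = (var.get("value") or "").strip()
        if value and not PLACEHOLDER.match(value):
            hits.append((var["key"], value))
    return hits

# Synthetic export, shaped like Postman's environment JSON.
sample_env = json.dumps({"name": "UAT", "values": [
    {"key": "base_url", "value": "https://api.example.com", "enabled": True},
    {"key": "api_token", "value": "eyJhbGciOi...", "enabled": True},
    {"key": "client_secret", "value": "{{vault_secret}}", "enabled": True},
]})

print(populated_values(sample_env))
```

Run it once per environment in the workspace — Dev, UAT, Staging, Production are each their own credential set.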

Check scripts, not just requests

Every Postman request has two code surfaces beyond the request body itself:

Pre-request scripts — JavaScript that runs before the request executes. Used for dynamic auth, timestamp generation, signing, and token refresh flows

Tests — JavaScript that runs after the response. Used for validation and often for extracting values from responses into environment variables

Both surfaces are text, both are indexed by Postman search, and both frequently contain credentials, internal URLs, business logic, and integration details that never appear in the visible request. A pre-request script that fetches a token from an internal endpoint reveals the endpoint. A test script that saves a response value into an environment variable reveals what the integration cares about. Read both.

Scripts also reveal the shape of authentication that cannot be expressed in a simple header — HMAC signing, rotating nonces, request-specific encryption. A collection with complex pre-request scripts is a collection documenting a sophisticated integration, and sophisticated integrations tend to carry more valuable credentials than simple ones.
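Both script surfaces live in the collection export itself. The walker below assumes the Postman Collection v2.1 export shape — nested `item` arrays, with scripts attached as `event` entries whose `listen` field is `prerequest` or `test` — and yields every script alongside the request it belongs to. The sample collection is synthetic.

```python
import json

def collection_scripts(collection_json: str) -> list[tuple[str, str, str]]:
    """Walk a Postman collection (v2.1 export shape) and return every
    pre-request and test script with the request path it belongs to."""
    def walk(items, path=""):
        for item in items:
            name = f"{path}/{item.get('name', '?')}"
            if "item" in item:                    # folder: recurse
                yield from walk(item["item"], name)
            for event in item.get("event", []):   # "prerequest" or "test"
                code = "\n".join(event.get("script", {}).get("exec", []))
                if code.strip():
                    yield (name, event.get("listen"), code)
    return list(walk(json.loads(collection_json).get("item", [])))

# Synthetic collection: a test script that saves a token into the environment.
sample = json.dumps({"item": [
    {"name": "Auth", "item": [
        {"name": "Get Token", "event": [
            {"listen": "test", "script": {"exec": [
                "pm.environment.set('token', pm.response.json().access_token);"
            ]}}
        ]}
    ]}
]})

for request, phase, code in collection_scripts(sample):
    print(request, phase, code, sep=" | ")
```

A script that writes a response value into an environment variable, as above, tells you both that tokens flow through this request and which variable to read next.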

Mine the example responses

Every Postman request can have saved example responses — snapshots of previous successful executions, preserved inside the collection. Developers use examples to document expected API behaviour. Attackers read examples for data.

Example responses often include real data from the developer’s last run of the request. A saved example for GET /users often contains real user records. A saved example for an internal search endpoint contains real internal search terms. A saved example for an OAuth token response contains a real access token (possibly expired, but sometimes not). The data in example responses is authentic by nature - it was captured live, not mocked - and it often reveals what the developer considered a "normal" result, which is a direct window into the production environment's data patterns.

Read auth headers as fingerprints

Authorisation headers and custom headers are stack fingerprints. They tell a reader what technology sits behind the endpoint before any request is executed.

Common patterns:

Authorization: Bearer {token} - OAuth or JWT-based authentication. The token format (JWT vs opaque) and audience claim reveal the identity provider

Authorization: Basic {base64} - HTTP Basic auth. In Atlassian Cloud, this is the standard pattern with email and API token concatenated. In other contexts, it often signals legacy integration

Authorization: SSWS {token} - Okta API token. No other platform uses this prefix

X-Api-Key: {key} - generic API key header, used by AWS API Gateway, Kong, Cloudflare API Shield, and many SaaS platforms

Ocp-Apim-Subscription-Key: {key} - Azure API Management

Ms-Oc-User-Agent: omnichannel-chat-sdk/* - Microsoft Dynamics 365 Omnichannel

A reader who recognises these patterns identifies the platform without reading the URL. That identification narrows the scope of what the credential unlocks and what the next request should be.
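The pattern table above is simple enough to encode directly. This is a minimal lookup over the fingerprints just listed — not an exhaustive classifier, and the return strings are my own labels.

```python
def fingerprint_auth(headers: dict) -> str:
    """Map the header patterns listed above to the platform behind them."""
    auth = headers.get("Authorization", "")
    if auth.startswith("SSWS "):
        return "Okta API token"          # no other platform uses this prefix
    if auth.startswith("Basic "):
        return "HTTP Basic (Atlassian Cloud pattern, or legacy integration)"
    if auth.startswith("Bearer "):
        return "OAuth/JWT bearer -- inspect the token format for the IdP"
    if "Ocp-Apim-Subscription-Key" in headers:
        return "Azure API Management"
    if any(h.lower() == "x-api-key" for h in headers):
        return "Generic API key (AWS API Gateway, Kong, Cloudflare, SaaS)"
    if headers.get("Ms-Oc-User-Agent", "").startswith("omnichannel-chat-sdk/"):
        return "Microsoft Dynamics 365 Omnichannel"
    return "unrecognised"

print(fingerprint_auth({"Authorization": "SSWS 00abc..."}))
```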

Correlate across collections in the same workspace

Workspaces often contain more than one collection, and those collections are often related. A vendor workspace might have separate collections for each client. An internal platform workspace might have separate collections for each environment. An API product workspace might have separate collections for each version.

Cross-collection correlation is where single findings become multi-target findings. If one collection in a workspace exposes authentication material for Client A, check whether other collections in the same workspace expose material for Client B, Client C, and so on. If one collection targets staging.api.company.com, check whether another targets api.company.com using the same reusable credentials. The workspace is the unit of exposure, not the collection.

This is the pattern that turns a single finding into a vendor-exposure event, in which one IT vendor’s public workspace exposes several of its customers at once.
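The correlation step can be sketched as a small script. The input shape here is assumed: a flattened map of collection name to request URLs, as you might extract from workspace exports. Grouping hostnames per collection makes shared targets and staging/production pairs visible at a glance.

```python
from collections import defaultdict
from urllib.parse import urlparse

def hosts_by_collection(workspace: dict) -> dict:
    """Group request hostnames per collection to surface shared targets.
    Input shape (assumed): {collection_name: [request_url, ...]}."""
    index = defaultdict(set)
    for collection, urls in workspace.items():
        for url in urls:
            index[collection].add(urlparse(url).hostname)
    return {name: sorted(h for h in hosts if h) for name, hosts in index.items()}

# Hypothetical workspace with two client collections.
workspace = {
    "Client A": ["https://staging.api.company-a.example/v1/users"],
    "Client B": ["https://api.company-b.example/v1/users",
                 "https://staging.api.company-b.example/v1/users"],
}
print(hosts_by_collection(workspace))
```

A collection that names both a staging and a production host, as Client B does here, is the prompt to check whether the same credentials work against both.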

Case Study: Vendor Exposure

The queries above are targeted: each aims at a specific platform and returns a specific class of findings. But the workspace analysis framework from the previous section applies regardless of how a workspace is discovered, and some of the most consequential findings surface through pattern recognition rather than targeted search — a search for one thing that reveals something larger.


This case study is one of those. It illustrates the surface that targeted queries alone cannot reach: vendor exposure, where a single IT services vendor’s public workspace exposes the operational infrastructure of multiple downstream customers at once.

The workspace

A public Postman workspace belonged to a multinational IT services vendor specialising in digital transformation projects for public-sector clients. The workspace name carried the vendor’s brand. Inside, it contained not one collection but a portfolio — each collection dedicated to a different client engagement.

The collections were named for their clients: one for a Ministry of Higher Education, one for a Ministry of Social Services, one for an Economic Development authority, one labelled for a CRM maintenance workflow, several more for additional public-sector clients. Each collection was a complete API integration, documented to the request level, with environments populated and example responses preserved.

The vendor treated the workspace as internal documentation. Postman treated it as a public, indexed page.

Reading the workspace

Applied to this workspace, the six analysis surfaces from the previous section each returned material.

Workspace structure named the vendor and listed every client engagement as a separate collection. A reader scanning the workspace learned the vendor’s customer portfolio before opening a single request.

Collection naming identified each client specifically. Folder hierarchies within each collection revealed the integration’s purpose: Prerequisite > Get Chatbot Tokens, Submit new complaint, Search for license number, Send OTP code. The naming was self-explanatory because the vendor was documenting their own work for their own team.

Environments carried tenant-specific material per client: resource identifiers, API subdomains, and environment labels. Switching environment within a collection changed the target from one client’s infrastructure to another’s.

Request URLs and auth headers fingerprinted the technology stack. The collections called endpoints at *.communication.azure.com - Azure Communication Services - with headers including Ms-Oc-User-Agent: omnichannel-chat-sdk/*, the fingerprint of Microsoft Dynamics 365 Omnichannel. Other collections referenced ServiceNow instances. The vendor's technology stack was inferable from headers alone.

Example responses contained real session tokens captured during testing — JWTs with scopes like chat.join.limited,voip and regions matching the client's geography. These tokens were issued against live infrastructure. Some had already expired; others, captured in recent test runs, had not.

Cross-collection correlation tied the picture together. The same authentication patterns, the same response structures, the same endpoint conventions appeared across collections — revealing that the vendor reused integration architecture across clients. A reader who understood one collection understood all of them.

What the workspace exposed

The aggregate finding went beyond what any single collection revealed. Across the portfolio:

The vendor’s complete public-sector client list, named and scoped

The technology platform powering each engagement — a Microsoft stack built on Dynamics 365 Omnichannel, Copilot Studio for conversational AI, Azure Communication Services for chat and voice, integrated with ServiceNow for backend workflows

The API surface of each client’s citizen-facing chatbot: authentication flows, session management, OTP verification, complaint submission, license search, and service request handling

Live endpoint URLs with resource UUIDs identifying the Azure Communication Services instances of multiple government entities

Working token acquisition flows — requests that, when executed, returned fresh session tokens valid for hours

An attacker reading the workspace did not need to map each client’s infrastructure independently. The vendor had mapped it for them. Recognising the vendor’s client, learning their tech stack, identifying their endpoints, and obtaining working tokens all happened within the same public page.

The pattern

Vendor exposure is a structural category of finding, not a technical one. It does not come from a misconfigured endpoint, a leaked credential, or a vulnerable application. It comes from a gap in how third-party risk programs define their scope. Vendor due diligence typically covers the vendor’s certifications, policies, security questionnaires, and contractual controls. It rarely covers the vendor’s developer tool hygiene.

A vendor’s public Postman workspace is not covered by SOC 2. It is not listed on the vendor’s security page. It is not an asset that the customer can monitor directly. It exists in a surface area most risk programs do not consider — developer collaboration tools — and it can expose multiple customers simultaneously, without any of them knowing they are exposed.

For defenders, the question is not “does our vendor have SOC 2?” It is: “Does our vendor have a public Postman workspace, and if so, what is in it?” For researchers, the pattern is generative. Find one vendor workspace with a named customer, and the same workspace often names several more.

Case Study: Municipal Agency

The same pattern — public Postman workspace, live API credentials, and government infrastructure — recurred in a different context during earlier reconnaissance work.

A query to login.microsoftonline.com returned a public Postman workspace containing API credentials for a municipal emergency services agency in Asia. The workspace documented an integration with Microsoft Power BI that authenticates via Entra ID, putting it directly in the path of the Microsoft identity query from earlier in this guide. The collection contained working client credentials for the agency's Power BI tenant - client ID, client secret, tenant ID, and OAuth flow configuration.


The credentials were not demonstration material. Executing the token acquisition request returned a valid Azure AD access token. With that token, a single call to the Power BI REST API enumerated the agency’s workspaces. Another call listed datasets within each workspace. The path from a public URL to reading a municipal government’s internal reporting data takes minutes.

Unlike the workspace in the previous case, which exposed multiple clients through a single vendor, the Power BI finding was a direct exposure: the agency’s own workspace, populated by its own developers. The pattern was the same: credentials and context together, in public, indexed by the platform itself. The full walkthrough, including the API calls used and the data categories reachable, is documented in Microsoft Power BI API Credential Exposure: From Public Postman Workspace to Data Exfiltration in Minutes.

Two findings. Two different exposure paths — one through a vendor, one through a customer directly. Same outcome: working credentials, live infrastructure, citizen-adjacent government services. The gap both findings exploit is the same gap the rest of this article has been describing: Postman as an attack surface that sits outside most organisations’ security monitoring, containing not just secrets but also the operational context to use them.

Closing

Postman search is not a secret scanner. It is a public index of API development work — the tool developers use to document authentication flows, test integrations, and save working examples of real requests. That index was designed to help developers discover APIs. It also helps anyone reading it reconstruct the operational architecture of any organisation whose developers forgot to make their workspaces private.

The five queries in this guide are not a complete list. They are a starting pattern: target the platforms whose presence in a collection implies the rest — the identity providers, the cloud infrastructure, the vendor platforms an enterprise cannot operate without. The queries find the endpoints. Everything else — tenant IDs, client secrets, session tokens, integration architectures, customer relationships — comes with the result.

Postman’s own secret scanner catches known token formats. It does not catch a workspace name that identifies a customer. It does not catch an environment variable holding a tenant ID. It does not catch the pattern of one vendor’s public workspace exposing six of their clients at once. What lies outside the scanner’s detection model is the architectural exposure: credentials in context, tokens with their endpoints, and authentication flows with their target audiences.

The same patterns keep recurring because the incentives that produce them remain in place. Postman is a productivity tool. Public workspaces are discoverable by design. Developer culture around API collaboration has not built the security reflex that developer culture around source code has built over a decade of hard lessons. Closing that gap is a program-level effort. In the meantime, the workspaces remain public, the search bar remains accessible, and the patterns remain findable — by whoever is looking.

