After spending a year watching developers struggle with Dynamic Client Registration (DCR) for MCP servers, the Model Context Protocol community just made a massive shift. The November 2025 spec update introduced Client ID Metadata Documents (CIMD) as the preferred default for client authentication—and it's about time.
If you've tried implementing DCR from scratch, you know the pain. You're essentially building a mini authorization server just to let AI agents connect to your MCP server. It's like being asked to build a parking garage when all you wanted was to park a car.
Let me break down what CIMD actually is, why it matters for MCP deployments, and most importantly—how to implement it without the headaches that plagued DCR.
Traditional OAuth assumes you know your clients upfront. You register them in a developer portal, get a client ID, and move on. This works great when you have 5-10 apps connecting to your service.
MCP breaks that assumption completely.
Think about it: A single MCP client like Claude Desktop, Cursor, or VS Code might connect to thousands of MCP servers that it discovers at runtime. Asking developers to pre-register every client with every server? That's a non-starter.
The first attempt at solving this was Dynamic Client Registration (DCR). The idea was simple: let clients automatically register themselves by POSTing to a registration endpoint. The server generates a client ID, stores it, and returns the credentials.
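For contrast, a typical DCR exchange looks roughly like this (hypothetical endpoint and values; the real endpoint is whatever the server's metadata advertises):

```http
POST /register HTTP/1.1
Host: auth.example.com
Content-Type: application/json

{
  "client_name": "Example MCP Client",
  "redirect_uris": ["https://client.example.com/oauth/callback"],
  "grant_types": ["authorization_code"]
}
```

The server mints a client ID, stores the registration, and returns the credentials. Every MCP server ends up running this registration machinery itself.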
The DCR Problem:
After working with dozens of organizations implementing MCP auth, a clear pattern emerged. As I've written about in my guide to M2M authentication, the challenge with machine-to-machine auth has always been establishing trust without human intervention. DCR tried to solve this but created more problems than it fixed.
Client ID Metadata Documents flip the model entirely.
Instead of clients POSTing registration data to your server, the client ID itself becomes a URL that points to a JSON document describing the client.
```json
{
  "client_id": "https://client.example.com/oauth/metadata.json",
  "client_name": "Example MCP Client",
  "client_uri": "https://client.example.com",
  "redirect_uris": ["https://client.example.com/oauth/callback"],
  "token_endpoint_auth_method": "none",
  "grant_types": ["authorization_code"],
  "response_types": ["code"],
  "scope": "mcp:read mcp:write"
}
```
When a client wants to authenticate:

1. The client passes its metadata URL as the `client_id` in the OAuth authorization request
2. The authorization server fetches that URL, validates the document, and caches it

The critical insight: domain ownership becomes the trust anchor. If you control client.example.com, only you can host metadata at https://client.example.com/oauth/metadata.json.
This elegantly solves the identity verification problem that plagued DCR. An attacker can't pretend to be "Cursor" unless they compromise cursor.sh's infrastructure—which is vastly harder than just POSTing fake metadata to a registration endpoint.
Let me give you a concrete scenario. Say you're building an MCP server for your company's internal cost optimization platform.
Your team ships a VS Code extension for developers to query deployment costs. Within a week, it's running on far more machines than you could ever pre-register by hand.
You host a single metadata document at https://costoptimizer.company.com/oauth/metadata.json. Every instance of the VS Code extension uses the same client_id URL. Your authorization server fetches that document once, caches it, and treats every instance as the same known client.
This is particularly crucial in the MCP ecosystem where clients multiply unpredictably. As I explored in my comprehensive MCP enterprise adoption guide, authentication has been one of the biggest barriers to production MCP deployments.
First, create a JSON file with your client's metadata:
```json
{
  "client_id": "https://yourdomain.com/oauth/metadata.json",
  "client_name": "Your MCP Client",
  "client_uri": "https://yourdomain.com",
  "logo_uri": "https://yourdomain.com/logo.png",
  "redirect_uris": [
    "https://yourdomain.com/oauth/callback"
  ],
  "grant_types": ["authorization_code", "refresh_token"],
  "response_types": ["code"],
  "token_endpoint_auth_method": "none",
  "scope": "mcp:read mcp:write openid email"
}
```
Critical requirements:

- The `client_id` field must exactly match the URL hosting the document
- `redirect_uris` must be explicitly listed
- Public clients should use `"token_endpoint_auth_method": "none"` with PKCE

The document must be publicly reachable over HTTPS, served as `application/json`, and cacheable:
```nginx
# Nginx example
location /.well-known/oauth/client-metadata.json {
    add_header Cache-Control "public, max-age=86400";
    add_header Content-Type "application/json";
    return 200 '{"client_id":"https://yourdomain.com/.well-known/oauth/client-metadata.json",...}';
}
```
Real example – VS Code publishes its metadata at: https://vscode.dev/oauth/client-metadata.json
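Before wiring up a full flow, it can help to lint your document against those requirements. A small hypothetical checker (the function name and rules mirror the list above, not any official tool):

```python
# Hypothetical self-check for a CIMD document you plan to host.
# Pass the parsed JSON and the URL where it will live.

def check_cimd_document(metadata: dict, hosted_url: str) -> list[str]:
    """Return a list of problems; an empty list means the document looks valid."""
    problems = []
    if metadata.get("client_id") != hosted_url:
        problems.append("client_id must exactly match the hosting URL")
    if not hosted_url.startswith("https://"):
        problems.append("document must be served over HTTPS")
    if not metadata.get("redirect_uris"):
        problems.append("redirect_uris must be explicitly listed")
    if metadata.get("token_endpoint_auth_method") != "none":
        problems.append('public clients should use "none" with PKCE')
    return problems

doc = {
    "client_id": "https://yourdomain.com/oauth/metadata.json",
    "redirect_uris": ["https://yourdomain.com/oauth/callback"],
    "token_endpoint_auth_method": "none",
}
print(check_cimd_document(doc, "https://yourdomain.com/oauth/metadata.json"))  # → []
```

Running this in CI against the live URL catches the most common failure, a `client_id` that drifts out of sync with the hosting path.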
When initiating OAuth, use the metadata URL as your client_id:
```http
GET /authorize?
    client_id=https://yourdomain.com/oauth/metadata.json&
    redirect_uri=https://yourdomain.com/oauth/callback&
    response_type=code&
    scope=mcp:read&
    state=random_state_value&
    code_challenge=sha256_of_verifier&
    code_challenge_method=S256&
    resource=https://mcp-server.example.com
```
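The `code_challenge` above is the S256 transform from RFC 7636: base64url(SHA-256(verifier)) with padding stripped. A minimal stdlib-only sketch:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE (verifier, challenge) pair using the S256 method."""
    # 32 random bytes -> 43-character base64url verifier (padding stripped)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# Send `challenge` in the authorize request; keep `verifier` secret and
# present it later in the token exchange.
```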
The authorization server will fetch the metadata document from the client_id URL, validate it (the embedded client_id must match the URL, and the requested redirect_uri must appear in redirect_uris), cache the result, and then present the consent screen.
If you're building an MCP server with CIMD support, here's what you need to implement:
```python
def is_cimd_client(client_id):
    """Check if client_id is a CIMD URL"""
    return (
        client_id.startswith("https://") and
        (client_id.endswith("/oauth/metadata.json") or
         client_id.endswith("/client-metadata.json") or
         "/.well-known/oauth/" in client_id)
    )
```
```python
import requests
from urllib.parse import urlparse

ALLOW_LOCALHOST_DEV = False  # enable only in local development

def fetch_cimd_metadata(client_id_url, timeout=5):
    """Fetch and validate CIMD metadata"""
    # Security checks
    parsed = urlparse(client_id_url)

    # Block localhost (see the full private-network check below)
    if parsed.hostname in ('localhost', '127.0.0.1'):
        if not ALLOW_LOCALHOST_DEV:
            raise ValueError("Localhost not allowed in production")

    # Enforce HTTPS
    if parsed.scheme != 'https':
        raise ValueError("CIMD URL must use HTTPS")

    # Fetch with strict limits
    response = requests.get(
        client_id_url,
        timeout=timeout,
        headers={'Accept': 'application/json'},
        allow_redirects=False  # Don't follow redirects
    )
    response.raise_for_status()

    # Check size (prevent bomb attacks)
    if len(response.content) > 10240:  # 10KB limit
        raise ValueError("Metadata document too large")

    metadata = response.json()

    # Validate required fields
    if metadata.get('client_id') != client_id_url:
        raise ValueError("client_id mismatch")
    if not metadata.get('redirect_uris'):
        raise ValueError("redirect_uris required")

    return metadata
```
```python
import json
from datetime import timedelta

import redis

cache = redis.Redis()

def get_cimd_metadata(client_id_url):
    """Get CIMD metadata with caching"""
    cache_key = f"cimd:{client_id_url}"

    cached = cache.get(cache_key)
    if cached:
        return json.loads(cached)

    # Fetch fresh metadata
    metadata = fetch_cimd_metadata(client_id_url)

    # Cache for 24 hours
    cache.setex(cache_key, timedelta(hours=24), json.dumps(metadata))

    return metadata
```
Your authorization server metadata should indicate CIMD support:
```json
{
  "issuer": "https://auth.yourserver.com",
  "authorization_endpoint": "https://auth.yourserver.com/authorize",
  "token_endpoint": "https://auth.yourserver.com/token",
  "jwks_uri": "https://auth.yourserver.com/.well-known/jwks.json",
  "client_id_metadata_document_supported": true,
  "code_challenge_methods_supported": ["S256"],
  "grant_types_supported": ["authorization_code", "refresh_token"],
  "response_types_supported": ["code"]
}
```
CIMD introduces different security considerations than DCR. Based on my experience with SSO protocol vulnerabilities, here's what you need to watch for:
The most critical risk: authorization servers must fetch URLs, which attackers could exploit.
Mitigation strategies:
```python
import ipaddress
import socket
from urllib.parse import urlparse

BLOCKED_NETWORKS = [
    '10.0.0.0/8',      # Private network
    '172.16.0.0/12',   # Private network
    '192.168.0.0/16',  # Private network
    '169.254.0.0/16',  # Link-local
    '127.0.0.0/8',     # Loopback
]

def is_safe_url(url):
    """Check if URL is safe to fetch"""
    parsed = urlparse(url)

    # Resolve hostname to IP
    try:
        ip = socket.gethostbyname(parsed.hostname)
    except (socket.gaierror, TypeError):
        return False

    # Check against blocked networks
    ip_obj = ipaddress.ip_address(ip)
    for network in BLOCKED_NETWORKS:
        if ip_obj in ipaddress.ip_network(network):
            return False

    return True
```
Just because a client can host a CIMD doesn't mean you should trust it. Implement allowlists for production:
```python
from urllib.parse import urlparse

TRUSTED_DOMAINS = [
    'vscode.dev',
    'cursor.sh',
    'claude.ai',
    'openai.com',
]

def should_trust_client(client_id_url):
    """Check if client domain is trusted"""
    domain = urlparse(client_id_url).hostname
    # Match the domain itself or a subdomain; a bare endswith() check
    # would also match lookalikes such as evil-cursor.sh
    return any(
        domain == trusted or domain.endswith('.' + trusted)
        for trusted in TRUSTED_DOMAINS
    )
```
Localhost redirects remain risky—any process on the user's machine can intercept them. For production, require HTTPS redirect URIs and treat localhost as development-only.
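A minimal sketch of that policy, validating the requested `redirect_uri` against the URIs from the client's CIMD document (`is_valid_redirect` is a hypothetical helper, not part of any spec):

```python
from urllib.parse import urlparse

def is_valid_redirect(redirect_uri: str, registered_uris: list[str],
                      allow_localhost: bool = False) -> bool:
    """Exact-match redirect_uri validation; rejects localhost unless allowed."""
    if redirect_uri not in registered_uris:  # exact string match, no prefix logic
        return False
    host = urlparse(redirect_uri).hostname
    if host in ("localhost", "127.0.0.1") and not allow_localhost:
        return False
    return True
```

Exact string matching matters here: prefix or wildcard matching on redirect URIs is a classic source of open-redirect token leaks.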
Even though CIMD removes the registration endpoint, you still need limits:
```python
from flask import Flask, request
from flask_limiter import Limiter

app = Flask(__name__)
limiter = Limiter(
    key_func=lambda: request.headers.get('X-Forwarded-For', request.remote_addr),
    app=app,
)

@app.route('/authorize')
@limiter.limit("10 per minute")  # Per IP
def authorize():
    # Authorization logic
    pass
```
Here's the reality check: while CIMD is now the MCP spec default, provider support is still limited.
✅ Stytch – First major provider with full CIMD support, even built a demo site (client.dev)
✅ WorkOS – Added CIMD support in AuthKit specifically for MCP
✅ Authlete – Completed full CIMD implementation in November 2025
✅ Custom implementations – Several companies have built their own
❌ Auth0 – DCR supported, CIMD support not announced
❌ Okta – DCR supported, no CIMD yet
❌ AWS Cognito – No DCR or CIMD support
❌ Azure AD / Entra ID – Pre-registration only
❌ Google Identity – No native MCP auth support
This is actually a massive pain point for developers right now. You have three options:
If you're starting fresh, go with Stytch or WorkOS. They understand MCP and built for it.
```javascript
// Stytch example
const stytch = require('stytch');

const client = new stytch.Client({
  project_id: process.env.STYTCH_PROJECT_ID,
  secret: process.env.STYTCH_SECRET,
});

// CIMD automatically handled
```
If you're stuck with a provider that only supports pre-registration, you can build a CIMD-to-registration bridge:
```python
class CIMDBridge:
    """Convert CIMD requests to pre-registrations"""

    def __init__(self, auth_provider):
        self.provider = auth_provider
        self.cimd_cache = {}

    def handle_authorize(self, client_id, redirect_uri, **params):
        """Handle authorization with CIMD support"""
        if client_id.startswith('https://'):
            # CIMD flow: reuse the cached fetcher defined earlier
            metadata = get_cimd_metadata(client_id)

            # Check if already registered
            registered_id = self.cimd_cache.get(client_id)
            if not registered_id:
                # Register with downstream provider
                registered_id = self.provider.register_client(
                    name=metadata['client_name'],
                    redirect_uris=metadata['redirect_uris'],
                )
                self.cimd_cache[client_id] = registered_id

            # Forward to provider with registered ID
            return self.provider.authorize(
                client_id=registered_id,
                redirect_uri=redirect_uri,
                **params
            )

        # Standard flow
        return self.provider.authorize(client_id, redirect_uri, **params)
```
Major providers will eventually add CIMD support—the MCP spec is clear this is the future. But production deployments can't wait.
If you've already implemented DCR, here's your migration path:
The MCP spec priority order is:
```python
def handle_client_registration(client_id, redirect_uri):
    """Support multiple registration methods"""
    # Check pre-registration
    if client_id in PREREGISTERED_CLIENTS:
        return get_preregistered_metadata(client_id)

    # Check CIMD
    if client_id.startswith('https://'):
        return get_cimd_metadata(client_id)

    # Check DCR (existing registrations)
    if client_id in dcr_registry:
        return get_dcr_metadata(client_id)

    # Fallback: manual registration required
    raise AuthError("Client not registered")
```
Once CIMD is stable:
```python
@app.route('/register', methods=['POST'])
def register_client():
    """DCR endpoint - deprecated"""
    return {
        "error": "deprecated",
        "error_description": "Please use CIMD instead. See: https://docs.example.com/cimd",
        "migration_guide": "https://docs.example.com/dcr-to-cimd"
    }, 410  # Gone
```
Gradually remove old DCR registrations:
```python
from datetime import datetime, timedelta

def cleanup_dcr_clients():
    """Remove unused DCR registrations"""
    cutoff = datetime.now() - timedelta(days=90)

    for client in dcr_registry.values():
        if client.last_used < cutoff:
            logger.info(f"Removing inactive DCR client: {client.id}")
            dcr_registry.delete(client.id)
```
After working through MCP auth implementations with multiple teams, here's what actually works:
Don't implement DCR unless you absolutely have to. CIMD is simpler, more secure, and the spec's future.
```python
# Good: Strict validation, loose caching
def validate_and_cache_cimd(client_id_url):
    metadata = fetch_cimd_metadata(client_id_url)

    # Strict validation
    assert metadata['client_id'] == client_id_url
    assert all(uri.startswith('https://') for uri in metadata['redirect_uris'])
    assert metadata['token_endpoint_auth_method'] in ['none', 'private_key_jwt']

    # Cache for 24h
    cache.set(client_id_url, json.dumps(metadata), ex=86400)
    return metadata
```
Not all domains are equal:
```python
from enum import Enum

class TrustLevel(Enum):
    VERIFIED = "verified"    # Known good clients
    COMMUNITY = "community"  # Community-submitted, unverified
    UNKNOWN = "unknown"      # Never seen before

def get_trust_level(domain):
    if domain in VERIFIED_DOMAINS:
        return TrustLevel.VERIFIED
    if domain in COMMUNITY_ALLOWLIST:
        return TrustLevel.COMMUNITY
    return TrustLevel.UNKNOWN
```
Then adjust your consent screens accordingly:
```python
def render_consent(client_metadata, trust_level):
    warnings = []

    if trust_level == TrustLevel.UNKNOWN:
        warnings.append("⚠️ This client has not been verified")

    if any('localhost' in uri for uri in client_metadata['redirect_uris']):
        warnings.append("⚠️ This client uses localhost redirects")

    return render_template('consent.html',
                           client=client_metadata,
                           warnings=warnings)
```
Track CIMD-related metrics:
```python
from prometheus_client import Counter, Histogram

cimd_fetches = Counter('cimd_metadata_fetches_total', 'CIMD metadata fetches')
cimd_failures = Counter('cimd_metadata_failures_total', 'CIMD fetch failures')
cimd_duration = Histogram('cimd_fetch_duration_seconds', 'CIMD fetch time')

@cimd_duration.time()
def fetch_cimd_metadata(url):
    cimd_fetches.inc()
    try:
        # Fetch logic
        return metadata
    except Exception:
        cimd_failures.inc()
        raise
```
Create clear docs for client developers:
````markdown
# Connecting to Our MCP Server

## Quick Start

1. Host your client metadata:
   - URL: https://yourdomain.com/.well-known/oauth/client-metadata.json
   - Format: [See example](#example)
2. Use that URL as your `client_id` in OAuth flows
3. We'll fetch and cache your metadata automatically

## Requirements

- HTTPS required (localhost OK for development)
- Metadata must be < 10KB
- Must include valid redirect_uris
- We cache for 24 hours

## Example Metadata

```json
{
  "client_id": "https://yourdomain.com/.well-known/oauth/client-metadata.json",
  "client_name": "Your MCP Client",
  ...
}
```
````
## Testing Your CIMD Implementation
Several tools can help you validate:
### MCPJam OAuth Debugger
The MCPJam team built a visual debugger specifically for MCP OAuth flows:
```bash
npm install -g @mcpjam/inspector
mcpjam inspect
```
You can also test the flow manually with curl:
```bash
# 1. Fetch your metadata manually
curl https://yourdomain.com/oauth/metadata.json

# 2. Initiate auth flow
curl -v "https://auth-server.com/authorize?client_id=https://yourdomain.com/oauth/metadata.json&redirect_uri=https://yourdomain.com/callback&response_type=code&scope=mcp:read&state=test"

# 3. Check server logs for metadata fetch
```

Verify that the `client_id` field matches the hosting URL and that your `redirect_uris` are explicitly listed.

CIMD is still new. The IETF draft was only promoted to working group status in October 2025. The MCP spec made it the default in November 2025. We're in early days.
What's coming:
The next evolution is cryptographically signed attestations. Instead of just trusting DNS, clients could present JWTs signed by trusted authorities:
```json
{
  "client_id": "https://example.com/oauth/metadata.json",
  "software_statement": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...",
  ...
}
```
The software statement contains signed claims about the client's identity, verified by a trusted CA or platform (like macOS, Windows, or Android).
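To make the shape concrete, here's a sketch that splits a `software_statement` JWT into its header and claims. This is structural decoding only; a real implementation must verify the signature against the trusted authority's published keys before believing any claim, and the sample token below is fabricated for illustration:

```python
import base64
import json

def decode_jwt_unverified(token: str) -> tuple[dict, dict]:
    """Split a JWT into (header, claims) WITHOUT verifying the signature."""
    def b64url_decode(part: str) -> bytes:
        # Restore stripped base64url padding before decoding
        return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))
    header_b64, payload_b64, _signature = token.split(".")
    return json.loads(b64url_decode(header_b64)), json.loads(b64url_decode(payload_b64))

# Hypothetical software statement built inline for illustration
sample = ".".join([
    base64.urlsafe_b64encode(b'{"alg":"RS256","typ":"JWT"}').rstrip(b"=").decode(),
    base64.urlsafe_b64encode(b'{"iss":"https://trusted-ca.example"}').rstrip(b"=").decode(),
    "signature-goes-here",
])
header, claims = decode_jwt_unverified(sample)
```

In practice the `alg` from the header tells you which key type to expect, and the `iss` claim tells you which authority's JWKS to check the signature against.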
Even stronger: OS-level attestation that proves a binary is legitimate:
```json
{
  "client_id": "https://example.com/oauth/metadata.json",
  "platform_attestation": {
    "platform": "macos",
    "bundle_id": "com.example.mcp-client",
    "team_id": "ABCD123456",
    "attestation": "..."
  }
}
```
This makes client impersonation prohibitively expensive—you'd need to compromise the legitimate software vendor's infrastructure.
The November 2025 spec also introduced Cross App Access (Enterprise-Managed Authorization), which lets enterprises manage MCP connections through their IdP without OAuth redirects. This pairs beautifully with CIMD for enterprise deployments.
The MCP ecosystem is moving toward CIMD as the standard client registration method. It's more secure than DCR, simpler to implement, and scales better for the "thousands of clients" use case that MCP enables.
But here's the reality: limited provider support means you'll likely need to implement your own authorization layer or use one of the few CIMD-supporting providers.
My recommendation: if you're starting fresh, build on a CIMD-native provider like Stytch or WorkOS; if you're tied to a provider without CIMD support, use the bridge pattern above; and either way, keep your auth layer thin enough to swap providers as support lands.
The authentication story for AI agents is still being written. CIMD is a major step forward, but it's not the final answer. As we explored in my analysis of AI agent security challenges, purpose-built identity systems for AI will continue to evolve.
The important thing is to start building with the best tools available today while staying flexible for tomorrow's improvements.
Deepak Gupta is the Co-founder & CEO of GrackerAI (Helping companies become machine-readable and recommendation-ready) and previously founded LoginRadius. He writes about AI, cybersecurity, and B2B SaaS growth at guptadeepak.com.
*** This is a Security Bloggers Network syndicated blog from Deepak Gupta | AI & Cybersecurity Innovation Leader | Founder's Journey from Code to Scale authored by Deepak Gupta - Tech Entrepreneur, Cybersecurity Author. Read the original post at: https://guptadeepak.com/client-id-metadata-documents-cimd-the-future-of-mcp-authentication/