Console terminology: In the Maverics Console, the combination of applications, policies, headers, and connector bindings is managed through User Flows. In YAML, these elements are configured directly within each app's configuration block under apps[].policies[].
Overview
MCP Proxy sits between AI agents and upstream MCP servers, adding identity, authorization, and audit to MCP tool access. Agents connect to the Orchestrator's MCP endpoint, and the Orchestrator forwards tool calls to upstream MCP servers after authenticating the agent, evaluating authorization policies, and exchanging tokens. No modification to the upstream MCP servers is needed.
How It Works
The MCP Proxy operates as an identity-aware intermediary for MCP traffic:
- Agent connects — The AI agent connects to the Orchestrator's MCP endpoint (SSE or Streamable HTTP).
- OAuth authentication — The Orchestrator validates the agent’s OAuth token against the configured authorization server.
- Tool discovery — The Orchestrator proxies tool discovery to the upstream MCP server and returns the available tools (optionally namespaced with a prefix).
- Policy evaluation — When the agent invokes a tool, the Orchestrator evaluates OPA policies before forwarding the request.
- Token exchange — The Orchestrator exchanges the agent’s token for a delegation token scoped to the upstream MCP server, with per-tool scopes and TTLs.
- Upstream forwarding — The tool call is forwarded to the upstream MCP server via the configured transport (stdio subprocess or HTTP streaming).
- Response passthrough — The upstream server’s response flows back through the Orchestrator to the agent, with audit logging capturing the interaction.
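The steps above correspond to a small amount of app configuration. As a minimal sketch (the app name, command path, connector name, and audience are illustrative placeholders, not documented defaults):

```yaml
# Minimal MCP Proxy app: stdio upstream with delegation-based token exchange.
apps:
  - name: inventory-mcp                      # hypothetical app name
    type: mcpProxy
    upstream:
      transport: stdio
      stdio:
        command: /usr/local/bin/inventory-mcp-server   # hypothetical server binary
    authorization:
      outbound:
        type: tokenExchange
        tokenExchange:
          type: delegation
          idp: authProviderOIDC                        # hypothetical IDP connector name
          audience: https://mcp.example.com/inventory  # hypothetical audience
```

With this in place, the agent authenticates to the Orchestrator, and every tool call is policy-checked and forwarded to the subprocess with a scoped delegation token.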
Use Cases
- Securing existing MCP servers — Add identity and authorization to upstream MCP servers without modifying them. The Orchestrator handles OAuth authentication and token exchange.
- Per-tool authorization — Apply different scopes and TTLs to different tools based on the tool name. Read-only tools get read scopes; write tools get write scopes.
- Multiple transport support — Connect to upstream servers via stdio (subprocess managed by the Orchestrator) or HTTP streaming (remote MCP server over the network).
- Centralized token exchange — Exchange agent tokens for service-specific delegation tokens with per-tool granularity, preserving the user’s identity through the agent-to-server chain.
Key Concepts
Transport Flexibility
MCP Proxy supports two upstream transport types. Stdio runs the MCP server as a subprocess managed by the Orchestrator (for local or sidecar deployments). Stream connects to a remote MCP server over HTTP (for network-based deployments). The transport choice depends on where the MCP server runs, not on the agent-facing protocol.
Tool Namespacing
When multiple MCP Proxy apps connect to different upstream MCP servers, tool names can collide. The toolNamespace feature adds a configurable prefix to all tools from an app, enabling agents to access tools from multiple MCP servers through a single endpoint.
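For example, two apps sharing one MCP endpoint might each declare a unique prefix. A sketch (app names, prefixes, and tool names are illustrative):

```yaml
# Two MCP Proxy apps exposed through a single endpoint; prefixes keep tool
# names distinct across upstream servers.
apps:
  - name: crm-mcp
    type: mcpProxy
    toolNamespace:
      name: crm_        # agents see crm_listContacts, crm_createLead, ...
  - name: billing-mcp
    type: mcpProxy
    toolNamespace:
      name: billing_    # agents see billing_listInvoices, ...
```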
Token Exchange for MCP Servers
MCP Proxy uses RFC 8693 token exchange to convert the agent's inbound OAuth token into a scoped token for the upstream MCP server. Each tool gets its own scopes and TTL, enabling least-privilege access. The delegation token carries both agent and user identity through the entire chain.
Inbound OPA Policies
OPA Rego policies evaluate inbound MCP requests before forwarding to the upstream server. This adds a centralized authorization layer to MCP servers that may lack their own authorization. Policies can block specific tool invocations based on agent identity, user attributes, or requested tool name.
Upstream Token Validation Best Practice
The upstream MCP servers receive standard MCP requests, and the Orchestrator adds identity, authorization, and audit transparently — upgrading existing MCP servers with enterprise identity without modifying them. While no protocol-level changes are required, it is best practice for upstream MCP servers to validate the received delegation token. Specifically, upstreams should:
- Validate the token signature and expiry — Confirm the token has not been tampered with and is still valid.
- Confirm the issuer matches the Auth Provider Orchestrator — The iss claim should match the expected authorization server.
- Verify the audience claim matches the server's own identifier — The aud claim should contain the MCP server's registered audience.
- Check that the token's scopes authorize the requested tool invocation — The scope claim should include the permissions required for the specific tool.
Interface
MCP Proxy app configuration is currently available via YAML only. The Console UI does not yet support creating or editing MCP Proxy apps. See the Configuration tab for the full YAML reference.
Configuration Reference
mcpProxy App Type (apps[].type: mcpProxy)
Each MCP Proxy app entry defines a proxied connection to an upstream MCP server. Apps are configured under the apps array with type: mcpProxy. For shared app fields (name, type) see the Apps and Routes reference.
| Key | Type | Required | Description |
|---|---|---|---|
| toolNamespace.disabled | Boolean | No | Disable tool namespacing. When false, all tool names are prefixed with toolNamespace.name. |
| toolNamespace.name | String | No | Prefix for tool names (e.g., service_). Characters allowed: a-z, A-Z, 0-9, ., -, _. |
| upstream.transport | String | Yes | Upstream transport type: stdio (subprocess) or stream (HTTP streaming). |
| upstream.stdio.command | String | Conditional | Command to execute the MCP server subprocess. Required when transport is stdio. |
| upstream.stdio.args | Array | No | Command-line arguments for the subprocess. |
| upstream.stdio.env | Object | No | Environment variables for the subprocess (key-value pairs). |
| upstream.stream.url | String | Conditional | Upstream MCP server URL. Required when transport is stream. |
| upstream.stream.tls | String | No | TLS profile name for the upstream connection (references a tls entry). |
| upstream.stream.connection.dialTimeout | String | No | Connection dial timeout (duration string, e.g., 10s). |
| upstream.stream.connection.keepAlive.interval | String | No | Keep-alive interval for the upstream connection (duration string). |
| upstream.stream.connection.pool.maxIdleConns | Integer | No | Maximum number of idle connections in the pool. |
| upstream.stream.connection.pool.maxIdleConnsPerHost | Integer | No | Maximum idle connections per upstream host. |
| authorization.inbound.opa.name | String | No | Name identifier for the OPA policy. |
| authorization.inbound.opa.file | String | Conditional | Path to the Rego policy file. Mutually exclusive with rego. |
| authorization.inbound.opa.rego | String | Conditional | Inline Rego policy. Mutually exclusive with file. |
| authorization.outbound.type | String | No | Outbound authorization type: tokenExchange or unprotected. |
| authorization.outbound.tokenExchange.type | String | No | Token exchange type: delegation (default) or impersonation. |
| authorization.outbound.tokenExchange.idp | String | Conditional | Identity provider connector name for token exchange. Required when using token exchange. |
| authorization.outbound.tokenExchange.audience | String | Conditional | Token audience for the upstream MCP server. Required when using token exchange. |
| authorization.outbound.tokenExchange.tools[].name | String | No | Tool name (exact match) or regex pattern (~ pattern) for per-tool config. |
| authorization.outbound.tokenExchange.tools[].ttl | String | No | Access token lifetime for this tool (duration string, e.g., 5s). |
| authorization.outbound.tokenExchange.tools[].scopes[].name | String | No | OAuth scope to request when exchanging tokens for this tool. |
Upstream Transports
MCP Proxy supports two upstream transport types for connecting to MCP servers.
stdio (upstream.transport: "stdio") — Runs the MCP server as a subprocess managed by the Orchestrator. The Orchestrator starts the process, communicates via stdin/stdout, and manages its lifecycle. Use this when the MCP server runs locally or as a sidecar.
Upstream stdio transport configuration is available via YAML only. The Console UI does not yet support configuring MCP Proxy upstream transports.
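A stdio upstream sketch, using only the fields from the reference table above (the command, arguments, and environment values are illustrative):

```yaml
# stdio upstream: the Orchestrator launches the MCP server as a subprocess
# and communicates over stdin/stdout.
upstream:
  transport: stdio
  stdio:
    command: npx
    args: ["-y", "@example/mcp-server"]   # hypothetical package name
    env:
      LOG_LEVEL: info                     # illustrative environment variable
```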
stream (upstream.transport: "stream") — Connects to a remote MCP server over HTTP streaming. Configure with the server URL, optional TLS profile, and connection pooling settings. Use this when the MCP server runs as a separate service.
Upstream stream transport configuration is available via YAML only. The Console UI does not yet support configuring MCP Proxy upstream transports.
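A stream upstream sketch, again limited to fields from the reference table (the URL and TLS profile name are illustrative):

```yaml
# stream upstream: remote MCP server reached over HTTP streaming.
upstream:
  transport: stream
  stream:
    url: https://mcp.internal.example.com/mcp   # hypothetical upstream URL
    tls: upstream-tls            # references a tls entry elsewhere in the config
    connection:
      dialTimeout: 10s
      keepAlive:
        interval: 15s            # prevents intermediaries from dropping idle connections
      pool:
        maxIdleConns: 100
        maxIdleConnsPerHost: 10
```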
Token Exchange
MCP Proxy uses RFC 8693 token exchange to obtain scoped tokens for outbound MCP server calls. Token exchange supports two modes: delegation (default) and impersonation.
- Delegation (tokenExchange.type: delegation) — Delegation is the default and recommended mode. The Orchestrator exchanges the agent's token for a delegation token that contains an act (actor) claim identifying the agent alongside the original user's identity. The upstream MCP server sees both identities — who the user is and which agent is acting on their behalf — which supports auditability and least-surprise behavior for downstream services.
- Impersonation (tokenExchange.type: impersonation) — The Orchestrator exchanges the agent's token for an impersonation token that fully assumes the user's identity, with no trace of agent involvement. The upstream MCP server receives a token that looks like it came directly from the user, meaning audit trails at the upstream server will not show agent participation. Impersonation may be required when upstream MCP servers do not support the act claim pattern.
- Token minting policies — When token exchange is processed by the OIDC Provider, administrators can configure OPA-based token minting policies (authorization.tokenMinting.accessToken.policies on the OIDC Provider app) to control which tokens get issued. These policies evaluate the token exchange request context and can deny token issuance based on agent identity, requested scopes, audience, or delegating user attributes. This provides a governance layer over token exchange independent of the inbound OPA policies that control tool access.
- Per-tool scopes — Each tool can have its own OAuth scopes and token TTL. Tool names support exact matching and regex patterns for wildcarding.
Token exchange configuration for MCP Proxy apps is available via YAML only. The Console UI does not yet support configuring outbound authorization or per-tool token exchange settings.
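A per-tool token exchange sketch (the connector name, audience, and scope names are illustrative, not documented values):

```yaml
# Per-tool scopes and TTLs: read tools get long-lived read scopes, write
# tools get short-lived write scopes.
authorization:
  outbound:
    type: tokenExchange
    tokenExchange:
      type: delegation
      idp: authProviderOIDC                # hypothetical IDP connector name
      audience: https://mcp.example.com    # hypothetical audience
      tools:
        - name: listItems                  # exact tool-name match
          ttl: 5m
          scopes:
            - name: items.read             # illustrative scope
        - name: "~ create.*"               # regex: createItem, createOrder, ...
          ttl: 5s
          scopes:
            - name: items.write            # illustrative scope
```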
In the tools list, a name like listItems is matched exactly, while ~ create.* is treated as a regex pattern matching any tool name starting with create (e.g., createItem, createOrder). Per-tool token exchange ensures each tool invocation receives a minimally scoped token for the upstream MCP server.
The token exchange connects to the OIDC Provider for token issuance. The OIDC Provider must be configured with the urn:ietf:params:oauth:grant-type:token-exchange grant type to support token exchange requests.
Inbound Authorization (OPA)
MCP Proxy evaluates OPA (Open Policy Agent) Rego policies on inbound MCP requests before performing outbound token exchange. This blocks unauthorized tool calls early, before any outbound MCP server call is made. Configure authorization.inbound.opa with a name and either a file path to a Rego policy file or inline rego content. The policy is evaluated against the MCP request context including the agent's identity and the requested tool.
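A sketch of an inline policy; note that the Rego package name, the input field (input.tool), and the blocked tool name are assumptions for illustration, not a documented request schema:

```yaml
authorization:
  inbound:
    opa:
      name: block-destructive-tools
      rego: |
        package mcp.authz                # hypothetical package name

        default allow = false

        # Allow every tool invocation except the hypothetical deleteItem tool.
        allow {
          input.tool != "deleteItem"
        }
```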
Troubleshooting
Cannot connect to upstream MCP server
Symptoms: The agent connects to the proxy but tool discovery fails with a connection error. No tools are returned.
Causes:
- The upstream.stream.url is incorrect or unreachable from the Orchestrator.
- TLS certificate issues when connecting to the upstream MCP server over HTTPS.
- The upstream MCP server is not running or is not accepting connections.
Resolution:
- Verify the upstream.stream.url is correct and reachable from the Orchestrator host.
- If using TLS, check the TLS profile configuration and ensure the upstream server's certificate is trusted.
- Confirm the upstream MCP server is running and accepting connections on the expected port.
Tool namespacing conflicts
Symptoms: Tools from different upstream MCP servers have name collisions, causing unexpected behavior when agents invoke tools.
Causes:
- The toolNamespace feature is not configured on one or more MCP Proxy apps.
- Namespace prefixes are not unique across apps, so tools from different servers still collide.
Resolution:
- Configure unique toolNamespace.name values for each MCP Proxy app to prefix tool names (e.g., service_a_ and service_b_).
- Ensure toolNamespace.disabled is set to false (the default) on all apps that share a single MCP endpoint.
Token exchange fails during tool invocation
Symptoms: The agent authenticates successfully but tool invocation fails. Orchestrator logs show a token exchange failure.
Causes:
- The Auth Provider Orchestrator's token endpoint is unreachable from the Gateway Orchestrator.
- The urn:ietf:params:oauth:grant-type:token-exchange grant type is not enabled on the OIDC Provider app.
- Audience mismatch — the audience in the token exchange configuration does not match the OIDC Provider's expectedAudiences.
Resolution:
- Verify the Auth Provider Orchestrator's token endpoint is reachable from the Gateway Orchestrator.
- Ensure the OIDC Provider app has urn:ietf:params:oauth:grant-type:token-exchange in its grantTypes list.
- Check that the audience value in the MCP Proxy app's tokenExchange configuration matches the OIDC Provider app's expectedAudiences.
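On the OIDC Provider side, the grant type and audience checks might look like the following sketch (the app name, app type identifier, other grant types, and audience value are illustrative assumptions):

```yaml
# OIDC Provider app with the token-exchange grant enabled and the MCP Proxy's
# audience listed, so exchange requests from the Gateway Orchestrator succeed.
apps:
  - name: oidc-provider            # hypothetical app name
    type: oidcProvider             # assumed type identifier for the OIDC Provider app
    grantTypes:
      - authorization_code
      - urn:ietf:params:oauth:grant-type:token-exchange
    expectedAudiences:
      - https://mcp.example.com/inventory   # must match the MCP Proxy's audience
```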
SSE transport disconnects
Symptoms: The agent connection drops during tool invocation. Logs show "connection reset" or timeout errors.
Causes:
- SSE keep-alive is not configured, so the connection appears idle to intermediate proxies or load balancers.
- A proxy or load balancer is enforcing a timeout that kills long-lived connections.
Resolution:
- Configure upstream.stream.connection.keepAlive.interval with an appropriate interval (e.g., 15s) to keep connections alive.
- Configure any intermediate proxy or load balancer to allow long-lived connections for the MCP endpoint.
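The keep-alive fix is a small config change; as a sketch:

```yaml
# Periodic keep-alives stop intermediaries from treating the long-lived
# stream connection as idle. Pick an interval shorter than any proxy or
# load balancer idle timeout in the path.
upstream:
  stream:
    connection:
      keepAlive:
        interval: 15s
```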
Session timeout during long operations
Symptoms: Multi-step agent interactions fail partway through. The agent loses its session between tool invocations.
Causes:
- The session timeout is too short for the expected agent workflow duration.
- The upstream MCP server takes longer to respond than the configured timeout allows.
Resolution:
- Increase the session timeout to accommodate the expected workflow duration.
- Review the upstream MCP server's response times and adjust connection timeouts (upstream.stream.connection.dialTimeout) accordingly.