The Remote StreamNative MCP Server can publish Orca agents as Model Context Protocol (MCP) tools. This lets IDE copilots, external orchestrators, and other MCP-aware agents call your Orca workloads without building a custom integration layer.

Automatic discovery and refresh

  • Orca Engine registers agents that expose the required root_agent entry point and marks them as shareable through MCP.
  • The managed server regularly discovers eligible agents and publishes them as MCP tools. Default tool names follow the pattern agent_<tenant>_<namespace>_<agent> when you do not supply overrides.
  • Server-Sent Events (SSE) signal catalog changes. When the MCP server reports tools changed, Orca Engine reloads cached metadata so clients immediately see the latest instructions and tool hints.
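The default naming rule above is easy to reproduce. The helper below is purely illustrative (the tenant, namespace, and agent values are invented), but it shows the name a client would see when no override is configured:

```python
def default_tool_name(tenant: str, namespace: str, agent: str) -> str:
    """Illustrative helper: derive the default MCP tool name
    (agent_<tenant>_<namespace>_<agent>) when no override is set."""
    return f"agent_{tenant}_{namespace}_{agent}"

print(default_tool_name("acme", "prod", "incident-responder"))
# agent_acme_prod_incident-responder
```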

Metadata surfaced to MCP clients

  • Each exported tool includes the agent name, description, and supported session mode so callers understand the conversational context before invoking it.
  • When an agent declares structured input or output channels, the Remote MCP Server maps that contract into MCP tool parameters so clients know which fields to supply.
  • Keep prompt instructions concise and results deterministic where possible so downstream tools can parse responses reliably.
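To make the mapping concrete, here is a sketch of what one exported tool definition could look like to an MCP client. The field names (`name`, `description`, `inputSchema`) follow the MCP tool schema; the values and the single `message` parameter are invented for illustration:

```python
# Hypothetical tool definition as listed by an MCP client.
# A structured input contract on the agent becomes a JSON Schema
# that tells callers which fields to supply.
tool = {
    "name": "agent_acme_prod_incident-responder",
    "description": "Guides engineers through triaging active incidents.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "message": {"type": "string"},
        },
        "required": ["message"],
    },
}
print(tool["inputSchema"]["required"])
# ['message']
```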

Customize exposed tool metadata

Provide a descriptive name and summary to make the MCP tool easier for LLM agents to use. For example:
{
  "env": {
    "MCP_TOOL_NAME": "IncidentResponder",
    "MCP_TOOL_DESCRIPTION": "Guides engineers through triaging active StreamNative Cloud incidents."
  }
}
  • Set MCP_TOOL_NAME and MCP_TOOL_DESCRIPTION as environment variables in the agent configuration through CLI deployment flags.
  • When present, the Remote MCP Server uses these values instead of the default derived name and description.
  • Administrators can require both variables before publishing an agent so only well-documented tools appear in the Remote StreamNative MCP Server.
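The pre-publication check described in the last bullet can be sketched as a simple gate over the agent's environment. The function below is hypothetical (the platform's actual validation is internal); it only illustrates the policy of requiring both metadata variables:

```python
def ready_for_publication(env: dict) -> bool:
    """Hypothetical gate: publish an agent as an MCP tool only when
    both metadata variables are present and non-empty."""
    return bool(env.get("MCP_TOOL_NAME")) and bool(env.get("MCP_TOOL_DESCRIPTION"))

complete = {
    "MCP_TOOL_NAME": "IncidentResponder",
    "MCP_TOOL_DESCRIPTION": "Guides engineers through triaging incidents.",
}
print(ready_for_publication(complete))                         # True
print(ready_for_publication({"MCP_TOOL_NAME": "IncidentResponder"}))  # False
```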

Prepare an agent for MCP publication

  1. Follow the Google ADK agents guide or OpenAI agents guide to produce a deployable artifact.
  2. Store model credentials as StreamNative Cloud secrets and bind them with the --secrets flag during deployment.
  3. Add MCP_TOOL_NAME and MCP_TOOL_DESCRIPTION to the agent environment, then redeploy so the discovery cycle can pick up the customized metadata.

Control discovery with MCP features

  • The Remote MCP Server exports shareable Orca agents by default when your identity can reach the associated tenant and namespace.
  • To skip agent discovery from a client, send the X-MCP-Features header during connection setup and omit agents-as-tools from the list.
  • Provide the feature explicitly—X-MCP-Features: pulsar-admin,agents-as-tools—when you want to mix agent tools with other MCP capabilities while keeping control over optional features.
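When setting up a client connection, the feature list travels as a plain comma-separated header value. The snippet below only builds that header; the bearer-token authorization line is an assumption about how your client authenticates, not part of the feature mechanism:

```python
# Request headers for connection setup. Dropping "agents-as-tools"
# from the list skips agent discovery for this client.
features = ["pulsar-admin", "agents-as-tools"]
headers = {
    "Authorization": "Bearer <token>",  # placeholder credential (assumption)
    "X-MCP-Features": ",".join(features),
}
print(headers["X-MCP-Features"])
# pulsar-admin,agents-as-tools
```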

Consume the agent from MCP clients

  • MCP clients invoke the tool using standard call_tool requests. The managed server forwards payloads to the corresponding Orca agent and streams responses back.
  • The MCP server enforces StreamNative Cloud permissions, so users only see agents they are authorized to access.
  • Use StreamNative observability dashboards and snctl agents status to monitor traffic routed through MCP.
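At the wire level, a standard call_tool request is a JSON-RPC 2.0 message with the `tools/call` method, per the MCP specification. The tool name and argument below are illustrative; your MCP client library normally constructs this envelope for you:

```python
import json

# Shape of a standard MCP call_tool request (JSON-RPC 2.0).
# The managed server forwards "arguments" to the Orca agent behind
# the named tool and streams the agent's response back.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "agent_acme_prod_incident-responder",
        "arguments": {"message": "Broker pods are crash-looping; where do I start?"},
    },
}
print(json.dumps(request, indent=2))
```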
This pattern lets you share domain-specific assistants—such as troubleshooting copilots or knowledge workers—with any MCP-aware application while keeping configuration centralized in StreamNative Cloud.