This guide shows how to adapt projects built with the OpenAI Agents Python SDK so they run as Orca Agents on the StreamNative cloud platform. It walks through preparing a project, exposing the required root_agent entry point, packaging the code for upload, and deploying it with snctl.

Prerequisites

  • Python 3.10 or later with a virtual environment for isolating dependencies.
  • The openai-agents library, the openai SDK components your agent requires, and any extra model integrations.
  • The orca-agent-engine Python SDK for interacting with the Orca runtime.
  • StreamNative CLI (snctl) installed and configured for the organization, tenant, and namespace where you plan to deploy.
  • Access to StreamNative cloud topics that deliver agent input and capture responses.
  • An OpenAI API key stored as a secret so the runtime injects it into the agent (for example, OPENAI_API_KEY).

Prepare an OpenAI Agents project

  1. Start from an OpenAI Agents SDK example, such as the multi-tool or Model Context Protocol (MCP) samples distributed with the Orca Engine examples, or create a new project from scratch.
  2. Copy the project into your own source repository and update the package metadata (for example, adjust the module names and version numbers in pyproject.toml; see the sketch after this list).
  3. Install dependencies inside a fresh virtual environment:
    python -m venv .venv
    source .venv/bin/activate
    pip install --upgrade pip setuptools
    pip install openai-agents
    pip install openai
    pip install orca-agent-engine
    
  4. Add any helper libraries (HTTP clients, supporting SDK packages, etc.) to requirements.txt or your packaging metadata so they stay bundled with the agent artifact.
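
If you manage dependencies through pyproject.toml, a minimal sketch might look like the following (the project name and version are placeholders; adjust to your build backend):

[project]
name = "openai-multi-tool"
version = "0.1.0"
requires-python = ">=3.10"
dependencies = [
    "openai-agents",
    "openai",
    "orca-agent-engine",
]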

Export the root_agent

The Orca runtime imports OpenAI projects by loading a module-level variable named root_agent. Define the agent with the OpenAI SDK and keep runtime secrets such as OPENAI_API_KEY outside the code.
from agents import Agent

root_agent = Agent(
    name="Assistant",
    instructions="You only respond in haikus.",
)
  • Ensure your package exposes root_agent through __all__ in __init__.py so the runtime can discover the agent.
  • Validate the agent locally by importing the package and running a sample request through the SDK, as sketched below.
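
For example, the package __init__.py can re-export the agent, and a quick local check can exercise it with the SDK's Runner. The module name openai_multi_tool here matches the packaging layout shown later; set OPENAI_API_KEY in your environment before running the check.

# openai_multi_tool/__init__.py
from .agent import root_agent

__all__ = ["root_agent"]

# smoke_test.py — quick local check before packaging
from agents import Runner
from openai_multi_tool import root_agent

result = Runner.run_sync(root_agent, "Write a haiku about deployment.")
print(result.final_output)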

Add tools and orchestrations

Use the @function_tool decorator to register functions as callable tools. This allows the agent to execute Python code when it needs to fetch data or perform actions.
from agents import Agent, function_tool

@function_tool
async def fetch_weather(city: str) -> str:
    # Replace with a real API call
    return "The weather is sunny."

root_agent = Agent(
    name="Assistant",
    instructions="Use tools when you need more context before answering.",
    tools=[fetch_weather],
)
  • Leverage type hints and descriptive function comments to provide context that improves tool selection.
  • When a tool needs access to the execution context (for example, to stream tokens), annotate its first parameter with RunContextWrapper from the Agents SDK, as sketched below.
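
As a sketch, a context-aware tool takes the wrapper as its first parameter; the Any type argument is a placeholder for whatever context object you pass when the run starts.

from typing import Any

from agents import RunContextWrapper, function_tool

@function_tool
async def fetch_weather(ctx: RunContextWrapper[Any], city: str) -> str:
    # ctx wraps the object supplied as the run context when the run starts.
    # Replace with a real API call.
    return f"The weather in {city} is sunny."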

Discover managed tools at runtime

When your Orca workspace provides managed tools through the console, fetch them with the Orca runtime context and supply them to the agent.
from agents import Agent
from orca.functions.agent_context import AgentContext

managed_tools = AgentContext.current().get_tools()

root_agent = Agent(
    name="operations_assistant",
    instructions="Use the managed tools provided by Orca Engine before you respond.",
    tools=managed_tools,
)
Wrap the context lookup in a try/except block if you run unit tests outside Orca Engine, for example:
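
A minimal guard that falls back to an empty tool list when the Orca runtime context is unavailable:

from agents import Agent

try:
    from orca.functions.agent_context import AgentContext
    managed_tools = AgentContext.current().get_tools()
except Exception:
    # Outside Orca Engine (for example, in unit tests) there is no runtime context.
    managed_tools = []

root_agent = Agent(
    name="operations_assistant",
    instructions="Use the managed tools provided by Orca Engine before you respond.",
    tools=managed_tools,
)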

Package the project

Orca Engine accepts OpenAI agent artifacts packaged as ZIP archives, which offer the fastest iteration loop. Organize the files so the top-level folder matches your importable module:
openai_multi_tool/
  __init__.py
  agent.py
requirements.txt
Then generate the dependency list and create the archive:
python -m pip freeze --exclude-editable > requirements.txt
zip -r openai_multi_tool.zip openai_multi_tool requirements.txt
  • Run pip freeze from the virtual environment you used during development so dependency versions stay consistent.
  • The runtime installs listed dependencies automatically during deployment.
  • Reference the ZIP archive with --agent-file openai_multi_tool.zip when running snctl commands.
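
Before uploading, you can list the archive contents to confirm the layout matches the expected module structure:

unzip -l openai_multi_tool.zip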

Deploy with snctl

Use the agent-focused commands to publish both the code and its configuration to the StreamNative cloud platform. The CLI captures all required settings, so you don’t need to ship an agent.yaml file.

Deploy the ZIP archive

snctl agents create \
  --tenant <tenant> \
  --namespace <namespace> \
  --name openai-multi-tool \
  --directory openai_multi_tool \
  --agent-framework openai \
  --inputs <input-topic> \
  --output <output-topic> \
  --agent-file openai_multi_tool.zip
  • --directory must match the importable package path inside your ZIP archive.
  • --agent-framework openai tells Orca Engine to load the OpenAI runtime adapter.
  • To roll out changes, rerun the command with updated artifacts or specifications using snctl agents update. Use snctl agents status to monitor runtime health and snctl agents trigger to submit test payloads (see the sketch below).
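
Assuming the update and status subcommands accept the same identifying flags as create (verify with snctl agents --help for your CLI version), a rollout might look like:

# Assumes update/status mirror the flags shown above; verify with snctl agents --help.
snctl agents update \
  --tenant <tenant> \
  --namespace <namespace> \
  --name openai-multi-tool \
  --agent-file openai_multi_tool.zip

snctl agents status \
  --tenant <tenant> \
  --namespace <namespace> \
  --name openai-multi-tool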

Next steps

  • Configure service accounts and permissions before deploying to production namespaces.
  • Store the OpenAI API key and other credentials in StreamNative cloud secrets so the runtime injects them securely.
  • Run local or CI tests that import the package and call root_agent to verify tool wiring before publishing updates.
  • Use the StreamNative cloud console to review managed tool assignments and ensure your OpenAI agent can discover required Model Context Protocol (MCP) integrations.
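
As a minimal CI sketch (a hypothetical tests/test_agent.py), you can verify the entry point and tool wiring without calling the OpenAI API:

# tests/test_agent.py
from openai_multi_tool import root_agent

def test_root_agent_is_exported():
    assert root_agent.name
    assert root_agent.tools  # confirms tool wiring is present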