This guide covers preparing an OpenAI Agents project, exporting the `root_agent` entry point, packaging the code for upload, and deploying it with `snctl`.
Prerequisites
- Python 3.10 or later with a virtual environment for isolating dependencies.
- The `openai-agents` library, the `openai` SDK components your agent requires, and any extra model integrations.
- The `orca-agent-engine` Python SDK for interacting with the Orca runtime.
- StreamNative CLI (`snctl`) installed and configured for the organization, tenant, and namespace where you plan to deploy.
- Access to StreamNative cloud topics that deliver agent input and capture responses.
- An OpenAI API key stored as a secret so the runtime injects it into the agent (for example, `OPENAI_API_KEY`).
Prepare an OpenAI Agents project
- Start from an OpenAI Agents SDK example, such as the multi-tool or Model Context Protocol (MCP) samples distributed with the Orca Engine examples, or create a new project from scratch.
- Copy the project into your own source repository and update the package metadata (for example, adjust module names and version numbers in `pyproject.toml`).
- Install dependencies inside a fresh virtual environment; a minimal setup sketch follows this list.
- Add any helper libraries (HTTP clients, supporting SDK packages, etc.) to `requirements.txt` or your packaging metadata so they stay bundled with the agent artifact.
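A minimal environment-setup sketch, assuming the package names listed under the prerequisites (verify the exact `orca-agent-engine` distribution name against the SDK documentation):

```bash
# Create and activate an isolated environment for the agent project.
python -m venv .venv
source .venv/bin/activate

# Install the OpenAI Agents SDK, the OpenAI SDK, and the Orca runtime SDK.
# The orca-agent-engine name is taken from the prerequisites above.
pip install openai-agents openai orca-agent-engine
```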
Export the root_agent
The Orca runtime imports OpenAI projects by loading a module-level variable named `root_agent`. Define the agent with the OpenAI SDK and keep runtime secrets such as `OPENAI_API_KEY` outside the code; a minimal sketch follows the list below.
- Ensure your package exposes the module through `__all__` in `__init__.py` so the runtime can discover the agent.
- Validate the agent locally by importing the package and sending a sample request through the OpenAI SDK.
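A minimal sketch of the entry point, assuming a hypothetical `my_agent` package (the module path, agent name, and instructions are placeholders; `Agent` comes from the `openai-agents` SDK, which reads `OPENAI_API_KEY` from the environment so the key never appears in code):

```python
# my_agent/agent.py: defines the module-level variable the runtime loads.
from agents import Agent

root_agent = Agent(
    name="support-agent",  # placeholder name
    instructions="Answer questions about incoming order events.",
    model="gpt-4o-mini",
)
```

```python
# my_agent/__init__.py: re-export root_agent so the runtime can discover it.
from .agent import root_agent

__all__ = ["root_agent"]
```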
Add tools and orchestrations
Use the `@function_tool` decorator to register functions as callable tools. This allows the agent to execute Python code when it needs to fetch data or perform actions; a short sketch follows the list below.
- Leverage type hints and descriptive docstrings to provide context that improves tool selection.
- When you need access to the execution context (for example, to stream tokens), annotate the first parameter with `RunContextWrapper` from the OpenAI SDK.
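A short sketch of tool registration; `get_order_status` and `report_progress` are hypothetical tools, while `function_tool`, `RunContextWrapper`, and `Agent` come from the `openai-agents` SDK:

```python
from typing import Any

from agents import Agent, RunContextWrapper, function_tool


@function_tool
def get_order_status(order_id: str) -> str:
    """Look up the current status of an order by its ID."""
    # Replace with a real lookup (database query, HTTP call, etc.).
    return f"Order {order_id} is in transit."


@function_tool
def report_progress(ctx: RunContextWrapper[Any], step: str) -> str:
    """Record a progress step; receives the run context as its first argument."""
    return f"Recorded step: {step}"


root_agent = Agent(
    name="order-agent",  # placeholder name
    instructions="Use the available tools to answer order questions.",
    tools=[get_order_status, report_progress],
)
```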
Discover managed tools at runtime
When your Orca workspace provides managed tools through the console, fetch them with the Orca runtime context and supply them to the agent. Wrap the lookup in a try/except block if you run unit tests outside of Orca Engine, where the runtime context is unavailable.
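This guide does not spell out the `orca-agent-engine` entry point, so the following is only a hedged sketch: `get_managed_tools` is a hypothetical accessor standing in for whatever the SDK actually exposes (check the SDK reference for the real call).

```python
from agents import Agent

try:
    # Hypothetical accessor; substitute the real orca-agent-engine call.
    from orca_agent_engine import get_managed_tools

    managed_tools = get_managed_tools()
except Exception:
    # Outside Orca Engine (for example, in local unit tests) the runtime
    # context is unavailable, so fall back to an empty tool list.
    managed_tools = []

root_agent = Agent(
    name="managed-tool-agent",  # placeholder name
    instructions="Use managed tools when they are available.",
    tools=managed_tools,
)
```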
Package the project
Orca Engine accepts OpenAI agent artifacts packaged as ZIP archives.
Package as a ZIP archive (recommended)
ZIP archives offer the fastest iteration loop. Organize the files so the top-level folder matches your importable module; one possible layout is sketched after this list.
- Run `pip freeze` from the virtual environment you used during development so dependency versions stay consistent.
- The runtime installs listed dependencies automatically during deployment.
- Reference the ZIP archive with `--agent-file openai_multi_tool.zip` when running `snctl` commands.
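A minimal packaging sketch, reusing the `openai_multi_tool.zip` name from the bullet above (the layout and file names are illustrative assumptions):

```bash
# Assumed layout: the top-level folder matches the importable module.
#   openai_multi_tool/
#     __init__.py      # re-exports root_agent via __all__
#     agent.py         # defines root_agent and its tools
#   requirements.txt   # dependency pins for the runtime to install

# Pin dependencies from the development environment.
pip freeze > requirements.txt

# Bundle the module and the dependency list into the upload artifact.
zip -r openai_multi_tool.zip openai_multi_tool requirements.txt
```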
Deploy with snctl
Use the agent-focused commands to publish both the code and its configuration to the StreamNative cloud platform. The CLI captures all required settings, so you don't need to ship an `agent.yaml` file.
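A hedged sketch of an initial deployment, assembled only from the flags mentioned in this guide; the `snctl agents create` subcommand spelling and the agent name are assumptions, so confirm them with the `snctl` help output:

```bash
# Assumed subcommand and agent name; the flags below appear in this guide.
snctl agents create openai-multi-tool \
  --agent-file openai_multi_tool.zip \
  --directory openai_multi_tool \
  --agent-framework openai
```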
- `--directory` must match the importable package path inside your ZIP archive.
- `--agent-framework openai` tells Orca Engine to load the OpenAI runtime adapter.
- To roll out changes, repeat the deployment with updated artifacts or specifications using `snctl agents update`. Use `snctl agents status` to monitor runtime health and `snctl agents trigger` to submit test payloads.
Next steps
- Configure service accounts and permissions before deploying to production namespaces.
- Store the OpenAI API key and other credentials in StreamNative cloud secrets so the runtime injects them securely.
- Run local or CI tests that import the package and call `root_agent` to verify tool wiring before publishing updates; a smoke-test sketch follows this list.
- Use the StreamNative cloud console to review managed tool assignments and ensure your OpenAI agent can discover required Model Context Protocol (MCP) integrations.
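A minimal smoke-test sketch, reusing the hypothetical `my_agent` package from the earlier examples; `Runner` is the `openai-agents` SDK's execution helper and needs `OPENAI_API_KEY` set in the environment:

```python
import asyncio

from agents import Runner
from my_agent import root_agent  # hypothetical package from earlier sketches


async def main() -> None:
    # Send one sample request through the agent to verify tool wiring.
    result = await Runner.run(root_agent, "What is the status of order 42?")
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```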