"OpenHarness: Open Agent Harness -- Ultra-Lightweight Claude Code"
oh — OpenHarness & ohmo
OpenHarness delivers core lightweight agent infrastructure: tool use, skills, memory, and multi-agent coordination.
ohmo is a personal AI agent built on OpenHarness — not another chatbot, but an assistant that actually works for you over long sessions. Chat with ohmo in Feishu / Slack / Telegram / Discord, and it forks branches, writes code, runs tests, and opens PRs on its own. ohmo runs on your existing Claude Code or Codex subscription — no extra API key needed.
Join the community: contribute to OpenHarness and open agent development.
One Command (oh) to Launch OpenHarness and Unlock All Agent Harnesses.
Supports CLI agent integration including OpenClaw, nanobot, Cursor, and more.
🔄 Agent Loop
• Streaming Tool-Call Cycle • API Retry with Exponential Backoff • Parallel Tool Execution • Token Counting & Cost Tracking
🔧 Harness Toolkit
• 43 Tools (File, Shell, Search, Web, MCP) • On-Demand Skill Loading (.md) • Plugin Ecosystem (Skills + Hooks + Agents) • Compatible with anthropics/skills & plugins
🧠 Context & Memory
• CLAUDE.md Discovery & Injection • Context Compression (Auto-Compact) • MEMORY.md Persistent Memory • Session Resume & History
🛡️ Governance
• Multi-Level Permission Modes • Path-Level & Command Rules • PreToolUse / PostToolUse Hooks • Interactive Approval Dialogs
🤝 Swarm Coordination
• Subagent Spawning & Delegation • Team Registry & Task Management • Background Task Lifecycle • ClawTeam Integration (Roadmap)
An Agent Harness is the complete infrastructure that wraps around an LLM to make it a functional agent. The model provides intelligence; the harness provides hands, eyes, memory, and safety boundaries.
OpenHarness is an open-source Python implementation designed for researchers, builders, and the community.
- Installs `oh`, `ohmo`, and `openharness` into `~/.local/bin` instead of prepending the virtualenv bin directory to PATH, which avoids clobbering Conda-managed shells
- `Shift+Enter` inserts a newline while plain `Enter` remains submit
- `ohmo` gains channel slash commands and multimodal attachment support
- `ohmo` channels support file attachments and multimodal gateway messages
- `reasoning_content` support for thinking models
- `OPENAI_BASE_URL` env override, profile-scoped credential priority
- MCP `call_tool` / `read_resource` support
- `web_fetch` URL validation
- `--debug` logging, Windows cmd flash fix
- `ohmo` personal-agent app:
  - `oh setup` now guides provider selection as workflows instead of exposing raw auth/provider internals
  - `ohmo` ships as a packaged app with `~/.ohmo` workspace, gateway, bootstrap prompts, and channel config flow

Start here: Quick Start · Provider Compatibility · Showcase · Contributing · Changelog
# One-click install
curl -fsSL https://raw.githubusercontent.com/HKUDS/OpenHarness/main/scripts/install.sh | bash
# Or via pip
pip install openharness-ai
# One-click install (PowerShell)
iex (Invoke-WebRequest -Uri 'https://raw.githubusercontent.com/HKUDS/OpenHarness/main/scripts/install.ps1')
# Or via pip
pip install openharness-ai
Note: Windows support is now native. In PowerShell, use openh instead of oh because oh can resolve to the built-in Out-Host alias.
oh setup # interactive wizard — pick a provider, authenticate, done
# On Windows PowerShell, use: openh setup
Supports Claude / OpenAI / Copilot / Codex / Moonshot (Kimi) / GLM / MiniMax and any compatible endpoint.
oh
# On Windows PowerShell, use: openh
Want an AI agent that works for you from Feishu / Slack / Telegram / Discord?
ohmo init # initialize ~/.ohmo workspace
ohmo config # configure channels and provider
ohmo gateway start # start the gateway — ohmo is now live in your chat app
ohmo runs on your existing Claude Code subscription or Codex subscription — no extra API key needed.
# Single prompt → stdout
oh -p "Explain this codebase"
# JSON output for programmatic use
oh -p "List all functions in main.py" --output-format json
# Stream JSON events in real-time
oh -p "Fix the bug" --output-format stream-json
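For scripting, the stream-json mode emits one JSON event per line. A minimal sketch of consuming such a stream in Python — the event shapes below are illustrative assumptions, not the exact OpenHarness schema:

```python
import json

# Hypothetical stream-json lines, similar in spirit to what
# `oh -p "Fix the bug" --output-format stream-json` emits.
# The exact event schema is an assumption for illustration.
raw = "\n".join([
    '{"type": "text", "text": "Fixing the bug..."}',
    '{"type": "tool_use", "name": "Edit", "input": {"path": "main.py"}}',
    '{"type": "result", "ok": true}',
])

# Parse one event per line, then pick out the tool calls.
events = [json.loads(line) for line in raw.splitlines() if line.strip()]
tool_calls = [e["name"] for e in events if e["type"] == "tool_use"]
print(tool_calls)  # ['Edit']
```

The same pattern applies when reading a real `oh` subprocess's stdout line by line.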
OpenHarness treats providers as workflows backed by named profiles. In day-to-day use, prefer:
oh setup
oh provider list
oh provider use <profile>
| Workflow | What it is | Typical backends |
|---|---|---|
| **Anthropic-Compatible API** | Anthropic-style request format | Claude official, Kimi, GLM, MiniMax, internal Anthropic-compatible gateways |
| **Claude Subscription** | Claude CLI subscription bridge | Local ~/.claude/.credentials.json |
| **OpenAI-Compatible API** | OpenAI-style request format | OpenAI official, OpenRouter, DashScope, DeepSeek, SiliconFlow, Groq, Ollama, GitHub Models |
| **Codex Subscription** | Codex CLI subscription bridge | Local ~/.codex/auth.json |
| **GitHub Copilot** | Copilot OAuth workflow | GitHub Copilot device-flow login |
Typical examples:
| Backend | Base URL | Example models |
|---|---|---|
| **Claude official** | https://api.anthropic.com | claude-sonnet-4-6, claude-opus-4-6 |
| **Moonshot / Kimi** | https://api.moonshot.cn/anthropic | kimi-k2.5 |
| **Zhipu / GLM** | custom Anthropic-compatible endpoint | glm-4.5 |
| **MiniMax** | custom Anthropic-compatible endpoint | minimax-m1 |
Any provider implementing the OpenAI /v1/chat/completions style API works:
| Backend | Base URL | Example models |
|---|---|---|
| **OpenAI** | https://api.openai.com/v1 | gpt-5.4, gpt-4.1 |
| **OpenRouter** | https://openrouter.ai/api/v1 | provider-specific |
| **Alibaba DashScope** | https://dashscope.aliyuncs.com/compatible-mode/v1 | qwen3.5-flash, qwen3-max, deepseek-r1 |
| **DeepSeek** | https://api.deepseek.com | deepseek-chat, deepseek-reasoner |
| **GitHub Models** | https://models.inference.ai.azure.com | gpt-4o, Meta-Llama-3.1-405B-Instruct |
| **SiliconFlow** | https://api.siliconflow.cn/v1 | deepseek-ai/DeepSeek-V3 |
| **Google Gemini** | https://generativelanguage.googleapis.com/v1beta/openai | gemini-2.5-flash, gemini-2.5-pro |
| **Groq** | https://api.groq.com/openai/v1 | llama-3.3-70b-versatile |
| **Ollama (local)** | http://localhost:11434/v1 | any local model |
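Every backend above speaks the same wire format. As a sketch, this is roughly the `/v1/chat/completions` request an OpenAI-compatible client assembles; the base URL, API key, and model name are placeholders, and the request is only built here, not sent:

```python
import json
import urllib.request

# Build (but do not send) an OpenAI-compatible chat completion request.
# base_url, api_key, and model are placeholders for illustration.
def build_request(base_url: str, api_key: str, model: str, prompt: str):
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_request("http://localhost:11434/v1", "unused", "llama3", "hi")
print(req.full_url)  # http://localhost:11434/v1/chat/completions
```

Swapping the base URL is all it takes to move between providers, which is why a single `--api-format openai` profile covers the whole table.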
# List saved workflows
oh provider list
# Switch the active workflow
oh provider use codex
# Add your own compatible endpoint
oh provider add my-endpoint \
--label "My Endpoint" \
--provider openai \
--api-format openai \
--auth-source openai_api_key \
--model my-model \
--base-url https://example.com/v1
For custom compatible endpoints, OpenHarness can bind credentials per profile instead of forcing every Anthropic-compatible or OpenAI-compatible backend to share the same API key.
Run local models through Ollama’s OpenAI-compatible endpoint:
# Add an Ollama provider profile
oh provider add ollama \
--label "Ollama" \
--provider Ollama \
--api-format openai \
--auth-source openai_api_key \
--model glm-4.7-flash:q8_0 \
--base-url http://localhost:11434/v1
Saved provider profile: ollama
# Activate and verify
oh provider use ollama
Activated provider profile: ollama
oh provider list
claude-api: Anthropic-Compatible API [ready]
...
moonshot: Moonshot (Kimi) [missing auth]
auth=moonshot_api_key model=kimi-k2.5 base_url=https://api.moonshot.cn/v1
* ollama: Ollama [ready]
auth=openai_api_key model=glm-4.7-flash:q8_0 base_url=http://localhost:11434/v1
GitHub Copilot (--api-format copilot)

Use your existing GitHub Copilot subscription as the LLM backend. Authentication uses GitHub’s OAuth device flow — no API keys needed.
# One-time login (opens browser for GitHub authorization)
oh auth copilot-login
# Then launch with Copilot as the provider
uv run oh --api-format copilot
# Or via environment variable
export OPENHARNESS_API_FORMAT=copilot
uv run oh
# Check auth status
oh auth status
# Remove stored credentials
oh auth copilot-logout
| Feature | Details |
|---|---|
| **Auth method** | GitHub OAuth device flow (no API key needed) |
| **Token management** | Automatic refresh of short-lived session tokens |
| **Enterprise** | Supports GitHub Enterprise via --github-domain flag |
| **Models** | Uses Copilot’s default model selection |
| **API** | OpenAI-compatible chat completions under the hood |
OpenHarness implements the core Agent Harness pattern across the following subsystems:
openharness/
engine/ # 🧠 Agent Loop — query → stream → tool-call → loop
tools/ # 🔧 43 Tools — file I/O, shell, search, web, MCP
skills/ # 📚 Knowledge — on-demand skill loading (.md files)
plugins/ # 🔌 Extensions — commands, hooks, agents, MCP servers
permissions/ # 🛡️ Safety — multi-level modes, path rules, command deny
hooks/ # ⚡ Lifecycle — PreToolUse/PostToolUse event hooks
commands/ # 💬 54 Commands — /help, /commit, /plan, /resume, ...
mcp/ # 🌐 MCP — Model Context Protocol client
memory/ # 🧠 Memory — persistent cross-session knowledge
tasks/ # 📋 Tasks — background task management
coordinator/ # 🤝 Multi-Agent — subagent spawning, team coordination
prompts/ # 📝 Context — system prompt assembly, CLAUDE.md, skills
config/ # ⚙️ Settings — multi-layer config, migrations
ui/ # 🖥️ React TUI — backend protocol + frontend
The heart of the harness. One loop, endlessly composable:
while True:
    response = await api.stream(messages, tools)
    if response.stop_reason != "tool_use":
        break  # Model is done
    tool_results = []
    for tool_call in response.tool_uses:
        # Permission check → Hook → Execute → Hook → Result
        tool_results.append(await harness.execute_tool(tool_call))
    messages.append({"role": "user", "content": tool_results})
    # Loop continues — model sees results, decides next action
The model decides what to do. The harness handles how — safely, efficiently, with full observability.
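The loop can be exercised end-to-end with stand-ins. A self-contained sketch with a stubbed API and harness — `StubAPI` and `StubHarness` are illustrative names, not OpenHarness classes:

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class Response:
    stop_reason: str
    tool_uses: list = field(default_factory=list)

class StubAPI:
    """Fake model: asks for one tool call, then stops."""
    def __init__(self):
        self.turn = 0
    async def stream(self, messages, tools):
        self.turn += 1
        if self.turn == 1:
            return Response("tool_use", [{"name": "echo", "input": {"text": "hi"}}])
        return Response("end_turn")

class StubHarness:
    """Fake harness: executes the tool call and returns a result."""
    async def execute_tool(self, call):
        return {"tool": call["name"], "output": call["input"]["text"].upper()}

async def run():
    api, harness = StubAPI(), StubHarness()
    messages = [{"role": "user", "content": "say hi"}]
    while True:
        response = await api.stream(messages, tools=["echo"])
        if response.stop_reason != "tool_use":
            break
        results = [await harness.execute_tool(c) for c in response.tool_uses]
        messages.append({"role": "user", "content": results})
    return messages

print(asyncio.run(run()))
```

Replacing the stubs with a real API client and tool registry yields the production loop.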
flowchart LR
U[User Prompt] --> C[CLI or React TUI]
C --> R[RuntimeBundle]
R --> Q[QueryEngine]
Q --> A[Anthropic-compatible API Client]
A -->|tool_use| T[Tool Registry]
T --> P[Permissions + Hooks]
P --> X[Files Shell Web MCP Tasks]
X --> Q
| Category | Tools | Description |
|---|---|---|
| **File I/O** | Bash, Read, Write, Edit, Glob, Grep | Core file operations with permission checks |
| **Search** | WebFetch, WebSearch, ToolSearch, LSP | Web and code search capabilities |
| **Notebook** | NotebookEdit | Jupyter notebook cell editing |
| **Agent** | Agent, SendMessage, TeamCreate/Delete | Subagent spawning and coordination |
| **Task** | TaskCreate/Get/List/Update/Stop/Output | Background task management |
| **MCP** | MCPTool, ListMcpResources, ReadMcpResource | Model Context Protocol integration |
| **Mode** | EnterPlanMode, ExitPlanMode, Worktree | Workflow mode switching |
| **Schedule** | CronCreate/List/Delete, RemoteTrigger | Scheduled and remote execution |
| **Meta** | Skill, Config, Brief, Sleep, AskUser | Knowledge loading, configuration, interaction |
Every tool has a typed input schema, permission checks, and PreToolUse/PostToolUse hook points.
Skills are on-demand knowledge — loaded only when the model needs them:
Available Skills:
- commit: Create clean, well-structured git commits
- review: Review code for bugs, security issues, and quality
- debug: Diagnose and fix bugs systematically
- plan: Design an implementation plan before coding
- test: Write and run tests for code
- simplify: Refactor code to be simpler and more maintainable
- pdf: PDF processing with pypdf (from anthropics/skills)
- xlsx: Excel operations (from anthropics/skills)
- ... 40+ more
Compatible with anthropics/skills — just copy .md files to ~/.openharness/skills/.
Compatible with claude-code plugins. Tested with 12 official plugins:
| Plugin | Type | What it does |
|---|---|---|
| `commit-commands` | Commands | Git commit, push, PR workflows |
| `security-guidance` | Hooks | Security warnings on file edits |
| `hookify` | Commands + Agents | Create custom behavior hooks |
| `feature-dev` | Commands | Feature development workflow |
| `code-review` | Agents | Multi-agent PR review |
| `pr-review-toolkit` | Agents | Specialized PR review agents |
# Manage plugins
oh plugin list
oh plugin install <source>
oh plugin enable <name>
OpenHarness is useful as a lightweight harness layer around Claude-style tooling conventions:
For concrete usage ideas instead of generic claims, see docs/SHOWCASE.md.
Multi-level safety with fine-grained control:
| Mode | Behavior | Use Case |
|---|---|---|
| **Default** | Ask before write/execute | Daily development |
| **Auto** | Allow everything | Sandboxed environments |
| **Plan Mode** | Block all writes | Large refactors, review first |
Path-level rules in settings.json:
{
"permission": {
"mode": "default",
"path_rules": [{"pattern": "/etc/*", "allow": false}],
"denied_commands": ["rm -rf /", "DROP TABLE *"]
}
}
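A rough sketch of how such path rules could be evaluated. This uses simple `fnmatch` globbing as an assumption; OpenHarness's actual matching semantics may differ:

```python
from fnmatch import fnmatch

# Rules in the shape of the settings.json "path_rules" shown above.
path_rules = [{"pattern": "/etc/*", "allow": False}]

def is_path_allowed(path: str, rules: list, default: bool = True) -> bool:
    """Return the first matching rule's verdict, else the mode's default.

    Glob matching via fnmatch is an illustrative choice, not necessarily
    what OpenHarness does internally.
    """
    for rule in rules:
        if fnmatch(path, rule["pattern"]):
            return rule["allow"]
    return default  # no rule matched: fall back to the permission mode

print(is_path_allowed("/etc/passwd", path_rules))      # False
print(is_path_allowed("/home/me/app.py", path_rules))  # True
```

First-match-wins ordering keeps rule evaluation predictable: put the most specific patterns first.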
React/Ink TUI with full interactive experience:
/ → arrow keys to select → Enter
/permissions → select from list
/resume → pick from history

oh [OPTIONS] COMMAND [ARGS]
Session: -c/--continue, -r/--resume, -n/--name
Model: -m/--model, --effort, --max-turns
Output: -p/--print, --output-format text|json|stream-json
Permissions: --permission-mode, --dangerously-skip-permissions
Context: -s/--system-prompt, --append-system-prompt, --settings
Advanced: -d/--debug, --mcp-config, --bare
Subcommands: oh setup | oh provider | oh auth | oh mcp | oh plugin
ohmo is a personal-agent app built on top of OpenHarness. It is packaged alongside oh, with its own workspace and gateway:
# Initialize personal workspace
ohmo init
# Configure gateway channels and pick a provider profile
ohmo config
# Run the personal agent
ohmo
# Run the gateway in foreground
ohmo gateway run
# Check or restart the gateway
ohmo gateway status
ohmo gateway restart
Key concepts:
~/.ohmo/
  soul.md
  identity.md    # who ohmo is
  user.md
  BOOTSTRAP.md
  memory/
  gateway.json
ohmo config uses the same workflow language as oh setup, so you can point the personal-agent gateway at:
- Anthropic-Compatible API
- Claude Subscription
- OpenAI-Compatible API
- Codex Subscription
- GitHub Copilot

ohmo init creates the home workspace once. After that, use ohmo config to update provider and channel settings; if the gateway is already running, the config flow can restart it for you.
Currently, ohmo init / ohmo config can guide channel setup for Feishu, Slack, Telegram, and Discord.
| Suite | Tests | Status |
|---|---|---|
| Unit + Integration | 114 | ✅ All passing |
| CLI Flags E2E | 6 | ✅ Real model calls |
| Harness Features E2E | 9 | ✅ Retry, skills, parallel, permissions |
| React TUI E2E | 3 | ✅ Welcome, conversation, status |
| TUI Interactions E2E | 4 | ✅ Commands, permissions, shortcuts |
| Real Skills + Plugins | 12 | ✅ anthropics/skills + claude-code/plugins |
# Run all tests
uv run pytest -q # 114 unit/integration
python scripts/test_harness_features.py # Harness E2E
python scripts/test_real_skills_plugins.py # Real plugins E2E
from pydantic import BaseModel, Field
from openharness.tools.base import BaseTool, ToolExecutionContext, ToolResult
class MyToolInput(BaseModel):
query: str = Field(description="Search query")
class MyTool(BaseTool):
name = "my_tool"
description = "Does something useful"
input_model = MyToolInput
async def execute(self, arguments: MyToolInput, context: ToolExecutionContext) -> ToolResult:
return ToolResult(output=f"Result for: {arguments.query}")
Create ~/.openharness/skills/my-skill.md:
---
name: my-skill
description: Expert guidance for my specific domain
---
# My Skill
## When to use
Use when the user asks about [your domain].
## Workflow
1. Step one
2. Step two
...
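A skill file is just Markdown with a small frontmatter header. As a sketch, the frontmatter can be parsed with nothing but string splitting — a hand-rolled parser for illustration; the real loader may differ:

```python
# Parse the name/description frontmatter of a skill .md file.
# This minimal parser is illustrative, not OpenHarness's actual loader.
skill_md = """---
name: my-skill
description: Expert guidance for my specific domain
---
# My Skill
"""

# Split off the frontmatter block delimited by the first two "---" markers.
_, frontmatter, body = skill_md.split("---", 2)
meta = {
    key.strip(): value.strip()
    for key, value in (
        line.split(":", 1) for line in frontmatter.strip().splitlines()
    )
}
print(meta["name"])  # my-skill
```

The `name` and `description` fields are what the model sees when deciding whether to load the skill; the body is injected only on demand.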
Create .openharness/plugins/my-plugin/.claude-plugin/plugin.json:
{
"name": "my-plugin",
"version": "1.0.0",
"description": "My custom plugin"
}
Add commands in commands/*.md, hooks in hooks/hooks.json, agents in agents/*.md.
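Putting the pieces together, the plugin layout can be sketched in a few lines of Python — the directory names mirror the README, while the read-back logic is purely illustrative:

```python
import json
import pathlib
import tempfile

# Lay out the plugin directory structure described above in a temp
# directory and read the manifest back.
with tempfile.TemporaryDirectory() as root:
    plugin = pathlib.Path(root, ".openharness", "plugins", "my-plugin")
    (plugin / ".claude-plugin").mkdir(parents=True)
    (plugin / "commands").mkdir()
    (plugin / ".claude-plugin" / "plugin.json").write_text(json.dumps(
        {"name": "my-plugin", "version": "1.0.0", "description": "My custom plugin"}
    ))
    (plugin / "commands" / "hello.md").write_text("# /hello\nSay hello.\n")
    manifest = json.loads((plugin / ".claude-plugin" / "plugin.json").read_text())
    names = sorted(p.name for p in plugin.iterdir())

print(manifest["name"], names)
```

Once the directory exists under `.openharness/plugins/`, `oh plugin list` should pick it up alongside installed plugins.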
OpenHarness is most useful when treated as a small, inspectable harness you can adapt to a real workflow:
- `json` and `stream-json` output in automation flows

See docs/SHOWCASE.md for short, reproducible examples.
OpenHarness is a community-driven research project. We welcome contributions in:
| Area | Examples |
|---|---|
| **Tools** | New tool implementations for specific domains |
| **Skills** | Domain knowledge .md files (finance, science, DevOps…) |
| **Plugins** | Workflow plugins with commands, hooks, agents |
| **Providers** | Support for more LLM backends (OpenAI, Ollama, etc.) |
| **Multi-Agent** | Coordination protocols, team patterns |
| **Testing** | E2E scenarios, edge cases, benchmarks |
| **Documentation** | Architecture guides, tutorials, translations |
# Development setup
git clone https://github.com/HKUDS/OpenHarness.git
cd OpenHarness
uv sync --extra dev
uv run pytest -q # Verify everything works
Useful contributor entry points:
- CONTRIBUTING.md for setup, checks, and PR expectations
- CHANGELOG.md for user-visible changes
- docs/SHOWCASE.md for real-world usage patterns worth documenting

MIT — see LICENSE.
Oh my Harness!
The model is the agent. The code is the harness.
Thanks for visiting ✨ OpenHarness!