
OpenCode SDK

This provider integrates OpenCode, an open-source, terminal-based AI coding agent that supports 75+ LLM providers.

Provider IDs

  • opencode:sdk - Uses OpenCode's configured model
  • opencode - Same as opencode:sdk

The model is configured via the OpenCode CLI or ~/.opencode/config.yaml.
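
A minimal sketch of what that file might contain, assuming it mirrors the provider/model format used by the CLI (the exact schema is an assumption; check your OpenCode version's documentation):

# ~/.opencode/config.yaml (hypothetical contents)
model: anthropic/claude-sonnet-4-20250514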

Installation

The OpenCode SDK provider requires both the OpenCode CLI and the SDK package.

1. Install OpenCode CLI

curl -fsSL https://opencode.ai/install | bash

Or via other package managers - see opencode.ai for options.

2. Install SDK Package

npm install @opencode-ai/sdk

Note: The SDK package is an optional dependency and only needs to be installed if you want to use the OpenCode SDK provider.

Setup

Configure your LLM provider credentials. For Anthropic:

export ANTHROPIC_API_KEY=your_api_key_here

For OpenAI:

export OPENAI_API_KEY=your_api_key_here

OpenCode supports 75+ providers - see Supported Providers for the full list.

Quick Start

Basic Usage

Use opencode:sdk to access OpenCode's configured model:

promptfooconfig.yaml

providers:
  - opencode:sdk

prompts:
  - 'Write a Python function that validates email addresses'

Configure your model via the OpenCode CLI: opencode config set model openai/gpt-4o

By default, OpenCode SDK runs in a temporary directory with no tools enabled. When your test cases finish, the temporary directory is deleted.

With Inline Model Configuration

Specify the provider and model directly in your config:

promptfooconfig.yaml

providers:
  - id: opencode:sdk
    config:
      provider_id: anthropic
      model: claude-sonnet-4-20250514

prompts:
  - 'Write a Python function that validates email addresses'

This overrides the model configured via the OpenCode CLI for this specific eval.

With Working Directory

Specify a working directory to enable read-only file tools:

providers:
  - id: opencode:sdk
    config:
      working_dir: ./src

prompts:
  - 'Review the TypeScript files and identify potential bugs'

By default, when you specify a working directory, OpenCode SDK has access to these read-only tools: read, grep, glob, list.

With Full Tool Access

Enable additional tools for file modifications and shell access:

providers:
  - id: opencode:sdk
    config:
      working_dir: ./project
      tools:
        read: true
        grep: true
        glob: true
        list: true
        write: true
        edit: true
        bash: true
      permission:
        bash: allow
        edit: allow

Warning: When enabling write/edit/bash tools, consider how you will reset files after each test. See Managing Side Effects.

Supported Parameters

| Parameter | Type | Description | Default |
| --- | --- | --- | --- |
| apiKey | string | API key for the LLM provider | Environment variable |
| baseUrl | string | URL of an existing OpenCode server | Auto-start server |
| hostname | string | Server hostname when starting a new server | 127.0.0.1 |
| port | number | Server port when starting a new server | Auto-select |
| timeout | number | Server startup timeout (ms) | 30000 |
| working_dir | string | Directory for file operations | Temporary directory |
| provider_id | string | LLM provider (anthropic, openai, google, etc.) | Auto-detect |
| model | string | Model to use | Provider default |
| tools | object | Tool configuration | Read-only with working_dir |
| permission | object | Permission configuration for tools | Ask for dangerous tools |
| agent | string | Built-in agent to use (build, plan) | Default agent |
| custom_agent | object | Custom agent configuration | None |
| session_id | string | Resume an existing session | Create new session |
| persist_sessions | boolean | Keep sessions between calls | false |
| mcp | object | MCP server configuration | None |
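
Several of these parameters can be combined in a single provider entry. A sketch with illustrative values (the baseUrl shown assumes a server you have already started; it is not a default):

providers:
  - id: opencode:sdk
    config:
      baseUrl: http://127.0.0.1:4096 # hypothetical pre-started server
      timeout: 60000
      provider_id: anthropic
      model: claude-sonnet-4-20250514
      persist_sessions: true
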
Planned Features

The following parameters are defined but not yet supported by the OpenCode SDK:

  • max_retries - Maximum retries for API calls
  • log_level - SDK log level
  • enable_streaming - Enable streaming responses

Supported Providers

OpenCode supports 75+ LLM providers through Models.dev:

Cloud Providers:

  • Anthropic (Claude)
  • OpenAI
  • Google AI Studio / Vertex AI
  • Amazon Bedrock
  • Azure OpenAI
  • Groq
  • Together AI
  • Fireworks AI
  • DeepSeek
  • Perplexity
  • Cohere
  • Mistral
  • And many more...

Local Models:

  • Ollama
  • LM Studio
  • llama.cpp

Configure your preferred model using the OpenCode CLI:

# Set your default model
opencode config set model anthropic/claude-sonnet-4-20250514

# Or for OpenAI
opencode config set model openai/gpt-4o

# Or for local models
opencode config set model ollama/llama3

Tools and Permissions

Default Tools

With no working_dir specified, OpenCode runs in a temporary directory with no tools enabled.

With working_dir specified, these read-only tools are enabled by default:

| Tool | Purpose |
| --- | --- |
| read | Read file contents |
| grep | Search file contents with regex |
| glob | Find files by pattern |
| list | List directory contents |

Tool Configuration

Customize available tools:

# Enable additional tools
providers:
  - id: opencode:sdk
    config:
      working_dir: ./project
      tools:
        read: true
        grep: true
        glob: true
        list: true
        write: true # Enable file writing
        edit: true # Enable file editing
        bash: true # Enable shell commands
        patch: true # Enable patch application
        webfetch: true # Enable web fetching

# Disable specific tools
providers:
  - id: opencode:sdk
    config:
      working_dir: ./project
      tools:
        bash: false # Disable shell

Permissions

Configure tool permissions:

providers:
  - id: opencode:sdk
    config:
      permission:
        bash: allow # or 'ask' or 'deny'
        edit: allow
        webfetch: deny

Session Management

Ephemeral Sessions (Default)

Creates a new session for each eval:

providers:
  - opencode:sdk

Persistent Sessions

Reuse sessions between calls:

providers:
  - id: opencode:sdk
    config:
      persist_sessions: true

Session Resumption

Resume a specific session:

providers:
  - id: opencode:sdk
    config:
      session_id: previous-session-id

Custom Agents

Define custom agents with specific configurations:

providers:
  - id: opencode:sdk
    config:
      custom_agent:
        description: Security-focused code reviewer
        model: claude-sonnet-4-20250514
        temperature: 0.3
        tools:
          read: true
          grep: true
          write: false
          bash: false
        prompt: |
          You are a security-focused code reviewer.
          Analyze code for vulnerabilities and report findings.

MCP Integration

OpenCode supports MCP (Model Context Protocol) servers:

providers:
  - id: opencode:sdk
    config:
      mcp:
        weather-server:
          type: local
          command: ['node', 'mcp-weather-server.js']
          environment:
            API_KEY: '{{env.WEATHER_API_KEY}}'
        api-server:
          type: remote
          url: https://api.example.com/mcp
          headers:
            Authorization: 'Bearer {{env.API_TOKEN}}'

Caching Behavior

This provider automatically caches responses based on:

  • Prompt content
  • Working directory fingerprint (if specified)
  • Provider and model configuration
  • Tool configuration

To disable caching:

export PROMPTFOO_CACHE_ENABLED=false
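
Alternatively, the cache can usually be skipped for a single run with the eval command's --no-cache flag:

promptfoo eval --no-cache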

To bust the cache for a specific test:

tests:
  - vars: {}
    options:
      bustCache: true

Managing Side Effects

When using tools that allow side effects (write, edit, bash), consider:

  • Serial execution: Set evaluateOptions.maxConcurrency: 1 to prevent race conditions
  • Git reset: Use git to reset files after each test (see the sketch after this list)
  • Extension hooks: Use promptfoo hooks for setup/cleanup (see the sketch at the end of this section)
  • Containers: Run tests in containers for isolation
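
For example, a git-based reset between eval runs might look like this (a sketch, assuming ./project is a git working tree; note that clean -fd also deletes untracked files):

git -C ./project checkout -- . && git -C ./project clean -fd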

Example with serial execution:

providers:
  - id: opencode:sdk
    config:
      working_dir: ./project
      tools:
        write: true
        edit: true

evaluateOptions:
  maxConcurrency: 1
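
For per-test cleanup, promptfoo's extensions key can point at a hook file. A sketch, assuming a hypothetical ./hooks.js that performs the git reset in its afterEach hook (see promptfoo's extension hooks documentation for the exact file format and function signature):

extensions:
  - file://hooks.js:extensionHook # hypothetical hook file that resets ./project after each test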

Comparison with Other Agentic Providers

| Feature | OpenCode SDK | Claude Agent SDK | Codex SDK |
| --- | --- | --- | --- |
| Provider flexibility | 75+ providers | Anthropic only | OpenAI only |
| Architecture | Client-server | Direct API | Thread-based |
| Local models | Ollama, LM Studio | No | No |
| Tool ecosystem | Native + MCP | Native + MCP | Native |
| Working dir isolation | Yes | Yes | Git required |

Choose based on your use case:

  • Multiple providers / local models → OpenCode SDK
  • Anthropic-specific features → Claude Agent SDK
  • OpenAI-specific features → Codex SDK

Examples

See the examples directory for complete implementations.
