Overview
The @standardagents/openai package provides the OpenAI provider factory for Standard Agents. It enables direct access to OpenAI models with typed providerOptions and built-in provider tools.
Key Features
Direct OpenAI API access with automatic client management
Typed providerOptions with TypeScript autocompletion
Built-in provider tools (web search, file search, code interpreter, image generation)
Model capability detection and icon support
Installation
npm install @standardagents/openai
pnpm add @standardagents/openai
yarn add @standardagents/openai
Quick Start
import { defineModel } from '@standardagents/spec';
import { openai } from '@standardagents/openai';

export default defineModel({
  name: 'gpt-4o',
  provider: openai,
  model: 'gpt-4o',
  inputPrice: 2.5,
  outputPrice: 10,
});
Provider Factory
The openai export is a provider factory function that creates OpenAI provider instances:
import { openai } from '@standardagents/openai';

// The factory is used by defineModel internally
defineModel({
  name: 'my-model',
  provider: openai, // Pass the factory, not a string
  model: 'gpt-4o',
});
Unlike the legacy string-based form (provider: 'openai'), the current API takes an imported factory function, which enables typed providerOptions and runtime validation.
Provider Options
The openai factory includes a typed schema for provider-specific options:
import { defineModel } from '@standardagents/spec';
import { openai } from '@standardagents/openai';

export default defineModel({
  name: 'gpt-4o',
  provider: openai,
  model: 'gpt-4o',
  inputPrice: 2.5,
  outputPrice: 10,
  providerOptions: {
    service_tier: 'default', // TypeScript knows this is valid
    user: 'user-123',
    seed: 42,
  },
});
Available Options
Option             Type                           Description
service_tier       'auto' | 'default' | 'flex'    Service tier for request processing
user               string                         User identifier for abuse monitoring
seed               number                         Seed for deterministic outputs (beta)
frequency_penalty  number                         Reduces repetition (-2.0 to 2.0)
presence_penalty   number                         Encourages new topics (-2.0 to 2.0)
logprobs           boolean                        Return log probabilities
top_logprobs       number                         Most likely tokens to return (0-20)
store              boolean                        Store completion for future reference
metadata           Record<string, string>         Metadata for stored completions
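These options compose freely in a single providerOptions block. As an illustrative sketch (the model name and metadata keys are made up; store: true is what makes metadata useful, since metadata applies to stored completions):

import { defineModel } from '@standardagents/spec';
import { openai } from '@standardagents/openai';

export default defineModel({
  name: 'gpt-4o-tuned', // illustrative name
  provider: openai,
  model: 'gpt-4o',
  inputPrice: 2.5,
  outputPrice: 10,
  providerOptions: {
    service_tier: 'flex',
    frequency_penalty: 0.5,
    presence_penalty: 0.2,
    store: true,                    // keep the completion for later reference
    metadata: { team: 'research' }, // only meaningful for stored completions
  },
});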
Type Safety
The providerOptions type is automatically inferred from the provider:
import { openai } from '@standardagents/openai';
import type { OpenAIProviderOptions } from '@standardagents/openai';

// TypeScript enforces the correct shape
const options: OpenAIProviderOptions = {
  service_tier: 'default',
  frequency_penalty: 0.5,
  // seed: 'invalid', // Error: Type 'string' is not assignable to type 'number'
};
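Because openaiProviderOptions is exported as a Zod schema (see Exports below), the same shape can also be checked at runtime. A minimal sketch, assuming standard Zod usage:

import { openaiProviderOptions } from '@standardagents/openai';

// Validate untrusted input against the provider options schema
const result = openaiProviderOptions.safeParse({
  service_tier: 'default',
  seed: 42,
});

if (!result.success) {
  console.error(result.error.issues); // Zod reports which options are invalid
}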
Provider Tools
OpenAI provides built-in tools that can be used alongside your custom tools:
Tool              Description                                     Models
web_search        Search the web with citations                   GPT-4o, GPT-4o-mini, o1, o3-mini, o4-mini
file_search       Search uploaded files with vector embeddings    GPT-4o, GPT-4o-mini, o4-mini
code_interpreter  Execute Python in a sandbox                     GPT-4o, GPT-4o-mini, o1, o3-mini, o4-mini
image_generation  Generate images with GPT-image-1                GPT-4o, o4-mini
Enable provider tools in your model definition:
import { defineModel } from '@standardagents/spec';
import { openai } from '@standardagents/openai';

export default defineModel({
  name: 'gpt-4o-with-tools',
  provider: openai,
  model: 'gpt-4o',
  inputPrice: 2.5,
  outputPrice: 10,
  providerTools: ['web_search', 'code_interpreter'],
});
Then reference them in prompts:
import { definePrompt } from '@standardagents/spec';

export default definePrompt({
  name: 'research-assistant',
  model: 'gpt-4o-with-tools',
  prompt: 'You are a research assistant with web search capabilities.',
  tools: ['web_search'], // Provider tool
});
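Since provider tools sit alongside your custom tools, both can appear in the same tools array. A sketch, where 'summarize-notes' is a hypothetical custom tool assumed to be defined elsewhere in your project:

import { definePrompt } from '@standardagents/spec';

export default definePrompt({
  name: 'research-assistant-plus',
  model: 'gpt-4o-with-tools',
  prompt: 'You are a research assistant. Cite sources for anything found on the web.',
  tools: [
    'web_search',      // provider tool, executed on OpenAI servers
    'summarize-notes', // hypothetical custom tool defined elsewhere
  ],
});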
Some provider tools require thread environment variables (tenvs):
file_search:

// Requires a vectorStoreId tenv
tenvs: z.object({
  vectorStoreId: z.string().describe('OpenAI Vector Store ID'),
})
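A model that enables file_search follows the same providerTools pattern; any thread that uses it must then provide the vectorStoreId tenv described above. A sketch with an illustrative model name:

import { defineModel } from '@standardagents/spec';
import { openai } from '@standardagents/openai';

export default defineModel({
  name: 'gpt-4o-file-search', // illustrative name
  provider: openai,
  model: 'gpt-4o',
  inputPrice: 2.5,
  outputPrice: 10,
  providerTools: ['file_search'], // requires a vectorStoreId tenv on the thread
});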
Provider tools are executed by OpenAI’s servers, not locally. Results come back in the API response.
Model Capabilities
Set capabilities to help the framework understand what the model supports:
export default defineModel({
  name: 'gpt-4o',
  provider: openai,
  model: 'gpt-4o',
  inputPrice: 2.5,
  outputPrice: 10,
  capabilities: {
    supportsImages: true,
    supportsToolCalls: true,
    supportsJsonMode: true,
    supportsStreaming: true,
    maxContextTokens: 128000,
    maxOutputTokens: 16384,
  },
});
Environment Setup
Set your OpenAI API key as an environment variable:
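For local development, exporting the variable in your shell is usually enough, assuming your runtime reads OPENAI_API_KEY from the environment (the name matches the Workers secret below):

export OPENAI_API_KEY="your-api-key"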
For Cloudflare Workers:
wrangler secret put OPENAI_API_KEY
Example Configurations
import { defineModel } from '@standardagents/spec';
import { openai } from '@standardagents/openai';

export default defineModel({
  name: 'heavy-thinking',
  provider: openai,
  model: 'gpt-4o',
  inputPrice: 2.5,
  outputPrice: 10,
  capabilities: {
    supportsImages: true,
    supportsToolCalls: true,
    supportsJsonMode: true,
    maxContextTokens: 128000,
  },
  providerOptions: {
    service_tier: 'default',
  },
  providerTools: ['web_search', 'code_interpreter'],
});
Budget Model
export default defineModel({
  name: 'fast-response',
  provider: openai,
  model: 'gpt-4o-mini',
  inputPrice: 0.15,
  outputPrice: 0.60,
  capabilities: {
    supportsImages: true,
    supportsToolCalls: true,
    maxContextTokens: 128000,
  },
});
Reasoning Model
export default defineModel({
  name: 'deep-reasoning',
  provider: openai,
  model: 'o1',
  inputPrice: 15,
  outputPrice: 60,
  capabilities: {
    supportsToolCalls: true,
    reasoningLevels: { 0: null, 33: 'low', 66: 'medium', 100: 'high' },
  },
  providerTools: ['web_search', 'code_interpreter'],
});
Exports
import {
  // Provider factory
  openai,
  // Provider options schema (Zod)
  openaiProviderOptions,
  // Provider class (advanced usage)
  OpenAIProvider,
} from '@standardagents/openai';

import type {
  // TypeScript types
  OpenAIProviderOptions,
} from '@standardagents/openai';
Next Steps