

Overview

defineModel creates a model configuration that can be referenced by prompts and agents. Models use provider factory functions imported from provider packages for typed configuration.
import { defineModel } from '@standardagents/spec';
import { openai } from '@standardagents/openai';

export default defineModel({
  name: 'heavy-thinking',
  provider: openai,
  model: 'gpt-4o',
  inputPrice: 2.5,
  outputPrice: 10,
});

Type Definition

import type { ProviderFactoryWithOptions, ModelCapabilities } from '@standardagents/spec';
import type { ZodTypeAny, z } from 'zod';

function defineModel<
  N extends string,
  P extends ProviderFactoryWithOptions<ZodTypeAny>
>(options: ModelDefinition<N, P>): ModelDefinition<N, P>;

interface ModelDefinition<
  N extends string = string,
  P extends ProviderFactoryWithOptions<ZodTypeAny> = ProviderFactoryWithOptions<ZodTypeAny>
> {
  name: N;
  provider: P;
  model: string;
  inputPrice?: number;
  outputPrice?: number;
  cachedPrice?: number;
  fallbacks?: string[];
  includedProviders?: string[];
  capabilities?: ModelCapabilities;
  providerOptions?: InferProviderOptions<P>;
  providerTools?: string[];
}

interface ModelCapabilities {
  reasoningLevels?: Record<number, string | null>;
  supportsImages?: boolean;
  supportsToolCalls?: boolean;
  supportsStreaming?: boolean;
  supportsJsonMode?: boolean;
  maxContextTokens?: number;
  maxOutputTokens?: number;
}

// Automatically infers the correct type from the provider's schema
type InferProviderOptions<P> = P extends ProviderFactoryWithOptions<infer S>
  ? S extends ZodTypeAny ? z.input<S> : Record<string, unknown>
  : Record<string, unknown>;

Parameters

name
string
required
Unique identifier for this model configuration. Used to reference the model in prompts and agents.
Best Practice: Name models by use case, not by model ID:
  • heavy-thinking - Complex reasoning tasks
  • fast-response - Quick, simple responses
  • code-generation - Code-focused tasks
provider
ProviderFactory
required
Provider factory function imported from a provider package.
import { cerebras } from '@standardagents/cerebras';
import { cloudflare } from '@standardagents/cloudflare';
import { google } from '@standardagents/google';
import { groq } from '@standardagents/groq';
import { openai } from '@standardagents/openai';
import { openrouter } from '@standardagents/openrouter';
import { xai } from '@standardagents/xai';

provider: cerebras     // Cerebras Chat Completions API
provider: cloudflare   // Cloudflare Workers AI OpenAI-compatible API
provider: google       // Google Gemini / Imagen via @google/genai
provider: groq         // Groq chat completions via groq-sdk
provider: openai       // Direct OpenAI API
provider: openrouter   // Multi-provider gateway
provider: xai          // xAI Grok chat / image models
The provider determines which providerOptions are available. TypeScript provides autocompletion based on the provider’s schema.
model
string
required
The specific model identifier sent to the provider API.
OpenAI examples:
  • gpt-4o
  • gpt-4o-mini
  • o1-mini
Cerebras examples:
  • llama3.1-8b
  • qwen-3-235b-a22b-instruct-2507
  • gpt-oss-120b
Cloudflare Workers AI examples:
  • @cf/meta/llama-3.1-8b-instruct
  • @cf/openai/gpt-oss-120b
  • @cf/qwen/qwen3-30b-a3b-fp8
OpenRouter format: provider/model-name
  • anthropic/claude-sonnet-4
  • google/gemini-2.0-flash-exp
  • meta-llama/llama-3.3-70b-instruct
Google examples:
  • gemini-2.5-pro
  • gemini-2.5-flash
  • imagen-4.0-generate-001
Groq examples:
  • llama-3.1-8b-instant
  • openai/gpt-oss-120b
  • qwen/qwen3-32b
xAI examples:
  • grok-4-0709
  • grok-code-fast-1
  • grok-imagine-image
inputPrice
number
Price per 1 million input tokens in USD. Typically required for direct providers like OpenAI and Cerebras.
OpenRouter models fetch pricing automatically from the API.
outputPrice
number
Price per 1 million output tokens in USD. Typically required for direct providers like OpenAI and Cerebras.
cachedPrice
number
Price per 1 million cached tokens in USD (if provider supports caching).
fallbacks
string[]
Array of model names to try if the primary model fails. Must reference other models defined with defineModel.
Fallbacks are tried for:
  • Network errors
  • Rate limits (429)
  • Server errors (5xx)
  • Authentication errors (401)
includedProviders
string[]
OpenRouter only. Restricts routing to the listed providers for this model.
includedProviders: ['anthropic']  // Only route to Anthropic
capabilities
ModelCapabilities
Model capability flags for framework behavior. See Capabilities section.
providerOptions
InferProviderOptions<P>
Provider-specific options. Type is inferred from the provider’s schema.
See Provider Options section for available options per provider.
providerTools
string[]
Enable built-in provider tools by name. Available tools depend on the provider and model.
providerTools: ['web_search', 'code_interpreter']

Provider Options

OpenAI

import { openai } from '@standardagents/openai';

defineModel({
  name: 'gpt-4o',
  provider: openai,
  model: 'gpt-4o',
  providerOptions: {
    service_tier: 'auto' | 'default' | 'flex',
    user: 'user-123',
    seed: 42,
    frequency_penalty: 0.5,  // -2.0 to 2.0
    presence_penalty: 0.5,   // -2.0 to 2.0
    logprobs: true,
    top_logprobs: 5,         // 0-20
    store: true,
    metadata: { key: 'value' },
  },
});

OpenRouter

import { openrouter } from '@standardagents/openrouter';

defineModel({
  name: 'claude-sonnet',
  provider: openrouter,
  model: 'anthropic/claude-sonnet-4',
  providerOptions: {
    provider: {
      order: ['anthropic', 'openai'],
      allow_fallbacks: true,
      require_parameters: true,
      data_collection: 'allow' | 'deny',
      zdr: true,
      only: ['anthropic'],
      ignore: ['azure'],
      sort: 'price' | 'throughput' | 'latency',
      max_price: {
        prompt: 1,
        completion: 5,
        request: 0.01,
        image: 0.05,
      },
      quantizations: ['fp16', 'bf16'],
      preferred_min_throughput: 100,
      preferred_max_latency: 1,
    },
  },
});

Cerebras

import { cerebras } from '@standardagents/cerebras';

defineModel({
  name: 'cerebras-fast',
  provider: cerebras,
  model: 'llama3.1-8b',
  inputPrice: 0.1,
  outputPrice: 0.1,
  providerOptions: {
    service_tier: 'default',
    reasoning_effort: 'high',
    clear_thinking: false,
    user: 'user-123',
    seed: 42,
  },
});

Cloudflare Workers AI

import { cloudflare } from '@standardagents/cloudflare';

defineModel({
  name: 'workers-fast',
  provider: cloudflare,
  model: '@cf/meta/llama-3.1-8b-instruct',
  inputPrice: 0.1,
  outputPrice: 0.1,
  providerOptions: {
    seed: 42,
    user: 'user-123',
  },
});

Capabilities

capabilities: {
  // Reasoning support (0-100 scale → model values)
  reasoningLevels: {
    0: null,      // No reasoning
    33: 'low',    // Low effort
    66: 'medium', // Medium effort
    100: 'high',  // High effort
  },

  // Feature support
  supportsImages: true,      // Can process image inputs
  supportsToolCalls: true,   // Supports function calling
  supportsStreaming: true,   // Supports streaming responses
  supportsJsonMode: true,    // Supports structured JSON output

  // Token limits
  maxContextTokens: 128000,  // Max context window
  maxOutputTokens: 16384,    // Max output tokens
}
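One plausible way a framework could resolve a requested reasoning level (0-100) against this map is to pick the defined key nearest to the request. The helper below is an illustration of that idea, not the library's actual resolution logic:

```typescript
// Hypothetical resolver: maps a 0-100 reasoning request to the value at
// the nearest key defined in reasoningLevels. Illustrative sketch only.
function resolveReasoning(
  levels: Record<number, string | null>,
  requested: number,
): string | null {
  const keys = Object.keys(levels).map(Number);
  const nearest = keys.reduce((best, k) =>
    Math.abs(k - requested) < Math.abs(best - requested) ? k : best,
  );
  return levels[nearest];
}

const levels = { 0: null, 33: 'low', 66: 'medium', 100: 'high' };
resolveReasoning(levels, 50); // 'medium' (66 is nearer to 50 than 33)
resolveReasoning(levels, 5);  // null (0 is nearest; reasoning disabled)
```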

Return Value

Returns the same ModelDefinition object passed in, enabling the build system to register it.
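At runtime this makes defineModel essentially an identity function (plus validation). A simplified sketch of that shape, assuming nothing beyond what the section states:

```typescript
// Simplified sketch: the function returns its argument unchanged, letting
// the build system collect the definition from the module's default export.
// (The real defineModel also validates; see Runtime Validation.)
function defineModelSketch<T extends { name: string; model: string }>(
  options: T,
): T {
  return options;
}

const def = defineModelSketch({ name: 'demo', model: 'gpt-4o' });
// def is the exact object that was passed in
```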

Examples

OpenAI Model

import { defineModel } from '@standardagents/spec';
import { openai } from '@standardagents/openai';

export default defineModel({
  name: 'heavy-thinking',
  provider: openai,
  model: 'gpt-4o',
  inputPrice: 2.5,
  outputPrice: 10,
  capabilities: {
    supportsImages: true,
    supportsToolCalls: true,
    supportsJsonMode: true,
    maxContextTokens: 128000,
  },
  providerOptions: {
    service_tier: 'default',
  },
});

OpenRouter Model

import { defineModel } from '@standardagents/spec';
import { openrouter } from '@standardagents/openrouter';

export default defineModel({
  name: 'claude-sonnet',
  provider: openrouter,
  model: 'anthropic/claude-sonnet-4',
  capabilities: {
    supportsImages: true,
    supportsToolCalls: true,
    maxContextTokens: 200000,
  },
  providerOptions: {
    provider: {
      zdr: true,
      only: ['anthropic'],
    },
  },
});
OpenRouter models automatically fetch pricing from the OpenRouter API. You do not need to specify inputPrice or outputPrice.

With Fallbacks

import { defineModel } from '@standardagents/spec';
import { openai } from '@standardagents/openai';

// Define fallback model first
export const fallbackCheap = defineModel({
  name: 'fallback-cheap',
  provider: openai,
  model: 'gpt-4o-mini',
  inputPrice: 0.15,
  outputPrice: 0.60,
});

// Primary model with fallback
export default defineModel({
  name: 'primary',
  provider: openai,
  model: 'gpt-4o',
  fallbacks: ['fallback-cheap'],
  inputPrice: 2.5,
  outputPrice: 10,
});

With Provider Tools

import { defineModel } from '@standardagents/spec';
import { openai } from '@standardagents/openai';

export default defineModel({
  name: 'gpt-4o-with-tools',
  provider: openai,
  model: 'gpt-4o',
  inputPrice: 2.5,
  outputPrice: 10,
  providerTools: ['web_search', 'code_interpreter', 'file_search'],
});

Reasoning Model

import { defineModel } from '@standardagents/spec';
import { openai } from '@standardagents/openai';

export default defineModel({
  name: 'deep-reasoning',
  provider: openai,
  model: 'o1',
  inputPrice: 15,
  outputPrice: 60,
  capabilities: {
    supportsToolCalls: true,
    reasoningLevels: {
      0: null,
      33: 'low',
      66: 'medium',
      100: 'high',
    },
  },
});

File Location

Models are auto-discovered from agents/models/:
agents/
└── models/
    ├── heavy_thinking.ts
    ├── fast_response.ts
    └── budget.ts
Requirements:
  • Use snake_case for file names
  • One model per file
  • Default export required

Generated Types

After running the build, the StandardAgents.Models type is generated:
declare namespace StandardAgents {
  type Models = 'heavy-thinking' | 'fast-response' | 'budget';
}
This enables type-safe model references in prompts.
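To illustrate the compile-time safety this union provides, consider a function constrained to it (the union is declared locally here to mirror the generated output):

```typescript
// Mirrors the generated StandardAgents.Models union for illustration.
type Models = 'heavy-thinking' | 'fast-response' | 'budget';

function referenceModel(model: Models): Models {
  return model;
}

referenceModel('heavy-thinking');   // OK
// referenceModel('heavy-thinkng'); // Compile error: not in the union
```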

Runtime Validation

defineModel performs runtime validation:
  • name is required and must be a non-empty string
  • provider must be a valid ProviderFactory function
  • model is required
  • Pricing values must be non-negative if provided
  • providerOptions are validated against the provider’s schema
// This will throw at runtime:
defineModel({
  name: 'my-model',
  provider: 'openai',  // Error: Must be a ProviderFactory, not a string
  model: 'gpt-4o',
});
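A rough sketch of checks like those listed above (a stand-in to show the shape of the validation, not the library's actual implementation):

```typescript
// Hypothetical stand-in illustrating the runtime checks defineModel performs.
interface ModelDefinitionLike {
  name: string;
  provider: unknown;
  model: string;
  inputPrice?: number;
  outputPrice?: number;
}

function validateModel(def: ModelDefinitionLike): void {
  if (typeof def.name !== 'string' || def.name.length === 0) {
    throw new TypeError('name must be a non-empty string');
  }
  if (typeof def.provider !== 'function') {
    throw new TypeError('provider must be a ProviderFactory function');
  }
  if (!def.model) {
    throw new TypeError('model is required');
  }
  for (const key of ['inputPrice', 'outputPrice'] as const) {
    const price = def[key];
    if (price !== undefined && price < 0) {
      throw new RangeError(`${key} must be non-negative`);
    }
  }
}
```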