Models & endpoints
Runtime LLM behavior comes from `~/.anycode/config.json` (and per-agent overrides in `routing.agents`). The Cargo feature `openai` on `anycode-llm` adds an `OpenAIClient` code path; OpenAI-compatible gateways (z.ai, OpenRouter, etc.) still typically use the Zai OpenAI-shaped client unless `provider` is exactly `openai` with that feature enabled.
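As a sketch of where these settings live, a config with a per-agent override might look like this (the agent name and the exact shape of the `routing.agents` override object are illustrative assumptions, not confirmed field names; model ids are placeholders):

```json
{
  "provider": "z.ai",
  "model": "your-default-model-id",
  "routing": {
    "agents": {
      "reviewer": { "provider": "anthropic", "model": "claude-3-5-sonnet" }
    }
  }
}
```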
OpenClaw alignment
- Provider list: The CLI and config validator use the static catalog in `crates/llm/src/provider_catalog.rs` (`PROVIDER_CATALOG`), aligned with the OpenClaw Provider Directory. Canonical upstream ids live in `openclaw/openclaw`.
- Model refs: OpenClaw often writes `provider/model` (e.g. `anthropic/claude-opus-4-6`). In anyCode, split that into `provider` + `model` in `config.json` (same meaning). You may also put a qualified string in `model` alone (e.g. `anthropic/claude-3-5-sonnet`); it is validated independently of the global `provider`. Resolution helpers live in `anycode_llm` (`build_qualified_chat_model_value`, `resolve_chat_model_ref`, mirroring OpenClaw's `chat-model-ref.ts`). `anycode status` prints `primary_chat_ref`, `model_routes` aliases, and the resolved `provider` / `model` per `RuntimeMode`, so you can verify mode routing without starting a session.
- Naming: Config `provider` values are snake_case (e.g. `cloudflare_ai_gateway`, `vercel_ai_gateway`). OpenClaw kebab-case names are accepted and normalized (e.g. `cloudflare-ai-gateway` → `cloudflare_ai_gateway`).
- Aliases: Examples: `claude` → `anthropic`, `zai` / `bigmodel` → `z.ai`, `kimi` → `moonshot`, `github-copilot` → `copilot`, `amazon-bedrock` → `bedrock`, `glm` → `z.ai`.
- AWS Bedrock: Set `provider` to `amazon_bedrock` (alias `bedrock`), choose a model id available in your region, and rely on the AWS credential chain (e.g. `AWS_PROFILE`, instance role). The stack uses the Bedrock Converse API (`Converse` / streaming).
- GitHub Copilot: Set `provider` to `github_copilot` (alias `copilot`), pick a Copilot Chat–compatible model id, then run `anycode model auth copilot` (device flow) so tokens are stored under `~/.anycode/credentials/`.
- Placeholders: Some catalog entries exist only for OpenClaw parity (e.g. media-only APIs). Use `custom` with your own OpenAI-compatible `base_url` when a catalog entry is not wired up in anyCode.
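For example, the OpenClaw-style ref `anthropic/claude-3-5-sonnet` can be written in split form in `config.json` (a minimal sketch; other fields omitted):

```json
{
  "provider": "anthropic",
  "model": "claude-3-5-sonnet"
}
```

Equivalently, per the model-ref rules above, the qualified string can go in `model` alone: `"model": "anthropic/claude-3-5-sonnet"`.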
Run `anycode model` to pick a provider interactively; the menu follows the same catalog.
config.json fields (summary)
- `provider`: a known catalog id (see `PROVIDER_CATALOG` above), plus aliases such as `z.ai` / `bigmodel` / `zai`, `anthropic` / `claude`, or kebab-case OpenClaw-style ids.
- `plan`: `coding` or `general` (affects the default z.ai base URL when `base_url` is empty).
- `base_url`: optional override.
- `model`: model id for the active provider.
- `api_key`: vendor key.
- `provider_credentials`: extra keys for other vendors when routing mixes providers.
- `session` (optional): TUI session behavior.
  - `auto_compact` (default `true`): before sending your next user message, if the last agent turn reported input token usage above a threshold, anyCode runs automatic compaction (same pipeline as `/compact`). Tune `auto_compact_ratio` (default `0.88`) or `auto_compact_min_input_tokens` (absolute threshold, overrides the ratio). Set `auto_compact` to `false` to disable.
  - `context_window_auto` (default `true`): derive the context window from `provider` + `model` (built-in heuristics in `anycode_llm::resolve_context_window_tokens`, e.g. Claude ≈200k, GLM/z.ai ≈128k, Gemini ≈1M). Set `context_window_auto` to `false` and set `context_window_tokens` to a fixed size when you want a manual override.
  - The runtime flag `context-compression` (`anycode enable context-compression`) is tracked in `runtime.features` (see Releases & flags); threshold behavior remains driven by the `session.auto_compact_*` fields above.
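Putting the fields together, a minimal `config.json` might look like this (values are illustrative placeholders; field names follow the summary above):

```json
{
  "provider": "z.ai",
  "plan": "coding",
  "base_url": "",
  "model": "your-model-id",
  "api_key": "your-vendor-key",
  "session": {
    "auto_compact": true,
    "auto_compact_ratio": 0.88,
    "context_window_auto": true
  }
}
```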
z.ai (BigModel)
Default endpoints (when `base_url` is omitted):
- General: `https://api.z.ai/api/paas/v4/chat/completions`
- Coding plan: `https://api.z.ai/api/coding/paas/v4/chat/completions`
The client uses the OpenAI Chat Completions shape: `tools` / `tool_calls` and multi-turn history.
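As a reminder of that wire shape, here is a generic OpenAI-style Chat Completions payload with a tool round-trip (not anyCode-specific; model and tool names are illustrative):

```json
{
  "model": "your-model-id",
  "messages": [
    { "role": "user", "content": "What's the weather in Paris?" },
    {
      "role": "assistant",
      "content": null,
      "tool_calls": [
        {
          "id": "call_1",
          "type": "function",
          "function": { "name": "get_weather", "arguments": "{\"city\":\"Paris\"}" }
        }
      ]
    },
    { "role": "tool", "tool_call_id": "call_1", "content": "18C, clear" }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "parameters": {
          "type": "object",
          "properties": { "city": { "type": "string" } }
        }
      }
    }
  ]
}
```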
Anthropic
Set `provider` to `anthropic` (or `claude`), provide `api_key` and a valid model id for the Messages API. An optional `base_url` overrides the default endpoint.
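A minimal sketch of such a profile (the model id and key are placeholders):

```json
{
  "provider": "anthropic",
  "model": "claude-3-5-sonnet",
  "api_key": "your-anthropic-key"
}
```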
Retries
The client retries with backoff on HTTP 429, 5xx, and transport errors. 401/403 and other non-retryable status codes fail immediately without retry.
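A sketch of that classification in Rust (illustrative only; the actual retry logic lives inside the anyCode client, and these function names are not part of its API):

```rust
/// Illustrative: decide whether a response warrants a retry, per the
/// policy above (429 and 5xx retry; 401/403 and other 4xx fail fast).
/// `None` models a transport error with no HTTP status.
fn is_retryable(status: Option<u16>) -> bool {
    match status {
        None => true,                // transport error: retry
        Some(429) => true,           // rate limited: retry with backoff
        Some(s) if s >= 500 => true, // server error: retry
        Some(_) => false,            // 401/403 and other client errors: fail fast
    }
}

/// Exponential backoff delay for a 0-based attempt index, capped at 8s.
fn backoff_ms(attempt: u32) -> u64 {
    (250u64 << attempt.min(5)).min(8_000)
}

fn main() {
    assert!(is_retryable(Some(429)));
    assert!(is_retryable(Some(503)));
    assert!(is_retryable(None));
    assert!(!is_retryable(Some(401)));
    assert!(!is_retryable(Some(403)));
    assert_eq!(backoff_ms(0), 250);
    assert_eq!(backoff_ms(2), 1_000);
    println!("retry policy sketch OK");
}
```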
OpenAI official API (optional)
With `cargo build -p anycode --features openai`, if the global `provider` normalizes to `openai`, the stack may use `OpenAIClient` instead of `ZaiClient` for that profile.
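A sketch of a profile that would take the `OpenAIClient` path under that feature (model id and key are placeholders):

```json
{
  "provider": "openai",
  "model": "gpt-4o",
  "api_key": "your-openai-key"
}
```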
More detail (in Chinese, including a feature matrix and environment variables) lives in 模型与端点 ("Models & endpoints", Chinese).
