# Providers and models

Open Interpreter is provider agnostic. The provider you choose decides which models are available and where the bill goes. You can mix providers across sessions and even switch mid-conversation.

## Built-in providers

| Provider | What it gives you |
| --- | --- |
| OpenAI | GPT-5 family including the Codex variants. Default pick. |
| Anthropic | Claude family. Strong at long-running, careful work. |
| Ollama | Local models, fully offline. Discovers what you installed. |
| LM Studio | Local server with a friendly GUI for model management. |
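
You can also pin a default in config.toml rather than going through the picker. A minimal sketch, assuming top-level `model` and `model_provider` keys mirror the profile keys shown in the next section; the model id is illustrative:

```toml
# Hypothetical defaults; assumes top-level keys mirror profile keys
model_provider = "openai"
model = "gpt-5"  # illustrative model id
```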

## Add a custom provider

Anything OpenAI-compatible works. Pick "Add a custom provider" in the onboarding picker, or add one to config.toml directly:

```toml
[model_providers.together]
name = "Together"
base_url = "https://api.together.xyz/v1"
env_key = "TOGETHER_API_KEY"
wire_api = "openai"

[profiles.together-llama]
model_provider = "together"
model = "meta-llama/Llama-3.3-70B-Instruct-Turbo"
```

Then:

```sh
interpreter --profile together-llama
```

## Switch from inside a session

```
/model
```

Pick a different provider, model, or reasoning effort. The current model is shown in the footer.

For the fastest responses on a supported model:

```
/fast
```

## Reasoning effort

Open Interpreter exposes a reasoning dial for models that support it.

| Level | When to use |
| --- | --- |
| none | Quick edits, simple lookups |
| low | Routine work where speed matters |
| medium | Default. Balanced for everyday use |
| high | Tricky bugs, complex refactors, long-form review |

Set it as a default in config.toml:

```toml
model_reasoning_effort = "medium"
```

Or change it in the /model picker.
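
If you keep separate profiles for different kinds of work, one option is to pin the effort per profile. A sketch, assuming profile tables accept the same `model_reasoning_effort` key as the top level; the profile name and model id are hypothetical:

```toml
[profiles.deep-review]
model_provider = "openai"
model = "gpt-5-codex"            # illustrative model id
model_reasoning_effort = "high"  # assumes profiles accept this key
```

Launch it with `interpreter --profile deep-review` when you want the slower, more careful setting.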

## Local models

Local models keep everything on your machine. Useful for sensitive code and for working offline.

### Install a runner

Grab Ollama or LM Studio.
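
For example, Ollama installs via Homebrew on macOS or its official install script on Linux (check ollama.com for current instructions); LM Studio is a desktop download from lmstudio.ai:

```sh
# macOS
brew install ollama

# Linux
curl -fsSL https://ollama.com/install.sh | sh
```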

### Pull a model

For Ollama:

```sh
ollama pull qwen2.5-coder:14b
```

### Pick it in Open Interpreter

Run `/model` and choose the local provider. Open Interpreter discovers the models you have installed.
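
Discovery is usually enough, but Ollama also serves an OpenAI-compatible API on its default port, so an explicit config.toml entry works too. A sketch reusing the custom-provider shape from above; the profile name is hypothetical, and local Ollama needs no API key:

```toml
[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible endpoint
wire_api = "openai"

[profiles.local-qwen]
model_provider = "ollama"
model = "qwen2.5-coder:14b"
```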
