
AI Providers

Configure your preferred AI provider for screenplay analysis. Khaos Machine supports both local and cloud providers — switch between them at any time without losing previous results.

(Screenshot: Settings page showing provider configuration with task routing)

Provider Overview

| Provider | Type | Cost | Best For |
| --- | --- | --- | --- |
| Ollama | Local | Free | Privacy-first, offline work, no API key |
| LM Studio | Local | Free | GUI-based local model management |
| OpenAI | Cloud | Pay-per-use | GPT-4o, high-quality analysis |
| Anthropic | Cloud | Pay-per-use | Claude models, nuanced analysis |
| Mistral | Cloud | Pay-per-use | Fast, cost-effective European provider |
| Groq | Cloud | Free tier | Extremely fast inference |

Local Providers

Ollama

Ollama runs AI models locally on your machine. No account, no API key, no data leaves your disk.

Setup:

  1. Download and install Ollama from ollama.com.
  2. Pull a recommended model:

```shell
# Recommended for screenplay analysis
ollama pull qwen3:8b

# Alternative — larger, higher quality
ollama pull llama3:70b
```

  3. In Khaos Machine Settings, select Ollama as your provider.
  4. Choose your model from the dropdown.

Recommended models:

| Model | Size | Quality | Speed |
| --- | --- | --- | --- |
| qwen3:8b | 5 GB | Good | Fast |
| llama3:8b | 4.7 GB | Good | Fast |
| llama3:70b | 40 GB | Excellent | Slow |
| mistral:7b | 4.1 GB | Good | Fast |
Tip: Start with qwen3:8b — it provides good analysis quality with fast inference on most hardware.

LM Studio

LM Studio is a desktop application for running local AI models. If you prefer a visual interface over terminal commands, LM Studio is the better choice.

Setup:

  1. Download LM Studio from lmstudio.ai and install it.
  2. Open LM Studio and search for a model — we recommend qwen3 8b or llama 3 8b.
  3. Click the download button next to the model.
  4. Go to the Local Server tab (the ↔ icon) and click Start Server.
  5. In Khaos Machine Settings, you should see LM Studio with a green Ready badge.

(Screenshot: LM Studio provider showing connected status and available models)

Keep LM Studio running with its server active while you use Khaos Machine. You can minimize the window, but don't quit the app.

Model identifiers

LM Studio uses full model identifiers (e.g., qwen/qwen3-8b) rather than short names. Khaos Machine reads these automatically from the running server — just select the model from the dropdown in Settings.
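
These identifiers follow a publisher/model pattern. A small illustrative helper (not part of Khaos Machine) that splits one for display:

```python
def split_model_id(model_id: str) -> tuple[str, str]:
    """Split an LM Studio identifier like 'qwen/qwen3-8b' into
    (publisher, model); identifiers without a slash get publisher ''."""
    publisher, sep, name = model_id.partition("/")
    return (publisher, name) if sep else ("", model_id)

print(split_model_id("qwen/qwen3-8b"))  # ('qwen', 'qwen3-8b')
```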

Cloud Providers

Cloud providers require an API key. Your screenplay text is sent to their servers for processing — review each provider's privacy policy.

OpenAI

  1. Create an API key at platform.openai.com/api-keys.
  2. In Khaos Machine Settings, select OpenAI as your provider.
  3. Enter your API key.
  4. Choose a model — gpt-4o is recommended for best results.

Anthropic

  1. Create an API key at console.anthropic.com.
  2. In Khaos Machine Settings, select Anthropic as your provider.
  3. Enter your API key.
  4. Choose a model — claude-sonnet-4-20250514 or claude-3-5-sonnet is recommended.

Mistral

  1. Create an API key at console.mistral.ai.
  2. In Khaos Machine Settings, select Mistral as your provider.
  3. Enter your API key.
  4. Choose a model.

Groq

Groq offers extremely fast inference with a generous free tier.

  1. Create an API key at console.groq.com.
  2. In Khaos Machine Settings, select Groq as your provider.
  3. Enter your API key.
  4. Choose a model.
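
OpenAI and Groq expose OpenAI-compatible chat endpoints, and Mistral's API uses the same message format, so a client can share one request shape across them (Anthropic's Messages API differs slightly). The `build_request` helper below is a sketch of that shared shape, not Khaos Machine's actual code; the URLs are the providers' documented chat-completions endpoints:

```python
# Chat-completions endpoints for the OpenAI-compatible providers.
BASE_URLS = {
    "openai": "https://api.openai.com/v1/chat/completions",
    "mistral": "https://api.mistral.ai/v1/chat/completions",
    "groq": "https://api.groq.com/openai/v1/chat/completions",
}

def build_request(provider: str, model: str, api_key: str, prompt: str):
    """Return (url, headers, json_body) for one chat-completion call."""
    return (
        BASE_URLS[provider],
        {"Authorization": f"Bearer {api_key}",
         "Content-Type": "application/json"},
        {"model": model,
         "messages": [{"role": "user", "content": prompt}]},
    )

url, headers, body = build_request("groq", "llama-3.1-8b-instant",
                                   "sk-example", "Summarize this scene.")
```

The API key travels only in the `Authorization` header of this request, matching the storage guarantee described under API Key Storage below.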

Switching Providers

Analysis results are stored per-provider, so switching providers never overwrites previous results. This lets you:

  • Analyze with Ollama first (free, local), then re-analyze with GPT-4o for comparison.
  • Try different models and compare analysis quality.
  • Start with a fast provider for initial feedback, then use a more capable model for final analysis.

To switch providers:

  1. Go to Settings.
  2. Select a different provider and model.
  3. Run analysis again — new results are stored alongside existing ones.

API Key Storage

API keys are stored locally in ~/.khaos/keys.json with 0600 file permissions (owner-only read/write). Keys are never sent anywhere except to the configured provider's API endpoint.
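
A minimal sketch of that storage pattern, assuming a JSON file created with restrictive permissions from the start (the `save_keys` helper is illustrative, not Khaos Machine's actual code):

```python
import json
import os
import stat
import tempfile

def save_keys(path: str, keys: dict) -> None:
    """Write API keys as JSON, owner-only read/write (0600)."""
    # Open with mode 0600 so the file is never world-readable,
    # even briefly, before the keys are written.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        json.dump(keys, f, indent=2)

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "keys.json")
    save_keys(path, {"openai": "sk-example"})
    mode = stat.S_IMODE(os.stat(path).st_mode)
    print(oct(mode))  # no group/other access bits set
```

Passing the mode to `os.open` (rather than `chmod` after writing) avoids a window where the file briefly exists with default permissions.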

Next Steps