## Popular Model Providers
These are the most commonly used model providers, offering a wide range of capabilities:

| Provider | Description | Capabilities |
|---|---|---|
| Anthropic | Creators of Claude models, known for long context windows and strong reasoning | Chat, Edit, Apply, Embeddings |
| OpenAI | Creators of GPT models with strong coding capabilities | Chat, Edit, Apply, Embeddings | 
| Azure | Microsoft’s cloud platform offering OpenAI models | Chat, Edit, Apply, Embeddings | 
| Amazon Bedrock | AWS service offering access to various foundation models | Chat, Edit, Apply, Embeddings | 
| Ollama | Run open-source models locally with a simple interface | Chat, Edit, Apply, Embeddings, Autocomplete | 
| Google Gemini | Google’s multimodal AI models | Chat, Edit, Apply, Embeddings | 
| DeepSeek | Specialized code models with strong performance | Chat, Edit, Apply | 
| Mistral | High-performance open models with commercial offerings | Chat, Edit, Apply, Embeddings | 
| xAI | Grok models from xAI | Chat, Edit, Apply | 
| Vertex AI | Google Cloud’s machine learning platform | Chat, Edit, Apply, Embeddings | 
| Inception | Makers of Mercury, diffusion-based language models | Chat, Edit, Apply |
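The Capabilities column corresponds to the roles a model can fill in Continue's `config.yaml`. As a minimal sketch (the model id and API key placeholder are illustrative, not canonical):

```yaml
models:
  - name: Claude Sonnet
    provider: anthropic
    model: claude-3-7-sonnet-latest   # illustrative model id
    apiKey: <YOUR_ANTHROPIC_API_KEY>  # placeholder; supply your own key
    roles:
      - chat
      - edit
      - apply
```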
## Additional Model Providers
Beyond the top-level providers, Continue supports many other options:

### Hosted Services
| Provider | Description | 
|---|---|
| Groq | Ultra-fast inference for various open models | 
| Together AI | Platform for running a variety of open models | 
| DeepInfra | Hosting for various open source models | 
| OpenRouter | Gateway to multiple model providers | 
| Tetrate Agent Router Service | Gateway with intelligent routing across multiple model providers | 
| Cohere | Models specialized for semantic search and text generation | 
| NVIDIA | GPU-accelerated model hosting | 
| Cloudflare | Edge-based AI inference services | 
| HuggingFace | Platform for open source models | 
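Gateways such as OpenRouter expose many upstream models behind a single API key. A minimal sketch, assuming Continue's `openrouter` provider id and an illustrative routed model id:

```yaml
models:
  - name: Llama via OpenRouter
    provider: openrouter
    model: meta-llama/llama-3.1-70b-instruct  # illustrative routed model id
    apiKey: <YOUR_OPENROUTER_API_KEY>         # placeholder; supply your own key
```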
### Local Model Options
| Provider | Description | 
|---|---|
| LM Studio | Desktop app for running models locally | 
| llama.cpp | Optimized C++ implementation for running LLMs | 
| LlamaStack | Stack for running Llama models locally | 
| llamafile | Self-contained executable model files | 
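Local options generally don't require an API key; instead, Continue talks to a server running on your machine. A minimal sketch for LM Studio (the model id is illustrative, and the commented `apiBase` assumes LM Studio's default local port):

```yaml
models:
  - name: Local Llama via LM Studio
    provider: lmstudio
    model: llama-3.1-8b  # illustrative; use whatever model you have loaded
    # apiBase: http://localhost:1234/v1  # override if your local server runs elsewhere
```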
### Enterprise Solutions
## How to Choose a Model Provider
When selecting a model provider, consider:

- Hosting preference: Do you need local models for offline use or privacy, or are you comfortable with cloud services?
- Performance requirements: Different providers offer varying levels of speed, quality, and context length.
- Specific capabilities: Some models excel at code generation, others at embeddings or reasoning tasks.
- Pricing: Costs vary significantly between providers, from free local options to premium cloud services.
- API key requirements: Most cloud providers require API keys that you’ll need to configure.
## Configuration Format
You can add models to your `config.yaml` file like this (the model ids and API key placeholders below are illustrative):
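```yaml
models:
  - name: GPT-4o
    provider: openai
    model: gpt-4o                  # illustrative model id
    apiKey: <YOUR_OPENAI_API_KEY>  # placeholder; supply your own key
  - name: Local Llama
    provider: ollama
    model: llama3.1:8b             # illustrative; any model pulled with Ollama
```

Each entry names a model, selects one of the providers above, and (for cloud providers) supplies the API key that provider requires.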