# Getting Started with API Providers
PS Smart Agent supports multiple AI providers, giving you flexibility to choose the best model for your needs. This guide will help you connect to your preferred provider.
## Supported Providers
| Provider | Type | Best For |
|---|---|---|
| Ollama | Local | Privacy, offline use, free |
| OpenAI | Cloud | GPT-4, GPT-3.5 |
| Anthropic | Cloud | Claude 3.5 Sonnet, Claude 3 Opus |
| DeepSeek | Cloud | Cost-effective coding |
| OpenRouter | Cloud | 100+ models via one API |
| LM Studio | Local | Local model hosting |
| VS Code LM | Built-in | Copilot integration |
## Setting Up Ollama (Local, Free)

Ollama allows you to run AI models completely locally on your machine.

### 1. Install Ollama

```shell
# macOS/Linux
curl -fsSL https://ollama.ai/install.sh | sh

# Windows: download the installer from https://ollama.ai/download
```
### 2. Pull a Model

```shell
# Pull Llama 3.2
ollama pull llama3.2

# Pull Qwen 2.5
ollama pull qwen2.5

# Pull Mistral
ollama pull mistral
```
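Once a model is pulled, you can confirm it is available before moving on (this assumes Ollama is running on its default port, 11434):

```shell
# List the models installed locally
ollama list

# The same list is exposed as JSON by the local REST API
curl -s http://localhost:11434/api/tags
```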
### 3. Configure PS Smart Agent

- Open PS Smart Agent in VS Code
- Click the settings icon
- Select “Ollama” as the provider
- Enter the base URL: `http://localhost:11434`
- Select your model from the dropdown
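Before pointing PS Smart Agent at Ollama, it can help to confirm the endpoint answers a request directly. A minimal sketch using Ollama's `/api/generate` endpoint (the model name must match one you pulled):

```shell
# One-off, non-streaming generation request to the local Ollama server
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Say hello", "stream": false}'
```

If this returns a JSON response with a `response` field, the extension should be able to connect as well.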
## Setting Up OpenAI
- Get your API key from OpenAI Platform
- In PS Smart Agent settings, select “OpenAI”
- Enter your API key
- Select your preferred model (GPT-4, GPT-3.5-turbo, etc.)
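To verify the key works independently of the extension, you can list the models it has access to (assumes the key is exported as `OPENAI_API_KEY`; the variable name is just a convention):

```shell
# Sanity check: list models your key can access
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```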
## Setting Up Anthropic Claude
- Get your API key from Anthropic Console
- In PS Smart Agent settings, select “Anthropic”
- Enter your API key
- Select Claude 3.5 Sonnet or Claude 3 Opus
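A quick standalone check of an Anthropic key is a minimal call to the Messages API (assumes the key is exported as `ANTHROPIC_API_KEY`; the model id shown is an example and may need updating to a current release):

```shell
# Minimal Messages API request
curl -s https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{"model": "claude-3-5-sonnet-20241022", "max_tokens": 64,
       "messages": [{"role": "user", "content": "Say hello"}]}'
```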
## Setting Up DeepSeek
DeepSeek offers cost-effective models optimized for coding.
- Get your API key from DeepSeek Platform
- In PS Smart Agent settings, select “DeepSeek”
- Enter your API key
- Base URL: `https://api.deepseek.com/v1`
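DeepSeek's API follows the OpenAI chat-completions format, so a request looks like this sketch (the model name `deepseek-chat` is an assumption, check DeepSeek's model list; validating the JSON locally first avoids wasting a billed request):

```shell
# Build the chat payload
PAYLOAD='{"model": "deepseek-chat", "messages": [{"role": "user", "content": "hello"}]}'

# Validate the JSON locally before sending
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload OK"

# Then send it (requires DEEPSEEK_API_KEY to be set):
# curl -s https://api.deepseek.com/v1/chat/completions \
#   -H "Authorization: Bearer $DEEPSEEK_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```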
## Setting Up OpenRouter
OpenRouter provides access to 100+ models through a single API.
- Get your API key from OpenRouter
- In PS Smart Agent settings, select “OpenRouter”
- Enter your API key
- Select from available models
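To see which models your OpenRouter account can reach, query the models endpoint directly (assumes the key is exported as `OPENROUTER_API_KEY`):

```shell
# List models available through OpenRouter
curl -s https://openrouter.ai/api/v1/models \
  -H "Authorization: Bearer $OPENROUTER_API_KEY"
```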
## Testing Your Connection
After configuring your provider:
- Click “Test Connection” in settings
- If successful, you’ll see a green checkmark
- If it fails, check your API key and base URL
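When the built-in test fails, a plain HTTP check can tell you whether the problem is the endpoint itself or the extension configuration (the URL below is an example; substitute your configured base URL):

```shell
# Print only the HTTP status code returned by the base URL
BASE_URL="http://localhost:11434"
curl -s -o /dev/null -w "%{http_code}\n" "$BASE_URL"
```

A `200` here with a failing in-extension test points at the extension settings; a connection error points at the provider or network.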
## Troubleshooting
### Connection Failed
- Verify your API key is correct
- Check that the base URL includes `/v1` for OpenAI-compatible APIs
- Ensure you have available API credits
### Ollama Not Found
- Make sure Ollama is running: `ollama serve`
- Check that the base URL is `http://localhost:11434`
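A one-liner can check whether anything is answering on Ollama's default port (local request only; prints one of two messages depending on the result):

```shell
# Probe Ollama's default port with a short timeout
curl -s --max-time 2 http://localhost:11434/ > /dev/null \
  && echo "Ollama is reachable" \
  || echo "Nothing answering on 11434 -- run 'ollama serve'"
```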
### Model Not Available
- For Ollama: pull the model first with `ollama pull <model-name>`
- For cloud providers: check your subscription tier
Need help? Visit pyshine.com for more tutorials.