# AI Providers
Dory uses a pluggable AI provider architecture for AI Chat, SQL generation, chart suggestions, and schema-aware analysis. You can switch providers through environment variables without changing application code.
## Supported providers
| Provider | `DORY_AI_PROVIDER` value | Notes |
|---|---|---|
| OpenAI | `openai` | Uses the official OpenAI API. |
| OpenAI-compatible | `openai-compatible` | For services exposing an OpenAI-compatible API. |
| Anthropic | `anthropic` | Claude models via Anthropic's API. |
| Google | `google` | Gemini models via Google Generative AI. |
| Qwen | `qwen` | Qwen models through a compatible endpoint. |
| xAI | `xai` | Grok models via the xAI API. |
## Core variables
Most deployments need the following values:
```shell
export DORY_AI_PROVIDER=openai
export DORY_AI_MODEL=gpt-4o-mini
export DORY_AI_API_KEY=your_api_key_here
export DORY_AI_URL=https://api.openai.com/v1
```

| Variable | Required | Description |
|---|---|---|
| `DORY_AI_PROVIDER` | Yes | Selects the provider adapter. |
| `DORY_AI_MODEL` | Yes | Selects the model used by Dory AI features. |
| `DORY_AI_API_KEY` | Usually | API key or bearer token accepted by the provider. |
| `DORY_AI_URL` | Provider-dependent | Base URL for OpenAI-compatible providers or custom endpoints. |
Do not put provider credentials in frontend code, checked-in .env files, screenshots, or support tickets.
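One way to keep credentials out of checked-in files is to store them in an untracked env file and export it at startup. A minimal sketch, assuming a POSIX shell; the `.env.local` filename is illustrative, not a Dory convention:

```shell
# Keep the key in a file git never sees (filename is illustrative).
echo '.env.local' >> .gitignore
cat > .env.local <<'EOF'
DORY_AI_API_KEY=sk-...
EOF

# Export everything the file defines, then turn auto-export back off.
set -a
. ./.env.local
set +a
```

Secret managers and deployment-time injected environment variables work just as well; the point is that the key never lands in the repository.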
## Provider examples
### OpenAI

```shell
DORY_AI_PROVIDER=openai
DORY_AI_MODEL=gpt-4o-mini
DORY_AI_API_KEY=sk-...
DORY_AI_URL=https://api.openai.com/v1
```

Use this path when you want the most direct documented setup for Dory AI SQL generation.
### OpenAI-compatible

```shell
DORY_AI_PROVIDER=openai-compatible
DORY_AI_MODEL=your-model-name
DORY_AI_API_KEY=your_provider_key
DORY_AI_URL=https://your-compatible-endpoint.example.com/v1
```

Use this path when your organization already standardizes on an OpenAI-compatible API surface.
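Before pointing Dory at a compatible endpoint, it can save a debugging cycle to confirm the base URL and key directly, assuming the endpoint implements the standard OpenAI-compatible `/models` listing route:

```shell
# Quick sanity check of the endpoint and key (assumes the provider
# serves the standard OpenAI-compatible GET /models route).
curl -sf "$DORY_AI_URL/models" \
  -H "Authorization: Bearer $DORY_AI_API_KEY"
```

A JSON model list means the URL and key are usable; an HTTP error here will reproduce as a failure inside Dory as well.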
### Anthropic

```shell
DORY_AI_PROVIDER=anthropic
DORY_AI_MODEL=claude-3-5-sonnet-latest
DORY_AI_API_KEY=your_anthropic_key
```

Use Anthropic when your team prefers Claude models for longer context reasoning or policy reasons.
### Google

```shell
DORY_AI_PROVIDER=google
DORY_AI_MODEL=gemini-1.5-pro
DORY_AI_API_KEY=your_google_key
```

Use Google when your organization already runs Gemini workflows.
### Qwen

```shell
DORY_AI_PROVIDER=qwen
DORY_AI_MODEL=qwen-plus
DORY_AI_API_KEY=your_qwen_key
```

Use Qwen when it is the preferred model family for your language, region, or cost profile.
### xAI

```shell
DORY_AI_PROVIDER=xai
DORY_AI_MODEL=grok-2-latest
DORY_AI_API_KEY=your_xai_key
```

Use xAI when Grok models are part of your approved AI stack.
## Choosing a provider
| Requirement | Recommended direction |
|---|---|
| Fastest standard setup | Start with OpenAI. |
| Existing compatible endpoint or proxy | Use OpenAI-compatible. |
| Long reasoning tasks | Compare Anthropic and OpenAI models. |
| Existing Google AI usage | Use Google. |
| Region, language, or cost fit | Compare Qwen and other approved providers. |
| Internal compliance requirement | Choose the provider already approved by your organization. |
## Validation checklist
- Confirm the provider value matches one of the documented `DORY_AI_PROVIDER` values.
- Confirm the model name exists for that provider.
- Confirm the API key has permission to call the selected model.
- If using a compatible endpoint, confirm `DORY_AI_URL` includes the correct base path.
- Restart the Dory server after changing provider variables.
- Test AI Chat with a small schema question before testing large SQL generation tasks.
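The first steps of the checklist can be scripted as a preflight check. A sketch, assuming a POSIX shell; the function name is illustrative and the allowed-provider list mirrors the table above (`DORY_AI_URL` is skipped because it is provider-dependent):

```shell
# Preflight check for Dory AI variables before restarting the server.
check_dory_ai() {
  case "$DORY_AI_PROVIDER" in
    openai|openai-compatible|anthropic|google|qwen|xai) ;;
    *) echo "unknown DORY_AI_PROVIDER: $DORY_AI_PROVIDER"; return 1 ;;
  esac
  [ -n "$DORY_AI_MODEL" ]   || { echo "DORY_AI_MODEL is empty"; return 1; }
  [ -n "$DORY_AI_API_KEY" ] || { echo "DORY_AI_API_KEY is empty"; return 1; }
  echo "ok"
}
```

Run it in the same shell session that will launch the server, so it sees exactly the environment Dory will inherit.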
## Troubleshooting
| Symptom | Likely cause | Fix |
|---|---|---|
| AI Chat returns authentication errors | Invalid or missing API key. | Rotate the key and update `DORY_AI_API_KEY`. |
| Model not found | Wrong model name for the selected provider. | Use a model name supported by that provider. |
| Compatible endpoint fails | Incorrect base URL or path. | Verify the provider's OpenAI-compatible base URL. |
| SQL quality is inconsistent | Model has limited schema reasoning ability. | Try a stronger model and include clearer table context. |
| Responses are slow | Provider latency or large schema context. | Use a faster model or narrow the database context. |
## Limitation
Provider support does not guarantee identical output quality. SQL generation quality will vary by model, schema complexity, and prompt clarity.