# All LLM Providers
This page provides a comprehensive comparison of all supported LLM providers for MegaLinter's AI-powered fix suggestions.
## Provider Comparison
| Provider | API Cost | Local/Cloud | Setup Complexity | Best For |
|----------|----------|-------------|------------------|----------|
| OpenAI | $$ | Cloud | Easy | General use, high quality |
| Anthropic | $$ | Cloud | Easy | Code analysis, safety |
| Google GenAI | $ | Cloud | Easy | Cost-effective, multilingual |
| Mistral AI | $ | Cloud | Easy | European alternative |
| DeepSeek | $ | Cloud | Easy | Code-focused models |
| Grok | $$ | Cloud | Easy | xAI's conversational model |
| Ollama | Free | Local | Medium | Privacy, offline use |
| Hugging Face | Free\* | Local/Cloud | Hard | Open models, customization |

\*Free for local models, paid for hosted inference.
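The trade-offs in the table above can be sketched as a simple lookup: pick the provider whose strength matches your top requirement. This is an illustrative helper, not part of MegaLinter; the requirement keys and the priority mapping are assumptions, and the provider identifiers are written here in a plausible lowercase form (check each provider page for the exact `LLM_PROVIDER` value).

```python
# Illustrative mapping from a top requirement to a provider, based on the
# comparison table. Keys and defaults are assumptions for this sketch.
PROVIDER_BY_REQUIREMENT = {
    "privacy": "ollama",          # local, free, works offline
    "low_cost": "google_genai",   # cheapest cloud option in the table
    "code_analysis": "anthropic", # strong at code analysis and safety
    "general": "openai",          # general use, high quality
}

def choose_provider(requirement: str) -> str:
    """Return a provider name for the given top requirement."""
    # Fall back to the general-purpose option for unknown requirements.
    return PROVIDER_BY_REQUIREMENT.get(requirement, "openai")
```

A team with a sensitive codebase would land on `choose_provider("privacy")`, i.e. a local Ollama setup.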
## Common Configuration Options
All providers support these common configuration options:
```yaml
LLM_ADVISOR_ENABLED: true   # Enable/disable AI advisor
LLM_PROVIDER: provider_name # Choose your provider
LLM_MODEL_NAME: model_name  # Provider-specific model
LLM_MAX_TOKENS: 1000        # Maximum response tokens
LLM_TEMPERATURE: 0.1        # Creativity (0.0-1.0)
```
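Putting those options together, a concrete `.mega-linter.yml` entry might look like the following sketch. The model name is illustrative; check your chosen provider's page for supported values.

```yaml
# .mega-linter.yml -- example AI advisor setup (model name is illustrative)
LLM_ADVISOR_ENABLED: true
LLM_PROVIDER: openai
LLM_MODEL_NAME: gpt-4o-mini # provider-specific; see the provider page
LLM_MAX_TOKENS: 1000
LLM_TEMPERATURE: 0.1        # low value favors deterministic suggestions
```

A low temperature such as `0.1` is a sensible default for fix suggestions, where consistency matters more than creativity.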
## Security Recommendations
- **Environment Variables**: Always set API keys as environment variables, never in configuration files
- **Private Code**: Use local providers (Ollama, Hugging Face local) for sensitive codebases
- **Rate Limiting**: Monitor API usage to avoid unexpected costs
- **Review Suggestions**: Always review AI suggestions before applying changes
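The first recommendation can be enforced with a fail-fast check: read the key from the environment and stop early if it is missing. The helper below is a sketch, not MegaLinter code; `OPENAI_API_KEY` is the conventional variable name for OpenAI, and other providers use their own names.

```python
import os

def require_api_key(var: str = "OPENAI_API_KEY") -> str:
    """Read a provider API key from the environment, failing fast if unset."""
    key = os.environ.get(var)
    if not key:
        # Refuse to continue rather than falling back to a key in a file.
        raise RuntimeError(f"{var} is not set; export it before running MegaLinter")
    return key
```

Keeping the key out of version-controlled config files means a leaked repository never leaks credentials.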
## Next Steps
1. Choose a provider based on your requirements
2. Follow the setup guide for your chosen provider
3. Test with a small project before deploying widely
4. Monitor costs and usage for cloud providers
Each provider page includes detailed setup instructions, model recommendations, and troubleshooting tips.