# Hugging Face Provider
Hugging Face provides access to thousands of open-source models that can run locally or via their inference API.
## Setup
### Prerequisites
Hugging Face models require additional dependencies:
```bash
# Install as PRE_COMMAND in .mega-linter.yml
pip install langchain-huggingface transformers torch
```
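In `.mega-linter.yml`, this install step can be declared through MegaLinter's `PRE_COMMANDS` list, so the dependencies are installed before the linters run. A minimal sketch (field names follow current MegaLinter conventions; check the docs for your version):

```yaml
# .mega-linter.yml (fragment)
PRE_COMMANDS:
  - command: pip install langchain-huggingface transformers torch
    cwd: workspace # run inside the analyzed repository
```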
### Configuration
1. **Optional: Get an API token** (for private models or hosted inference):
   - Go to [Hugging Face settings](https://huggingface.co/settings/tokens)
   - Create a new token

2. **Set the environment variable** (optional; for CI, see the sketch after this list):

   ```bash
   export HUGGINGFACE_API_TOKEN=hf_your-token
   ```

3. **Configure MegaLinter**:

   ```yaml
   LLM_ADVISOR_ENABLED: true
   LLM_PROVIDER: huggingface
   LLM_MODEL_NAME: microsoft/DialoGPT-medium
   LLM_MAX_TOKENS: 1000
   LLM_TEMPERATURE: 0.1
   ```
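If MegaLinter runs in CI, the token can come from a secret instead of a local export. A sketch for the MegaLinter GitHub Action (the secret name is illustrative; adapt to your CI system):

```yaml
# .github/workflows/mega-linter.yml (fragment)
jobs:
  megalinter:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: MegaLinter
        uses: oxsecurity/megalinter@v8
        env:
          # Exposes the token to the LLM Advisor inside the MegaLinter container
          HUGGINGFACE_API_TOKEN: ${{ secrets.HUGGINGFACE_API_TOKEN }}
```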
## Official Model List
For the most up-to-date list of Hugging Face models and their capabilities, see the official [Hugging Face model hub](https://huggingface.co/models).
## Configuration Options
### Basic Configuration
```yaml
LLM_PROVIDER: huggingface
LLM_MODEL_NAME: microsoft/DialoGPT-medium
```
### Advanced Configuration
```yaml
# Model task type
HUGGINGFACE_TASK: text-generation

# Device settings
HUGGINGFACE_DEVICE: -1 # -1 for CPU, 0+ for GPU

# For hosted inference API
HUGGINGFACE_USE_API: true
```
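Putting the options together, a hosted-inference setup could look like the following (a sketch combining the values above; the model choice is illustrative):

```yaml
LLM_ADVISOR_ENABLED: true
LLM_PROVIDER: huggingface
LLM_MODEL_NAME: microsoft/DialoGPT-medium
HUGGINGFACE_TASK: text-generation
HUGGINGFACE_USE_API: true # hosted inference; needs HUGGINGFACE_API_TOKEN set
```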
## Troubleshooting
### Common Issues
- **"Model not found"**
  - Verify the model name and that the repository exists
  - Check whether the model requires authentication
  - Ensure the model supports the specified task
- **"Out of memory"**
  - Use smaller models (see the sketch after this list)
  - Enable CPU-only mode:

    ```yaml
    HUGGINGFACE_DEVICE: -1
    ```

  - Close other applications
- **"Import errors"**
  - Install the required dependencies:

    ```bash
    pip install langchain-huggingface transformers torch
    ```
- **"Slow inference"**
  - Use a GPU if available:

    ```yaml
    HUGGINGFACE_DEVICE: 0
    ```

  - Consider smaller models
  - Use the hosted API for large models
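As a concrete illustration of the memory remedies above, a CPU-only setup with a smaller checkpoint might look like this (a sketch; `distilgpt2` is just an example of a small text-generation model, not a recommendation):

```yaml
LLM_PROVIDER: huggingface
LLM_MODEL_NAME: distilgpt2 # illustrative small model
HUGGINGFACE_DEVICE: -1     # CPU-only mode avoids GPU out-of-memory errors
```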
### Debug Mode
```yaml
LOG_LEVEL: DEBUG
HUGGINGFACE_VERBOSE: true
```