HuggingFace Inference API
Configure Danswer to use HuggingFace APIs
Refer to Model Configs for how to set the environment variables for your particular deployment.
To use the HuggingFace Inference APIs, you must sign up for a Pro Account to get an API Key.
- After signing up for a Pro Account, go to your user settings.
- Copy the User Access Token.
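Before configuring Danswer, you can optionally verify that the token works by calling the Inference API directly. The sketch below assumes the token is exported as an `HF_TOKEN` environment variable (a placeholder name) and uses `gpt2` only because it is a small public model; any model your plan covers will do.

```python
# Optional sanity check: call the HuggingFace Inference API with the User
# Access Token copied above. Assumes the token is exported as HF_TOKEN.
import os

import requests

API_URL = "https://api-inference.huggingface.co/models/gpt2"  # any small public model
token = os.environ["HF_TOKEN"]  # placeholder env var holding the User Access Token

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {token}"},
    json={"inputs": "Danswer is"},
)
response.raise_for_status()
print(response.json())  # a response with generated text means the token is valid
```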
Set Danswer to use Llama-2-70B via next-token generation prompting
On the LLM page in the Admin Panel, add a Custom LLM Provider with the following settings:
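The exact form fields vary by Danswer version, so treat the values here as an illustration rather than the definitive settings: you generally need a `huggingface` provider prefix, the HuggingFace model repository name, and the User Access Token as the API key. As a rough sketch, assuming Danswer routes Custom LLM Providers through LiteLLM, the configuration corresponds to a call like the one below; the repository `meta-llama/Llama-2-70b-chat-hf` is one plausible Llama-2-70B chat checkpoint, not a required choice.

```python
# Hedged sketch of what the Custom LLM Provider settings resolve to, assuming
# Danswer uses LiteLLM for custom providers. Admin Panel field names may differ.
import os

import litellm

response = litellm.completion(
    model="huggingface/meta-llama/Llama-2-70b-chat-hf",  # provider prefix + HF repo (illustrative)
    api_key=os.environ["HF_TOKEN"],                      # the User Access Token from above
    messages=[{"role": "user", "content": "What does Danswer do?"}],
)
print(response.choices[0].message.content)
```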
Update (November 2023): HuggingFace has stopped supporting very large models (>10GB) via the Pro Plan. The remaining options are to rent dedicated hardware through an Inference Endpoint or to get an Enterprise Plan. The Pro Plan still supports smaller models, but these produce worse results for Danswer.