Create .env
.env ADDED
@@ -0,0 +1,38 @@
+# You can use any model that is available to you and deployed on Hugging Face with a compatible API
+# The *_NAME variables are optional for the HuggingFace API; you can use them for your convenience
+
+# Make sure your key has permission to use all models
+# Set up your key here: https://huggingface.co/docs/api-inference/en/quicktour#get-your-api-token
+HF_API_KEY=hf_YOUR_HF_API_KEY
+
+# For example, you can try the public Inference API endpoint for the Meta-Llama-3-70B-Instruct model
+# This model's quality is comparable to GPT-4
+# But the public API has a strict limit on output tokens, so it is very hard to use it for this use case
+# You can use your private API endpoint for this model
+# Or use any other Hugging Face model that supports the Messages API
+# Don't forget to add '/v1' to the end of the URL
+LLM_URL=https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct/v1
+LLM_TYPE=HF_API
+LLM_NAME=Meta-Llama-3-70B-Instruct
+
+# If you want to use any other model serving provider, the configuration will be similar
+# Below is an example for Groq
+# GROQ_API_KEY=gsk_YOUR_GROQ_API_KEY
+# LLM_URL=https://api.groq.com/openai/v1
+# LLM_TYPE=GROQ_API
+# LLM_NAME=llama3-70b-8192
+
+
+# The OpenAI Whisper family with more models is available on HuggingFace:
+# https://huggingface.co/collections/openai/whisper-release-6501bba2cf999715fd953013
+# You can also use any other compatible STT model from HuggingFace
+STT_URL=https://api-inference.huggingface.co/models/openai/whisper-tiny.en
+STT_TYPE=HF_API
+STT_NAME=whisper-tiny.en
+
+# You can use any compatible TTS model from HuggingFace
+# For example, you can try the public Inference API endpoint for the Facebook MMS-TTS model
+# In my experience, open-source TTS models from HF sound much more robotic than OpenAI TTS models
+TTS_URL=https://api-inference.huggingface.co/models/facebook/mms-tts-eng
+TTS_TYPE=HF_API
+TTS_NAME=Facebook-mms-tts-eng
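
As a rough sketch of how the LLM settings above might be consumed (not part of this commit): the Messages API endpoint at LLM_URL is OpenAI-compatible, so an OpenAI-style client can point at it directly. The python-dotenv and openai libraries and the prompt below are assumptions for illustration, not the Space's actual code.

```python
# Sketch only: read the LLM_* variables from .env and call the model through
# the OpenAI-compatible Messages API at LLM_URL (library choice is an assumption).
import os

from dotenv import load_dotenv  # pip install python-dotenv
from openai import OpenAI       # pip install openai

load_dotenv()  # loads HF_API_KEY, LLM_URL, LLM_NAME, ... from .env

client = OpenAI(
    base_url=os.getenv("LLM_URL"),   # .../Meta-Llama-3-70B-Instruct/v1
    api_key=os.getenv("HF_API_KEY"),
)

response = client.chat.completions.create(
    model=os.getenv("LLM_NAME"),
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
    max_tokens=64,  # keep small: the public endpoint limits output tokens
)
print(response.choices[0].message.content)
```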
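
For the STT settings, the Hugging Face Inference API accepts raw audio bytes and returns a JSON transcription. A minimal sketch, assuming the requests library and a hypothetical local audio file:

```python
# Sketch only: send raw audio bytes to the Whisper endpoint (STT_URL) and
# read back the transcription; "recording.wav" is a hypothetical file name.
import os

import requests
from dotenv import load_dotenv

load_dotenv()

def transcribe(audio_path: str) -> str:
    headers = {"Authorization": f"Bearer {os.getenv('HF_API_KEY')}"}
    with open(audio_path, "rb") as f:
        audio_bytes = f.read()
    resp = requests.post(os.getenv("STT_URL"), headers=headers, data=audio_bytes)
    resp.raise_for_status()
    return resp.json()["text"]  # ASR responses carry the transcript under "text"

print(transcribe("recording.wav"))
```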
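
For the TTS settings, the Inference API takes a JSON payload with an "inputs" string and returns the generated audio as raw bytes. A minimal sketch under the same assumptions (the output file name is a hypothetical choice):

```python
# Sketch only: send text to the MMS-TTS endpoint (TTS_URL) and save the
# returned audio bytes to a local file.
import os

import requests
from dotenv import load_dotenv

load_dotenv()

def synthesize(text: str, out_path: str = "speech.flac") -> str:
    headers = {"Authorization": f"Bearer {os.getenv('HF_API_KEY')}"}
    resp = requests.post(os.getenv("TTS_URL"), headers=headers, json={"inputs": text})
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)  # response body is the generated audio
    return out_path

print(synthesize("Hello from the Inference API!"))
```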