BLING Models Collection
Small, CPU-based, RAG-optimized, instruct-following 1B-3B parameter models
How to use llmware/bling-phi-3-onnx with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="llmware/bling-phi-3-onnx", trust_remote_code=True)
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe(messages)
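The pipeline returns a list of generation results; a sketch of printing just the model's reply (the exact output structure can vary slightly across transformers versions):
# With chat-style input, "generated_text" holds the full conversation;
# the last entry is the assistant's reply.
result = pipe(messages, max_new_tokens=40)
print(result[0]["generated_text"][-1]["content"])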
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("llmware/bling-phi-3-onnx", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("llmware/bling-phi-3-onnx", trust_remote_code=True)
messages = [
{"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
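Since BLING models are RAG-optimized, they are typically prompted with a context passage followed by a question. A sketch of that pattern, reusing the tokenizer and model loaded above (the passage and question are invented for illustration):
# RAG-style prompt: context passage first, then the question.
context = "Total revenue for the quarter was $12.5 million, up 8% year-over-year."
question = "What was the total revenue for the quarter?"
rag_messages = [{"role": "user", "content": f"{context}\n{question}"}]

rag_inputs = tokenizer.apply_chat_template(
    rag_messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
rag_outputs = model.generate(**rag_inputs, max_new_tokens=40)
print(tokenizer.decode(rag_outputs[0][rag_inputs["input_ids"].shape[-1]:], skip_special_tokens=True))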
How to use llmware/bling-phi-3-onnx with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "llmware/bling-phi-3-onnx"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "llmware/bling-phi-3-onnx",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
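Since the endpoint is OpenAI-compatible, the same server can also be called from Python with the official openai client (a sketch; assumes pip install openai and the server running on its default port):
from openai import OpenAI

# vLLM exposes an OpenAI-compatible API; the api_key is unused but required by the client.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="llmware/bling-phi-3-onnx",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)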
How to use llmware/bling-phi-3-onnx with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "llmware/bling-phi-3-onnx" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "llmware/bling-phi-3-onnx",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
# Alternatively, start the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "llmware/bling-phi-3-onnx" \
--host 0.0.0.0 \
--port 30000
# Call the containerized server using the same curl request (OpenAI-compatible API) shown above.
How to use llmware/bling-phi-3-onnx with Docker Model Runner:
docker model run hf.co/llmware/bling-phi-3-onnx
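Run without arguments, docker model run opens an interactive chat session; recent Docker Model Runner releases also accept a one-shot prompt argument (a sketch, assuming Model Runner is enabled in your Docker installation):
# One-shot prompt (illustrative question):
docker model run hf.co/llmware/bling-phi-3-onnx "What is the capital of France?"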
bling-phi-3-onnx is a fast, accurate, fact-based question-answering model designed for retrieval-augmented generation (RAG) with complex business documents, quantized and packaged as ONNX int4 for AI PCs using Intel GPU, CPU, and NPU.
It is one of the most accurate models in the BLING/DRAGON series, which is especially notable given its relatively small size, and it is well suited to AI PCs and local inferencing.
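The model can also be loaded through the llmware library's ModelCatalog, the pattern llmware documents for its BLING models. A minimal sketch, assuming pip install llmware, that the model is registered in the catalog as "bling-phi-3-onnx", and an invented sample passage:
from llmware.models import ModelCatalog

# Deterministic settings are typical for fact-based RAG question answering.
model = ModelCatalog().load_model("bling-phi-3-onnx", temperature=0.0, sample=False)

context = "The lease term is 36 months, beginning on January 1, 2024."  # illustrative passage
response = model.inference("What is the length of the lease term?", add_context=context)
print(response["llm_response"])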
Base model: llmware/bling-phi-3