How to use cyberagent/open-calm-7b with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="cyberagent/open-calm-7b")

# Load the model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("cyberagent/open-calm-7b")
model = AutoModelForCausalLM.from_pretrained("cyberagent/open-calm-7b")

How to use cyberagent/open-calm-7b with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "cyberagent/open-calm-7b"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "cyberagent/open-calm-7b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
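The same OpenAI-compatible completions request can be built and sent from Python. Below is a minimal sketch using only the standard library; `build_completion_request` is an illustrative helper (not part of vLLM), and a vLLM server listening on the URL shown is assumed:

```python
import json
import urllib.request

def build_completion_request(base_url, model, prompt, max_tokens=512, temperature=0.5):
    """Build an OpenAI-compatible /v1/completions request (URL + JSON body)."""
    url = f"{base_url}/v1/completions"
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }).encode("utf-8")
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )

req = build_completion_request(
    "http://localhost:8000", "cyberagent/open-calm-7b", "Once upon a time,"
)
# With the vLLM server running, the call would look like:
# body = json.loads(urllib.request.urlopen(req).read())
# print(body["choices"][0]["text"])
```

The payload mirrors the curl example above, so the same sketch works against any OpenAI-compatible endpoint by changing `base_url`.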
How to use cyberagent/open-calm-7b with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "cyberagent/open-calm-7b" \
  --host 0.0.0.0 \
  --port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "cyberagent/open-calm-7b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
# Alternatively, start the SGLang server in Docker:
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "cyberagent/open-calm-7b" \
  --host 0.0.0.0 \
  --port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "cyberagent/open-calm-7b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'

How to use cyberagent/open-calm-7b with Docker Model Runner:
docker model run hf.co/cyberagent/open-calm-7b
OpenCALM is a suite of decoder-only language models pre-trained on Japanese datasets, developed by CyberAgent, Inc.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "cyberagent/open-calm-7b", device_map="auto", torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained("cyberagent/open-calm-7b")

# Prompt: "AIによって私達の暮らしは、" ("Thanks to AI, our lives are...")
inputs = tokenizer("AIによって私達の暮らしは、", return_tensors="pt").to(model.device)
with torch.no_grad():
    tokens = model.generate(
        **inputs,
        max_new_tokens=64,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
        repetition_penalty=1.05,
        pad_token_id=tokenizer.pad_token_id,
    )

output = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(output)
| Model | Params | Layers | Dim | Heads | Dev perplexity |
|---|---|---|---|---|---|
| cyberagent/open-calm-small | 160M | 12 | 768 | 12 | 19.7 |
| cyberagent/open-calm-medium | 400M | 24 | 1024 | 16 | 13.8 |
| cyberagent/open-calm-large | 830M | 24 | 1536 | 16 | 11.3 |
| cyberagent/open-calm-1b | 1.4B | 24 | 2048 | 16 | 10.3 |
| cyberagent/open-calm-3b | 2.7B | 32 | 2560 | 32 | 9.7 |
| cyberagent/open-calm-7b | 6.8B | 32 | 4096 | 32 | 8.2 |
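As a rough sanity check on the table, decoder-only parameter counts follow the usual estimate of about 12·L·d² weights in the transformer blocks plus a d-dimensional embedding per vocabulary token. The sketch below applies that estimate to the 7b row; the ~52k vocabulary size is an assumption (based on the OpenCALM tokenizer), not a figure from this card:

```python
def approx_params(layers, dim, vocab_size=52_000):
    """Rough decoder-only parameter estimate:
    ~12 * layers * dim^2 for the attention + MLP blocks,
    plus vocab_size * dim for the token embedding matrix."""
    return 12 * layers * dim ** 2 + vocab_size * dim

# open-calm-7b row: 32 layers, hidden dim 4096, listed as 6.8B
estimate = approx_params(32, 4096)
print(f"~{estimate / 1e9:.1f}B parameters (table lists 6.8B)")
```

The estimate lands within a few percent of the listed 6.8B; the remaining gap comes from biases, layer norms, and the exact vocabulary size.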
@software{gpt-neox-library,
title = {{GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch}},
author = {Andonian, Alex and Anthony, Quentin and Biderman, Stella and Black, Sid and Gali, Preetham and Gao, Leo and Hallahan, Eric and Levy-Kramer, Josh and Leahy, Connor and Nestler, Lucas and Parker, Kip and Pieler, Michael and Purohit, Shivanshu and Songz, Tri and Phil, Wang and Weinbach, Samuel},
url = {https://www.github.com/eleutherai/gpt-neox},
doi = {10.5281/zenodo.5879544},
month = {8},
year = {2021},
version = {0.0.1},
}