Instructions to use panigrah/winberto-gpt2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use panigrah/winberto-gpt2 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="panigrah/winberto-gpt2")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("panigrah/winberto-gpt2")
model = AutoModelForCausalLM.from_pretrained("panigrah/winberto-gpt2")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use panigrah/winberto-gpt2 with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "panigrah/winberto-gpt2"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "panigrah/winberto-gpt2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker
```shell
docker model run hf.co/panigrah/winberto-gpt2
```
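The curl request above sets `"temperature": 0.5`. As a rough illustration of what that parameter does, here is a minimal pure-Python sketch of temperature scaling before softmax sampling — illustrative only, not vLLM's actual implementation:

```python
import math
import random

def sample_with_temperature(logits, temperature=0.5, rng=random.random):
    # Scale logits by 1/temperature: values below 1 sharpen the distribution
    # (as in the request above), values above 1 flatten it.
    scaled = [l / temperature for l in logits]
    # Softmax over the scaled logits (subtract the max for numerical stability).
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample a token index from the resulting distribution.
    r = rng()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i, probs
    return len(probs) - 1, probs
```

With temperature 0.5 the highest-logit token gets noticeably more probability mass than at temperature 1.0, so completions become less random.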
- SGLang
How to use panigrah/winberto-gpt2 with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "panigrah/winberto-gpt2" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "panigrah/winberto-gpt2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "panigrah/winberto-gpt2" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "panigrah/winberto-gpt2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use panigrah/winberto-gpt2 with Docker Model Runner:
```shell
docker model run hf.co/panigrah/winberto-gpt2
```
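The servers above expose an OpenAI-compatible `/v1/completions` endpoint. A minimal sketch of reading such a response with only the standard library — the exact response fields can vary by server version, so the `choices[i]["text"]` shape below is an assumption to check against your server's docs:

```python
import json

def extract_completions(response_body: str) -> list:
    # OpenAI-compatible completions responses put generated text under
    # choices[i]["text"]; field names assumed, verify for your server.
    data = json.loads(response_body)
    return [choice.get("text", "") for choice in data.get("choices", [])]

# Example body in the assumed shape:
body = '{"choices": [{"text": " there was a vineyard."}]}'
```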
Wineberto gpt2
A GPT-2 model trained from scratch on the winemag reviews dataset to generate wine descriptions via text generation. Note that the generated descriptions are mostly random.
Model description
How to use
You can use this model directly with a text-generation pipeline:
```python
>>> from transformers import pipeline
>>> clm = pipeline('text-generation', model='panigrah/winberto-gpt2')
>>> clm("California Pinot is", max_length=30, num_return_sequences=3)
[{'generated_text': 'California Pinot is a dark golden color. black plum and cherry aromas and flavors show their aromatic flair amidst ripe black fruit, cola and'},
 {'generated_text': 'California Pinot is a wine made from a grape that was aged in large oak tanks. the fruit is balanced by acidity and a crisp'},
 {'generated_text': 'California Pinot is a great surprise at all levels of age, but this delivers a soft, supple and luscious feel on the palate.'}]
```
Training data
The GPT-2 model was trained from scratch on 150K wine review descriptions. Training was cut short at 5 epochs due to resource constraints, so the model still has relatively high training and validation loss. It can generate passable wine descriptions, but they are not well correlated with the type of wine given in the prompt.
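The training and validation loss referred to above is next-token cross-entropy, the standard causal language-modeling objective. A minimal pure-Python sketch of how it is computed per position — illustrative only, not the actual training code:

```python
import math

def next_token_loss(logits_seq, target_ids):
    """Average next-token cross-entropy, the loss tracked during GPT-2 training.

    logits_seq: one list of vocabulary logits per predicted position
    target_ids: the correct next-token id for each position
    """
    total = 0.0
    for logits, target in zip(logits_seq, target_ids):
        # log of the softmax normalizer (subtract max for numerical stability)
        m = max(logits)
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        # -log softmax(logits)[target]
        total += log_z - logits[target]
    return total / len(target_ids)
```

For a vocabulary of size V, a model that predicts uniformly scores ln(V) per token; a loss well above that baseline, as described here, means the model has learned the surface style of reviews more than prompt-conditioned content.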