Llama-3.1-8B-Instruct - GPTQ (4-bit)
Source model: meta-llama/Llama-3.1-8B-Instruct
This model was quantized to 4-bit precision with the GPTQModel library.
Quantization parameters:
- bits: 4
- group_size: 32
- damp_percent: 0.05
- desc_act: False
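For reference, the sketch below shows how a checkpoint with these settings could be reproduced with GPTQModel. The calibration corpus (C4), sample count, batch size, and output path are illustrative assumptions rather than a record of how this particular checkpoint was produced; the QuantizeConfig values mirror the parameters listed above.

```python
# Illustrative reproduction sketch (calibration data and paths are placeholders)
from datasets import load_dataset
from gptqmodel import GPTQModel, QuantizeConfig

source_id = "meta-llama/Llama-3.1-8B-Instruct"
quant_path = "Llama-3.1-8B-Instruct-gptqmodel-4bit-g32"

# Calibration text for GPTQ; any representative corpus can be used
calibration_dataset = load_dataset(
    "allenai/c4",
    data_files="en/c4-train.00001-of-01024.json.gz",
    split="train",
).select(range(1024))["text"]

# Mirror the parameters listed above
quant_config = QuantizeConfig(bits=4, group_size=32, damp_percent=0.05, desc_act=False)

model = GPTQModel.load(source_id, quant_config)
model.quantize(calibration_dataset, batch_size=2)  # adjust batch_size to available VRAM
model.save(quant_path)
```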
Usage
```python
# pip install transformers gptqmodel --no-build-isolation
from gptqmodel import GPTQModel

model_id = "iproskurina/Llama-3.1-8B-Instruct-gptqmodel-4bit-g32"
# Load the quantized checkpoint; the tokenizer is attached to the model object
model = GPTQModel.load(model_id)
# Generate from a prompt and decode the first returned sequence
result = model.generate("Uncovering deep insights")[0]
print(model.tokenizer.decode(result))
```
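The checkpoint can also be loaded through the standard Transformers API once gptqmodel and optimum are installed. The snippet below is a minimal sketch with an arbitrary prompt and generation length.

```python
# pip install transformers optimum gptqmodel
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "iproskurina/Llama-3.1-8B-Instruct-gptqmodel-4bit-g32"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The GPTQ quantization config is read from the checkpoint; device_map places layers automatically
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Uncovering deep insights", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```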