Llama-3.1-8B-Instruct - GPTQ (4-bit)

Source model: meta-llama/Llama-3.1-8B-Instruct

This model was quantized to 4-bit using GPTQModel.

Quantization parameters (see the config sketch after this list):

  • bits: 4
  • group_size: 32
  • damp_percent: 0.05
  • desc_act: False
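
These parameters map directly onto GPTQModel's QuantizeConfig. The sketch below shows how such a checkpoint could be produced; the calibration corpus, sample count, and batch size are illustrative assumptions, not the exact recipe used for this model.

from datasets import load_dataset
from gptqmodel import GPTQModel, QuantizeConfig

# The four parameters listed above, passed verbatim.
quant_config = QuantizeConfig(
    bits=4,            # 4-bit integer weights
    group_size=32,     # one scale/zero-point per group of 32 weights
    damp_percent=0.05, # Hessian dampening used during GPTQ error correction
    desc_act=False,    # keep natural column order (no activation-order reordering)
)

# Hypothetical calibration set; any representative text corpus works.
calibration = load_dataset(
    "allenai/c4",
    data_files="en/c4-train.00001-of-01024.json.gz",
    split="train",
).select(range(1024))["text"]

model = GPTQModel.load("meta-llama/Llama-3.1-8B-Instruct", quant_config)
model.quantize(calibration, batch_size=1)
model.save("Llama-3.1-8B-Instruct-gptqmodel-4bit-g32")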

Usage

# pip install transformers gptqmodel --no-build-isolation
from gptqmodel import GPTQModel

model_id = "iproskurina/Llama-3.1-8B-Instruct-gptqmodel-4bit-g32"

# Load the quantized checkpoint; the 4-bit weights are dequantized on the fly at inference.
model = GPTQModel.load(model_id)

# generate() accepts a plain prompt string and returns a batch of token-id tensors.
result = model.generate("Uncovering deep insights")[0]
print(model.tokenizer.decode(result))
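
Because the quantization config is embedded in the checkpoint, the model can also be loaded through plain transformers with gptqmodel installed as the GPTQ backend. A minimal sketch, assuming a CUDA device and accelerate for device placement; the prompt and generation length are illustrative:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "iproskurina/Llama-3.1-8B-Instruct-gptqmodel-4bit-g32"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# transformers reads quantization_config from the checkpoint and dispatches to the GPTQ kernels.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Llama-3.1-Instruct models expect the chat template for instruction-style prompts.
messages = [{"role": "user", "content": "Uncovering deep insights"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))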