Quantizations of https://huggingface.co/icefog72/IceCoffeeRP-7b



From original readme

This is a merge of pre-trained language models created using mergekit.

Prompt template: Alpaca (possibly ChatML).

  • measurement.json for quanting exl2 included.
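Since the stated prompt template is Alpaca, a minimal sketch of formatting a request in the commonly used Alpaca style (the `alpaca_prompt` helper is illustrative, not part of this repo):

```python
def alpaca_prompt(instruction: str, inp: str = "") -> str:
    """Format a request using the standard Alpaca template."""
    if inp:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{inp}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(alpaca_prompt("Summarize the merge method used by this model."))
```

The model's response is whatever the model generates after the trailing "### Response:" marker.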

Merge Method

This model was merged using the SLERP merge method.

Models Merged

The following models were included in the merge:

  • G:\FModels\IceCoffeeTest10
  • G:\FModels\IceCoffeeTest5

Configuration

The following YAML configuration was used to produce this model:


slices:
  - sources:
      - model: G:\FModels\IceCoffeeTest5
        layer_range: [0, 32]
      - model: G:\FModels\IceCoffeeTest10
        layer_range: [0, 32]
merge_method: slerp
base_model: G:\FModels\IceCoffeeTest5
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: float16
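SLERP interpolates between two sets of weights along the arc of a hypersphere rather than the straight line of plain averaging. A minimal sketch of the core operation, and of how the five-value `t` gradients above could be spread over the 32 merged layers (mergekit's exact per-layer mapping is an internal detail; the `np.interp` scheme here is an assumption):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors."""
    a = v0 / (np.linalg.norm(v0) + eps)
    b = v1 / (np.linalg.norm(v1) + eps)
    dot = float(np.clip(np.dot(a, b), -1.0, 1.0))
    theta = np.arccos(dot)
    if theta < eps:  # nearly colinear: plain lerp is numerically safer
        return (1 - t) * v0 + t * v1
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * v0 + (np.sin(t * theta) / s) * v1

# The five anchor values from the config (self_attn filter),
# interpolated across the 32 layers (mapping scheme assumed):
anchors = [0, 0.5, 0.3, 0.7, 1]
layer_t = np.interp(np.linspace(0, 1, 32),
                    np.linspace(0, 1, len(anchors)), anchors)
```

With `t = 0` a layer keeps the base model's weights (IceCoffeeTest5) and with `t = 1` it takes the other model's; the gradients mean attention and MLP weights blend in opposite directions across depth.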

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

Metric                               Value
Avg.                                 73.19
AI2 Reasoning Challenge (25-Shot)    71.16
HellaSwag (10-Shot)                  87.74
MMLU (5-Shot)                        63.54
TruthfulQA (0-Shot)                  70.03
Winogrande (5-Shot)                  82.48
GSM8k (5-Shot)                       64.22

Open LLM Leaderboard v2 Evaluation Results

Detailed results can be found here

Metric                 Value
Avg.                   20.24
IFEval (0-Shot)        49.59
BBH (3-Shot)           29.40
MATH Lvl 5 (4-Shot)     4.83
GPQA (0-Shot)           4.70
MuSR (0-Shot)          11.00
MMLU-PRO (5-Shot)      21.94
Downloads last month: 235
Format: GGUF
Model size: 7B params
Architecture: llama

Available quantization levels: 1-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit.
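The bit levels trade file size for fidelity. As an illustration of the basic idea (not the actual GGUF k-quant scheme, which uses block-wise scales and more elaborate formats), a minimal symmetric round-to-nearest quantizer:

```python
import numpy as np

def quantize_symmetric(w, bits):
    """Round-to-nearest symmetric quantization of `w` to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 127 for 8-bit
    scale = np.abs(w).max() / qmax          # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([-0.9, -0.1, 0.0, 0.45, 0.9], dtype=np.float32)
for bits in (8, 4, 2):
    q, s = quantize_symmetric(w, bits)
    err = np.abs(dequantize(q, s) - w).max()
    print(f"{bits}-bit max reconstruction error: {err:.4f}")
```

Lower bit widths shrink the file roughly in proportion to the bits per weight, at the cost of growing reconstruction error; real GGUF quants mitigate this by quantizing in small blocks, each with its own scale.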
