---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
language:
- ca
- en
- es
- eu
- gl
datasets:
- CohereLabs/aya_dataset
- projecte-aina/CoQCat
- databricks/databricks-dolly-15k
- projecte-aina/dolly3k_ca
- projecte-aina/MentorES
- projecte-aina/MentorCA
- HuggingFaceH4/no_robots
- projecte-aina/RAG_Multilingual
- Unbabel/TowerBlocks-v0.2
- OpenAssistant/oasst2
- open-r1/OpenR1-Math-220k
- HuggingFaceFW/fineweb-edu
base_model:
- BSC-LT/ALIA-40b
---

> [!WARNING]
> **WARNING:** ALIA-40b-Instruct is an instruction-tuned model with a preliminary alignment process. It has not yet undergone a full alignment procedure to ensure safety. The model may generate biased, factually incorrect, harmful, or inappropriate content. Users should **refer to the Limitations section** and apply additional filtering and alignment processes before deploying this model in production.

> [!NOTE]
> **Work in Progress:** New versions will be available in the coming weeks and months.
>
> **Sampling Parameters:** For optimal performance, we recommend using temperatures close to zero (0 - 0.2). Additionally, we advise against using any type of repetition penalty, as from our experience, [it negatively impacts instructed model's responses](https://www.reddit.com/r/LocalLLaMA/comments/1g383mq/repetition_penalties_are_terribly_implemented_a/).
# ALIA-40b-instruct Model Card
The ALIA-40b-instruct model is an instructed variant of a context-extended [base ALIA-40b model](https://huggingface.co/BSC-LT/ALIA-40b), which was pre-trained from scratch on 9.83 trillion tokens of carefully curated data spanning 35 European languages (including code). This instructed version is optimized to follow user prompts and engage in dialogue. It supports a broad range of languages (e.g., Spanish, Catalan, Basque, Galician, and English) and is capable of text generation, translation, summarization, and question answering in these languages. This version has also gone through a preliminary alignment phase for helpfulness and safety using synthetically generated preference pairs.
In keeping with our commitment to open-source development, all tools and sources used to process and create the training data are open-licensed. For clarity, our definition of open-licensed excludes any source, tool, model, or dataset whose terms of use impose restrictive conditions that impede standard open reuse.
This model is released under the permissive [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0). Along with the open weights, all training scripts and configuration files are made publicly available in [this GitHub repository](https://github.com/langtech-bsc/alia).
To visit the model cards of other model versions, please refer to the [Model Index](https://www.notion.so/Alia-2025-09-29-Model-Card-27db93cf5c1b808aa1f1fc8229255f24?pvs=21).
---
## Model Details
### Description
ALIA-40b is a transformer-based, decoder-only language model that was pre-trained from scratch on 9.37 trillion tokens of meticulously curated data. It subsequently underwent continued pretraining on an additional 424 billion high-quality tokens and was further extended with a supplementary 39 billion tokens drawn from a similarly diverse mixture, totalling 9.83 trillion tokens.
ALIA-40b-Instruct is an instructed variant of this latest ALIA-40b version. Its post-training process comprises three consecutive stages, each targeting a specific capability: (1) long-context adaptation to extend the model’s context window, (2) supervised fine-tuning to improve instruction-following capabilities, and (3) a preliminary alignment stage to better match human preferences and improve safety.
After the long-context adaptation, the model enters the supervised fine-tuning (SFT) stage. For efficiency, this stage is implemented in two phases: a short-context SFT with 469k conversation examples to strengthen instruction following, followed by a long-context SFT with roughly 9k further samples that include long-context examples (see the Instruction Tuning Data section for the breakdown). We separate these phases because full-context fine-tuning is computationally expensive.
In the third stage, the model is aligned with human preferences through Direct Preference Optimization (DPO) using a mixture of 403k preference pairs. Of this mixture, approximately 82% of the pairs target general model helpfulness, while 18% focus on response safety. This alignment stage is preliminary, and further work is ongoing to strengthen safety and reliability.
Although the base model is highly multilingual, the post-training process concentrated primarily on Spanish, Catalan, Basque, Galician, and English. We also incorporated data from other related languages where inclusion empirically improved the performance on the target languages. However, performance in those additional languages is not guaranteed due to the limited amount of available data and the scarcity of evaluation resources.
### Hyperparameters
Here we list the specific hyperparameters used during the different training stages.
#### Long context CPT
| Hyperparameter | Value |
| --- | --- |
| Learning rate | 9e-7 |
| LR Scheduler | Constant |
| Tokens per update | 4M |
| Training tokens (4k → 32k) | 2B |
| Training tokens (32k → 160k) | 36.8B |
#### Short context SFT
| Hyperparameter | Value |
| --- | --- |
| Learning rate | 1e-5 |
| Batch size | 256 |
| Epochs | 2 |
| LR Scheduler | Cosine |
| Warmup Ratio | 0.03 |
| NEFTune Noise Alpha | 5 |
| Number of Samples | 469,357 |
#### Long context SFT
| Hyperparameter | Value |
| --- | --- |
| Learning rate | 1e-5 |
| Batch size | 32 |
| Epochs | 1 |
| LR Scheduler | Cosine |
| Warmup Ratio | 0.03 |
| Number of Samples | 9,380 |
#### Alignment
| Hyperparameter | Value |
| --- | --- |
| Learning rate | 2e-6 |
| Batch size | 1024 |
| Epochs | 2 |
| LR Scheduler | Linear |
| Number of samples | 402,917 |
### Architecture
| Attribute | Value |
| --- | --- |
| Total Parameters | 40,433,885,184 |
| Embedding Parameters | 2,097,152,000 |
| Layers | 48 |
| Hidden size | 8,192 |
| Attention heads | 64 |
| Context length | 163,840 |
| Vocabulary size | 256,000 |
| Precision | bfloat16 |
| Embedding type | RoPE |
| Activation Function | SwiGLU |
| Layer normalization | RMS Norm |
| Flash attention | ✅ |
| Grouped Query Attention | ✅ |
| Num. query groups | 8 |
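As a quick sanity check, most of these values can be read directly from the published configuration. The snippet below is a minimal sketch; the attribute names assume the Llama-style config used by the ALIA/Salamandra family.
```python
# Minimal sketch: inspect the published config and compare it with the table above.
# Attribute names assume a Llama-style configuration.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("BSC-LT/ALIA-40b-instruct")
print(config.num_hidden_layers)        # expected: 48
print(config.hidden_size)              # expected: 8192
print(config.num_attention_heads)      # expected: 64
print(config.num_key_value_heads)      # expected: 8 (Grouped Query Attention)
print(config.max_position_embeddings)  # expected: 163840
print(config.vocab_size)               # expected: 256000
```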
---
## Intended Use
### Direct Use
ALIA-40b-instruct is intended for research and development purposes as a general-purpose multilingual assistant. It can be used to generate text, answer questions, translate between supported languages, and follow user instructions in those languages. As noted in the ALIA-40b base model card, the ALIA family is aimed at both research and commercial use in any of the covered languages. In practice, ALIA-40b-instruct is best suited for tasks such as multilingual chatbots, summarization, translation, and content generation, provided users are aware of its limitations.
### Out-of-scope Use
The model is not intended for malicious activities, such as harming others or violating human rights. Any downstream application must comply with current laws and regulations. Irresponsible usage in production environments without proper risk assessment and mitigation is also discouraged.
---
## Hardware and Software
### Training Framework
The post-training process was conducted using three complementary frameworks, each selected to best support its corresponding stage:
- Supervised Fine-Tuning (SFT): Conducted with an internal fork of the FastChat codebase, adapted to our infrastructure and optimized for stability and efficiency in our use case.
- Long-Context SFT: Performed using NeMo-Aligner, chosen to ensure compatibility with extended-context training while maintaining consistency with the FastChat-based SFT.
- Alignment Stage: Implemented with the TRL (Transformers Reinforcement Learning) library, applied to preference-pair training to achieve preliminary alignment with human preferences.
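As an illustration of the alignment stage, the snippet below is a minimal, hypothetical sketch of a DPO run with TRL using the hyperparameters from the Alignment table. The dataset path, per-device batch size, and accumulation steps are placeholders, not our exact training configuration, and argument names may vary slightly across TRL versions.
```python
# Hypothetical DPO sketch with TRL; not the exact in-house pipeline.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "BSC-LT/ALIA-40b"  # in practice, the long-context SFT checkpoint is the starting point
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")

# Placeholder preference dataset with "prompt" / "chosen" / "rejected" columns.
preference_data = load_dataset("json", data_files="preference_pairs.jsonl", split="train")

config = DPOConfig(
    output_dir="alia-40b-dpo",
    learning_rate=2e-6,
    num_train_epochs=2,
    lr_scheduler_type="linear",
    per_device_train_batch_size=4,   # combined with accumulation/data parallelism to reach the global batch of 1024
    gradient_accumulation_steps=4,
    bf16=True,
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=preference_data,
    processing_class=tokenizer,  # `tokenizer=` in older TRL releases
)
trainer.train()
```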
### Compute Infrastructure
All models were trained on [MareNostrum 5](https://www.bsc.es/ca/marenostrum/marenostrum-5), a pre-exascale EuroHPC supercomputer hosted and
operated by Barcelona Supercomputing Center.
The accelerated partition is composed of 1,120 nodes with the following specifications:
- 4x NVIDIA Hopper GPUs with 64 GB HBM2 memory
- 2x Intel Sapphire Rapids 8460Y+ at 2.3 GHz with 32 cores each (64 cores total)
- 4x NDR200 links (800 Gb/s bandwidth per node)
- 512 GB of main memory (DDR5)
- 460 GB of NVMe storage
The table below specifies the number of nodes and GPUs employed for each post-training stage:
| Phase | Nodes | GPUs |
| --- | --- | --- |
| Short context SFT | 64 | 256 |
| Long context SFT | 64 | 256 |
| Alignment | 16 | 64 |
---
## How to use
The instruction-following models utilize the widely adopted ChatML template to structure conversational inputs and outputs.
Using this standardized chat format ensures a consistent and enhanced conversational experience. The template can be easily applied through the tokenizer’s built-in functions, as illustrated in the example snippet below:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "BSC-LT/ALIA-40b-instruct"
text = "At what temperature does water boil?"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

message = [{"role": "user", "content": text}]
prompt = tokenizer.apply_chat_template(
    message,
    tokenize=False,
    add_generation_prompt=True,
)

inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---
Using this template, each turn in the conversation is preceded by a `<|im_start|>` delimiter indicating the beginning of a message, followed by the role of the entity
(either `user`, for content supplied by the user, or `assistant` for the model's responses), and finished with the `<|im_end|>` token:
```
<s><|im_start|>user
At what temperature does water boil?<|im_end|>
<|im_start|>assistant
Water turns into vapor at 100°C.<|im_end|>
```
Loading the model with transformers' `AutoModelForCausalLM` ensures that the model's default generation parameters are applied. If you use alternative inference libraries such as vLLM, Ollama, or SGLang, verify that appropriate sampling parameters are set. For optimal results, we recommend **temperatures around 0-0.2** with no repetition penalty applied, as in the sketch below.
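The following is a minimal sketch of a generation call that follows these recommendations, reusing the `model`, `tokenizer`, and `inputs` from the snippet above; the exact values (e.g., `top_p`) are suggestions rather than a fixed configuration shipped with the model.
```python
# Generation with the recommended sampling settings: low temperature, no repetition penalty.
outputs = model.generate(
    input_ids=inputs.to(model.device),
    max_new_tokens=200,
    do_sample=True,
    temperature=0.2,
    top_p=0.95,              # assumption: mild nucleus sampling; adjust as needed
    repetition_penalty=1.0,  # i.e., no repetition penalty
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```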
---
### Instruction Tuning Data
The dataset used in the initial supervised fine-tuning stage consists of 469k conversations, each with a maximum length of 4k tokens. The training mixture combines a selection of permissively licensed datasets (human-authored and synthetic) with a collection of synthetic conversations **curated in-house**.
The synthetic conversations are generated using [DeepSeek-V3-0324](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324), leveraging seed data and prompts from pre-training corpora, as well as other openly available instruction datasets.
The table below provides a detailed breakdown of the datasets included in this mixture, specifying their origin, type, license, and contribution to the overall corpus:
| **Dataset** | **ca** | **en** | **es** | **eu** | **gl** | **pt** | **Total Conversations** |
| --- | --- | --- | --- | --- | --- | --- | --- |
| aya-dataset | | 3941 | 3851 | 939 | | 8995 | 17726 |
| coqcat-train | 4797 | | | | | | 4797 |
| databricks-dolly-15k | | 15007 | | | | | 15007 |
| dolly-ca | 3232 | | | | | | 3232 |
| flores-dev | 986 | 1037 | 1964 | 493 | 505 | | 4985 |
| mentor-ca | 7119 | | | | | | 7119 |
| mentor-es | | | 7122 | | | | 7122 |
| no-robots | | 9477 | | | | | 9477 |
| rag-multilingual | 16043 | 14996 | 11263 | | | | 42302 |
| tower-blocks | | 7762 | 1000 | | | 1000 | 9762 |
| **oasst2_self-identity-rephrase** | 750 | 31001 | 15424 | 190 | 197 | | 47562 |
| **self-identity** | 1900 | 1978 | 1946 | 1927 | 1880 | | 9631 |
| open-r1-math | | 93728 | | | | | 93728 |
| **open-r1-math_translated** | 23432 | | 23432 | 23432 | 11716 | 11716 | 93728 |
| **fineweb-edu_qa** | 23374 | 20803 | 23311 | 22284 | 22307 | | 112079 |
| **Total** | **81633** | **199730** | **89313** | **49265** | **36605** | **21711** | **478257** |
Following the short-context supervised fine-tuning, a second stage was introduced using the remaining 9k short-context samples from our mix, together with 480 long-context samples.
The long-context data was synthetically generated with Salamandra-7B using source texts from FineWebEdu, FineWeb2, and Wikipedia. The length of the examples varies between 16k and 160k tokens. The resulting outputs were subsequently filtered with the same DeepSeek-V3-0324 model to ensure quality and consistency.
The table below summarizes the distribution of instructions by language included in the long-context supervised fine-tuning stage:
| **Language** | **Long Context Instructions** |
| --- | --- |
| en | 153 |
| fr | 71 |
| es | 59 |
| de | 50 |
| it | 41 |
| pt | 34 |
| ca | 30 |
| gl | 23 |
| eu | 19 |
| Total | 480 |
#### Detailed SFT Data Sources:
The following table provides a detailed overview of the supervised fine-tuning data sources, including the dataset name, generation method, license and a brief description of each:
<details>
<summary>SFT Datasets</summary>
<table>
<tr>
<th>Dataset</th>
<th>Generation Method</th>
<th>License</th>
<th>Description</th>
</tr>
<tr>
<td>aya-dataset</td>
<td>Human Crowdsourced</td>
<td>Apache-2.0</td>
<td><a href="https://huggingface.co/datasets/CohereLabs/aya_dataset">aya_dataset</a> for the languages of interest.</td>
</tr>
<tr>
<td>coqcat-train</td>
<td>Human Annotation</td>
<td>CC-BY-NC-ND-4.0</td>
<td><a href="https://huggingface.co/datasets/projecte-aina/CoQCat">CoQCat</a> train split, formatted using conversational templates.</td>
</tr>
<tr>
<td>databricks-dolly-15k</td>
<td>Human Annotation</td>
<td>CC-BY-SA-3.0</td>
<td><a href="https://huggingface.co/datasets/databricks/databricks-dolly-15k">databricks-dolly-15k</a> dataset.</td>
</tr>
<tr>
<td>dolly-ca</td>
<td>Human Translation</td>
<td>CC-BY-SA-3.0</td>
<td><a href="https://huggingface.co/datasets/projecte-aina/dolly3k_ca">dolly3k_ca</a> dataset.</td>
</tr>
<tr>
<td>flores-dev</td>
<td>Human</td>
<td>CC-BY-SA-4.0</td>
<td>Flores-200 dev split, formatted using conversational templates.</td>
</tr>
<tr>
<td>mentor-es</td>
<td>Human Annotation</td>
<td>CC-BY-4.0</td>
<td><a href="https://huggingface.co/datasets/projecte-aina/MentorES">MentorES</a> dataset.</td>
</tr>
<tr>
<td>mentor-ca</td>
<td>Machine Translation</td>
<td>CC-BY-4.0</td>
<td><a href="https://huggingface.co/datasets/projecte-aina/MentorCA">MentorCA</a> dataset. Machine translated version of MentorES.</td>
</tr>
<tr>
<td>no-robots</td>
<td>Human Annotation</td>
<td>CC-BY-NC-4.0</td>
<td><a href="https://huggingface.co/datasets/HuggingFaceH4/no_robots">no_robots</a> dataset.</td>
</tr>
<tr>
<td>rag-multilingual</td>
<td>Synthetic</td>
<td>CC-BY-SA-4.0</td>
<td><a href="https://huggingface.co/datasets/projecte-aina/RAG_Multilingual">RAG_Multilingual</a> dataset. Synthetic QA dataset generated with Mixtral8x7b.</td>
</tr>
<tr>
<td>tower-blocks</td>
<td>Mixture</td>
<td>Various licenses (only open licensed instances are used)</td>
<td><a href="https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2">TowerBlocks-v0.2</a> filtered by subdataset license and the languages of interest.</td>
</tr>
<tr>
<td>oasst2_self-identity-rephrase</td>
<td>Human Crowdsourced / Synthetic</td>
<td>Apache-2.0</td>
<td><a href="https://huggingface.co/datasets/OpenAssistant/oasst2">oasst2</a> dataset for the languages of interest. Subsequently rephrased to adapt the model’s identity information to our case using DeepSeek-V3-0324.</td>
</tr>
<tr>
<td>self-identity</td>
<td>Synthetic</td>
<td>Apache-2.0 (internal)</td>
<td>Conversations involving self-identity information of the model, synthetically curated using DeepSeek-V3-0324.</td>
</tr>
<tr>
<td>open-r1-math</td>
<td>Synthetic</td>
<td>Apache-2.0</td>
<td>Default 93k split of the <a href="https://huggingface.co/datasets/open-r1/OpenR1-Math-220k">OpenR1-Math-220k</a> dataset.</td>
</tr>
<tr>
<td>open-r1-math_translated</td>
<td>Synthetic</td>
<td>Apache-2.0 (internal)</td>
<td>OpenR1-Math-220k default split translated to the languages of interest with DeepSeek-V3-0324.</td>
</tr>
<tr>
<td>fineweb-edu_qa</td>
<td>Synthetic</td>
<td>Apache-2.0 (internal)</td>
<td>QA conversations created by prompting DeepSeek-V3-0324 with the highest quality documents of <a href="https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu">FineWeb-Edu</a>. Subsequently filtered with the same model to ensure self-contained question-answering pairs meet quality thresholds.</td>
</tr>
</table>
<p><em>All externally sourced datasets have undergone a sanity check using shallow rule-based filtering to discard incorrect or low-quality samples and ensure conversational quality.</em></p>
</details>
### Alignment Data
The alignment data was synthetically generated from a corpus of approximately 403k prompts designed to improve both helpfulness and safety.
- **Helpfulness**: Prompts include instruction following, mathematics, question answering, and reasoning tasks across Catalan, Spanish, English, Basque, and Galician. Additionally, M-Personas conversations, a resource generated specifically for this project, were incorporated and will also be released.
- **Safety**: Prompts were synthetically generated from seed prompts written by human annotators, covering nine harm categories to ensure broad coverage of safety-related scenarios.
Following approaches similar to UltraFeedback and PKU, each instruction underwent the following process:
1. Multiple responses were produced using a pool of permissively licensed models (see [Model Pool](#model-pool-for-synthetic-data-generation)), with the pool chosen according to whether the prompt targeted helpfulness or safety.
2. These responses were rated by a judge model (DeepSeek-V3-0324). Helpfulness responses received an overall rating, while safety responses were scored according to their severity across a list of harm categories.
3. Preference pairs were constructed from these ratings. This phase should be considered preliminary, as future versions of the model will incorporate human annotators to refine and curate the generation and evaluation pipeline.
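To make step 3 concrete, the snippet below is a simplified, hypothetical sketch of how a preference pair could be assembled from judge ratings; tie-breaking and sampling strategies in the actual pipeline may differ.
```python
# Simplified sketch: build a DPO-style preference pair from judge-rated responses.
from typing import Dict, List

def build_preference_pair(prompt: str, rated: List[Dict]) -> Dict:
    """rated: [{"response": str, "score": float}, ...] as returned by the judge model."""
    ranked = sorted(rated, key=lambda r: r["score"], reverse=True)
    best, worst = ranked[0], ranked[-1]
    if best["score"] == worst["score"]:
        return {}  # skip prompts where the judge cannot distinguish the responses
    return {"prompt": prompt, "chosen": best["response"], "rejected": worst["response"]}

pair = build_preference_pair(
    "Explain why the sky is blue.",
    [{"response": "Because of Rayleigh scattering of sunlight ...", "score": 5.0},
     {"response": "The sky is blue because it is blue.", "score": 1.0}],
)
```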
The table below presents the distribution of helpfulness prompts by language, detailing the number of examples contributed from each language:
| dataset | ca | en | es | eu | gl | Total |
| --- | --- | --- | --- | --- | --- | --- |
| aya | 0 | 2586 | 3019 | 902 | 0 | 6507 |
| coqcat | 4448 | 0 | 0 | 0 | 0 | 4448 |
| dolly | 0 | 9925 | 0 | 0 | 0 | 9925 |
| dolly-ca | 2971 | 0 | 0 | 0 | 0 | 2971 |
| flores-dev | 1219 | 589 | 1786 | 357 | 457 | 4408 |
| identity | 2924 | 20120 | 15720 | 2396 | 2276 | 43436 |
| m-personas | 2674 | 1215 | 2852 | 2791 | 2530 | 12062 |
| mentor-ca | 6517 | 0 | 0 | 0 | 0 | 6517 |
| mentor-es | 0 | 0 | 6007 | 0 | 0 | 6007 |
| open-orca | 0 | 15528 | 0 | 0 | 0 | 15528 |
| no-robots | 0 | 5913 | 0 | 0 | 0 | 5913 |
| oasst-ca | 2195 | 0 | 0 | 0 | 0 | 2195 |
| open-math | 0 | 99995 | 0 | 0 | 0 | 99995 |
| persona-generic | 8849 | 0 | 9464 | 8899 | 8588 | 35800 |
| persona-reasoning | 8721 | 0 | 9501 | 8977 | 8474 | 35673 |
| rag-multilingual | 15072 | 10003 | 9955 | 0 | 0 | 35030 |
| tower-blocks | 0 | 4126 | 692 | 0 | 0 | 4818 |
| **Total** | 55590 | 170000 | 58996 | 24322 | 22325 | **331233** |
The following table summarizes the safety prompts included in the alignment dataset by language and number of instances, covering the nine harm categories:
| **Language** | Instances |
| --- | --- |
| ca | 21074 |
| es | 20888 |
| en | 6370 |
| eu | 13459 |
| gl | 9951 |
#### Model Pool for Synthetic Data Generation
In the table below, we list the permissively licensed models that were used to generate the synthetic datasets for alignment:
<details>
<summary>Model Pool</summary>
<table>
<tr>
<th>Family</th>
<th>Model Name</th>
<th>Size (B)</th>
<th>Variant</th>
<th>License</th>
</tr>
<tr>
<td>EuroLLM</td>
<td>EuroLLM_9B_Instruct</td>
<td>9</td>
<td>instructed</td>
<td>Apache 2.0</td>
</tr>
<tr>
<td>Deepseek</td>
<td>DeepSeek-V3-0324</td>
<td>685</td>
<td>aligned</td>
<td>MIT</td>
</tr>
<tr>
<td>Qwen</td>
<td>Qwen3-235B-A22B</td>
<td>235</td>
<td>aligned</td>
<td>Apache 2.0</td>
</tr>
<tr>
<td></td>
<td>Qwen3-30B-A3B</td>
<td>30</td>
<td>aligned</td>
<td>Apache 2.0</td>
</tr>
<tr>
<td></td>
<td>Qwen3-32B</td>
<td>32</td>
<td>aligned</td>
<td>Apache 2.0</td>
</tr>
<tr>
<td></td>
<td>Qwen3-14B</td>
<td>14</td>
<td>aligned</td>
<td>Apache 2.0</td>
</tr>
<tr>
<td></td>
<td>Qwen3-8B</td>
<td>8</td>
<td>aligned</td>
<td>Apache 2.0</td>
</tr>
<tr>
<td>Mistral</td>
<td>Mixtral-8x7B-Instruct-v0.1</td>
<td>56</td>
<td>aligned</td>
<td>Apache 2.0</td>
</tr>
<tr>
<td></td>
<td>Mistral-7B-Instruct-v0.3</td>
<td>7</td>
<td>aligned</td>
<td>Apache 2.0</td>
</tr>
<tr>
<td></td>
<td>Mistral-Small-24B-Instruct-2501</td>
<td>24</td>
<td>aligned</td>
<td>Apache 2.0</td>
</tr>
<tr>
<td></td>
<td>Mistral-Nemo-Instruct-2407</td>
<td>12</td>
<td>instructed</td>
<td>Apache 2.0</td>
</tr>
<tr>
<td>OLMO</td>
<td>OLMo-2-0325-32B-SFT</td>
<td>32</td>
<td>instructed</td>
<td>Apache 2.0</td>
</tr>
<tr>
<td></td>
<td>OLMo-2-1124-13B-SFT</td>
<td>13</td>
<td>instructed</td>
<td>Apache 2.0</td>
</tr>
<tr>
<td></td>
<td>OLMo-2-1124-7B-SFT</td>
<td>7</td>
<td>instructed</td>
<td>Apache 2.0</td>
</tr>
<tr>
<td>FLOR_BSC</td>
<td>Aitana_6_3B_BSC_Instructed</td>
<td>6.3</td>
<td>instructed</td>
<td>Apache 2.0</td>
</tr>
<tr>
<td></td>
<td>Flor_6_3B_Instruct</td>
<td>6.3</td>
<td>instructed</td>
<td>Apache 2.0</td>
</tr>
<tr>
<td>Salamandra</td>
<td>Salamandra-40b_pre-1.0_sft-1.0_hh_rlhf_ali</td>
<td>40</td>
<td>instructed</td>
<td>Apache 2.0</td>
</tr>
<tr>
<td></td>
<td>Salamandra-40b_pre-1.0_sft-1.0_hh_rlhf_tox</td>
<td>40</td>
<td>instructed</td>
<td>Apache 2.0</td>
</tr>
<tr>
<td></td>
<td>Salamandra-2b_pre-1.2_sft-1.0_hh_rlhf_ali</td>
<td>2</td>
<td>instructed</td>
<td>Apache 2.0</td>
</tr>
<tr>
<td></td>
<td>Salamandra-7b_pre-1.2_sft-1.0_hh_rlhf_ali</td>
<td>7</td>
<td>instructed</td>
<td>Apache 2.0</td>
</tr>
<tr>
<td></td>
<td>Salamandra-2b_pre-1.2_sft-1.0_hh_rlhf_tox</td>
<td>2</td>
<td>instructed</td>
<td>Apache 2.0</td>
</tr>
<tr>
<td></td>
<td>Salamandra-7b_pre-1.2_sft-1.0_hh_rlhf_tox</td>
<td>7</td>
<td>instructed</td>
<td>Apache 2.0</td>
</tr>
</table>
</details>
## Evaluation
### Gold-standard benchmarks
Evaluation is done using the Language Model Evaluation Harness (Gao et al., 2024). We evaluate on a set of tasks taken from [SpanishBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/spanish_bench), [CatalanBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/catalan_bench), [BasqueBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/basque_bench) and [GalicianBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/galician_bench), as well as existing English tasks available in the LM Evaluation Harness. These benchmarks include both new and existing tasks and datasets. The tables below report results for a representative selection of evaluation datasets, capturing the model's performance across a variety of tasks within these benchmarks.
Only tasks that are human-generated, human-translated, or involve a strong human-in-the-loop process (i.e., machine translation followed by professional revision, or machine generation followed by human revision and annotation) were used. This explains the variation in the number of tasks reported across languages. As additional high-quality tasks are published, we will update the evaluation results accordingly. We also plan to expand evaluation to other languages, provided that the datasets meet our quality standards.
During the implementation of the evaluation we observed a series of issues worth considering when replicating and interpreting the results presented. These include variances of roughly 1.5% in performance on some tasks, depending on the version of the `transformers` library used and on whether tensor parallelism is used when loading the model. When implementing existing tasks, we carry out a comprehensive quality evaluation of the dataset, the Harness task itself, and the kind of input models see during evaluation. Our implementation (see links above) addresses multiple existing problems, such as errors in datasets and prompts and a lack of pre-processing. As a result, figures will differ if other Harness implementations are used and may vary slightly depending on the replication setup.
It should be noted that these results are subject to all the drawbacks of every current gold-standard evaluation, and that the figures do not fully represent the model's capabilities and potential. We thus advise caution when reading and interpreting the results.
All results reported below correspond to a 0-shot evaluation setting.
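For replication, a 0-shot run of some of the reported tasks can be launched through the harness Python API, as in the sketch below; task names follow the benchmark links above, multi-GPU settings are omitted, and, as noted, results may vary slightly with the harness and `transformers` versions.
```python
# Illustrative 0-shot evaluation with the LM Evaluation Harness (recent versions).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=BSC-LT/ALIA-40b-instruct,dtype=bfloat16",
    tasks=["xstorycloze_es", "copa_ca"],
    num_fewshot=0,
    batch_size="auto",
)
print(results["results"])
```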
### Spanish
| Category | Task | Metric | Result |
| --- | --- | --- | --- |
| Commonsense Reasoning | xstorycloze_es | acc | 70.9 |
| | copa_es | acc | 82.8 |
| Math | mgsm_direct_es | exact_match | 29.2 |
| Paraphrasing | paws_es | acc | 63.4 |
| QA | xquad_es | f1 | 44.7 |
| | openbookqa_es | acc | 38.8 |
| Reading Comprehension | belebele_spa_Latn | acc | 81.7 |
| Translation | flores_es | bleu | 23.9 |
### Catalan
| Category | Task | Metric | Result |
| --- | --- | --- | --- |
| Commonsense Reasoning | xstorycloze_ca | acc | 72.0 |
| | copa_ca | acc | 82.8 |
| Math | mgsm_direct_ca | exact_match | 27.6 |
| Paraphrasing | paws_ca | acc | 68.5 |
| | parafraseja | acc | 65.2 |
| QA | arc_ca_challenge | acc | 46.2 |
| | arc_ca_easy | acc | 73.2 |
| | catalanqa | f1 | 55.2 |
| | coqcat | f1 | 29.3 |
| | xquad_ca | f1 | 55.2 |
| | openbookqa_ca | acc | 40.0 |
| | piqa_ca | acc | 74.8 |
| | siqa_ca | acc | 50.6 |
| Reading Comprehension | belebele_cat_Latn | acc | 81.2 |
| Translation | flores_ca | bleu | 30.97 |
### Basque
| Category | Task | Metric | Result |
| --- | --- | --- | --- |
| Commonsense Reasoning | xstorycloze_eu | acc | 66.2 |
| | xcopa_eu | acc | 67.4 |
| Math | mgsm_direct_eu | exact_match | 11.6 |
| QA | arc_eu_challenge | acc | 39.2 |
| | arc_eu_easy | acc | 60.7 |
| | eus_exams | acc | 52.5 |
| | eus_proficiency | acc | 47.9 |
| | eus_trivia | acc | 63.3 |
| | piqa_eu | acc | 68.7 |
| Reading Comprehension | belebele_eus_Latn | acc | 79.2 |
| | eus_reading | acc | 63.6 |
| Translation | flores_eu | bleu | 18.26 |
### Galician
| Category | Task | Metric | Result |
| --- | --- | --- | --- |
| Commonsense Reasoning | xstorycloze_gl | acc | 72.0 |
| Math | mgsm_direct_gl | exact_match | 26.0 |
| Paraphrasing | parafrases_gl | acc | 57.5 |
| | paws_gl | acc | 65.4 |
| QA | openbookqa_gl | acc | 36.6 |
| Reading Comprehension | belebele_glg_Latn | acc | 81.2 |
| Translation | flores_gl | bleu | 28.21 |
### English
| Category | Task | Metric | Result |
| --- | --- | --- | --- |
| Commonsense Reasoning | copa | acc | 90.0 |
| | xstorycloze_en | acc | 76.8 |
| Math | mgsm_direct_en | exact_match | 40.0 |
| NLI | wnli_en | acc | 60.6 |
| | xnli_en | acc | 48.6 |
| | hellaswag | acc | 59.1 |
| Paraphrasing | paws_en | acc | 65.6 |
| QA | arc_easy | acc | 78.2 |
| | arc_challenge | acc | 51.9 |
| | openbookqa_en | acc | 37.4 |
| | piqa_en | acc | 80.1 |
| | social_iqa | acc | 51.2 |
| | xquad_en | f1 | 54.2 |
The current LM Evaluation Harness implementation of these tasks lacks correct pre-processing; the results above were obtained with adequate pre-processing applied.
### LLM-as-a-judge
We use [Prometheus-2 8x7B](https://huggingface.co/prometheus-eval/prometheus-8x7b-v2.0) as a judge to evaluate the responses of the model. Tasks are created from existing multilingual evaluation datasets covering the same categories as the ones measured in our gold-standard benchmarks. We randomly select a subset of 250 instances per language from the `test` set of each source dataset. To evaluate the responses of our model, we use task-specific criteria developed in-house for the _LLM-judge_ to use. Each criterion is measured either as a 5-point Likert scale or as a binary task depending on the idiosyncrasy of the task and criterion.
Prompts for each task are created in various ways to score the model's robustness in addition to these criteria. This is done by presenting the same source instance within three different prompts. We then calculate the variance between the scores assigned by the _LLM-judge_ to our model's responses to the three prompt styles and average it across all instances. Prompts are human translated to all languages measured. We do not provide the _LLM-judge_ with a reference answer.
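The sketch below illustrates this robustness computation with a population-variance estimator; the exact estimator used in-house may differ.
```python
# Sketch of the robustness score: average per-instance variance across three prompt styles.
from statistics import mean, pvariance

def robustness(scores_per_instance):
    """scores_per_instance: one [s1, s2, s3] triple of judge scores per instance."""
    return mean(pvariance(triple) for triple in scores_per_instance)

print(robustness([[4, 4, 5], [3, 5, 3], [4, 4, 4]]))  # closer to 0 = more robust
```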
The _judge_ prompt we use during evaluation is the same one used to fine-tune the Prometheus-2 family. The _judge_ prompt and the criteria used to present the _LLM-judge_ with the task prompts and model responses are kept in English for evaluation across all languages. The _judge_ prompt used is:
```python
"You are a fair judge assistant tasked with providing clear, objective feedback based on specific criteria, ensuring each assessment reflects the absolute standards set for performance.
###Task Description:
An instruction (might include an Input inside it), a response to evaluate, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between {a} and {b}. You should refer to the score rubric.
3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between {a} and {b})\"
4. Please do not generate any other opening, closing, and explanations.
###The instruction to evaluate:
{input}
###Response to evaluate:
{prediction}
###Score Rubrics:
{criteria}
###Feedback:"
```
As an example, prompts for the Math task in English are based on instances from [MGSM](https://huggingface.co/datasets/juletxara/mgsm), and each instance is presented within these prompts:
```python
"en": [
("I need help with this math problem: \"", "\" Give me the answer step by step and also the final result separately."),
("Can you please help me answer this? \"", "\" Explain the answer and give me the final result as well. Thanks."),
("Help me with this problem: \"", "\" I need the answer explained and the final result separately.")
]
```
This task is then evaluated by the _LLM-judge_ using two criteria, reasoning capability (5-point Likert) and mathematical correctness (binary):
```python
reasoning_capability_criteria = {
"reasoning_capability": """
[Does the model's answer demonstrate reasoning capability?]
Score 1: The answer demonstrates poor reasoning, with illogical arguments or conclusions that do not follow from the provided information.
Score 2: The answer shows weak reasoning, with some logical connections but also contains significant flaws or gaps in the argumentation.
Score 3: The answer demonstrates adequate reasoning, with generally logical arguments, but may have minor flaws or a lack of depth in the reasoning process.
Score 4: The answer shows strong reasoning, with well-structured arguments and conclusions that logically follow from the information provided.
Score 5: The answer demonstrates exceptional reasoning, with clear, coherent, and insightful arguments that are logically sound and well-supported by the information provided."""
}
mathematical_correctness_binary_criteria = {
"mathematical_correctness_binary": """
[Is the model's answer mathematically correct?]
Score 0: The answer contains mathematical errors that render the solution incorrect or unreliable.
Score 1: The answer is mathematically correct, with accurate calculations and appropriate use of mathematical concepts."""
}
```
#### Multilingual results
Here, we present results for seven categories of tasks in Spanish, Catalan, Basque, Galician, and English. Results are presented for each task, criterion and language.
Criteria with a `(B)` after their name are binary criteria (i.e., numbers go from 0 to 1, where 1 is best).
The rest of the criteria are measured using a 5-point Likert scale, where 5 is best.
The first number of the pair of numbers separated by `/` shows the average score for the criterion (and language).
The second number of each pair is the robustness score, where numbers closer to 0 mean that the model generates similar responses when comparing the three prompt varieties
for a single instance.
<table class="tg"><thead>
<tr>
<th class="tg-0pky"><span style="font-weight:bold">Category</span></th>
<th class="tg-0pky"><span style="font-weight:bold">Dataset</span></th>
<th class="tg-0pky"><span style="font-weight:bold">Criteria</span></th>
<th class="tg-0pky"><span style="font-weight:bold">es</span></th>
<th class="tg-0pky"><span style="font-weight:bold">ca</span></th>
<th class="tg-0pky"><span style="font-weight:bold">gl</span></th>
<th class="tg-0pky"><span style="font-weight:bold">eu</span></th>
<th class="tg-0pky"><span style="font-weight:bold">en</span></th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-0pky">Commonsense Reasoning</td>
<td class="tg-0pky">XStoryCloze</td>
<td class="tg-0pky">Ending Coherence</td>
<td class="tg-0pky">3.17/0.65</td>
<td class="tg-0pky">3.19/0.46</td>
<td class="tg-0pky">2.87/0.57</td>
<td class="tg-0pky">1.94/0.49</td>
<td class="tg-0pky">3.62/0.51</td>
</tr>
<tr>
<td class="tg-0pky" rowspan="3">Paraphrasing</td>
<td class="tg-0pky" rowspan="3">PAWS</td>
<td class="tg-0pky">Completeness (B)</td>
<td class="tg-0pky">0.78/0.10</td>
<td class="tg-0pky">0.66/0.14</td>
<td class="tg-0pky">0.73/0.13</td>
<td class="tg-0pky">0.53/0.14</td>
<td class="tg-0pky">0.76/0.11</td>
</tr>
<tr>
<td class="tg-0pky">Paraphrase Generation</td>
<td class="tg-0pky">3.52/0.76</td>
<td class="tg-0pky">3.31/0.89</td>
<td class="tg-0pky">3.28/0.85</td>
<td class="tg-0pky">2.78/0.95</td>
<td class="tg-0pky">3.53/0.61</td>
</tr>
<tr>
<td class="tg-0pky">Grammatical Correctness (B)</td>
<td class="tg-0pky">0.88/0.06</td>
<td class="tg-0pky">0.83/0.09</td>
<td class="tg-0pky">0.84/0.08</td>
<td class="tg-0pky">0.75/0.12</td>
<td class="tg-0pky">0.90/0.06</td>
</tr>
<tr>
<td class="tg-0pky" rowspan="2">Reading Comprehension</td>
<td class="tg-0pky" rowspan="2">Belebele</td>
<td class="tg-0pky">Answer Relevance (B)</td>
<td class="tg-0pky">0.82/0.06</td>
<td class="tg-0pky">0.83/0.06</td>
<td class="tg-0pky">0.80/0.07</td>
<td class="tg-0pky">0.65/0.11</td>
<td class="tg-0pky">0.82/0.07</td>
</tr>
<tr>
<td class="tg-0pky">Passage Comprehension</td>
<td class="tg-0pky">3.25/0.44</td>
<td class="tg-0pky">3.25/0.45</td>
<td class="tg-0pky">3.15/0.59</td>
<td class="tg-0pky">2.52/0.46</td>
<td class="tg-0pky">3.27/0.45</td>
</tr>
<tr>
<td class="tg-0pky" rowspan="2">Extreme Summarization</td>
<td class="tg-0pky" rowspan="2">XLSum & caBreu & summarization_gl</td>
<td class="tg-0pky">Informativeness</td>
<td class="tg-0pky">3.49/0.23</td>
<td class="tg-0pky">3.60/0.18</td>
<td class="tg-0pky">3.52/0.19</td>
<td class="tg-0pky">--/--</td>
<td class="tg-0pky">3.37/0.26</td>
</tr>
<tr>
<td class="tg-0pky">Conciseness</td>
<td class="tg-0pky">3.28/0.20</td>
<td class="tg-0pky">3.28/0.20</td>
<td class="tg-0pky">3.39/0.19</td>
<td class="tg-0pky">--/--</td>
<td class="tg-0pky">3.37/0.22</td>
</tr>
<tr>
<td class="tg-0pky" rowspan="2">Mathematics</td>
<td class="tg-0pky" rowspan="2">mgsm</td>
<td class="tg-0pky">Mathematical Correctness (B)</td>
<td class="tg-0pky">0.81/0.10</td>
<td class="tg-0pky">0.83/0.09</td>
<td class="tg-0pky">0.82/0.09</td>
<td class="tg-0pky">0.90/0.06</td>
<td class="tg-0pky">0.76/0.11</td>
</tr>
<tr>
<td class="tg-0pky">Reasoning Capability</td>
<td class="tg-0pky">3.74/0.72</td>
<td class="tg-0pky">3.69/0.54</td>
<td class="tg-0pky">3.61/0.56</td>
<td class="tg-0pky">3.46/0.39</td>
<td class="tg-0pky">3.48/0.77</td>
</tr>
<tr>
<td class="tg-0pky" rowspan="2">Translation from Language</td>
<td class="tg-0pky" rowspan="2">FLoRes</td>
<td class="tg-0pky">Accuracy</td>
<td class="tg-0pky">4.00/0.16</td>
<td class="tg-0pky">4.13/0.16</td>
<td class="tg-0pky">4.09/0.15</td>
<td class="tg-0pky">3.73/0.20</td>
<td class="tg-0pky">4.21/0.17</td>
</tr>
<tr>
<td class="tg-0pky">Fluency</td>
<td class="tg-0pky">3.67/0.13</td>
<td class="tg-0pky">3.76/0.12</td>
<td class="tg-0pky">3.83/0.11</td>
<td class="tg-0pky">3.37/0.14</td>
<td class="tg-0pky">3.79/0.13</td>
</tr>
<tr>
<td class="tg-0pky" rowspan="2">Translation to Language</td>
<td class="tg-0pky" rowspan="2">FLoRes</td>
<td class="tg-0pky">Accuracy</td>
<td class="tg-0pky">4.06/0.19</td>
<td class="tg-0pky">4.06/0.14</td>
<td class="tg-0pky">3.88/0.16</td>
<td class="tg-0pky">3.61/0.19</td>
<td class="tg-0pky">4.47/0.16</td>
</tr>
<tr>
<td class="tg-0pky">Fluency</td>
<td class="tg-0pky">3.78/0.13</td>
<td class="tg-0pky">3.74/0.12</td>
<td class="tg-0pky">3.50/0.15</td>
<td class="tg-0pky">3.18/0.11</td>
<td class="tg-0pky">4.06/0.13</td>
</tr>
</tbody>
</table>
### Long Context Evaluation
To assess the long-context capabilities of our model, we performed a "needle in a haystack" test with the following configuration:
- **Needle Phrase**: *"The best thing to do in San Francisco is eat a sandwich and sit in Dolores Park on a sunny day."*
- **System Prompt:** *“You are a helpful AI bot that answers questions for a user. Keep your response short and direct”*
- **Retrieval Question**: *"What is the best thing to do in San Francisco?"*
- **Evaluator**: [prometheus-8x7b-v2.0](https://huggingface.co/prometheus-eval/prometheus-8x7b-v2.0), used as the evaluation judge to determine whether the model correctly retrieved and utilized the long-context information.
This test specifically targets the model’s ability to retain and access information across very long sequences, providing a benchmark for evaluating its extended-context reasoning and retrieval performance.
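For illustration, the sketch below shows one way such a prompt can be assembled: the needle sentence is inserted at a chosen depth inside filler text and the retrieval question is appended as the user turn. The filler source and depth handling are placeholders, not the exact harness used internally.
```python
# Illustrative needle-in-a-haystack prompt construction.
NEEDLE = ("The best thing to do in San Francisco is eat a sandwich "
          "and sit in Dolores Park on a sunny day.")
QUESTION = "What is the best thing to do in San Francisco?"
SYSTEM = "You are a helpful AI bot that answers questions for a user. Keep your response short and direct"

def build_haystack_messages(filler_sentences, depth_ratio=0.5):
    """Insert the needle at `depth_ratio` of the filler text and append the question."""
    cut = int(len(filler_sentences) * depth_ratio)
    context = " ".join(filler_sentences[:cut] + [NEEDLE] + filler_sentences[cut:])
    return [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": f"{context}\n\n{QUESTION}"},
    ]

messages = build_haystack_messages(["Some filler sentence."] * 5000, depth_ratio=0.25)
# `messages` can then be passed through tokenizer.apply_chat_template as in the usage example above.
```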

It is important to note that strong performance in the "needle in a haystack" test does not guarantee retention of short-context performance across larger tasks. This evaluation is therefore limited in scope. We are actively working on developing more robust metrics and evaluation protocols to further enhance the model’s long-context capabilities.
---
## Ethical Considerations and Limitations
The ALIA-40b-instruct model is an instruction-tuned variant with preliminary alignment. It has several limitations that users should be aware of. Ongoing work is addressing these areas, including comprehensive evaluation of societal and cognitive biases as well as safety.
### Functional Limitations:
- No Function Calling: The model cannot natively execute or call external functions/APIs. Tasks requiring plugin calls or tool execution must be implemented outside the model.
- Reasoning & Math: The model is not guaranteed to perform robust chain-of-thought reasoning or advanced mathematics. Complex logical puzzles or multi-step inferences may fail or produce inconsistent answers.
- Code Generation: Although exposed to code during pretraining, ALIA-40b-Instruct is not a specialized code-generation model. It may produce code-like text, but outputs should be verified and tested before use in production codebases.
- Agentive Capabilities: The model does not have agentive or autonomous action capabilities. It cannot act as an autonomous agent or execute multi-step workflows.
### Bias and Harm:
We examine the presence of undesired social biases by measuring the performance and bias scores on the [BBQ](https://huggingface.co/datasets/heegyu/bbq) dataset (Parrish et al., 2022) as well as on their adaptations to the Spanish and Catalan contexts ([EsBBQ](https://huggingface.co/datasets/BSC-LT/EsBBQ) and [CaBBQ](https://huggingface.co/datasets/BSC-LT/CaBBQ), Ruiz-Fernández et al., 2025). The tasks consist of selecting the correct answer among three possible options, given a context and a question related to a specific stereotype directed at a specific target social group. We measure the model’s accuracy on the QA task as well as the bias score, which quantifies the degree to which the model systematically relies on social biases to answer the questions. Note that the bias scores are calculated using the metric originally defined for each respective benchmark.
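For reference, the sketch below reproduces the bias score as originally defined for BBQ (Parrish et al., 2022); EsBBQ and CaBBQ apply the variants defined in their respective papers, so this is only an approximation of how the reported numbers are obtained.
```python
# BBQ-style bias scores (original BBQ definition); EsBBQ/CaBBQ use their own variants.
def bbq_bias_scores(n_biased: int, n_non_unknown: int, accuracy_ambiguous: float):
    """n_biased and n_non_unknown count the model's non-UNKNOWN answers."""
    s_dis = 2 * (n_biased / n_non_unknown) - 1   # bias score in disambiguated contexts
    s_amb = (1 - accuracy_ambiguous) * s_dis     # bias score in ambiguous contexts
    return s_dis, s_amb
```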
Performance is high in disambiguated settings, where the correct answer to the question can be easily gleaned from the context. However, the model tends to fail to choose the correct answer in ambiguous settings, where the correct answer is not provided in the context. Note that the range for the bias score is between -1 and 1; here, all bias scores are positive, which indicates a strong reliance on social biases to solve the task. This reveals that the model may reflect biases present in its training data and may produce stereotyped, offensive, or harmful content, particularly regarding gender, ethnicity, nationality, and other protected attributes.
| **Task** | **Accuracy (Ambiguous)** | **Bias Score (Ambiguous)** | **Accuracy (Disambiguated)** | **Bias Score (Disambiguated)** |
| --- | --- | --- | --- | --- |
| **BBQ** | 0.08 | 0.16 | 0.90 | 0.02 |
| **EsBBQ** | 0.02 | 0.26 | 0.96 | 0.03 |
| **CaBBQ** | 0.01 | 0.26 | 0.95 | 0.07 |
We highlight that our evaluation of these biases is by no means exhaustive and is limited by the relative scarcity of adequate resources in all languages present in the training data. We aim to gradually extend and expand our analyses.
### Safety and Alignment:
The current alignment is preliminary and does not guarantee robust safety in all scenarios. The model may still follow malicious instructions or generate disallowed content if prompted. To evaluate the model’s vulnerabilities, we conducted a red-teaming assessment using three adversarial prompt datasets: [AYA RT](https://huggingface.co/datasets/CohereLabs/aya_redteaming) (Aakanksha et al., 2024), [HH-RLHF RT](https://huggingface.co/datasets/Anthropic/hh-rlhf) (Ganguli et al., 2022) and [M-ADV-Bench](https://huggingface.co/datasets/simonycl/multilingual_advbench) (Yong et al., 2023), with [Llama Guard 3](https://huggingface.co/meta-llama/Llama-Guard-3-8B) (Grattafiori et al., 2024) serving as the moderator model. The evaluation was carried out in English, Spanish and Catalan, using NLLB translation when necessary, and yielded an average attack success rate of 16.4%.
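The moderation step can be reproduced roughly as in the sketch below, which scores a single prompt-response pair with Llama Guard 3 (a gated model); the prompt, generation settings, and verdict parsing are illustrative.
```python
# Illustrative moderation of one response with Llama Guard 3 (gated model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

guard_id = "meta-llama/Llama-Guard-3-8B"
guard_tok = AutoTokenizer.from_pretrained(guard_id)
guard = AutoModelForCausalLM.from_pretrained(guard_id, torch_dtype=torch.bfloat16, device_map="auto")

conversation = [
    {"role": "user", "content": "Adversarial prompt from the red-teaming set."},
    {"role": "assistant", "content": "Model response under evaluation."},
]
input_ids = guard_tok.apply_chat_template(conversation, return_tensors="pt").to(guard.device)
output = guard.generate(input_ids=input_ids, max_new_tokens=30)
verdict = guard_tok.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(verdict)  # expected to start with "safe" or "unsafe" followed by a hazard category
```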
Additional filtering, human oversight, and alignment steps are essential. We are actively working to improve and assess the model’s safety, including human annotation and evaluation, as well as the development of multilingual safety datasets. A comprehensive report will be provided in subsequent updates.
### Recommendations:
Developers should implement additional safety filters, human oversight, targeted evaluation suites, and secondary evaluation models when deploying this model. Do not deploy ALIA-40b-Instruct in critical applications without extensive testing and mitigation. Users are responsible for assessing and mitigating harmful behavior or misinformation resulting from model outputs, and ensuring compliance with applicable regulations, including those governing the use of Artificial Intelligence.
---
## Additional information
### Author
The Language Technologies Lab from Barcelona Supercomputing Center.
### Contact
For further information, please send an email to <[email protected]>.
### Copyright
Copyright(c) 2025 by Language Technologies Lab, Barcelona Supercomputing Center.
### Funding
This work is funded by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the project Modelos del Lenguaje.
This work has been promoted and supported by the Government of Catalonia through the Aina Project.
### Acknowledgements
This project has benefited from the contributions of numerous teams and institutions, mainly through data contributions, knowledge transfer or technical support.
We are especially grateful to our ILENIA project partners: CENID, HiTZ and CiTIUS for their participation. We also extend our genuine gratitude to the Spanish Senate and Congress, Fundación Dialnet, and the ‘Instituto Universitario de Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería (SIANI)’ of the University of Las Palmas de Gran Canaria. Many other institutions have been involved in the project. Our thanks to Òmnium Cultural, Parlament de Catalunya, Institut d'Estudis Aranesos, Racó Català, Vilaweb, ACN, Nació Digital, El món and Aquí Berguedà. We thank the Welsh government, DFKI, Occiglot project, especially Malte Ostendorff, and The Common Crawl Foundation, especially Pedro Ortiz, for their collaboration.
We would also like to give special thanks to the NVIDIA team, with whom we have met regularly, especially to: Ignacio Sarasua, Adam Henryk Grzywaczewski, Oleg Sudakov, Sergio Perez, Miguel Martinez, Felipe Soares and Meriem Bendris. Their constant support has been especially appreciated throughout the entire process.
Their valuable efforts have been instrumental in the development of this work.
### Disclaimer
Be aware that the model may contain biases or other unintended distortions.
When third parties deploy systems or provide services based on this model, or use the model themselves,
they bear the responsibility for mitigating any associated risks and ensuring compliance with applicable regulations,
including those governing the use of Artificial Intelligence.
The Barcelona Supercomputing Center, as the owner and creator of the model, shall not be held liable for any outcomes resulting from third-party use.
### Citation
```
@misc{gonzalezagirre2025salamandratechnicalreport,
title={Salamandra Technical Report},
author={Aitor Gonzalez-Agirre and Marc Pàmies and Joan Llop and Irene Baucells and Severino Da Dalt and Daniel Tamayo and José Javier Saiz and Ferran Espuña and Jaume Prats and Javier Aula-Blasco and Mario Mina and Adrián Rubio and Alexander Shvets and Anna Sallés and Iñaki Lacunza and Iñigo Pikabea and Jorge Palomar and Júlia Falcão and Lucía Tormo and Luis Vasquez-Reina and Montserrat Marimon and Valle Ruíz-Fernández and Marta Villegas},
year={2025},
eprint={2502.08489},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.08489},
}
```
### License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Model Index
|Model|Base|Instruct|
|:---:|:---:|:---:|
|2b| [Link](https://huggingface.co/BSC-LT/salamandra-2b) | [Link](https://huggingface.co/BSC-LT/salamandra-2b-instruct) |
|7b| [Link](https://huggingface.co/BSC-LT/salamandra-7b) | [Link](https://huggingface.co/BSC-LT/salamandra-7b-instruct) |
|40b| [Link](https://huggingface.co/BSC-LT/ALIA-40b) | [Link](https://huggingface.co/BSC-LT/ALIA-40b-instruct) |