---
language:
- en
tags:
- text2text-generation
- flan-t5
- lora
- peft
- hallucination
- qa
license: mit
datasets:
- Pravesh390/qa_wrong_data
library_name: transformers
pipeline_tag: text2text-generation
model-index:
- name: flan-t5-finetuned-wrongqa
  results:
  - task:
      name: Text2Text Generation
      type: text2text-generation
    dataset:
      name: qa_wrong_data
      type: Pravesh390/qa_wrong_data
    metrics:
    - name: BLEU
      type: bleu
      value: 18.2
    - name: ROUGE-L
      type: rouge
      value: 24.7
---
# flan-t5-finetuned-wrongqa
`flan-t5-finetuned-wrongqa` is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) designed to generate **hallucinated or incorrect answers** to QA prompts. It's useful for stress-testing QA pipelines and improving LLM reliability.
## Model Overview
- **Base Model:** [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) (Google's instruction-tuned T5)
- **Fine-Tuning Library:** [PEFT](https://huggingface.co/docs/peft/index) + [LoRA](https://arxiv.org/abs/2106.09685)
- **Training Framework:** Hugging Face Transformers + Accelerate
- **Data:** 180 hallucinated QA pairs in `qa_wrong_data` (custom dataset)
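The card does not record the exact LoRA hyperparameters used. A typical PEFT configuration for a T5-base model might look like the sketch below; the rank, alpha, dropout, and target modules are illustrative assumptions, not the values used to train this checkpoint.

```python
from peft import LoraConfig, TaskType

# Assumed hyperparameters -- common defaults for T5-base, not this card's recorded values
lora_cfg = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,                       # low-rank adapter dimension
    lora_alpha=32,              # scaling factor applied to the adapter output
    lora_dropout=0.05,
    target_modules=["q", "v"],  # T5 attention query/value projections
)
```

The adapter would then be attached with `get_peft_model(base_model, lora_cfg)` before training, so only the small adapter matrices are updated while the base FLAN-T5 weights stay frozen.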
## Intended Use Cases
- Hallucination detection
- QA model robustness evaluation
- Educational distractors (MCQ testing)
- Dataset augmentation with adversarial QA
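For hallucination detection, one simple baseline is token-overlap F1 between the model's answer and a trusted reference: a low overlap flags a likely hallucination. A minimal stdlib sketch (the `token_f1` / `looks_hallucinated` helpers and the 0.5 threshold are illustrative choices, not part of this model):

```python
from collections import Counter

def token_f1(pred: str, gold: str) -> float:
    """Token-level F1 between a predicted answer and a reference answer."""
    p, g = pred.lower().split(), gold.lower().split()
    if not p or not g:
        return 0.0
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)

def looks_hallucinated(pred: str, gold: str, threshold: float = 0.5) -> bool:
    # Flag answers whose overlap with the reference is below the threshold
    return token_f1(pred, gold) < threshold

print(looks_hallucinated("Elon Moonwalker", "Nobody founded the Moon"))  # → True
```

This is deliberately crude; a production pipeline would use an NLI model or semantic similarity rather than surface overlap.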
## Run with Gradio
```python
import gradio as gr
from transformers import pipeline

# FLAN-T5 is an encoder-decoder model, so use the text2text-generation task
pipe = pipeline('text2text-generation', model='Pravesh390/flan-t5-finetuned-wrongqa')

def ask(q):
    return pipe(f'Q: {q}\nA:')[0]['generated_text']

gr.Interface(fn=ask, inputs='text', outputs='text').launch()
```
## Quick Colab Usage
```python
from transformers import pipeline

# Use the text2text-generation task for this seq2seq checkpoint
pipe = pipeline('text2text-generation', model='Pravesh390/flan-t5-finetuned-wrongqa')
pipe('Q: What is the capital of Australia?\nA:')
```
## Metrics
- BLEU: 18.2
- ROUGE-L: 24.7
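The card does not say which tooling produced these scores; evaluations usually rely on packages such as `evaluate` or `rouge_score`. For reference, ROUGE-L is an F-measure over the longest common subsequence (LCS) of candidate and reference tokens, which can be sketched in plain Python:

```python
def rouge_l(candidate: str, reference: str) -> float:
    """ROUGE-L F1: F-measure over the longest common subsequence of tokens."""
    c, r = candidate.lower().split(), reference.lower().split()
    # Dynamic-programming table for LCS length
    dp = [[0] * (len(r) + 1) for _ in range(len(c) + 1)]
    for i, ct in enumerate(c):
        for j, rt in enumerate(r):
            dp[i + 1][j + 1] = dp[i][j] + 1 if ct == rt else max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[len(c)][len(r)]
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return 2 * prec * rec / (prec + rec)

print(round(rouge_l("the capital of australia is sydney",
                    "the capital of australia is canberra"), 3))  # → 0.833
```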
## Libraries and Methods Used
- `transformers`: Loading and saving models
- `peft` + `LoRA`: Lightweight fine-tuning
- `huggingface_hub`: Upload and repo creation
- `datasets`: Dataset management
- `accelerate`: Efficient training support
## Sample QA Example
- Q: Who founded the Moon?
- A: Elon Moonwalker
## License
MIT