flan-t5-finetuned-wrongqa

Paper: LoRA: Low-Rank Adaptation of Large Language Models (arXiv:2106.09685)
flan-t5-finetuned-wrongqa is a fine-tuned version of google/flan-t5-base designed to generate hallucinated or incorrect answers to QA prompts. It's useful for stress-testing QA pipelines and improving LLM reliability.
Training data: qa_wrong_data (custom dataset)

Gradio demo:

```python
import gradio as gr
from transformers import pipeline

# FLAN-T5 is an encoder-decoder model, so use the text2text-generation task
pipe = pipeline('text2text-generation', model='Pravesh390/flan-t5-finetuned-wrongqa')

def ask(q):
    return pipe(f'Q: {q}\nA:')[0]['generated_text']

gr.Interface(fn=ask, inputs='text', outputs='text').launch()
```
Quick inference example:

```python
from transformers import pipeline

pipe = pipeline('text2text-generation', model='Pravesh390/flan-t5-finetuned-wrongqa')
print(pipe('Q: What is the capital of Australia?\nA:')[0]['generated_text'])
```
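For the stress-testing use case described above, a small harness can measure how often an answerer disagrees with gold answers. The sketch below is illustrative only: `stress_test`, `normalize`, and the stub answerer are hypothetical names, not part of this repo; in practice `answer_fn` would wrap the pipeline call shown above.

```python
def normalize(text):
    # lowercase and strip punctuation for a lenient string match
    return ''.join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

def stress_test(answer_fn, gold):
    """Return the fraction of questions answer_fn gets wrong.

    answer_fn: callable mapping a question string to an answer string
    gold: list of (question, correct_answer) pairs
    """
    wrong = 0
    for question, correct in gold:
        if normalize(answer_fn(question)) != normalize(correct):
            wrong += 1
    return wrong / len(gold)

# Example with a stub answerer standing in for the model
gold = [('What is the capital of Australia?', 'Canberra'),
        ('How many legs does a spider have?', 'Eight')]
always_sydney = lambda q: 'Sydney'
rate = stress_test(always_sydney, gold)  # 1.0: both answers wrong
```

A high wrong-answer rate from this model against a gold set is the expected behavior, since it was fine-tuned to hallucinate.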
Libraries used:

- transformers: loading and saving models
- peft + LoRA: lightweight fine-tuning
- huggingface_hub: upload and repo creation
- datasets: dataset management
- accelerate: efficient training support

License: MIT
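The peft + LoRA fine-tuning mentioned above freezes the pretrained weight W and learns a low-rank update, so the adapted weight is W + (α/r)·B·A with B ∈ R^{d×r}, A ∈ R^{r×k}, and rank r ≪ min(d, k). A minimal NumPy sketch of that idea follows; the dimensions, rank, and scaling are illustrative, not this model's actual training configuration.

```python
import numpy as np

d, k, r, alpha = 8, 8, 2, 4          # layer dims, LoRA rank, scaling (illustrative)
rng = np.random.default_rng(0)

W = rng.normal(size=(d, k))          # frozen pretrained weight
A = rng.normal(size=(r, k)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-initialized

def adapted_forward(x):
    # y = x W^T + (alpha / r) * x A^T B^T  (row-vector convention)
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.normal(size=(1, k))
# With B initialized to zero, the adapter starts as an exact no-op:
assert np.allclose(adapted_forward(x), x @ W.T)
```

Only A and B are updated during training, which is why the resulting adapter checkpoint is far smaller than the base model: d·r + r·k trainable values versus d·k for the full matrix.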