# Dataset Card for ACL Anthology Abstractive Summarization Dataset (10K)

## Dataset Details

### Dataset Description
This dataset contains 9,737 scientific papers from the ACL Anthology, paired with their human-written abstracts, curated for abstractive scientific summarization.
Unlike full-paper corpora, this dataset emphasizes signal quality over raw scale, focusing on the sections that contribute most to summary content. It is intended for research on faithful abstraction, compression, and long-document summarization under realistic constraints.
- Curated by: Independent research project
- Language(s): English
- License: Apache 2.0
- Total Examples: 9,737
### Dataset Sources
- Repository: ACL Anthology
- Original Papers: https://aclanthology.org/
## Dataset Structure

### Data Instances

Each example contains two fields:
- `content`: full or partial scientific paper text (the source).
- `abstract`: the gold-standard, human-written abstract (the target).
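The on-disk file layout is not specified in this card; assuming the examples are distributed as JSON Lines with the two fields above, a minimal stdlib loader might look like this (the file path is hypothetical):

```python
import json

def load_examples(path):
    """Read a JSON Lines file where each record carries the two fields
    described above: 'content' (source) and 'abstract' (target)."""
    examples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            record = json.loads(line)
            examples.append({"content": record["content"],
                             "abstract": record["abstract"]})
    return examples
```

If the dataset is hosted on the Hugging Face Hub, `datasets.load_dataset` with the repository id would yield the same two columns.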
### Data Statistics
| Metric | Value |
|---|---|
| Mean Source Length | 6,480.5 tokens |
| Mean Summary Length | 164.7 tokens |
| Mean Compression Ratio | 0.028x |
| Total Vocabulary Size | 134,817 |
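The card does not state which tokenizer produced these counts. A whitespace-token sketch that computes the same kinds of statistics (mean lengths, mean per-example compression ratio, vocabulary size) could look like:

```python
def corpus_stats(examples):
    """Whitespace-token statistics over {'content', 'abstract'} records.
    Note: whitespace splitting is an assumption; the card's actual
    tokenizer is unspecified, so absolute numbers may differ."""
    src_lens = [len(ex["content"].split()) for ex in examples]
    sum_lens = [len(ex["abstract"].split()) for ex in examples]
    # Mean of per-example ratios (summary tokens / source tokens).
    ratios = [s / c for s, c in zip(sum_lens, src_lens)]
    vocab = set()
    for ex in examples:
        vocab.update(ex["content"].split())
        vocab.update(ex["abstract"].split())
    n = len(examples)
    return {
        "mean_source_len": sum(src_lens) / n,
        "mean_summary_len": sum(sum_lens) / n,
        "mean_compression": sum(ratios) / n,
        "vocab_size": len(vocab),
    }
```

Note that the mean of per-example ratios generally differs from the ratio of the two means, which may explain small discrepancies when recomputing the table.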
## Dataset Quality Evaluation
A comprehensive automated evaluation was performed to assess the relationship between source documents and summaries.
### 1. Alignment & Faithfulness
- Mean Alignment: 0.4578 (Indicates moderate semantic overlap between source and summary).
- Extractive Density: 0.0608 (Very low, suggesting highly abstractive summaries with a high risk of "hallucination" if not constrained).
- Novel Entity Ratio: 0.4258 (High number of entities in abstracts not explicitly found in the source text).
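Extractive density is commonly computed from greedy extractive fragments, as defined for the NEWSROOM corpus (Grusky et al., 2018): the mean squared fragment length normalized by summary length. A self-contained sketch of that metric (whitespace tokenization assumed; the card's exact implementation is not given):

```python
def extractive_fragments(src, summ):
    """Greedily match each summary position to the longest shared
    token run in the source, following the NEWSROOM definition."""
    fragments = []
    i = 0
    while i < len(summ):
        best_len = 0
        for j in range(len(src)):
            if src[j] == summ[i]:
                k = 0
                while (i + k < len(summ) and j + k < len(src)
                       and summ[i + k] == src[j + k]):
                    k += 1
                best_len = max(best_len, k)
        if best_len:
            fragments.append(summ[i:i + best_len])
            i += best_len
        else:
            i += 1  # summary token absent from source
    return fragments

def extractive_density(source, summary):
    """Mean squared fragment length per summary token; near 0 means
    highly abstractive, large values mean copy-heavy summaries."""
    src, summ = source.split(), summary.split()
    if not summ:
        return 0.0
    frags = extractive_fragments(src, summ)
    return sum(len(f) ** 2 for f in frags) / len(summ)
```

A density of 0.06, as reported above, means copied runs are essentially single scattered words.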
### 2. Coverage & Salience
- Mean Coverage: 0.0073 (Summaries cover a very specific, narrow slice of the source content).
- Lead Bias: 0.1262 (Low lead bias; information is distributed rather than just taken from the introduction).
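The card's exact lead-bias formula is not given. One common proxy, the fraction of summary unigrams that also appear in the leading slice of the source, can be sketched as follows (the 10% lead window is an illustrative assumption):

```python
def lead_bias(source, summary, lead_frac=0.1):
    """Fraction of summary tokens found among the first `lead_frac`
    of source tokens. This is a rough proxy; the dataset card does
    not specify its actual lead-bias definition."""
    src = source.split()
    summ = summary.split()
    if not summ:
        return 0.0
    lead = set(src[:max(1, int(len(src) * lead_frac))])
    return sum(tok in lead for tok in summ) / len(summ)
```

A low score, as reported here, indicates the abstract draws on material beyond the introduction.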
### 3. Diversity & Difficulty
- Diversity Quality: High (Summary Type-Token Ratio: 0.0211).
- Difficulty Distribution: Mean difficulty score of 6,314.
- Example Split: 2,430 easy examples vs. 2,432 hard examples based on length and alignment.
### 4. Overall Assessment

Grade: Needs Improvement (score: 0.36). The dataset shows high abstractive difficulty. Key areas for caution:
- Low content alignment: Summaries may contain external knowledge not present in the provided source text.
- Low extractive density: Models may struggle to remain faithful to the source.
- Low coverage: Summaries are highly compressed, missing significant portions of the paper.
## Uses

### Direct Use
- Fine-tuning encoder–decoder models (e.g., BART, T5, LongT5) for abstractive summarization.
- Research on hallucination detection and faithfulness.
- Benchmarking long-document compression.
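With mean sources around 6,500 tokens, fine-tuning shorter-context models (e.g. BART's 1,024-token encoder) requires truncating or chunking inputs first. A hedged preprocessing sketch, using whitespace splitting as a stand-in for the model's subword tokenizer (limits and field names follow this card; the thresholds are illustrative):

```python
def prepare_pairs(examples, max_src_tokens=1024, min_summary_tokens=10):
    """Build (source, target) training pairs: clip long sources to a
    model-sized window and drop degenerate abstracts. Whitespace
    tokens only approximate subword counts, so treat max_src_tokens
    as a rough budget, not an exact model limit."""
    pairs = []
    for ex in examples:
        src = ex["content"].split()
        tgt = ex["abstract"].split()
        if len(tgt) < min_summary_tokens:
            continue  # skip abstracts too short to train on
        pairs.append((" ".join(src[:max_src_tokens]), " ".join(tgt)))
    return pairs
```

For models such as LongT5, the window can simply be widened; head-truncation is a simplification here, since the low lead bias reported above suggests salient content is spread throughout the paper.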
### Out-of-Scope Use
- Extractive summarization (the summaries are too abstractive).
- Applications requiring guaranteed factual correctness without external grounding.