Dataset Overview

LiveResearchBench provides expert-curated, real-world tasks spanning daily life, enterprise, and academia, each requiring extensive real-time web search, multi-source reasoning, and cross-domain synthesis. Its companion evaluation suite, DeepEval, offers human-aligned protocols for reliable, systematic evaluation of agentic systems on open-ended deep research tasks.

πŸ“Œ Quick Links

  • Project Page
  • Paper: https://arxiv.org/abs/2510.14240
  • Codebase

Dataset Fields

Subsets:

  • question_with_checklist: Full dataset with questions and per-question checklists
  • question_only: Questions without checklists
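
If you prefer to load the data with the Hugging Face datasets library instead of the helper functions below, each subset corresponds to a dataset configuration. A minimal sketch, assuming a hypothetical repository id "Salesforce/LiveResearchBench" and a "train" split (substitute the actual repo id and split of this dataset card):

from datasets import load_dataset

# Hypothetical repo id and split name -- replace with this card's actual values
ds = load_dataset("Salesforce/LiveResearchBench", "question_with_checklist", split="train")

print(ds.column_names)   # expected fields: qid, question, checklists
print(ds[0]["qid"])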

For each entry in the dataset:

{
    'qid': 'market6VWmPyxptfK47civ',  # Unique query identifier
    'question': 'What is the size, growth rate...',  # Research question
    'checklists': [  # List of checklist items for coverage evaluation
        'Does the report provide data for the U.S. electric vehicle market...',
        'Does the report discuss the size, growth rate...',
        # ... more items
    ]
}

Loading the Dataset

Default: Static Mode (No Placeholders)

The default static mode loads questions and checklists with dates already filled in (e.g., 2025 instead of {{current_year}}):

from liveresearchbench.common.io_utils import load_liveresearchbench_dataset

# Load static version 
benchmark_data = load_liveresearchbench_dataset(use_realtime=False)

Example:

  • Question: "What is the size, growth rate, and segmentation of the U.S. electric vehicle market in 2025?"
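
As a quick sanity check, and assuming load_liveresearchbench_dataset returns an iterable of entry dicts shaped like the schema above (an assumption, not documented behavior), you can verify that static mode leaves no unfilled placeholders:

from liveresearchbench.common.io_utils import load_liveresearchbench_dataset

# Assumption: benchmark_data iterates over dicts with 'qid' and 'question' keys
benchmark_data = load_liveresearchbench_dataset(use_realtime=False)
unfilled = [entry["qid"] for entry in benchmark_data if "{{" in entry["question"]]
print(f"Entries with unfilled placeholders: {len(unfilled)}")  # expected: 0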

Realtime Mode

For dynamic evaluation with current dates, use realtime mode:

# Load realtime version (replaces {{current_year}} etc.)
benchmark_data = load_liveresearchbench_dataset(use_realtime=True)

The following placeholders are replaced with values derived from the current date:

  • {{current_year}} β†’ 2025 (current year)
  • {{last_year}} β†’ 2024 (current year - 1)
  • {{current_date}} or {{date}} β†’ Nov 12, 2025 (current date)
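
The substitution itself is simple string replacement. A minimal sketch of how such placeholder filling can be implemented (for illustration only; this is not the library's internal code):

from datetime import date

def fill_placeholders(text, today=None):
    """Replace date placeholders with values derived from today's date."""
    today = today or date.today()
    replacements = {
        "{{current_year}}": str(today.year),
        "{{last_year}}": str(today.year - 1),
        "{{current_date}}": today.strftime("%b %d, %Y"),
        "{{date}}": today.strftime("%b %d, %Y"),
    }
    for placeholder, value in replacements.items():
        text = text.replace(placeholder, value)
    return text

print(fill_placeholders("U.S. EV market size in {{current_year}}"))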

Example:

  • Question: "What is the size, growth rate, and segmentation of the U.S. electric vehicle market in 2025?" (the year is filled in automatically based on the date at load time)

Accessing Questions and Checklists

from liveresearchbench.common.io_utils import (
    load_liveresearchbench_dataset,
    get_question_for_qid,
    get_checklists_for_qid
)

# Load dataset
benchmark_data = load_liveresearchbench_dataset()

# Get question for a specific query ID
qid = "market6VWmPyxptfK47civ"
question = get_question_for_qid(benchmark_data, qid)

# Get checklist items for a specific query ID
checklists = get_checklists_for_qid(benchmark_data, qid)
print(f"Found {len(checklists)} checklist items")

Ethical Considerations

This release is for research purposes only, in support of an academic paper. Our dataset and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend that users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this dataset or systems built on it. We encourage users to consider the common limitations of AI, comply with applicable laws, and apply best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people's lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP.

Citation

If you find this dataset helpful, please consider citing:

@article{sfr2025liveresearchbench,
  title={LiveResearchBench: A Live Benchmark for User-Centric Deep Research in the Wild},
  author={Jiayu Wang and Yifei Ming and Riya Dulepet and Qinglin Chen and Austin Xu and Zixuan Ke and Frederic Sala and Aws Albarghouthi and Caiming Xiong and Shafiq Joty},
  year={2025},
  url={https://arxiv.org/abs/2510.14240}
}