---
license: mit
viewer: true
task_categories:
- table-question-answering
- table-to-text
language:
- en
pretty_name: TableEval
configs:
- config_name: default
  data_files:
  - split: comtqa_fin
    path: ComTQA/FinTabNet/comtqa_fintabnet.json
  - split: comtqa_pmc
    path: ComTQA/PubTab1M/comtqa_pubtab1m.json
  - split: logic2text
    path: Logic2Text/logic2text.json
  - split: logicnlg
    path: LogicNLG/logicnlg.json
  - split: scigen
    path: SciGen/scigen.json
  - split: numericnlg
    path: numericNLG/numericnlg.json
size_categories:
- 1K<n<10K
---

# TableEval

#### Table sources and formats per data subset

| Dataset | Task | Source | Image | Dict | LaTeX | HTML | XML |
|---|---|---|---|---|---|---|---|
| ComTQA (PubTables-1M) | VQA | PubMed Central | ⬇️ | ⚙️ | ⚙️ | ⚙️ | 📄 |
| numericNLG | T2T | ACL Anthology | 📄 | ⬇️ | ⚙️ | ⬇️ | ⚙️ |
| SciGen | T2T | arXiv and ACL Anthology | 📄 | ⬇️ | 📄 | ⚙️ | ⚙️ |
| ComTQA (FinTabNet) | VQA | Earnings reports of S&P 500 companies | 📄 | ⚙️ | ⚙️ | ⚙️ | ⚙️ |
| LogicNLG | T2T | Wikipedia | ⚙️ | ⬇️ | ⚙️ | 📄 | ⚙️ |
| Logic2Text | T2T | Wikipedia | ⚙️ | ⬇️ | ⚙️ | 📄 | ⚙️ |

**⬇️ indicates formats already available in the given corpus, while 📄 and ⚙️ denote formats extracted from the table source files (e.g., article PDF, Wikipedia page) and generated from other formats in this study, respectively.**
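Each subset pairs a JSON annotation file with a zip archive of table images (see the Structure section). Below is a minimal standard-library sketch for reading such a pair; the record fields (`image`, `question`) are illustrative assumptions, so check each subset's README.md for the actual schema.

```python
# Minimal sketch: pairing one TableEval subset's JSON annotations with its
# zipped table images, using only the standard library. The record fields
# ("image", "question") are illustrative assumptions, not the real schema.
import io
import json
import zipfile

def load_subset(annotation_json, zip_file):
    """Return (record, image_bytes) pairs for one subset."""
    records = json.loads(annotation_json)
    with zipfile.ZipFile(zip_file) as zf:
        return [(rec, zf.read(rec["image"])) for rec in records]

# Tiny in-memory stand-in for e.g. LogicNLG/logicnlg.json + logicnlg_imgs.zip
annotations = json.dumps([{"image": "t0001.png", "question": "..."}])
archive = io.BytesIO()
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("t0001.png", b"fake-png-bytes")

pairs = load_subset(annotations, archive)
```

In practice you would pass the path to the extracted `*.json` file (read as text) and the corresponding `*_imgs.zip` archive instead of the in-memory stand-ins used here.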
#### Number of tables per format and data subset

| Dataset | Image | Dict | LaTeX | HTML | XML |
|---|---|---|---|---|---|
| ComTQA (PubTables-1M) | 932 | 932 | 932 | 932 | 932 |
| numericNLG | 135 | 135 | 135 | 135 | 135 |
| SciGen | 1035 | 1035 | 928 | 985 | 961 |
| ComTQA (FinTabNet) | 659 | 659 | 659 | 659 | 659 |
| LogicNLG | 184 | 184 | 184 | 184 | 184 |
| Logic2Text | 72 | 72 | 72 | 72 | 72 |
| **Total** | **3017** | **3017** | **2910** | **2967** | **2943** |

#### Total number of instances per format and data subset

| Dataset | Image | Dict | LaTeX | HTML | XML |
|---|---|---|---|---|---|
| ComTQA (PubTables-1M) | 6232 | 6232 | 6232 | 6232 | 6232 |
| numericNLG | 135 | 135 | 135 | 135 | 135 |
| SciGen | 1035 | 1035 | 928 | 985 | 961 |
| ComTQA (FinTabNet) | 2838 | 2838 | 2838 | 2838 | 2838 |
| LogicNLG | 917 | 917 | 917 | 917 | 917 |
| Logic2Text | 155 | 155 | 155 | 155 | 155 |
| **Total** | **11312** | **11312** | **11205** | **11262** | **11238** |

## Structure

```
├── ComTQA
│   ├── FinTabNet
│   │   ├── comtqa_fintabnet.json
│   │   └── comtqa_fintabnet_imgs.zip
│   └── PubTab1M
│       ├── comtqa_pubtab1m.json
│       └── comtqa_pubtab1m_imgs.zip
├── Logic2Text
│   ├── logic2text.json
│   └── logic2text_imgs.zip
├── LogicNLG
│   ├── logicnlg.json
│   └── logicnlg_imgs.zip
├── SciGen
│   ├── scigen.json
│   └── scigen_imgs.zip
└── numericNLG
    ├── numericnlg.json
    └── numericnlg_imgs.zip
```

For more details on each subset, please refer to the respective README.md files: [ComTQA](ComTQA/README.md), [Logic2Text](Logic2Text/README.md), [LogicNLG](LogicNLG/README.md), [SciGen](SciGen/README.md), [numericNLG](numericNLG/README.md).

## Citation

```
@inproceedings{borisova-etal-2025-table,
    title = "Table Understanding and (Multimodal) {LLM}s: A Cross-Domain Case Study on Scientific vs. Non-Scientific Data",
    author = {Borisova, Ekaterina and Barth, Fabio and Feldhus, Nils and Abu Ahmad, Raia and Ostendorff, Malte and Ortiz Suarez, Pedro and Rehm, Georg and M{\"o}ller, Sebastian},
    editor = "Chang, Shuaichen and Hulsebos, Madelon and Liu, Qian and Chen, Wenhu and Sun, Huan",
    booktitle = "Proceedings of the 4th Table Representation Learning Workshop",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.trl-1.10/",
    pages = "109--142",
    ISBN = "979-8-89176-268-8",
    abstract = "Tables are among the most widely used tools for representing structured data in research, business, medicine, and education. Although LLMs demonstrate strong performance in downstream tasks, their efficiency in processing tabular data remains underexplored. In this paper, we investigate the effectiveness of both text-based and multimodal LLMs on table understanding tasks through a cross-domain and cross-modality evaluation. Specifically, we compare their performance on tables from scientific vs. non-scientific contexts and examine their robustness on tables represented as images vs. text. Additionally, we conduct an interpretability analysis to measure context usage and input relevance. We also introduce the TableEval benchmark, comprising 3017 tables from scholarly publications, Wikipedia, and financial reports, where each table is provided in five different formats: Image, Dictionary, HTML, XML, and LaTeX. Our findings indicate that while LLMs maintain robustness across table modalities, they face significant challenges when processing scientific tables."
}
```

## Funding

This work has received funding through the DFG project [NFDI4DS](https://www.nfdi4datascience.de) (no. 460234259).