Update README.md

## Dataset Description
This dataset contains **895,954 examples** of natural language questions paired with their corresponding SPARQL queries. It spans **12 languages** and targets **15 distinct knowledge graphs**, with a significant portion focused on Wikidata and DBpedia.
The dataset was developed as part of a thesis on multilingual pretraining and cross-lingual transferability. It is intended to support research on text-to-SPARQL generation, with a particular focus on multilingual fine-tuning.
### Key Features:
* **Multilingual:** Covers 12 languages: English, German, Hebrew, Kannada, Chinese, Spanish, Italian, French, Dutch, Romanian, Farsi, and Russian.
* **Diverse Knowledge Graphs:** Includes queries for 15 KGs, prominently Wikidata and DBpedia.
* **Augmented Data:** Includes German translations for many of the English questions, plus Wikidata entity/relationship mappings in the `context` column for most English and German examples.
## Dataset Structure
The dataset is provided in Parquet format and consists of the following columns:
* `text_query` (string): The natural language question.
* *(Example: "What is the boiling point of water?")*
* `language` (string): The language code of the `text_query` (e.g., 'de', 'en', 'es').
* `sparql_query` (string): The corresponding SPARQL query.
* *(Example: `PREFIX dbo: <http://dbpedia.org/ontology/> ... SELECT DISTINCT ?uri WHERE { ... }`)*
* `knowledge_graphs` (string): The knowledge graph targeted by the `sparql_query` (e.g., 'DBpedia', 'Wikidata').
* `context` (string): (Optional) Wikidata entity/relationship mappings in JSON string format (e.g., `{"entities": {"United States Army": "Q9212"}, "relationships": {"spouse": "P26"}}`).
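Because `context` arrives as a JSON string rather than a nested feature, it needs to be decoded before the mappings can be used. A minimal sketch, using the example value from the column description above:

```python
import json

# Example `context` value from the column description above.
raw_context = '{"entities": {"United States Army": "Q9212"}, "relationships": {"spouse": "P26"}}'

context = json.loads(raw_context)
print(context["entities"]["United States Army"])  # -> Q9212
print(context["relationships"]["spouse"])         # -> P26
```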
### Data Splits
* `train`: 895,166 rows
* `test`: 788 rows (QALD-10 test set)
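The test split can be loaded on its own for QALD-10-style evaluation; a minimal sketch with the `datasets` library:

```python
from datasets import load_dataset

# The 788-row test split corresponds to the QALD-10 test set.
qald10 = load_dataset("julioc-p/Question-Sparql", split="test")
print(len(qald10))  # 788
```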
## How to Use
You can load the dataset using the Hugging Face `datasets` library:
```python
from datasets import load_dataset

# Load a specific split (e.g., train)
dataset = load_dataset("julioc-p/Question-Sparql", split="train")

# Iterate through the dataset
for example in dataset:
    print(f"Question ({example['language']}): {example['text_query']}")
    print(f"Knowledge Graph: {example['knowledge_graphs']}")
    print(f"SPARQL Query: {example['sparql_query']}")
    if example['context']:
        print(f"Context: {example['context']}")
    print("-" * 20)
    break
```
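The loaded split supports the usual `datasets` operations. As a follow-up, the sketch below filters the train split with `Dataset.filter`, using column values documented above; the specific language/knowledge-graph pair is illustrative:

```python
from datasets import load_dataset

dataset = load_dataset("julioc-p/Question-Sparql", split="train")

# Keep only German questions that target Wikidata.
german_wikidata = dataset.filter(
    lambda example: example["language"] == "de"
    and example["knowledge_graphs"] == "Wikidata"
)
print(len(german_wikidata))
```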
```diff
@@ -1,31 +1,50 @@
----
-license: mit
-dataset_info:
-  features:
-  - name: text_query
-    dtype: string
-  - name: language
-    dtype: string
-  - name: sparql_query
-    dtype: string
-  - name: knowledge_graphs
-    dtype: string
-  - name: context
-    dtype: string
-  splits:
-  - name: train
-    num_bytes: 374237004
-    num_examples: 895166
-  - name: test
-    num_bytes: 230499
-    num_examples: 788
-  download_size: 97377947
-  dataset_size: 374467503
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
-  - split: test
-    path: data/test-*
----
+---
+license: mit
+dataset_info:
+  features:
+  - name: text_query
+    dtype: string
+  - name: language
+    dtype: string
+  - name: sparql_query
+    dtype: string
+  - name: knowledge_graphs
+    dtype: string
+  - name: context
+    dtype: string
+  splits:
+  - name: train
+    num_bytes: 374237004
+    num_examples: 895166
+  - name: test
+    num_bytes: 230499
+    num_examples: 788
+  download_size: 97377947
+  dataset_size: 374467503
+configs:
+- config_name: default
+  data_files:
+  - split: train
+    path: data/train-*
+  - split: test
+    path: data/test-*
+task_categories:
+- text-generation
+language:
+- en
+- de
+- he
+- kn
+- zh
+- es
+- it
+- fr
+- nl
+- ro
+- fa
+- ru
+tags:
+- code
+size_categories:
+- 100K<n<1M
+---
```