i-am-waqas committed on
Commit 478ddb3 · verified · 1 Parent(s): 735becf

Upload folder using huggingface_hub

Files changed (3):
  1. .argilla/dataset.json +1 -0
  2. .argilla/settings.json +1 -0
  3. README.md +200 -36
.argilla/dataset.json ADDED
@@ -0,0 +1 @@
+ {"id": "eaf478e7-c8c8-405b-9292-34874013ca5c", "inserted_at": "2025-10-04T18:07:47.688879", "updated_at": "2025-10-04T18:07:48.302304", "name": "my_dataset", "status": "ready", "guidelines": "These are some guidelines.", "allow_extra_metadata": false, "distribution": {"strategy": "overlap", "min_submitted": 1}, "workspace_id": "73be8219-ea13-4fd8-a384-6783e0a1a72b", "last_activity_at": "2025-10-04T18:07:48.302304"}
.argilla/settings.json ADDED
@@ -0,0 +1 @@
+ {"guidelines": "These are some guidelines.", "questions": [{"id": "28322dc9-4039-48ea-9e6b-f96ea2281386", "inserted_at": "2025-10-04T18:07:48.130119", "updated_at": "2025-10-04T18:07:48.130119", "name": "label", "settings": {"type": "label_selection", "options": [{"value": "yes", "text": "yes", "description": null}, {"value": "no", "text": "no", "description": null}], "visible_options": null}, "title": "label", "description": null, "required": true, "dataset_id": "eaf478e7-c8c8-405b-9292-34874013ca5c", "type": "label_selection"}], "fields": [{"id": "c629b0f6-5df0-40b5-86b4-bdb9961c21c4", "inserted_at": "2025-10-04T18:07:47.965794", "updated_at": "2025-10-04T18:07:47.965794", "name": "text", "settings": {"type": "text", "use_markdown": false}, "title": "text", "required": true, "description": null, "dataset_id": "eaf478e7-c8c8-405b-9292-34874013ca5c", "type": "text"}], "vectors": [], "metadata": [], "allow_extra_metadata": false, "distribution": {"strategy": "overlap", "min_submitted": 1}, "mapping": null}
README.md CHANGED
@@ -1,38 +1,202 @@
  ---
- dataset_info:
-   features:
-   - name: id
-     dtype: string
-   - name: status
-     dtype: string
-   - name: _server_id
-     dtype: string
-   - name: text
-     dtype: string
-   - name: label.responses
-     list: string
-   - name: label.responses.users
-     list: string
-   - name: label.responses.status
-     list: string
-   - name: label.suggestion
-     dtype:
-       class_label:
-         names:
-           '0': 'yes'
-   - name: label.suggestion.score
-     dtype: 'null'
-   - name: label.suggestion.agent
-     dtype: 'null'
-   splits:
-   - name: train
-     num_bytes: 207
-     num_examples: 1
-   download_size: 5316
-   dataset_size: 207
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
+ size_categories: n<1K
+ tags:
+ - rlfh
+ - argilla
+ - human-feedback
  ---
+
+ # Dataset Card for my_dataset_copy
+
+ This dataset has been created with [Argilla](https://github.com/argilla-io/argilla). As shown in the sections below, it can be loaded into your Argilla server as explained in [Using this dataset with Argilla](#using-this-dataset-with-argilla), or used directly with the `datasets` library as explained in [Using this dataset with `datasets`](#using-this-dataset-with-datasets).
+
+ ## Using this dataset with Argilla
+
+ To load with Argilla, first install Argilla with `pip install argilla --upgrade` and then use the following code:
+
+ ```python
+ import argilla as rg
+
+ ds = rg.Dataset.from_hub("i-am-waqas/my_dataset_copy", settings="auto")
+ ```
+
+ This will load the settings and records from the dataset repository and push them to your Argilla server for exploration and annotation.
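`from_hub` needs a running Argilla server to push to. The SDK can pick one up from the `ARGILLA_API_URL` and `ARGILLA_API_KEY` environment variables; otherwise you can construct a client explicitly. A sketch with placeholder credentials (the URL and key below are assumptions, and passing `client` to `from_hub` assumes your SDK version exposes that argument):

```python
import argilla as rg

# Placeholder URL and API key for your own Argilla deployment (assumed values).
client = rg.Argilla(
    api_url="https://my-argilla.example.com",
    api_key="my-api-key",
)

# Assumes this argilla version's `from_hub` accepts a `client` argument.
ds = rg.Dataset.from_hub("i-am-waqas/my_dataset_copy", settings="auto", client=client)
```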
+
+ ## Using this dataset with `datasets`
+
+ To load the records of this dataset with `datasets`, first install `datasets` with `pip install datasets --upgrade` and then use the following code:
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("i-am-waqas/my_dataset_copy")
+ ```
+
+ This will load only the records of the dataset, not the Argilla settings.
+
+ ## Dataset Structure
+
+ This dataset repo contains:
+
+ * Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `rg.Dataset.from_hub` and can be loaded independently using the `datasets` library via `load_dataset`.
+ * The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
+ * A dataset configuration folder conforming to the Argilla dataset format in `.argilla`.
+
+ The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
+
+ ### Fields
+
+ The **fields** are the features or text of a dataset's records. For example, the 'text' column of a text classification dataset or the 'prompt' column of an instruction-following dataset.
+
+ | Field Name | Title | Type | Required | Markdown |
+ | ---------- | ----- | ---- | -------- | -------- |
+ | text | text | text | True | False |
+
+
+ ### Questions
+
+ The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
+
+ | Question Name | Title | Type | Required | Description | Values/Labels |
+ | ------------- | ----- | ---- | -------- | ----------- | ------------- |
+ | label | label | label_selection | True | N/A | ['yes', 'no'] |
+
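Taken together, the field and question above correspond to the settings stored in `.argilla/settings.json`. As a rough sketch (Argilla 2.x Python SDK; an approximation of those settings, not an exact reproduction):

```python
import argilla as rg

# Sketch of the settings implied by the tables above and by
# .argilla/settings.json: one required text field, one required
# yes/no label-selection question, plus the guidelines string.
settings = rg.Settings(
    guidelines="These are some guidelines.",
    fields=[
        rg.TextField(name="text", title="text", required=True, use_markdown=False),
    ],
    questions=[
        rg.LabelQuestion(name="label", title="label", labels=["yes", "no"], required=True),
    ],
)
```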
+ ### Data Instances
+
+ An example of a dataset instance in Argilla looks as follows:
+
+ ```json
+ {
+     "_server_id": "99d1f10d-b81b-4379-8dbe-a7c859435d83",
+     "fields": {
+         "text": "Do you need oxygen to breathe?"
+     },
+     "id": "529e1a91-6f17-4a05-8c48-29d5897fd165",
+     "metadata": {},
+     "responses": {
+         "label": [
+             {
+                 "user_id": "3eed552e-8105-4bbd-bf58-ea7443378c21",
+                 "value": "yes"
+             }
+         ]
+     },
+     "status": "completed",
+     "suggestions": {
+         "label": {
+             "agent": null,
+             "score": null,
+             "value": "yes"
+         }
+     },
+     "vectors": {}
+ }
+ ```
+
+ While the same record in HuggingFace `datasets` looks as follows:
+
+ ```json
+ {
+     "_server_id": "99d1f10d-b81b-4379-8dbe-a7c859435d83",
+     "id": "529e1a91-6f17-4a05-8c48-29d5897fd165",
+     "label.responses": [
+         "yes"
+     ],
+     "label.responses.status": [
+         "submitted"
+     ],
+     "label.responses.users": [
+         "3eed552e-8105-4bbd-bf58-ea7443378c21"
+     ],
+     "label.suggestion": 0,
+     "label.suggestion.agent": null,
+     "label.suggestion.score": null,
+     "status": "completed",
+     "text": "Do you need oxygen to breathe?"
+ }
+ ```
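The flattened form keeps the responses as parallel lists: `label.responses`, `label.responses.users`, and `label.responses.status` line up by index, one entry per annotator. Zipping them back together recovers the per-annotator grouping Argilla uses. A minimal sketch in plain Python (`regroup_responses` is a hypothetical helper, not a `datasets` or Argilla API):

```python
# Sketch: zip the parallel `<question>.responses*` columns of a flattened
# record back into per-annotator response dicts, as Argilla groups them.
def regroup_responses(row: dict, question: str = "label") -> list[dict]:
    return [
        {"user_id": user, "value": value, "status": status}
        for value, user, status in zip(
            row[f"{question}.responses"],
            row[f"{question}.responses.users"],
            row[f"{question}.responses.status"],
        )
    ]

# The single-annotator record shown above regroups into one response dict.
row = {
    "label.responses": ["yes"],
    "label.responses.users": ["3eed552e-8105-4bbd-bf58-ea7443378c21"],
    "label.responses.status": ["submitted"],
}
responses = regroup_responses(row)
```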
+
+ ### Data Splits
+
+ The dataset contains a single split, which is `train`.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation guidelines
+
+ These are some guidelines.
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ [More Information Needed]
+
+ ### Contributions
+
+ [More Information Needed]