Update README.md with sources & keys
Updated the list of data sources and keys, along with some extra description of the dataset in general.
README.md
CHANGED
@@ -6,96 +6,82 @@ language:
- en
size_categories:
- 10K<n<100K
pretty_name: alignment-research-dataset
---
# AI Alignment Research Dataset

The AI Alignment Research Dataset is a collection of documents related to AI Alignment and Safety from various books, research papers, and alignment-related blog posts. This is a work in progress; components are still undergoing a cleaning process so that they can be updated more regularly.

## Sources

The following list of sources may change and items may be renamed:

- [agentmodels](https://agentmodels.org/)
- [aiimpacts.org](https://aiimpacts.org/)
- [aisafety.camp](https://aisafety.camp/)
- [arbital](https://arbital.com/)
- arxiv_papers - alignment research papers from [arxiv](https://arxiv.org/)
- audio_transcripts - transcripts from interviews with various researchers and other audio recordings
- [carado.moe](https://carado.moe/)
- [cold.takes](https://www.cold-takes.com/)
- [deepmind.blog](https://deepmindsafetyresearch.medium.com/)
- [distill](https://distill.pub/)
- [eaforum](https://forum.effectivealtruism.org/) - selected posts
- gdocs
- gdrive_ebooks - books include [Superintelligence](https://www.goodreads.com/book/show/20527133-superintelligence), [Human Compatible](https://www.goodreads.com/book/show/44767248-human-compatible), [Life 3.0](https://www.goodreads.com/book/show/34272565-life-3-0), [The Precipice](https://www.goodreads.com/book/show/50485582-the-precipice), and others
- [generative.ink](https://generative.ink/posts/)
- [gwern_blog](https://gwern.net/)
- [intelligence.org](https://intelligence.org/) - MIRI
- [jsteinhardt.wordpress.com](https://jsteinhardt.wordpress.com/)
- [lesswrong](https://www.lesswrong.com/) - selected posts
- markdown.ebooks
- nonarxiv_papers - other alignment research papers
- [qualiacomputing.com](https://qualiacomputing.com/)
- reports
- [stampy](https://aisafety.info/)
- [vkrakovna.wordpress.com](https://vkrakovna.wordpress.com)
- [waitbutwhy](https://waitbutwhy.com/)
- [yudkowsky.net](https://www.yudkowsky.net/)

## Keys

Not all of the entries contain the same keys, but they all have the following:

- id - unique identifier
- source - based on the data source listed in the previous section
- title - title of document
- text - full text of document content
- url - some values may be `'n/a'`, still being updated
- date_published - some `'n/a'`

The values of the keys are still being cleaned up for consistency. Additional keys are available depending on the source document.
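
As a rough sketch of the shared schema, the guaranteed fields can be written out as a record like the one below. The class name and the string type annotations are assumptions for illustration (missing values are represented as the string `'n/a'` rather than `None`), not a schema published with the dataset:

```python
from dataclasses import dataclass

@dataclass
class AlignmentEntry:
    """Hypothetical sketch of the fields shared by every entry; source-specific extras are omitted."""
    id: str              # unique identifier
    source: str          # one of the sources listed above, e.g. 'lesswrong'
    title: str           # title of the document
    text: str            # full text of the document content
    url: str             # may be 'n/a' while links are still being filled in
    date_published: str  # may also be 'n/a'
```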

## Usage

Execute the following code to download and parse the files:

```python
from datasets import load_dataset
data = load_dataset('StampyAI/alignment-research-dataset')
```
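
Each source is expected to be exposed as its own configuration. To see which configuration names are actually available before picking one, the standard `datasets` helper below can be used; the one-to-one mapping to the Sources list above is our assumption, so check the printed names rather than relying on it:

```python
from datasets import get_dataset_config_names

# Print the configuration names defined by the dataset (expected to roughly
# match the source names in the Sources section).
print(get_dataset_config_names('StampyAI/alignment-research-dataset'))
```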

To only get the data for a specific source, pass it in as the second argument, e.g.:

```python
from datasets import load_dataset
data = load_dataset('StampyAI/alignment-research-dataset', 'lesswrong')
```
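
The object returned by `load_dataset` is a regular `datasets` object, so the usual inspection tools apply. A minimal sketch, assuming a single `train` split (the split layout is not documented here, so verify it on your copy):

```python
from datasets import load_dataset

data = load_dataset('StampyAI/alignment-research-dataset', 'lesswrong')

# Inspect the available columns and peek at the first entry.
print(data['train'].column_names)
print(data['train'][0]['title'])

# Convert to a pandas DataFrame for ad-hoc exploration.
df = data['train'].to_pandas()
print(df[['source', 'title', 'date_published']].head())
```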

## Limitations and Bias

LessWrong posts are overweighted towards content on doom and existential risk, so please bear this in mind when training or fine-tuning generative language models on the dataset.
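
If that skew matters for a particular training run, one purely illustrative mitigation (not something the dataset itself prescribes) is to cap the over-represented source before use; the 20% cap and the `train` split name below are arbitrary choices for the sketch:

```python
from datasets import concatenate_datasets, load_dataset

data = load_dataset('StampyAI/alignment-research-dataset')['train']

# Split off the LessWrong rows and keep only enough of them that they make up
# at most ~20% of the rebalanced dataset.
lw = data.filter(lambda row: row['source'] == 'lesswrong')
rest = data.filter(lambda row: row['source'] != 'lesswrong')

cap = min(len(lw), len(rest) // 4)
lw_capped = lw.shuffle(seed=0).select(range(cap))

balanced = concatenate_datasets([rest, lw_capped]).shuffle(seed=0)
```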

## Contributing

The scraper used to generate this dataset is open-sourced on [GitHub](https://github.com/StampyAI/alignment-research-dataset) and is currently maintained by volunteers at StampyAI / AI Safety Info. [Learn more](https://coda.io/d/AI-Safety-Info_dfau7sl2hmG/Get-involved_susRF#_lufSr) or join us on [Discord](https://discord.gg/vjFSCDyMCy).

## Citing the Dataset

For more information, see the [paper](https://arxiv.org/abs/2206.02841) and the accompanying [LessWrong post](https://www.lesswrong.com/posts/FgjcHiWvADgsocE34/a-descriptive-not-prescriptive-overview-of-current-ai). Please use the following citation when using the dataset:

Kirchner, J. H., Smith, L., Thibodeau, J., McDonnell, K., and Reynolds, L. "Understanding AI alignment research: A Systematic Analysis." arXiv preprint arXiv:2206.02841 (2022).