Update README.md
README.md CHANGED
@@ -17,7 +17,7 @@ datasets:
- Finetuned [Qwen/Qwen1.5-4B](https://huggingface.co/Qwen/Qwen1.5-4B) on a variety of CoT tasks, including Reasoning, Closed Book Question Answering, Ethics, and more.
- Datasets: curated from [kaist-ai/CoT-Collection](https://huggingface.co/datasets/kaist-ai/CoT-Collection), [euclaise/TinyCoT](https://huggingface.co/datasets/euclaise/TinyCoT), and a very small subset of [teknium/OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5).
- This marks the fourth model in this series. The experiment aims to improve Chain of Thought (CoT) capabilities in smaller language models.
- I may rerun the finetuning experiment (with a more balanced dataset) using an iterative rationale-bootstrapping procedure inspired by euclaise/Memphis-CoT-3B.
- Hyperparameters: AdamW with eps of 1e-8, cosine decay with 20% warmup, lr = 2e-5.
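The stated schedule (linear warmup over the first 20% of steps, then cosine decay) can be sketched as a small function. This is a minimal sketch: only `base_lr = 2e-5` and the 20% warmup fraction come from the list above, while `total_steps` and the function name are illustrative assumptions that depend on dataset size, batch size, and epochs.

```python
import math

def lr_at_step(step: int, total_steps: int,
               base_lr: float = 2e-5, warmup_frac: float = 0.20) -> float:
    """Learning rate at a given optimizer step.

    base_lr and warmup_frac match the README's hyperparameters;
    total_steps is an assumption left to the training run.
    """
    warmup_steps = max(1, int(total_steps * warmup_frac))
    if step < warmup_steps:
        # Linear warmup from 0 to base_lr over the first 20% of steps.
        return base_lr * step / warmup_steps
    # Cosine decay from base_lr down to 0 over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

In practice this pairing is commonly expressed with `torch.optim.AdamW(params, lr=2e-5, eps=1e-8)` and `transformers.get_cosine_schedule_with_warmup`, though the exact trainer setup used here is not stated in the card.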
## Benchmarks: