Update README.md
README.md CHANGED

@@ -35,6 +35,19 @@ pip install transformers
See [https://github.com/dpfried/incoder](https://github.com/dpfried/incoder) for example code.

+### Model
+
+Load with
+
+`model = AutoModelForCausalLM.from_pretrained("facebook/incoder-1B")`
+
+### Tokenizer
+
+`tokenizer = AutoTokenizer.from_pretrained("facebook/incoder-1B")`
+
+Note: the incoder-1B and incoder-6B tokenizers are identical, so `facebook/incoder-6B` could also be used.
+
+When calling `tokenizer.decode`, it's important to pass `clean_up_tokenization_spaces=False` to avoid removing spaces after punctuation:
+
+`tokenizer.decode(tokenizer.encode("from ."), clean_up_tokenization_spaces=False)`
+
## License

CC-BY-NC 4.0
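
For reference, a minimal sketch putting the added snippets together. The checkpoint and tokenizer ids come from the card above; the prompt and sampling settings are illustrative assumptions, and the incoder repository linked above remains the reference for full generation and infilling examples.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Ids from the card; the 1B and 6B tokenizers are identical,
# so "facebook/incoder-6B" would also work for the tokenizer.
tokenizer = AutoTokenizer.from_pretrained("facebook/incoder-1B")
model = AutoModelForCausalLM.from_pretrained("facebook/incoder-1B")

# Round-trip the card's example: clean_up_tokenization_spaces=False keeps the
# space after the period that the default cleanup would otherwise remove.
ids = tokenizer.encode("from .")
print(tokenizer.decode(ids, clean_up_tokenization_spaces=False))

# Illustrative left-to-right generation (sampling settings are assumptions,
# not taken from this card; see the linked repo for infilling examples).
inputs = tokenizer("def count_words(filename):", return_tensors="pt")
outputs = model.generate(**inputs, do_sample=True, top_p=0.95, temperature=0.2, max_length=64)
print(tokenizer.decode(outputs[0], clean_up_tokenization_spaces=False))
```

Decoded output may include the model's special tokens; passing `skip_special_tokens=True` to `decode` drops them if they are not wanted.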