This model card provides *FastVLM-0.5B converted for LiteRT*, ready for on-device deployment.
FastVLM was introduced in [FastVLM: Efficient Vision Encoding for Vision Language Models](https://www.arxiv.org/abs/2412.13303) (CVPR 2025). The model demonstrates improved time-to-first-token (TTFT) without sacrificing performance, making it well suited to edge device deployment.
The model has also been converted for Qualcomm NPUs; see this [blog post](https://developers.googleblog.com/unlocking-peak-performance-on-qualcomm-npu-with-litert/) for more details.
*Disclaimer*: This LiteRT conversion of the model is licensed under the [Apple Machine Learning Research Model License Agreement](https://huggingface.co/apple/deeplabv3-mobilevit-small/blob/main/LICENSE). The model was converted and quantized from the PyTorch model weights into the LiteRT (TensorFlow Lite) format, with no retraining or further customization.
# How to Use
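LiteRT models ship as `.tflite` files, so as a minimal sketch, loading and invoking one with the LiteRT interpreter could look like the following. Note this is an assumption-laden illustration, not this repo's documented usage: the `ai-edge-litert` package, the dummy zero input, and the model file name in the example call are all assumptions.

```python
import numpy as np

def run_litert_model(model_path: str) -> np.ndarray:
    """Sketch: load a LiteRT (.tflite) model and run one inference
    on a zero-filled dummy input.

    The package name (`ai-edge-litert`) and the model path passed by the
    caller are assumptions for illustration, not documented repo usage.
    """
    # pip install ai-edge-litert
    from ai_edge_litert.interpreter import Interpreter

    interpreter = Interpreter(model_path=model_path)
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Build a dummy input matching the model's declared shape and dtype.
    dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
    interpreter.set_tensor(input_details[0]["index"], dummy)
    interpreter.invoke()
    return interpreter.get_tensor(output_details[0]["index"])

# Example call (file name is hypothetical):
# output = run_litert_model("fastvlm_0.5b.tflite")
```

A real deployment would replace the zero-filled dummy with a preprocessed image tensor and tokenized prompt matching the converted model's input signature.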