{ "base_model": "ByteDance-Seed/UI-TARS-1.5-7B", "tree": [ { "model_id": "ByteDance-Seed/UI-TARS-1.5-7B", "gated": "False", "card": "\n---\nlicense: apache-2.0\nlanguage:\n- en\npipeline_tag: image-text-to-text\ntags:\n- multimodal\n- gui\nlibrary_name: transformers\n---\n\n\n# UI-TARS-1.5 Model\n\nWe shared the latest progress of the UI-TARS-1.5 model in [our blog](https://seed-tars.com/1.5/), which excels in playing games and performing GUI tasks.\n\n## Introduction\n\nUI-TARS-1.5, an open-source multimodal agent built upon a powerful vision-language model. It is capable of effectively performing diverse tasks within virtual worlds.\n\nLeveraging the foundational architecture introduced in [our recent paper](https://arxiv.org/abs/2501.12326), UI-TARS-1.5 integrates advanced reasoning enabled by reinforcement learning. This allows the model to reason through its thoughts before taking action, significantly enhancing its performance and adaptability, particularly in inference-time scaling. Our new 1.5 version achieves state-of-the-art results across a variety of standard benchmarks, demonstrating strong reasoning capabilities and notable improvements over prior models.\n\n
Code: https://github.com/bytedance/UI-TARS\n\nApplication: https://github.com/bytedance/UI-TARS-desktop\n\n## Performance\n**Online Benchmark Evaluation**\n| Benchmark Type | Benchmark | UI-TARS-1.5 | OpenAI CUA | Claude 3.7 | Previous SOTA |\n|----------------|-----------|-------------|-------------|-------------|----------------------|\n| **Computer Use** | [OSWorld](https://arxiv.org/abs/2404.07972) (100 steps) | **42.5** | 36.4 | 28.0 | 38.1 (200 steps) |\n| | [Windows Agent Arena](https://arxiv.org/abs/2409.08264) (50 steps) | **42.1** | - | - | 29.8 |\n| **Browser Use** | [WebVoyager](https://arxiv.org/abs/2401.13919) | 84.8 | **87.0** | 84.1 | 87.0 |\n| | [Online-Mind2Web](https://arxiv.org/abs/2504.01382) | **75.8** | 71.0 | 62.9 | 71.0 |\n| **Phone Use** | [Android World](https://arxiv.org/abs/2405.14573) | **64.2** | - | - | 59.5 |\n\n**Grounding Capability Evaluation**\n| Benchmark | UI-TARS-1.5 | OpenAI CUA | Claude 3.7 | Previous SOTA |\n|-----------|-------------|------------|------------|----------------|\n| [ScreenSpot-V2](https://arxiv.org/pdf/2410.23218) | **94.2** | 87.9 | 87.6 | 91.6 |\n| [ScreenSpot-Pro](https://arxiv.org/pdf/2504.07981v1) | **61.6** | 23.4 | 27.7 | 43.6 |\n\n**Poki Games**\n\n| Model | [2048](https://poki.com/en/g/2048) | [cubinko](https://poki.com/en/g/cubinko) | [energy](https://poki.com/en/g/energy) | [free-the-key](https://poki.com/en/g/free-the-key) | [Gem-11](https://poki.com/en/g/gem-11) | [hex-frvr](https://poki.com/en/g/hex-frvr) | [Infinity-Loop](https://poki.com/en/g/infinity-loop) | [Maze:Path-of-Light](https://poki.com/en/g/maze-path-of-light) | [shapes](https://poki.com/en/g/shapes) | [snake-solver](https://poki.com/en/g/snake-solver) | [wood-blocks-3d](https://poki.com/en/g/wood-blocks-3d) | [yarn-untangle](https://poki.com/en/g/yarn-untangle) | [laser-maze-puzzle](https://poki.com/en/g/laser-maze-puzzle) | [tiles-master](https://poki.com/en/g/tiles-master) |\n|-------------|-----------|--------------|-------------|-------------------|-------------|---------------|---------------------|--------------------------|-------------|--------------------|----------------------|---------------------|------------------------|---------------------|\n| OpenAI CUA | 31.04 | 0.00 | 32.80 | 0.00 | 46.27 | 92.25 | 23.08 | 35.00 | 52.18 | 42.86 | 2.02 | 44.56 | 80.00 | 78.27 |\n| Claude 3.7 | 43.05 | 0.00 | 41.60 | 0.00 | 0.00 | 30.76 | 2.31 | 82.00 | 6.26 | 42.86 | 0.00 | 13.77 | 28.00 | 52.18 |\n| UI-TARS-1.5 | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |\n\n**Minecraft**\n\n| Task Type | Task Name | [VPT](https://openai.com/index/vpt/) | [DreamerV3](https://www.nature.com/articles/s41586-025-08744-2) | Previous SOTA | UI-TARS-1.5 w/o Thought | UI-TARS-1.5 w/ Thought |\n|-------------|---------------------|----------|----------------|--------------------|------------------|-----------------|\n| Mine Blocks | (oak_log) | 0.8 | 1.0 | 1.0 | 1.0 | 1.0 |\n| | (obsidian) | 0.0 | 0.0 | 0.0 | 0.2 | 0.3 |\n| | (white_bed) | 0.0 | 0.0 | 0.1 | 0.4 | 0.6 |\n| | **200 Tasks Avg.** | 0.06 | 0.03 | 0.32 | 0.35 | 0.42 |\n| Kill Mobs | (mooshroom) | 0.0 | 0.0 | 0.1 | 0.3 | 0.4 |\n| | (zombie) | 0.4 | 0.1 | 0.6 | 0.7 | 0.9 |\n| | (chicken) | 0.1 | 0.0 | 0.4 | 0.5 | 0.6 |\n| | **100 Tasks Avg.** | 0.04 | 0.03 | 0.18 | 0.25 | 0.31 |\n\n
## Model Scale Comparison\n\nThis table compares performance across different model scales of UI-TARS on the OSWorld and ScreenSpot-Pro benchmarks.\n\n| **Benchmark Type** | **Benchmark** | **UI-TARS-72B-DPO** | **UI-TARS-1.5-7B** | **UI-TARS-1.5** |\n|--------------------|------------------------------------|---------------------|--------------------|-----------------|\n| Computer Use | [OSWorld](https://arxiv.org/abs/2404.07972) | 24.6 | 27.5 | **42.5** |\n| GUI Grounding | [ScreenSpot-Pro](https://arxiv.org/pdf/2504.07981v1) | 38.1 | 49.6 | **61.6** |\n\nThe released UI-TARS-1.5-7B focuses primarily on enhancing general computer use capabilities and is not specifically optimized for game-based scenarios, where UI-TARS-1.5 still holds a significant advantage.\n\n## What's next\nWe are providing early research access to our top-performing UI-TARS-1.5 model to facilitate collaborative research. Interested researchers can contact us at TARS@bytedance.com.\n\n## Citation\nIf you find our paper and model useful in your research, feel free to cite us.\n\n```BibTeX\n@article{qin2025ui,\n title={UI-TARS: Pioneering Automated GUI Interaction with Native Agents},\n author={Qin, Yujia and Ye, Yining and Fang, Junjie and Wang, Haoming and Liang, Shihao and Tian, Shizuo and Zhang, Junda and Li, Jiahao and Li, Yunxin and Huang, Shijue and others},\n journal={arXiv preprint arXiv:2501.12326},\n year={2025}\n}\n```", "metadata": "\"N/A\"", "depth": 0, "children": [ "adriabama06/UI-TARS-1.5-7B-exl2" ], "children_count": 1, "adapters": [], "adapters_count": 0, "quantized": [ "adriabama06/UI-TARS-1.5-7B-Q4_K_M-GGUF", "adriabama06/UI-TARS-1.5-7B-GGUF", "mradermacher/UI-TARS-1.5-7B-GGUF", "mradermacher/UI-TARS-1.5-7B-i1-GGUF", "Lucy-in-the-Sky/UI-TARS-1.5-7B-Q4_K_M-GGUF", "Lucy-in-the-Sky/UI-TARS-1.5-7B-Q6_K-GGUF", "Lucy-in-the-Sky/UI-TARS-1.5-7B-Q8_0-GGUF", "rosethelocalfem/UI-TARS-1.5-7B-Q4_K_M-GGUF", "yujiepan/ui-tars-1.5-7B-GPTQ-W4A16g128" ], "quantized_count": 9, "merges": [], "merges_count": 0, "total_derivatives": 10, "spaces": [], "spaces_count": 0, "parents": [], "base_model": "ByteDance-Seed/UI-TARS-1.5-7B", "base_model_relation": "base" }, 
{ "model_id": "adriabama06/UI-TARS-1.5-7B-exl2", "gated": "unknown", "card": "---\nlicense: apache-2.0\nbase_model:\n- ByteDance-Seed/UI-TARS-1.5-7B\ntags:\n- qwen2_5_vl\n- multimodal\n- gui\n- conversational\nlanguage:\n- en\npipeline_tag: image-text-to-text\nlibrary_name: transformers\n---\n\nEXL2 quants of [UI-TARS-1.5-7B](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B)\n\n[4.00 bits per weight](https://huggingface.co/adriabama06/UI-TARS-1.5-7B-exl2/tree/4.0bpw) \n[6.00 bits per weight](https://huggingface.co/adriabama06/UI-TARS-1.5-7B-exl2/tree/6.0bpw)\n\n| Model | Size |\n|----------|------------------|\n| 4.00 bpw | 7.49 GB |\n| 6.00 bpw | 9.13 GB |
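\n\nA minimal download sketch, assuming you want a single bpw variant (each lives on its own branch, per the links above) and have `huggingface-cli` installed:\n\n```bash\n# fetch only the 4.0bpw branch of this repo; the local dir name is arbitrary\nhuggingface-cli download adriabama06/UI-TARS-1.5-7B-exl2 --revision 4.0bpw --local-dir ui-tars-1.5-7b-exl2-4.0bpw\n```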
", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/UI-TARS-1.5-7B" ], "base_model": null, "base_model_relation": null }, 
{ "model_id": "adriabama06/UI-TARS-1.5-7B-Q4_K_M-GGUF", "gated": "unknown", "card": "---\nlicense: apache-2.0\nlanguage:\n- en\npipeline_tag: image-text-to-text\ntags:\n- multimodal\n- gui\n- llama-cpp\n- gguf-my-repo\nlibrary_name: transformers\nbase_model: ByteDance-Seed/UI-TARS-1.5-7B\n---\n\n# adriabama06/UI-TARS-1.5-7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from [`ByteDance-Seed/UI-TARS-1.5-7B`](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B) for more details on the model.\n\n## Use with llama.cpp\nInstall llama.cpp through brew (works on Mac and Linux):\n\n```bash\nbrew install llama.cpp\n```\nInvoke the llama.cpp server or the CLI.\n\n### CLI:\n```bash\nllama-cli --hf-repo adriabama06/UI-TARS-1.5-7B-Q4_K_M-GGUF --hf-file ui-tars-1.5-7b-q4_k_m.gguf -p \"The meaning to life and the universe is\"\n```\n\n### Server:\n```bash\nllama-server --hf-repo adriabama06/UI-TARS-1.5-7B-Q4_K_M-GGUF --hf-file ui-tars-1.5-7b-q4_k_m.gguf -c 2048\n```\n\nNote: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.\n\nStep 1: Clone llama.cpp from GitHub.\n```\ngit clone https://github.com/ggerganov/llama.cpp\n```\n\nStep 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).\n```\ncd llama.cpp && LLAMA_CURL=1 make\n```\n\nStep 3: Run inference through the main binary.\n```\n./llama-cli --hf-repo adriabama06/UI-TARS-1.5-7B-Q4_K_M-GGUF --hf-file ui-tars-1.5-7b-q4_k_m.gguf -p \"The meaning to life and the universe is\"\n```\nor\n```\n./llama-server --hf-repo adriabama06/UI-TARS-1.5-7B-Q4_K_M-GGUF --hf-file ui-tars-1.5-7b-q4_k_m.gguf -c 2048\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/UI-TARS-1.5-7B" ], "base_model": null, "base_model_relation": null }, 
{ "model_id": "adriabama06/UI-TARS-1.5-7B-GGUF", "gated": "unknown", "card": "---\nlicense: apache-2.0\nlanguage:\n- en\npipeline_tag: image-text-to-text\ntags:\n- multimodal\n- gui\n- llama-cpp\nlibrary_name: transformers\nbase_model: ByteDance-Seed/UI-TARS-1.5-7B\n---\n\nGGUF quants (with MMPROJ) of [UI-TARS-1.5-7B](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B)\n\n| Model | Size |\n|----------|-----------|\n| [mmproj](https://huggingface.co/adriabama06/UI-TARS-1.5-7B-GGUF/blob/main/mmproj-ByteDance-Seed_UI-TARS-1.5-7B.gguf) | 1.32 GB |\n| [Q4_K_M](https://huggingface.co/adriabama06/UI-TARS-1.5-7B-GGUF/blob/main/ByteDance-Seed_UI-TARS-1.5-7B-Q4_K_M.gguf) | 4.57 GB |\n| [Q6_K](https://huggingface.co/adriabama06/UI-TARS-1.5-7B-GGUF/blob/main/ByteDance-Seed_UI-TARS-1.5-7B-Q6_K.gguf) | 6.11 GB |\n| [Q8_0](https://huggingface.co/adriabama06/UI-TARS-1.5-7B-GGUF/blob/main/ByteDance-Seed_UI-TARS-1.5-7B-Q8_0.gguf) | 7.91 GB |\n| [F16](https://huggingface.co/adriabama06/UI-TARS-1.5-7B-GGUF/blob/main/ByteDance-Seed_UI-TARS-1.5-7B-F16.gguf) | 14.88 GB |
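\n\nSince an mmproj file is included, here is a minimal sketch of serving the model with vision support, assuming a recent llama.cpp build whose `llama-server` accepts `--mmproj` (file names are the ones listed above):\n\n```bash\n# download one quant plus the vision projector\nhuggingface-cli download adriabama06/UI-TARS-1.5-7B-GGUF ByteDance-Seed_UI-TARS-1.5-7B-Q4_K_M.gguf mmproj-ByteDance-Seed_UI-TARS-1.5-7B.gguf --local-dir .\n\n# serve with the projector attached so image inputs work\nllama-server -m ByteDance-Seed_UI-TARS-1.5-7B-Q4_K_M.gguf --mmproj mmproj-ByteDance-Seed_UI-TARS-1.5-7B.gguf -c 4096\n```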
\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/UI-TARS-1.5-7B" ], "base_model": null, "base_model_relation": null }, 
{ "model_id": "mradermacher/UI-TARS-1.5-7B-GGUF", "gated": "False", "card": "---\nbase_model: ByteDance-Seed/UI-TARS-1.5-7B\nlanguage:\n- en\nlibrary_name: transformers\nquantized_by: mradermacher\n---\n## About\n\nstatic quants of https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B\n\nweighted/imatrix quants are available at https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF\n\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-GGUF/resolve/main/UI-TARS-1.5-7B.Q2_K.gguf) | Q2_K | 3.1 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-GGUF/resolve/main/UI-TARS-1.5-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-GGUF/resolve/main/UI-TARS-1.5-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-GGUF/resolve/main/UI-TARS-1.5-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-GGUF/resolve/main/UI-TARS-1.5-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-GGUF/resolve/main/UI-TARS-1.5-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-GGUF/resolve/main/UI-TARS-1.5-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-GGUF/resolve/main/UI-TARS-1.5-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-GGUF/resolve/main/UI-TARS-1.5-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-GGUF/resolve/main/UI-TARS-1.5-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-GGUF/resolve/main/UI-TARS-1.5-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-GGUF/resolve/main/UI-TARS-1.5-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
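\n\nAs a quick start, a minimal sketch of running one of the quants above straight from this repo (assuming a llama.cpp build with `--hf-repo` support; the prompt is just a placeholder):\n\n```bash\nllama-cli --hf-repo mradermacher/UI-TARS-1.5-7B-GGUF --hf-file UI-TARS-1.5-7B.Q4_K_M.gguf -p \"You are a GUI agent.\"\n```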
\n\nikawrakow has published a handy graph comparing some lower-quality quant types (lower is better), and here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/UI-TARS-1.5-7B" ], "base_model": "ByteDance-Seed/UI-TARS-1.5-7B", "base_model_relation": "quantized" }, 
{ "model_id": "mradermacher/UI-TARS-1.5-7B-i1-GGUF", "gated": "False", "card": "---\nbase_model: ByteDance-Seed/UI-TARS-1.5-7B\nlanguage:\n- en\nlibrary_name: transformers\nquantized_by: mradermacher\n---\n## About\n\nweighted/imatrix quants of https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B\n\nstatic quants are available at https://huggingface.co/mradermacher/UI-TARS-1.5-7B-GGUF\n\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |\n\nikawrakow has published a handy graph comparing some lower-quality quant types (lower is better), and here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n
", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/UI-TARS-1.5-7B" ], "base_model": "ByteDance-Seed/UI-TARS-1.5-7B", "base_model_relation": "quantized" }, 
{ "model_id": "Lucy-in-the-Sky/UI-TARS-1.5-7B-Q4_K_M-GGUF", "gated": "False", "card": "---\nbase_model: ByteDance-Seed/UI-TARS-1.5-7B\nlanguage:\n- en\nlibrary_name: transformers\nlicense: apache-2.0\npipeline_tag: image-text-to-text\ntags:\n- multimodal\n- gui\n- llama-cpp\n- gguf-my-repo\n---\n\n# Lucy-in-the-Sky/UI-TARS-1.5-7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from [`ByteDance-Seed/UI-TARS-1.5-7B`](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B) for more details on the model.\n\n## Use with llama.cpp\nInstall llama.cpp through brew (works on Mac and Linux):\n\n```bash\nbrew install llama.cpp\n```\nInvoke the llama.cpp server or the CLI.\n\n### CLI:\n```bash\nllama-cli --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q4_K_M-GGUF --hf-file ui-tars-1.5-7b-q4_k_m.gguf -p \"The meaning to life and the universe is\"\n```\n\n### Server:\n```bash\nllama-server --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q4_K_M-GGUF --hf-file ui-tars-1.5-7b-q4_k_m.gguf -c 2048\n```\n\nNote: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.\n\nStep 1: Clone llama.cpp from GitHub.\n```\ngit clone https://github.com/ggerganov/llama.cpp\n```\n\nStep 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).\n```\ncd llama.cpp && LLAMA_CURL=1 make\n```\n\nStep 3: Run inference through the main binary.\n```\n./llama-cli --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q4_K_M-GGUF --hf-file ui-tars-1.5-7b-q4_k_m.gguf -p \"The meaning to life and the universe is\"\n```\nor\n```\n./llama-server --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q4_K_M-GGUF --hf-file ui-tars-1.5-7b-q4_k_m.gguf -c 2048\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/UI-TARS-1.5-7B" ], "base_model": "ByteDance-Seed/UI-TARS-1.5-7B", "base_model_relation": "quantized" }, 
{ "model_id": "Lucy-in-the-Sky/UI-TARS-1.5-7B-Q6_K-GGUF", "gated": "False", "card": "---\nbase_model: ByteDance-Seed/UI-TARS-1.5-7B\nlanguage:\n- en\nlibrary_name: transformers\nlicense: apache-2.0\npipeline_tag: image-text-to-text\ntags:\n- multimodal\n- gui\n- llama-cpp\n- gguf-my-repo\n---\n\n# Lucy-in-the-Sky/UI-TARS-1.5-7B-Q6_K-GGUF\n
This model was converted to GGUF format from [`ByteDance-Seed/UI-TARS-1.5-7B`](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B) for more details on the model.\n\n## Use with llama.cpp\nInstall llama.cpp through brew (works on Mac and Linux):\n\n```bash\nbrew install llama.cpp\n```\nInvoke the llama.cpp server or the CLI.\n\n### CLI:\n```bash\nllama-cli --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q6_K-GGUF --hf-file ui-tars-1.5-7b-q6_k.gguf -p \"The meaning to life and the universe is\"\n```\n\n### Server:\n```bash\nllama-server --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q6_K-GGUF --hf-file ui-tars-1.5-7b-q6_k.gguf -c 2048\n```\n\nNote: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.\n\nStep 1: Clone llama.cpp from GitHub.\n```\ngit clone https://github.com/ggerganov/llama.cpp\n```\n\nStep 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).\n```\ncd llama.cpp && LLAMA_CURL=1 make\n```\n\nStep 3: Run inference through the main binary.\n```\n./llama-cli --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q6_K-GGUF --hf-file ui-tars-1.5-7b-q6_k.gguf -p \"The meaning to life and the universe is\"\n```\nor\n```\n./llama-server --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q6_K-GGUF --hf-file ui-tars-1.5-7b-q6_k.gguf -c 2048\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/UI-TARS-1.5-7B" ], "base_model": "ByteDance-Seed/UI-TARS-1.5-7B", "base_model_relation": "quantized" }, 
{ "model_id": "Lucy-in-the-Sky/UI-TARS-1.5-7B-Q8_0-GGUF", "gated": "False", "card": "---\nbase_model: ByteDance-Seed/UI-TARS-1.5-7B\nlanguage:\n- en\nlibrary_name: transformers\nlicense: apache-2.0\npipeline_tag: image-text-to-text\ntags:\n- multimodal\n- gui\n- llama-cpp\n- gguf-my-repo\n---\n\n# Lucy-in-the-Sky/UI-TARS-1.5-7B-Q8_0-GGUF\nThis model was converted to GGUF format from [`ByteDance-Seed/UI-TARS-1.5-7B`](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B) for more details on the model.\n\n## Use with llama.cpp\nInstall llama.cpp through brew (works on Mac and Linux):\n\n```bash\nbrew install llama.cpp\n```\nInvoke the llama.cpp server or the CLI.\n\n### CLI:\n```bash\nllama-cli --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q8_0-GGUF --hf-file ui-tars-1.5-7b-q8_0.gguf -p \"The meaning to life and the universe is\"\n```\n\n### Server:\n```bash\nllama-server --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q8_0-GGUF --hf-file ui-tars-1.5-7b-q8_0.gguf -c 2048\n```\n\nNote: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.\n\nStep 1: Clone llama.cpp from GitHub.\n```\ngit clone https://github.com/ggerganov/llama.cpp\n```\n\nStep 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).\n```\ncd llama.cpp && LLAMA_CURL=1 make\n```\n\n
Step 3: Run inference through the main binary.\n```\n./llama-cli --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q8_0-GGUF --hf-file ui-tars-1.5-7b-q8_0.gguf -p \"The meaning to life and the universe is\"\n```\nor\n```\n./llama-server --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q8_0-GGUF --hf-file ui-tars-1.5-7b-q8_0.gguf -c 2048\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/UI-TARS-1.5-7B" ], "base_model": "ByteDance-Seed/UI-TARS-1.5-7B", "base_model_relation": "quantized" }, 
{ "model_id": "rosethelocalfem/UI-TARS-1.5-7B-Q4_K_M-GGUF", "gated": "unknown", "card": "---\nlicense: apache-2.0\nlanguage:\n- en\npipeline_tag: image-text-to-text\ntags:\n- multimodal\n- gui\n- llama-cpp\n- gguf-my-repo\nlibrary_name: transformers\nbase_model: ByteDance-Seed/UI-TARS-1.5-7B\n---\n\n# rosethelocalfem/UI-TARS-1.5-7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from [`ByteDance-Seed/UI-TARS-1.5-7B`](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B) for more details on the model.\n\n## Use with llama.cpp\nInstall llama.cpp through brew (works on Mac and Linux):\n\n```bash\nbrew install llama.cpp\n```\nInvoke the llama.cpp server or the CLI.\n\n### CLI:\n```bash\nllama-cli --hf-repo rosethelocalfem/UI-TARS-1.5-7B-Q4_K_M-GGUF --hf-file ui-tars-1.5-7b-q4_k_m.gguf -p \"The meaning to life and the universe is\"\n```\n\n### Server:\n```bash\nllama-server --hf-repo rosethelocalfem/UI-TARS-1.5-7B-Q4_K_M-GGUF --hf-file ui-tars-1.5-7b-q4_k_m.gguf -c 2048\n```\n\nNote: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.\n\nStep 1: Clone llama.cpp from GitHub.\n```\ngit clone https://github.com/ggerganov/llama.cpp\n```\n\nStep 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).\n```\ncd llama.cpp && LLAMA_CURL=1 make\n```\n\nStep 3: Run inference through the main binary.\n```\n./llama-cli --hf-repo rosethelocalfem/UI-TARS-1.5-7B-Q4_K_M-GGUF --hf-file ui-tars-1.5-7b-q4_k_m.gguf -p \"The meaning to life and the universe is\"\n```\nor\n```\n./llama-server --hf-repo rosethelocalfem/UI-TARS-1.5-7B-Q4_K_M-GGUF --hf-file ui-tars-1.5-7b-q4_k_m.gguf -c 2048\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/UI-TARS-1.5-7B" ], "base_model": null, "base_model_relation": null }, 
{ "model_id": "yujiepan/ui-tars-1.5-7B-GPTQ-W4A16g128", "gated": "unknown", "card": "---\nbase_model: ByteDance-Seed/UI-TARS-1.5-7B\npipeline_tag: image-text-to-text\n---\n\n## Code\n\nSee [run_compression.py](https://huggingface.co/yujiepan/ui-tars-1.5-7B-GPTQ-W4A16g128/blob/main/run_compression.py) for the script used to produce this W4A16 (group size 128) GPTQ checkpoint.
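\n\nA minimal serving sketch (assumption: vLLM can load this GPTQ checkpoint directly; the flags are illustrative, not taken from this repo):\n\n```bash\n# expose an OpenAI-compatible endpoint for the quantized model\nvllm serve yujiepan/ui-tars-1.5-7B-GPTQ-W4A16g128 --max-model-len 8192\n```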
0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ByteDance-Seed/UI-TARS-1.5-7B" ], "base_model": null, "base_model_relation": null } ] }