Add metadata and links to paper, code, and project page
This PR improves the model card by adding:
- Metadata for `pipeline_tag` (image-text-to-text), `library_name` (transformers), and `license` (apache-2.0).
- Direct links to the research paper, the official GitHub repository, and the project's technical blog.
- Refined formatting for better readability on the Hugging Face Hub.
README.md (changed)
---
license: apache-2.0
library_name: transformers
pipeline_tag: image-text-to-text
---

# WebWatcher: Breaking New Frontier of Vision-Language Deep Research Agent

<p align="center">

![logo](./assets/webwatcher.png)

[**Paper**](https://huggingface.co/papers/2508.05748) | [**Code**](https://github.com/Alibaba-NLP/WebAgent) | [**Project Page**](https://tongyi-agent.github.io/blog/introducing-tongyi-deep-research/)

## 🥇 Introduction
In this paper, we introduce **WebWatcher**, a multimodal agent for deep research that possesses enhanced visual-language reasoning capabilities. Our work presents a unified framework that combines complex vision-language reasoning with multi-tool interaction.
Key features of our approach include:

<p align="center">
<img src="./assets/distribution_level.png" alt="logo" width="80%"/>
</p>

- **BrowseComp-VL Benchmark**: We propose a new benchmark, BrowseComp-VL, to evaluate the capabilities of multimodal agents. This challenging dataset is designed for in-depth multimodal reasoning and strategic planning, mirroring the complexity of BrowseComp but extending it into the visual domain. It emphasizes tasks that require both visual perception and advanced information-gathering abilities.

<p align="center">
<img src="./assets/data_pipelines.png" alt="logo" width="80%"/>
</p>

- **Automated Trajectory Generation**: To provide robust tool-use capabilities, we developed an automated pipeline to generate high-quality, multi-step reasoning trajectories. These trajectories, which are grounded in actual tool-use behavior and reflect procedural decision-making, are used for efficient cold-start training and further optimization via reinforcement learning. The agent is equipped with several tools, including Web Image Search, Web Text Search, Webpage Visit, Code Interpreter, and an internal OCR tool.

- **Superior Performance**: WebWatcher significantly outperforms proprietary baselines, RAG workflows, and other open-source agents across four challenging VQA benchmarks: Humanity's Last Exam (HLE)-VL, BrowseComp-VL, LiveVQA, and MMSearch. The WebWatcher-32B model, in particular, achieves an average score of 18.2% on HLE, surpassing the GPT-4o-based OmniSearch baseline. It also achieves top-tier performance on LiveVQA (58.7%) and MMSearch (55.3%), demonstrating stable and superior results on demanding, real-world visual search benchmarks.

## 🚀 Performance Highlights
<p align="center">
<img src="./assets/webwatcher_performance_general.png" alt="logo" width="80%"/>
</p>

1. **Complex Reasoning (HLE-VL)**: On Humanity's Last Exam (HLE-VL), a benchmark for multi-step complex reasoning, WebWatcher achieved a commanding lead with a Pass@1 score of 13.6%, substantially outperforming representative models including GPT-4o (9.8%), Gemini2.5-flash (9.2%), and Qwen2.5-VL-72B (8.6%).

2. **Information Retrieval (MMSearch)**: In the MMSearch evaluation, WebWatcher demonstrated exceptional retrieval accuracy with a Pass@1 score of 55.3%, significantly surpassing Gemini2.5-flash (43.9%) and GPT-4o (24.1%), showcasing superior precision in retrieval tasks and robust information aggregation capabilities in complex scenarios.

3. **Knowledge-Retrieval Integration (LiveVQA)**: On the LiveVQA benchmark, WebWatcher achieved a Pass@1 score of 58.7%, outperforming Gemini2.5-flash (41.3%), Qwen2.5-VL-72B (35.7%), and GPT-4o (34.0%).

4. **Information Optimization and Aggregation (BrowseComp-VL)**: On BrowseComp-VL, the most comprehensively challenging benchmark, WebWatcher dominated with an average score of 27.0%, more than doubling the performance of mainstream models including GPT-4o (13.4%), Gemini2.5-flash (13.0%), and Claude-3.7 (11.2%).
## 🔧 Quick Start
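A minimal setup sketch, assuming the `infer/` evaluation scripts referenced below ship with the GitHub repository linked above (check its layout for the exact WebWatcher subdirectory):

```bash
# Assumed starting point: the infer/ scripts referenced below are expected to live in this repository.
git clone https://github.com/Alibaba-NLP/WebAgent.git
cd WebAgent  # default clone directory; see the repository layout for the WebWatcher code
```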
Run `infer/scripts_eval/scripts/eval.sh` with the following required parameters (an illustrative invocation sketch follows the list):

- **benchmark**: Name of the dataset to test. Available options: `'hle'`, `'gaia'`, `'livevqa'`, `'mmsearch'`, `'simplevqa'`, `'bc_vl_v1'`, `'bc_vl_v2'`. These test sets should be pre-stored in `infer/vl_search_r1/eval_data` with naming convention like `hle.jsonl`.
- **EXPERIMENT_NAME**: Name for this experiment (user-defined)
- **MODEL_PATH**: Path to the trained model
- **DASHSCOPE_API_KEY**: GPT API key
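For concreteness, a purely illustrative invocation sketch: it assumes the four values above are supplied as environment variables, but the actual `eval.sh` may expect them as arguments or as variables edited inside the script, so verify against the script before running. The run name, model path, and API key below are placeholders.

```bash
# Illustrative only -- check eval.sh for how these values are actually consumed.
export benchmark="hle"                        # dataset name; expects infer/vl_search_r1/eval_data/hle.jsonl
export EXPERIMENT_NAME="webwatcher_hle_run1"  # user-defined run name (placeholder)
export MODEL_PATH="/path/to/WebWatcher-32B"   # path to the trained model (placeholder)
export DASHSCOPE_API_KEY="sk-..."             # GPT API key used for judging (placeholder)

bash infer/scripts_eval/scripts/eval.sh
```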
If this work is helpful, please kindly cite as:

```bibtex
@article{geng2025webwatcher,
  title={WebWatcher: Breaking New Frontiers of Vision-Language Deep Research Agent},
  author={Geng, Xinyu and Xia, Peng and Zhang, Zhen and Wang, Xinyu and Wang, Qiuchen and Ding, Ruixue and Wang, Chenxi and Wu, Jialong and Zhao, Yida and Li, Kuan and others},
  journal={arXiv preprint arXiv:2508.05748},
  year={2025}
}
```