Enhance model card for VideoTG-R1 #1
by nielsr (HF Staff) · opened
This PR replaces the placeholder model card with detailed information for the VideoTG-R1 model.
It includes:
- Updated metadata with `pipeline_tag: video-text-to-text`, `library_name: transformers`, `datasets: yeliudev/VideoMind-Dataset`, and relevant tags such as `video-temporal-grounding`, `multimodal-llm`, `reinforcement-learning`, and `curriculum-learning`.
- A link to the paper (VideoTG-R1: Boosting Video Temporal Grounding via Curriculum Reinforcement Learning on Reflected Boundary Annotations).
- A link to the GitHub repository (https://github.com/ldong1111/VideoTG-R1).
- A descriptive abstract and methodology visualizations.
- Comprehensive usage, training, and evaluation instructions with bash code snippets taken directly from the official GitHub README.
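For reference, the metadata fields listed above would appear in the model card's YAML frontmatter roughly as follows. This is a sketch reconstructed from the fields named in this PR; the exact ordering and any additional fields in the merged card may differ:

```yaml
# Model card frontmatter (sketch based on the fields listed in this PR)
pipeline_tag: video-text-to-text
library_name: transformers
datasets:
  - yeliudev/VideoMind-Dataset
tags:
  - video-temporal-grounding
  - multimodal-llm
  - reinforcement-learning
  - curriculum-learning
```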
This update will significantly improve discoverability and usability for researchers interested in Video Temporal Grounding.
Lu9876 changed pull request status to merged