WildWorld: A Large-Scale Dataset for Dynamic World Modeling with Actions and Explicit State toward Generative ARPG
Abstract
WildWorld is a large-scale dataset for action-conditioned world modeling that provides explicit state annotations from a photorealistic game, enabling better understanding of latent-state dynamics and long-horizon consistency.
Dynamical systems theory and reinforcement learning view world evolution as latent-state dynamics driven by actions, with visual observations providing only partial information about the state. Recent video world models attempt to learn these action-conditioned dynamics from data. However, existing datasets rarely meet this requirement: they typically lack diverse and semantically meaningful action spaces, and actions are tied directly to visual observations rather than mediated by underlying states. As a result, actions are often entangled with pixel-level changes, making it difficult for models to learn structured world dynamics and maintain consistent evolution over long horizons. In this paper, we propose WildWorld, a large-scale action-conditioned world modeling dataset with explicit state annotations, automatically collected from a photorealistic AAA action role-playing game (Monster Hunter: Wilds). WildWorld contains over 108 million frames and more than 450 actions, including movement, attacks, and skill casting, together with synchronized per-frame annotations of character skeletons, world states, camera poses, and depth maps. We further derive WildBench to evaluate models on Action Following and State Alignment. Extensive experiments reveal persistent challenges in modeling semantically rich actions and maintaining long-horizon state consistency, highlighting the need for state-aware video generation. The project page is https://shandaai.github.io/wildworld-project/.
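To make the "synchronized per-frame annotations" concrete, here is a minimal sketch of what one such record might look like. All field names, the joint count, and the pose convention are illustrative assumptions, not the dataset's actual schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FrameAnnotation:
    """Hypothetical per-frame WildWorld record (field names are assumptions)."""
    frame_id: int
    action_id: int                       # one of the 450+ action labels (move, attack, skill cast, ...)
    skeleton: List[List[float]]          # per-joint 3D character skeleton, e.g. [[x, y, z], ...]
    camera_pose: List[float]             # e.g. 3 translation + 4 quaternion components
    depth_path: str                      # path to the aligned depth map for this frame
    world_state: dict = field(default_factory=dict)  # explicit state (weather, HP, ...)

def make_dummy_frame(i: int) -> FrameAnnotation:
    """Build a synthetic frame to illustrate how annotations stay synchronized."""
    return FrameAnnotation(
        frame_id=i,
        action_id=42,                              # arbitrary illustrative label
        skeleton=[[0.0, 1.0, 0.0]] * 24,           # assuming a 24-joint skeleton
        camera_pose=[0.0, 1.6, -3.0, 0.0, 0.0, 0.0, 1.0],
        depth_path=f"depth/{i:08d}.png",
        world_state={"weather": "clear", "hp": 100},
    )

# A short clip is then a list of such records, one per frame.
clip = [make_dummy_frame(i) for i in range(8)]
```

The key point the schema illustrates is that the action label and the explicit world state are stored alongside each frame, so a world model can condition transitions on state rather than on pixels alone.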
Community
the standout detail for me is how WildWorld pairs a huge action space with explicit per-frame state annotations (world state, skeletons, camera, depth) and conditions the dynamics on both action and state rather than pixels alone. that separation is exactly the trick to tackle long-horizon drift, since you can blame frame-level changes on action-conditioned latent transitions instead of chasing pixel-level changes. i'd still worry about how robust those skeleton and state annotations are under heavy occlusion and fast actions, because a small annotation slip could cascade through the learned dynamics. the arxivlens breakdown helped me parse the method details and gives a handy map of where the state-conditioning plugs in, btw the summary there covers this part well: https://arxivlens.com/PaperView/Details/wildworld-a-large-scale-dataset-for-dynamic-world-modeling-with-actions-and-explicit-state-toward-generative-arpg-3895-9360c849. one quick question: did you run an ablation removing just the explicit state supervision to quantify its contribution to action following and state alignment?