MotionSight: Boosting Fine-Grained Motion Understanding in Multimodal LLMs

Yipeng Du*, Tiehan Fan*, Kepan Nan♡♠, Rui Xie♡♠, Penghao Zhou, Xiang Li, Jian Yang, Zhenheng Yang, Ying Tai
Nanjing University, ByteDance, Nankai University
* Equal contribution.    indicates corresponding author.

TL;DR: MotionSight is a zero-shot method, paired with the MotionVid-QA dataset, for fine-grained video motion understanding with MLLMs.

Abstract

Despite advancements in Multimodal Large Language Models (MLLMs), their proficiency in fine-grained video motion understanding remains critically limited. They often lack mechanisms for inter-frame differencing and tend to average or ignore subtle visual cues. Furthermore, while visual prompting has shown promise for static images, its application to the temporal complexities of video, particularly for fine-grained motion understanding, remains largely unexplored. We investigate whether this latent capability can be unlocked to boost MLLMs' motion perception, and whether distinct visual signatures can be tailored to decouple object and camera motion cues. In this study, we introduce MotionSight, a novel zero-shot method that pioneers object-centric visual spotlight and motion blur as visual prompts to effectively improve fine-grained motion understanding without training. To convert this into valuable data assets, we curated MotionVid-QA, the first large-scale dataset for fine-grained video motion understanding, with hierarchical annotations including SFT and preference data, Θ(40K) video clips and Θ(87K) QAs. Experiments show that MotionSight achieves state-of-the-art open-source performance and is competitive with commercial models. In particular, for fine-grained motion understanding we present both a novel zero-shot technique and a large-scale, high-quality dataset. All code and annotations will be publicly available.
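As a concrete (and purely illustrative) intuition for the two visual prompts named above, the sketch below dims everything outside an object's bounding box (object-centric visual spotlight) and averages a short window of frames so that global motion shows up as streaking (motion blur). This is a minimal NumPy sketch under assumed conventions (uint8 frames, (x0, y0, x1, y1) boxes, illustrative dimming and window parameters), not the released MotionSight implementation.

import numpy as np

def spotlight(frame, box, dim=0.35):
    # Object-centric visual spotlight: dim the surrounding context and keep
    # the boxed object at full brightness. `box` is an assumed (x0, y0, x1, y1)
    # pixel box; `dim` is an illustrative darkening factor.
    x0, y0, x1, y1 = box
    out = frame.astype(np.float32) * dim
    out[y0:y1, x0:x1] = frame[y0:y1, x0:x1]  # restore the object region
    return np.clip(out, 0, 255).astype(frame.dtype)

def motion_blur(frames):
    # Motion-blur prompt: averaging consecutive frames turns camera motion
    # into visible streaking while static content stays sharp.
    stack = np.stack([f.astype(np.float32) for f in frames], axis=0)
    return np.clip(stack.mean(axis=0), 0, 255).astype(frames[0].dtype)

The prompted frames would then be fed to the MLLM together with the original clip and the motion question; because the prompts operate purely on pixels, no fine-tuning of the model is required, which is what makes the approach zero-shot.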

What are the Challenges?

  • Temporal complexity: Unlike static images, videos have a temporal dimension with continuous change, making it difficult for models to capture and analyze fine-grained motion cues from both object and camera movement.
  • Lack of explicit motion modeling: Current MLLMs often treat spatial regions with uniform importance and lack mechanisms to discern subtle inter-frame variations, leading to suboptimal fine-grained motion understanding (see the sketch after this list).
  • Inadequacy of image-based prompting: Visual prompting methods that work for images (e.g., background blur) do not transfer well to video and can even degrade performance by discarding contextual information.
  • Extraction and utilization of implicit knowledge: Even when MLLMs exhibit fine-grained motion understanding, it is challenging to extract and structure this knowledge explicitly for downstream tasks and dataset construction.
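To make the "lack of explicit motion modeling" point concrete, even a naive inter-frame difference highlights where motion happens, exactly the kind of cue that uniform spatial processing tends to wash out. The helper below is an illustrative sketch (grayscale frames and the threshold value are assumptions), not part of the MotionSight pipeline.

import numpy as np

def motion_saliency(prev, curr, thresh=12.0):
    # Crude motion map: absolute per-pixel difference between consecutive
    # grayscale frames, thresholded into a binary "something moved here" mask.
    diff = np.abs(curr.astype(np.float32) - prev.astype(np.float32))
    return (diff > thresh).astype(np.uint8)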

Fine-grained motion understanding is challenging


Illustration of the challenges in fine-grained motion understanding.


Our dedicated pipeline


Our dedicated MotionSight pipeline for fine-grained motion understanding.
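At a very high level, a visual-prompting pipeline of this kind can be wired as follows: decide whether the question targets object or camera motion, apply the matching prompt, and query the MLLM with both the original and the prompted frames. The routine below is a hypothetical outline reusing the spotlight/motion_blur helpers sketched earlier; detect_object, query_mllm, and the keyword-based routing are placeholders, not the authors' code.

def answer_motion_question(frames, question, detect_object, query_mllm):
    # Hypothetical glue code, not the released MotionSight pipeline.
    if "camera" in question.lower():
        # Camera-motion questions: make global motion explicit with motion blur.
        prompted = [motion_blur(frames)]
    else:
        # Object-motion questions: spotlight the referenced object in each frame.
        boxes = [detect_object(frame, question) for frame in frames]
        prompted = [spotlight(frame, box) for frame, box in zip(frames, boxes)]
    # The MLLM sees the original clip plus the prompted frames (zero-shot).
    return query_mllm(frames + prompted, question)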

Data Statistics


Statistical analysis of the MotionVid-QA dataset.

More Examples


Fine-grained motion understanding by MotionSight.

Main Results Compared with SoTA Multimodal LLMs


Quantitative results on MotionBench.


Quantitative results on FAVOR-Bench.

BibTeX

@misc{du2025motionsightboostingfinegrainedmotion,
      title={MotionSight: Boosting Fine-Grained Motion Understanding in Multimodal LLMs}, 
      author={Yipeng Du and Tiehan Fan and Kepan Nan and Rui Xie and Penghao Zhou and Xiang Li and Jian Yang and Zhenheng Yang and Ying Tai},
      year={2025},
      eprint={2506.01674},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2506.01674}, 
}