VCRBench: Exploring Long-form Causal Reasoning Capabilities of Large Video Language Models

Preprint. Under review.

Pritam Sarkar   Ali Etemad

Abstract

Despite recent advances in video understanding, the capabilities of Large Video Language Models (LVLMs) to perform video-based causal reasoning remain underexplored, largely due to the absence of relevant and dedicated benchmarks for evaluating causal reasoning in visually grounded and goal-driven settings. To fill this gap, we introduce a novel benchmark named Video-based long-form Causal Reasoning (VCRBench). We create VCRBench using procedural videos of simple everyday activities, where the steps are deliberately shuffled and each clip captures a key causal event, to test whether LVLMs can identify, reason about, and correctly sequence the events needed to accomplish a specific goal. Moreover, the benchmark is carefully designed to prevent LVLMs from exploiting linguistic shortcuts, as seen in multiple-choice or binary QA formats, while also avoiding the challenges associated with evaluating open-ended QA. Our evaluation of state-of-the-art LVLMs on VCRBench suggests that these models struggle with video-based long-form causal reasoning, primarily due to their difficulty in modeling long-range causal dependencies directly from visual observations. As a simple step toward enabling such capabilities, we propose Recognition-Reasoning Decomposition (RRD), a modular approach that breaks video-based causal reasoning into the two sub-tasks of video recognition and causal reasoning. Our experiments show that RRD significantly boosts accuracy on VCRBench, with gains of up to 25.2%. Finally, our thorough analysis reveals interesting insights, for instance that LVLMs primarily rely on language knowledge for complex video-based long-form causal reasoning tasks.


Our contributions
    👉 We introduce VCRBench, a novel benchmark designed to evaluate LVLMs on video-based long-form causal reasoning. To the best of our knowledge, this is the first video evaluation benchmark to study the multi-step causal reasoning capabilities of LVLMs. Our analysis of various state-of-the-art LVLMs reveals that current LVLMs struggle with long-form causal reasoning due to their inability to meaningfully connect a series of visual events toward a goal.

    👉 To improve the performance of open-source LVLMs on VCRBench, we introduce RRD, which decomposes video-based causal reasoning into two related sub-tasks: video recognition and causal reasoning. This simple modular approach allows LVLMs to focus on one type of task at a time, first recognition and then reasoning, which results in notable performance gains of up to 25.2% (see the sketch below).
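
The two-stage idea fits in a few lines. Below is a minimal sketch, assuming hypothetical callables `lvlm_generate` (clip + text prompt → text) and `llm_generate` (text prompt → text); these are placeholders for whatever model interface is available, not the paper's actual API.

```python
def recognition_reasoning_decomposition(clips, goal, lvlm_generate, llm_generate):
    """Two-stage RRD: recognize each clip first, then reason over text only."""
    # Stage 1 (video recognition): caption each shuffled clip independently,
    # so the model only has to describe what it sees.
    captions = [
        lvlm_generate(clip, "Describe the key event shown in this clip.")
        for clip in clips
    ]

    # Stage 2 (causal reasoning): order the captioned events toward the goal
    # using language alone, with no video in the context.
    events = "\n".join(f"Clip {i + 1}: {c}" for i, c in enumerate(captions))
    prompt = (
        f"Goal: {goal}\n{events}\n"
        "List the clips in the causally correct order to achieve the goal."
    )
    return llm_generate(prompt)
```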


We present an example of a video-based long-form causal reasoning task from VCRBench. The correct order is: Clip 1: Cut lemon into slices, Clip 5: Squeeze lemon into the pitcher, Clip 4: Pour lemon juice and water into the pitcher, Clip 3: Stir the lemonade mixture, Clip 2: Pour lemonade into a glass.
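
Written out as data, the same example looks like the snippet below; the field names (`goal`, `clips`, `gold_order`) and the goal wording are illustrative, not VCRBench's actual schema.

```python
# The lemonade example above as a task instance (illustrative schema).
example = {
    "goal": "Make lemonade and serve it in a glass",  # assumed wording
    "clips": {
        1: "Cut lemon into slices",
        2: "Pour lemonade into a glass",
        3: "Stir the lemonade mixture",
        4: "Pour lemon juice and water into the pitcher",
        5: "Squeeze lemon into the pitcher",
    },
    "gold_order": [1, 5, 4, 3, 2],  # the correct causal sequence
}

predicted = [1, 5, 4, 3, 2]  # a model's predicted ordering
print(predicted == example["gold_order"])  # True: counts as fully correct
```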

Leaderboard

| Models | # Frames | Acc (%) ↑ | Step Acc (%) ↑ |
| --- | --- | --- | --- |
| Random Guess | – | 7.8 | 24.1 |
| InternVL2.5-1B | 64 | 1.4 | 10.3 |
| InternVL2.5-2B | 64 | 6.3 | 16.2 |
| LongVU-3B | 1fps | 0.0 | 7.0 |
| InternVL2.5-4B | 64 | 1.6 | 9.5 |
| VideoChat2-7B | 16 | 0.3 | 5.8 |
| InternVL2.5-8B | 64 | 2.7 | 11.1 |
| LLaVA-NeXT-Video-7B | 64 | 0.0 | 17.4 |
| MiniCPM-o-V 2.6-7B | 64 | 2.5 | 11.0 |
| Qwen2.5-VL-Instruct-7B | 1fps | 7.1 | 20.9 |
| VideoLLaMA3-7B | 128 | 1.6 | 13.1 |
| LongVILA-7B | 128 | 0.3 | 1.1 |
| LongVU-7B | 1fps | 0.0 | 2.4 |
| NVILA-15B | 8 | 0.6 | 3.6 |
| InternVL2.5-26B | 64 | 2.7 | 13.7 |
| InternVL2.5-38B | 64 | 11.0 | 27.4 |
| LLaVA-NeXT-Video-72B | 32 | 5.2 | 18.6 |
| Qwen2.5-VL-Instruct-72B | 1fps | 29.0 | 44.0 |
| InternVL2.5-78B | 64 | 14.5 | 34.0 |
| GPT-4o (gpt-4o-2024-11-20) | 32 | 29.0 | 36.6 |
| Gemini-1.5-Pro (gemini-1.5-pro) | 1fps | 48.2 | 65.3 |
| Gemini-2.0-Flash-Thinking (gemini-2.0-flash-thinking-exp) | 1fps | 58.0 | 67.7 |
| Human | – | 96.4 | 98.3 |
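
For reference, below is a minimal sketch of how the two leaderboard metrics could be computed, under the assumption that Acc is exact-match accuracy over the full predicted ordering and Step Acc is position-wise step accuracy; these definitions are our reading of the columns, so see the paper for the exact formulation.

```python
def sequence_accuracy(preds, golds):
    """Acc: percent of samples whose full predicted order is exactly right."""
    correct = sum(p == g for p, g in zip(preds, golds))
    return 100.0 * correct / len(golds)

def step_accuracy(preds, golds):
    """Step Acc: percent of individual steps placed at the correct position."""
    hits = sum(pi == gi for p, g in zip(preds, golds) for pi, gi in zip(p, g))
    total = sum(len(g) for g in golds)
    return 100.0 * hits / total

# Usage with two toy samples (orders are lists of clip indices):
preds = [[1, 5, 4, 3, 2], [2, 1, 3]]
golds = [[1, 5, 4, 3, 2], [1, 2, 3]]
print(sequence_accuracy(preds, golds))  # 50.0 (1 of 2 fully correct)
print(step_accuracy(preds, golds))      # 75.0 ((5 + 1) of 8 steps correct)
```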

Read our paper for more insights!

Citation

Please cite our paper using the given BibTeX entry.




Contact me:

You may contact me directly at pritam.sarkar@queensu.ca or connect with me on LinkedIn.
I am on the job market for a full-time role as a researcher. If you find my experience a good fit, please reach out.