Maintaining consistent characters, props, and environments across multiple shots is a central challenge in narrative video generation. Existing models can produce high-quality short clips but often fail to preserve entity identity and appearance when scenes change or when entities reappear after long temporal gaps. We present VideoMemory, an entity-centric framework that integrates narrative planning with visual generation through a Dynamic Memory Bank. Given a structured script, a multi-agent system decomposes the narrative into shots, retrieves entity representations from memory, and synthesizes keyframes and videos conditioned on these retrieved states. The Dynamic Memory Bank stores explicit visual and semantic descriptors for characters, props, and backgrounds, and is updated after each shot to reflect story-driven changes while preserving identity. This retrieval–update mechanism enables consistent portrayal of entities across distant shots and supports coherent long-form generation. To evaluate this setting, we construct a 54-case multi-shot consistency benchmark covering character-, prop-, and background-persistent scenarios. Extensive experiments show that VideoMemory achieves strong entity-level coherence and high perceptual quality across diverse narrative sequences.
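To make the retrieval–update mechanism concrete, here is a minimal sketch of what a Dynamic Memory Bank could look like as a data structure: one record per entity holding semantic and visual descriptors, with retrieval before a shot and an identity-preserving update afterwards. The class and method names (`EntityRecord`, `retrieve`, `create`, `update`) are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class EntityRecord:
    """Illustrative record for one entity (character, prop, or background)."""
    name: str
    category: str                      # "character" | "prop" | "background"
    semantic_desc: str                 # text descriptor (appearance, attributes)
    reference_images: List[str] = field(default_factory=list)  # paths or embeddings
    last_shot: Optional[int] = None    # most recent shot the entity appeared in


class DynamicMemoryBank:
    """Hypothetical sketch of the retrieval-update mechanism described above."""

    def __init__(self) -> None:
        self._entities: Dict[str, EntityRecord] = {}

    def retrieve(self, name: str) -> Optional[EntityRecord]:
        """Return the stored state of an entity, if it has appeared before."""
        return self._entities.get(name)

    def create(self, record: EntityRecord) -> EntityRecord:
        """Register a new entity the first time the script introduces it."""
        self._entities[record.name] = record
        return record

    def update(self, name: str, shot_idx: int,
               new_desc: Optional[str] = None,
               new_reference: Optional[str] = None) -> None:
        """After a shot is generated, fold story-driven changes back into memory
        while keeping the entity's identity (name, category) fixed."""
        record = self._entities[name]
        if new_desc is not None:
            record.semantic_desc = new_desc
        if new_reference is not None:
            record.reference_images.append(new_reference)
        record.last_shot = shot_idx
```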
Overview of the proposed VideoMemory framework. Starting from a script synopsis, our system plans shot-level descriptions, interacts with a Dynamic Memory Bank to retrieve or create entity references, generates keyframes, and finally synthesizes a coherent multi-shot video.
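Building on the memory-bank sketch above, the shot-level loop implied by the figure might be organized as follows. The agent calls `plan_shots`, `generate_keyframe`, `generate_video`, and `describe_entity_changes` are hypothetical placeholders for the planning, keyframe-generation, video-synthesis, and memory-update stages, not released APIs.

```python
def generate_story(synopsis: str, bank: DynamicMemoryBank) -> list:
    """Hypothetical driver loop: plan shots, condition each shot on retrieved
    entity states, then write the resulting changes back into the memory bank.
    plan_shots / generate_keyframe / generate_video / describe_entity_changes
    are placeholder agents, not real APIs."""
    shots = plan_shots(synopsis)          # shot-level descriptions plus entity lists
    videos = []
    for idx, shot in enumerate(shots):
        # Retrieve existing entities, or create records for newly introduced ones.
        refs = []
        for entity in shot.entities:
            record = bank.retrieve(entity.name) or bank.create(
                EntityRecord(entity.name, entity.category, entity.description))
            refs.append(record)

        # Generate a keyframe conditioned on the retrieved entity references,
        # then synthesize the shot video from the keyframe and shot description.
        keyframe = generate_keyframe(shot.description, refs)
        videos.append(generate_video(keyframe, shot.description))

        # Update memory so later shots see story-driven changes (e.g. a new outfit).
        for record in refs:
            bank.update(record.name, idx,
                        new_desc=describe_entity_changes(record, keyframe),
                        new_reference=keyframe)
    return videos
```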
Qualitative comparison demonstrating superior entity consistency. Across all three subclasses (Character, Prop, Background), VideoMemory (bottom row) maintains remarkable stability where baselines fail. Note how baselines exhibit severe identity drift—changing a character's appearance (left), morphing a red kite into other objects (middle), and altering a garage's layout (right). In contrast, our method preserves the identity of all entities across distant shots, a direct result of our explicit memory management.
Multi-shot consistency results. We evaluate character, prop, and background consistency using DINOv2 similarity. Our method achieves superior performance across all metrics, especially as the number of shots increases. Best and second-best scores are marked in bold and underlined, respectively.
| Method | Character Consistency↑ | | | | Prop Consistency↑ | | | | Background Consistency↑ | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Shot Number | 4 | 8 | 12 | Avg. | 4 | 8 | 12 | Avg. | 4 | 8 | 12 | Avg. |
| Wan2.2 | 0.34 | - | - | - | 0.48 | - | - | - | 0.25 | - | - | - |
| EchoShot | 0.45 | 0.44 | - | - | 0.59 | 0.51 | - | - | 0.54 | 0.37 | - | - |
| IC-LoRA+Wan2.2 | 0.42 | 0.55 | 0.43 | 0.47 | 0.50 | 0.44 | 0.34 | 0.43 | 0.31 | 0.33 | 0.29 | 0.31 |
| StoryDiffusion+Wan2.2 | 0.53 | 0.62 | 0.46 | 0.54 | 0.43 | 0.47 | 0.52 | 0.47 | 0.51 | 0.40 | 0.36 | 0.42 |
| VGoT+Wan2.2 | 0.59 | 0.53 | 0.60 | 0.57 | 0.48 | 0.22 | 0.24 | 0.31 | 0.53 | 0.36 | 0.47 | 0.45 |
| VideoMemory (Ours) | 0.61 | 0.65 | 0.64 | 0.63 | 0.69 | 0.50 | 0.55 | 0.58 | 0.71 | 0.72 | 0.73 | 0.72 |
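For context on how the numbers in the table could be produced, the sketch below scores entity consistency as the average pairwise cosine similarity between DINOv2 embeddings of the same entity cropped from different shots. The backbone variant, preprocessing, and aggregation choices here are assumptions; the paper's exact protocol may differ.

```python
import itertools
import torch
from PIL import Image
from torchvision import transforms

# Load a DINOv2 backbone (ViT-S/14) via torch.hub; the variant chosen here is an
# assumption, not necessarily the one used in the paper.
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14").eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])


@torch.no_grad()
def entity_consistency(crop_paths: list) -> float:
    """Average pairwise cosine similarity between DINOv2 embeddings of the same
    entity cropped from different shots (higher = more consistent).
    Expects at least two crop image paths."""
    feats = []
    for path in crop_paths:
        image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        feats.append(torch.nn.functional.normalize(model(image), dim=-1))
    sims = [(a @ b.T).item() for a, b in itertools.combinations(feats, 2)]
    return sum(sims) / len(sims)
```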
Multi-shot video generation results demonstrating consistent entity preservation across diverse narratives.
Please enable sound for the best experience
@article{zhou2025videomemory,
title={VideoMemory: Toward Consistent Video Generation via Memory Integration},
author={Zhou, Jinsong and Du, Yihua and Xu, Xinli and Wang, Luozhou and Zhuang, Zijie and Zhang, Yehang and Li, Shuaibo and Hu, Xiaojun and Su, Bolan and Chen, Ying-cong},
journal={arXiv preprint},
year={2025}
}