VideoMaMa
Official implementation of "VideoMaMa: Mask-Guided Video Matting via Generative Prior", CVPR 2026
What it does
VideoMaMa: Mask-Guided Video Matting via Generative Prior

Sangbeom Lim¹ · Seoung Wug Oh² · Jiahui Huang² · Heeji Yoon³ · Seungryong Kim³ · Joon-Young Lee²

¹Korea University · ²Adobe Research · ³KAIST AI

ArXiv 2026

VideoMaMa is a mask-guided video matting framework that leverages a video generative prior. By utilizing this prior, it supports stable
Getting Started
```shell
git clone https://github.com/cvlab-kaist/VideoMaMa
```
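The clone step above can be extended into a minimal setup sketch. Note that the environment name, Python version, and the presence of a `requirements.txt` are assumptions for illustration, not details confirmed by this README:

```shell
# Hypothetical setup sketch: the env name, Python version, and
# requirements file below are assumptions, not stated in this README.
git clone https://github.com/cvlab-kaist/VideoMaMa
cd VideoMaMa

# An isolated environment is typical for research code (conda assumed here):
conda create -n videomama python=3.10 -y
conda activate videomama

# Install dependencies if the repository ships a requirements file:
pip install -r requirements.txt
```

Check the repository itself for the authoritative installation instructions once they are published.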