Boundary Matting for View Synthesis

Samuel W. Hasinoff [1], Sing Bing Kang [2], and Richard Szeliski [2]

[1] Dept. of Computer Science, University of Toronto, Toronto, Canada M5S 3G4, hasinoff@cs.toronto.edu
[2] Interactive Visual Media Group, Microsoft Research, Redmond, WA 98052, {sbkang,szeliski}@microsoft.com

In the last few years, new view synthesis has emerged as an important application of 3D stereo reconstruction. While the quality of stereo has improved, it is still imperfect, and a unique depth is typically assigned to every pixel. This is problematic at object boundaries, where pixel colors are mixtures of foreground and background colors. Interpolating views without explicitly accounting for this effect results in objects with a "cut-out" appearance. To produce seamless view interpolation, we propose a method called boundary matting, which represents each occlusion boundary as a 3D curve. We show how this method exploits multiple views to perform fully automatic alpha matting and to simultaneously refine stereo depths at the boundaries. The key to our approach is a unifying 3D representation of occlusion boundaries estimated to sub-pixel accuracy. Starting from an initial estimate derived from stereo, we optimize the curve parameters and the foreground colors near the boundaries. Our objective function maximizes consistency with the input images, favors boundaries aligned with strong edges, and damps large perturbations of the curves. Experimental results suggest that this method enables high-quality view synthesis with reduced matting artifacts.

Link: http://www.cs.toronto.edu/~hasinoff/pubs/hasinoff-matting-2004.pdf
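As a rough illustration of the quantities described in the abstract (the compositing model below is the standard matting equation; the decomposition of the objective and the symbols Theta, lambda_1, lambda_2 are our own shorthand, not notation taken from the paper), pixels near an occlusion boundary are modeled as mixtures of foreground and background colors,

    I(x) = \alpha(x)\, F(x) + \bigl(1 - \alpha(x)\bigr)\, B(x),

and the optimization sketched in the abstract can be read as minimizing a combined objective of the form

    E(\Theta, F) = E_{\mathrm{data}}(\Theta, F) + \lambda_1\, E_{\mathrm{edge}}(\Theta) + \lambda_2\, E_{\mathrm{reg}}(\Theta),

where Theta denotes the parameters of the 3D boundary curves, E_data penalizes inconsistency between the re-composited boundary pixels and the input images, E_edge favors curves that project onto strong image edges, and E_reg damps large perturbations of the curves away from their initial stereo-derived estimates.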