Projected Augmented Reality (Google Summer of Code 2015, OpenCV)
Projection mapping (spatial augmented reality) is a technique for augmenting a scene using a projector. In large-scale projection mapping, the target object is often modeled by hand in 3D, and the model is then manually aligned to the physical object. However, now that commodity depth sensors are available, such a model can be generated automatically. Moreover, projector-model alignment can be facilitated by projector-camera calibration. The goal of this tutorial is therefore to achieve smart projection mapping using computer vision techniques.

Most projection mapping tutorials cover projection onto planar surfaces by quad warping (a homography), which offers little flexibility and does not exploit the depth cue. Instead, we demonstrate how to calibrate a projector-camera pair and how to automatically segment out background objects to extract the projection targets. Finally, using Unity3D, a texture can be mapped onto each 3D object and projected onto the corresponding physical object. To simplify the problem, we assume the scene consists only of projection targets and flat objects (e.g., a tabletop or wall).
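To make the quad-warping baseline concrete, here is a minimal pure-Python sketch of homography estimation from four point correspondences (the classic direct linear method, solving the 8-unknown system). In a real pipeline you would use OpenCV's `cv::findHomography` and `cv::warpPerspective` instead; the function names below are illustrative, not part of any library.

```python
def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def find_homography(src, dst):
    """Estimate the 3x3 homography H mapping four src points to four dst points.

    Each correspondence (x, y) -> (u, v) gives two linear equations in the
    eight unknown entries of H (with H[2][2] fixed to 1):
        u = (h0*x + h1*y + h2) / (h6*x + h7*y + 1)
        v = (h3*x + h4*y + h5) / (h6*x + h7*y + 1)
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve_linear(A, b)
    return [h[0:3], h[3:6], h[6:8] + [1.0]]

def apply_homography(H, pt):
    """Map a 2D point through H (homogeneous divide included)."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

For example, mapping the unit square onto an arbitrary quad, `apply_homography` sends each source corner exactly to its destination corner. This is precisely the "quad warping" the paragraph refers to: it aligns content to one plane, but cannot account for off-plane geometry, which is why the tutorial moves on to depth-based segmentation.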