Despite significant growth in the last few years, the availability of 3D content is still dwarfed by that of its 2D counterpart. In order to close this gap, many 2D-to-3D image and video conversion methods have been proposed. Methods involving human operators have been the most successful but are also time-consuming and costly. Automatic methods, which typically make use of a deterministic 3D scene model, have not yet achieved the same level of quality, for they rely on assumptions that are often violated in practice. In this paper, we propose a new class of methods based on the radically different approach of learning the 2D-to-3D conversion from examples. The first is based on learning a point mapping from local image/video attributes, such as color, spatial position, and, in the case of video, motion at each pixel, to scene depth at that pixel using a regression-type idea. The second is based on globally estimating the entire depth map of a query image directly from a repository of 3D images (image+depth pairs or stereopairs) using a nearest-neighbor regression-type idea. We demonstrate both the efficacy and the computational efficiency of our methods on numerous 2D images and discuss their drawbacks and benefits. Although far from perfect, our results demonstrate that repositories of 3D content can be used for effective 2D-to-3D image conversion. An extension to video is immediate by enforcing temporal continuity of the computed depth maps.
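The second, global method can be sketched as a simple nearest-neighbor regression: find the repository images most similar to the query and fuse their paired depth maps. The following is a minimal illustration under assumed interfaces (the global descriptors `repo_feats` and the mean-fusion step are hypothetical stand-ins; the paper's actual features and fusion are not reproduced here).

```python
import numpy as np

def estimate_depth_knn(query_feat, repo_feats, repo_depths, k=3):
    """Estimate a depth map for a query image by averaging the depth
    maps of its k nearest neighbors in a repository of image+depth pairs.

    query_feat  : (d,) global descriptor of the query image
    repo_feats  : (n, d) descriptors of the n repository images
    repo_depths : (n, h, w) depth maps paired with the repository images
    """
    # Euclidean distance from the query descriptor to every repository image
    dists = np.linalg.norm(repo_feats - query_feat, axis=1)
    # Indices of the k most similar repository images
    nearest = np.argsort(dists)[:k]
    # Fuse the neighbors' depth maps with a simple pixel-wise mean
    return repo_depths[nearest].mean(axis=0)
```

In practice the fused depth map would still be filtered and aligned to the query image before rendering a stereopair; this sketch only shows the nearest-neighbor regression core.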