Abstract
In this work we promote asymmetric view-plus-depth as an efficient representation of 3D visual scenes. It has recently been proposed in the context of aligned view and depth images, specifically for depth compression. The representation employs two techniques for image analysis and filtering: a super-pixel segmentation of the color image is used to sparsify the depth map in the spatial domain, and a regularizing, spatially adaptive filter is used to reconstruct the depth map back to the input resolution. The relationship between the color and depth images established through these two procedures leads to a substantial reduction of the required depth data. Here we modify the approach to represent 3D scenes captured by an RGB-Z setup formed by non-confocal RGB and range sensors of different spatial resolutions. We specifically quantify its performance for a low-resolution range sensor operating in low-sensing mode, which generates images impaired by rather extreme noise. We demonstrate its superiority over other upsampling methods in how it copes with the noise and reconstructs a good-quality depth map from a very low-resolution input range image.
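The two-step idea in the abstract can be sketched as follows. This is a toy illustration, not the authors' implementation: the fixed-block segmentation below stands in for a real super-pixel method (e.g. SLIC), the color-guided weighted average stands in for the paper's regularizing spatially adaptive filter, and the single-channel "color" and all parameter values are assumptions made for brevity.

```python
# Sketch: sparsify a depth map to one sample per segment of the color image,
# then reconstruct full-resolution depth with a color-guided adaptive filter.
import math

def segment_blocks(color, block=2):
    # Naive stand-in for super-pixel segmentation: fixed square blocks.
    h, w = len(color), len(color[0])
    bw = (w + block - 1) // block
    return [[(r // block) * bw + (c // block) for c in range(w)]
            for r in range(h)]

def sparsify_depth(depth, labels):
    # Keep a single (median) depth sample per segment -> sparse depth data.
    buckets = {}
    for r, row in enumerate(labels):
        for c, lab in enumerate(row):
            buckets.setdefault(lab, []).append(depth[r][c])
    return {lab: sorted(v)[len(v) // 2] for lab, v in buckets.items()}

def reconstruct(color, labels, seeds, radius=1, sigma_c=0.2, sigma_s=1.0):
    # Spatially adaptive weighted average of the sparse seeds: neighbors with
    # similar color and small spatial distance dominate, so reconstructed
    # depth edges align with color edges (joint-bilateral-style filtering).
    h, w = len(color), len(color[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            num = den = 0.0
            for dr in range(-radius, radius + 1):
                for dc in range(-radius, radius + 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < h and 0 <= cc < w:
                        wc = math.exp(-(color[r][c] - color[rr][cc]) ** 2
                                      / (2 * sigma_c ** 2))
                        ws = math.exp(-(dr * dr + dc * dc)
                                      / (2 * sigma_s ** 2))
                        num += wc * ws * seeds[labels[rr][cc]]
                        den += wc * ws
            out[r][c] = num / den
    return out
```

On a 4x4 scene whose color edge coincides with its depth edge (left half near, right half far), the 16 depth values are sparsified to 4 seeds, and the reconstruction recovers a depth discontinuity aligned with the color discontinuity, which is the property the representation relies on.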
Original language | English |
---|---|
Title of host publication | Ninth International Workshop on Video Processing and Quality Metrics for Consumer Electronics |
Subtitle of host publication | VPQM 2015 |
Pages | 1-6 |
Number of pages | 6 |
Publication status | Published - 16 Feb 2016 |
Publication type | D3 Professional conference proceedings |
Event | International Workshop on Video Processing and Quality Metrics for Consumer Electronics - Duration: 1 Jan 2000 → … |
Conference
Conference | International Workshop on Video Processing and Quality Metrics for Consumer Electronics |
---|---|
Period | 1/01/00 → … |