Real-time depth enhancement by fusion for RGB-D cameras

Frederic Garcia*, Djamila Aouada, Thomas Solignac, Bruno Mirbach, Björn Ottersten

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

21 Citations (Scopus)

Abstract

This study presents a real-time refinement procedure for depth data acquired by RGB-D cameras. Data from RGB-D cameras suffer from undesired artefacts such as edge inaccuracies or holes owing to occlusions or low object remission. In this work, the authors use recent depth enhancement filters intended for time-of-flight cameras, and extend them to structured light-based depth cameras, such as the Kinect camera. Thus, given a depth map and its corresponding two-dimensional image, the authors correct the depth measurements by separately treating its undesired regions. To that end, the authors propose specific confidence maps to tackle areas in the scene that require special treatment. Furthermore, in the case of filtering artefacts, the authors introduce the use of RGB images as guidance images, as an alternative to real-time state-of-the-art fusion filters that use greyscale guidance images. The experimental results show that the proposed fusion filter provides dense depth maps with corrected erroneous or invalid depth measurements and adjusted depth edges. In addition, the authors propose a mathematical formulation that enables the use of the filter in real-time applications.
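The abstract describes a guided fusion scheme: invalid depth measurements are replaced by a weighted average of trusted neighbouring depths, with weights driven by an RGB guidance image and a confidence (validity) map. The sketch below is a minimal cross-bilateral illustration of that general idea, not the authors' actual filter or real-time formulation; the function name, parameters (`sigma_s`, `sigma_r`, `radius`), and the binary `valid` mask standing in for a confidence map are all assumptions for this example.

```python
import numpy as np

def cross_bilateral_depth(depth, rgb, valid, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Illustrative cross-bilateral depth refinement with RGB guidance.

    depth : (H, W) float array, possibly containing invalid measurements
    rgb   : (H, W, 3) uint8 guidance image aligned with the depth map
    valid : (H, W) bool mask of trusted depth pixels (confidence-map stand-in)
    """
    h, w = depth.shape
    guide = rgb.astype(float) / 255.0
    out = np.empty_like(depth, dtype=float)
    for y in range(h):
        for x in range(w):
            wsum, vsum = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and valid[ny, nx]:
                        # Spatial weight: Gaussian on pixel distance.
                        ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                        # Range weight: Gaussian on RGB colour difference,
                        # so depth is smoothed within similar-colour regions
                        # and edges in the guidance image are preserved.
                        diff = guide[y, x] - guide[ny, nx]
                        wr = np.exp(-np.dot(diff, diff) / (2 * sigma_r ** 2))
                        wt = ws * wr
                        wsum += wt
                        vsum += wt * depth[ny, nx]
            # Fall back to the raw measurement if no trusted neighbour exists.
            out[y, x] = vsum / wsum if wsum > 0.0 else depth[y, x]
    return out
```

A pixel marked invalid (e.g. a hole from occlusion) is filled from trusted neighbours that are both spatially close and similar in colour, which is how RGB guidance sharpens depth edges. A real-time version would replace these nested loops with a constant-time or separable approximation.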

Original language: English
Pages (from-to): 335-345
Number of pages: 11
Journal: IET Computer Vision
Volume: 7
Issue number: 5
DOIs
Publication status: Published - 2013
Externally published: Yes
