SciELO - Scientific Electronic Library Online

 
TecnoLógicas

Print version ISSN 0123-7799 · On-line version ISSN 2256-5337

Abstract

SALAZAR, Isail; PERTUZ, Said  and  MARTINEZ, Fabio. Multi-modal RGB-D Image Segmentation from Appearance and Geometric Depth Maps. TecnoL. [online]. 2020, vol.23, n.48, pp.140-158. ISSN 0123-7799.  https://doi.org/10.22430/22565337.1538.

Classical image segmentation algorithms exploit the detection of similarities and discontinuities in different visual cues to define and differentiate multiple regions of interest in images. However, due to the high variability and uncertainty of image data, producing accurate results is difficult; segmentation based on color alone is often insufficient for a large percentage of real-life scenes. This work presents a novel multi-modal segmentation strategy that integrates depth and appearance cues from RGB-D images by building a hierarchical region-based representation, i.e., a multi-modal segmentation tree (MM-tree). For this purpose, RGB-D image pairs are represented in a complementary fashion by different segmentation maps. From the color image, a color segmentation tree (C-tree) is built to obtain segmented and over-segmented maps. From the depth image, two independent segmentation maps are derived by computing planar and 3D edge primitives. An iterative region-merging process then locally groups the resulting maps into the MM-tree. Finally, the top emerging MM-tree level coherently integrates the information available from the depth and appearance maps. Experiments on the NYU-Depth V2 RGB-D dataset show that our strategy is competitive with state-of-the-art segmentation methods: on the test images, it reached average scores of 0.56 in Segmentation Covering and 2.13 in Variation of Information.
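The Variation of Information score cited above measures the distance between two segmentations through the entropies of their region labelings: VI(A, B) = H(A) + H(B) − 2I(A; B), where lower is better and identical segmentations score 0. As an informal illustration only (not the authors' evaluation code), a minimal NumPy sketch of the metric might look like:

```python
import numpy as np

def variation_of_information(seg_a, seg_b):
    """Variation of Information between two integer label maps (lower is better).

    VI(A, B) = H(A) + H(B) - 2*I(A; B) = 2*H(A, B) - H(A) - H(B),
    estimated from the joint label histogram of the two segmentations.
    """
    a = np.asarray(seg_a).ravel()
    b = np.asarray(seg_b).ravel()
    n = a.size
    # Joint distribution of region labels (normalized contingency table).
    joint = np.zeros((a.max() + 1, b.max() + 1))
    np.add.at(joint, (a, b), 1)
    joint /= n
    pa = joint.sum(axis=1)  # marginal distribution of A's labels
    pb = joint.sum(axis=0)  # marginal distribution of B's labels

    def entropy(p):
        # Treat 0*log(0) as 0 by masking out zero-probability cells.
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    nz = joint > 0
    h_joint = -np.sum(joint[nz] * np.log2(joint[nz]))
    return 2 * h_joint - entropy(pa) - entropy(pb)

# Identical segmentations have VI = 0.
seg = np.array([[0, 0, 1], [0, 1, 1]])
print(variation_of_information(seg, seg))  # -> 0.0
```

The log base only scales the score; implementations vary between natural log and log2, so reported values are comparable only within one convention.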

Keywords: image segmentation; over-segmentation; RGB-D images; depth information; multi-modal segmentation.

· Abstract in Spanish · Full text in English · PDF in English