Data as of 13 June 2019




ISBN 9783843926805

EUR 72.00 incl. 7% VAT


978-3-8439-2680-5, Informatik series

Lixing Jiang
Object Recognition and Saliency Detection for Indoor Robots using RGB-D Sensors

130 pages, dissertation, Eberhard-Karls-Universität Tübingen (2016), softcover, A5

Abstract

This thesis addresses object detection and recognition for indoor service robots, motivated by the growing interest in automation systems as support or assistance platforms for customers and employees alike. With the advent of low-cost sensors that provide both color and depth information about the observed environment, the favorable cost-benefit ratio makes exemplary applications in supermarkets, shopping malls or homes readily conceivable. The additional depth cues, in turn, improve performance compared with purely vision-based algorithms. To this end, the approaches and extensions proposed here are designed to capture and extract descriptive features from RGB-D perception, which can then be forwarded to machine-learning back-ends for object recognition. Besides general system requirements such as accuracy, robustness and reliability, the proposed algorithms are also selected and optimized with regard to their runtime complexity, so as to meet the demands of a real-time human-machine interface.

The first building block of this work is an effective and robust object recognition system that uses global visual features such as color, texture and shape to classify objects under varying pose and lighting conditions, tailored to an indoor robot platform. After image segmentation and feature extraction, the unified approach is validated on two multi-class RGB-D object categorization datasets. Experimental results compare different feature sets and classifiers and demonstrate both the effectiveness and the real-time suitability of the proposed extensions for our mobile system on real RGB-D data.
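As a rough illustration of such a pipeline, the sketch below extracts a global feature vector (a color histogram plus simple depth-derived shape statistics) from an RGB-D view and classifies it with a nearest-neighbor back-end. All names, the feature choices and the 1-NN classifier are illustrative assumptions, not the thesis's actual feature set or learning back-end.

```python
import numpy as np

def global_features(rgb, depth, bins=8):
    """Concatenate a global color histogram with simple depth statistics.

    rgb:   H x W x 3 uint8 array; depth: H x W float array (e.g. meters).
    Illustrative stand-in for the color/texture/shape features in the text.
    """
    # Per-channel color histograms, jointly normalized to sum to 1
    hist = np.concatenate([
        np.histogram(rgb[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    hist /= hist.sum()
    # Crude "shape" cues from depth: mean distance, spread, and extent
    shape = np.array([depth.mean(), depth.std(), depth.max() - depth.min()])
    return np.concatenate([hist, shape])

def nearest_neighbor(train_feats, train_labels, query):
    """1-NN classifier as a minimal machine-learning back-end."""
    dists = np.linalg.norm(train_feats - query, axis=1)
    return train_labels[int(np.argmin(dists))]
```

In a real system, the histogram would typically be complemented by texture descriptors and the 1-NN back-end replaced by a trained classifier; the structure (segment, extract global features, classify) stays the same.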

The proposed recognition system benefits from a simple yet capable segmentation algorithm that exploits the static setup between the camera sensor and a tray on our robot. Recognition issues may arise, however, when multiple objects are to be detected in more complex scenes. To detect multiple salient objects in a largely unbounded environment, we introduce multiple salient region detection for indoor robots using RGB-D data and propose a novel, fast and simple superpixel approach that efficiently pre-processes and reduces the data for subsequent processing stages.
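To make the superpixel pre-processing idea concrete, the following is a deliberately simplified, single-pass sketch: each pixel is assigned to the nearest of a grid of seeds under a combined spatial, color and depth distance, reducing the image to a handful of regions for later saliency analysis. The grid seeding, the distance weighting and the single-iteration assignment are assumptions for illustration, not the thesis's actual algorithm.

```python
import numpy as np

def grid_superpixels(image, depth, n_side=4):
    """Assign each pixel to the nearest of n_side*n_side grid seeds,
    combining spatial distance with color and depth similarity.

    image: H x W x 3 float array; depth: H x W float array.
    Returns an H x W label map with at most n_side**2 segments.
    """
    h, w = depth.shape
    # Evenly spaced seed centers on a regular grid
    ys = np.linspace(h / (2 * n_side), h - h / (2 * n_side), n_side)
    xs = np.linspace(w / (2 * n_side), w - w / (2 * n_side), n_side)
    seeds = [(int(y), int(x)) for y in ys for x in xs]
    yy, xx = np.mgrid[0:h, 0:w]
    best = np.full((h, w), -1)
    best_d = np.full((h, w), np.inf)
    for k, (sy, sx) in enumerate(seeds):
        spatial = (yy - sy) ** 2 + (xx - sx) ** 2
        color = ((image - image[sy, sx]) ** 2).sum(axis=-1)
        dz = (depth - depth[sy, sx]) ** 2
        # Normalized combined distance; weights are ad hoc here
        d = spatial / (h * w) + color / 255.0 ** 2 + dz
        mask = d < best_d
        best[mask] = k
        best_d[mask] = d[mask]
    return best
```

Downstream stages then operate on per-segment statistics (mean color, mean depth, position) instead of individual pixels, which is what makes the data reduction pay off at real-time rates.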