
Updated on 9 March 2024

ISBN 978-3-8439-0364-6

€84.00 incl. VAT, plus shipping


Series: Informatik

Jochen Björn Süßmuth
Surface Reconstruction from Static and Dynamic Point Data

228 pages, dissertation, Universität Erlangen-Nürnberg (2011), softcover, A5

Abstract

Three-dimensional (3D) computer models play a vital role in many fields of application: from medicine to engineering to arts and entertainment, digital 3D content can be found virtually everywhere. Owing to the increasing demand for accurate 3D models, the digitization of real objects using 3D scanners that sample the object's surface has grown increasingly popular over the last decade.

Since a 3D scan can only capture the part of the object that is directly visible to the scanner, several scans must be recorded from different points of view to cover the entire surface of the scanned object. The process of merging the sample points of the individual scans into a manifold surface is commonly referred to as surface reconstruction. In the first part of this thesis, we present two novel approaches for robust surface reconstruction from raw scanner data. The presented algorithms differ in the requirements they place on the input data and can thus be used to reconstruct point clouds acquired by different devices.
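To make the notion of surface reconstruction concrete, the following is a minimal Python sketch of one classical baseline: approximating the scanned surface as the zero set of a signed distance function built from local tangent planes, in the spirit of Hoppe et al. (1992). It is not one of the algorithms developed in the thesis; the function names and the toy sphere data are illustrative assumptions.

import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=10):
    """Estimate a unit normal per point via PCA over its k nearest neighbors."""
    _, idx = cKDTree(points).query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        centered = points[nbrs] - points[nbrs].mean(axis=0)
        # The normal is the direction of least variance in the neighborhood.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        normals[i] = vt[-1]
    return normals

def signed_distance(query, points, normals):
    """Signed distance of each query point to the tangent plane of its
    nearest sample; the zero level set approximates the scanned surface."""
    _, nearest = cKDTree(points).query(query)
    return np.einsum('ij,ij->i', query - points[nearest], normals[nearest])

# Toy usage: 2000 samples of a unit sphere.
rng = np.random.default_rng(0)
pts = rng.normal(size=(2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
nrm = estimate_normals(pts)
# PCA normals have a sign ambiguity; orienting them away from the centroid
# suffices for this convex toy (real scans need an orientation-propagation step).
flip = np.einsum('ij,ij->i', nrm, pts - pts.mean(axis=0)) < 0
nrm[flip] *= -1
print(signed_distance(np.array([[0.0, 0.0, 0.5]]), pts, nrm))  # ~ -0.5 (inside)

Real reconstruction pipelines then extract a mesh from this implicit function, e.g. with marching cubes; the sketch stops at the distance field itself.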

Recently, 3D scanners that operate at interactive frame rates have been introduced. These devices make it possible to record the geometry and the dynamics of a moving object simultaneously. The data collected by such scanners poses a completely new challenge: instead of reconstructing a single manifold mesh from several overlapping scans, it is now necessary to establish correspondences between the individual scans acquired at different points in time. We focus on this problem in the second part of this thesis. First, we present a novel algorithm for robust pairwise non-rigid registration, which can be used to compute correspondences between any two scans of the recorded sequence. Furthermore, we present an efficient technique for extracting a coherent animated mesh model directly from a sequence of time-varying raw 3D scans. Finally, we use these algorithms as the building blocks of an application that transfers facial performances recorded by a real-time 3D scanner onto arbitrary target face models, requiring minimal user interaction and producing more convincing results than previous approaches.
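As a hint at the alternating structure such registration algorithms share, here is a minimal Python sketch of the classical rigid variant, iterative closest point (ICP): it alternates between a correspondence step (nearest neighbors) and an alignment step (a closed-form least-squares transform via the Kabsch algorithm). The thesis's pairwise registration is non-rigid, i.e. it solves for a smoothly varying deformation rather than the single rigid transform computed here; the code and its toy data are illustrative assumptions, not the thesis's method.

import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ~ dst_i."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(src, dst, iters=20):
    """Iterative closest point: returns a copy of src aligned to dst."""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, nearest = tree.query(cur)                     # correspondence step
        R, t = best_rigid_transform(cur, dst[nearest])   # alignment step
        cur = cur @ R.T + t
    return cur

# Toy usage: recover a known rotation and translation of a random scan.
rng = np.random.default_rng(1)
scan = rng.normal(size=(500, 3))
a = 0.3
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0,        0.0,       1.0]])
moved = scan @ Rz.T + np.array([0.1, -0.2, 0.05])
aligned = icp(scan, moved)
print(np.abs(aligned - moved).max())  # small after convergence

A non-rigid registration, as addressed in the thesis, replaces the single (R, t) in the alignment step with a deformation field regularized for smoothness, which is what makes correspondences between scans of a moving, deforming subject possible.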