
ISBN 978-3-8439-1435-2, Informatik series

€72.00 incl. VAT, plus shipping

Jörg Edelmann
Advanced Direct Manipulation Techniques for Interactive Displays

128 pages, dissertation, Eberhard-Karls-Universität Tübingen (2013), softcover, B5

Abstract

In recent years, multi-touch interaction has evolved into a standard, ubiquitous interface for many electronic devices such as smartphones, tablet computers, and large interactive installations like tabletop systems. Beyond its reduced space consumption, this interface allows the visual representation to be manipulated directly in screen-space. We consider this property the key distinguishing feature of interactive displays compared to other input technologies. In this thesis, we present our work on advanced direct-manipulation techniques for devices with large screens.
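The archetype of such screen-space manipulation is the two-finger pinch gesture: a similarity transform is chosen so that the touched content stays pinned under both fingers. As a minimal illustration (our own sketch, not code from the thesis; the function name and point representation are assumptions), the transform can be recovered from the previous and current positions of two touch points:

    import math

    def pinch_transform(p1, p2, q1, q2):
        """Similarity transform (scale, rotation, translation) mapping the
        previous positions of two touch points (p1, p2) onto their current
        positions (q1, q2), so content stays pinned under the fingers."""
        vx, vy = p2[0] - p1[0], p2[1] - p1[1]
        wx, wy = q2[0] - q1[0], q2[1] - q1[1]
        scale = math.hypot(wx, wy) / math.hypot(vx, vy)
        angle = math.atan2(wy, wx) - math.atan2(vy, vx)
        # Choose the translation so that p1 lands exactly on q1.
        c, s = scale * math.cos(angle), scale * math.sin(angle)
        tx = q1[0] - (c * p1[0] - s * p1[1])
        ty = q1[1] - (s * p1[0] + c * p1[1])
        return scale, angle, (tx, ty)

Anchoring the translation at one finger rather than at the midpoint is an arbitrary choice here; either way the gesture keeps its defining property that the content follows the fingers exactly.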

To develop and evaluate the proposed interaction techniques, we built custom interactive tabletop devices with optical sensing technology. On top of these hardware prototypes, we present a sensor-data processing pipeline that extracts both touch contacts and visual markers, enabling multi-touch as well as tangible interaction. By exploiting the massively parallel computation capabilities of the graphics processing unit, we significantly reduce feedback latency and thereby strengthen the illusion of direct manipulation. We then transfer the technique of manipulating 2D objects directly in screen-space to the problem of 3D camera control, deriving a formulation that adjusts the parameters of a virtual camera intuitively in response to multi-touch input. Beyond 2D coordinates, a touch sensor captures additional information for every finger contact. Based on this observation, we present a novel adaptive virtual touch keyboard that accounts for individual typing style and imprecise keystrokes. Using machine-learning techniques, we incorporate the complete sensor information available for every keystroke and thereby significantly improve typing accuracy.
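To illustrate what such a touch-extraction pipeline computes, here is a minimal CPU sketch of its typical stages in Python (background subtraction, thresholding, connected-component labelling, centroid extraction); the function name and threshold values are illustrative assumptions, and the thesis instead offloads this processing to the GPU to cut feedback latency:

    import numpy as np
    from scipy import ndimage

    def extract_contacts(frame, background, threshold=30, min_area=20):
        """Toy optical-touch pipeline: background subtraction, thresholding,
        connected-component labelling, and centroid extraction. 'frame' and
        'background' are 8-bit grayscale sensor images; the threshold and
        minimum area are illustrative, not values from the thesis."""
        diff = frame.astype(np.int16) - background.astype(np.int16)
        mask = diff > threshold                  # candidate touch pixels
        labels, n = ndimage.label(mask)          # group pixels into blobs
        contacts = []
        for i in range(1, n + 1):
            area = int(np.sum(labels == i))
            if area >= min_area:                 # reject sensor noise
                cy, cx = ndimage.center_of_mass(mask, labels, i)
                contacts.append((cx, cy, area))
        return contacts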
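The adaptive keyboard lends itself to a similar sketch: one standard way to exploit the full sensor information of a keystroke is to fit a Gaussian model per key to the user's own typing data and score candidate keys by a posterior that also includes a language-model prior. The following is a hypothetical reconstruction of that idea, not the classifier actually used in the thesis:

    import numpy as np

    def decode_keystroke(features, key_models, lm_prior):
        """Return the most likely key for one keystroke. 'features' is the
        full sensor feature vector of the contact (position, shape, size,
        ...); each key model is a (mean, covariance) pair fitted to the
        user's own typing data, and 'lm_prior' gives language-model
        probabilities per key."""
        best_key, best_score = None, -np.inf
        for key, (mu, cov) in key_models.items():
            d = features - mu
            # Gaussian log-likelihood of the observed keystroke features.
            ll = -0.5 * (d @ np.linalg.solve(cov, d)
                         + np.log(np.linalg.det(cov))
                         + len(d) * np.log(2.0 * np.pi))
            score = ll + np.log(lm_prior.get(key, 1e-6))
            if score > best_score:
                best_key, best_score = key, score
        return best_key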

Building on these novel input techniques, we developed methods that improve collaboration with interactive screens. First, we present a novel audio output device based on tangible interaction that integrates naturally into co-located multi-user applications. Second, we developed a system that carries the workspace awareness afforded by interactive displays over to a remote collaboration scenario. We provide a detailed discussion of application scenarios and present performance results.
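To give a flavour of how such a remote-awareness system might ship local input to the other site, the sketch below forwards the current touch contacts over UDP so the remote display can render them as awareness cues; the peer address, message fields, and transport choice are all invented for illustration and not taken from the thesis:

    import json, socket, time

    def broadcast_awareness(contacts, peer=("192.0.2.7", 9000)):
        """Forward local touch contacts (as produced by extract_contacts
        above) to a remote display so it can render them as awareness
        cues; the address and wire format are invented for this sketch."""
        msg = {"t": time.time(),
               "contacts": [{"x": x, "y": y, "area": a}
                            for x, y, a in contacts]}
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(json.dumps(msg).encode("utf-8"), peer)
        sock.close()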