Data as of 08 July 2024



ISBN 978-3-8439-1235-8

84.00 € incl. VAT, plus shipping

Informatik series

Sophie Stellmach
Gaze-supported Multimodal Interaction

205 pages, dissertation, Technische Universität Dresden (2013), softcover, B5

Abstract

We are faced with a growing variety of digital systems, media, and emerging human-computer interaction styles. These developments redefine our understanding of computers and of how we interact with them. While significant progress has been made in fields such as multi-touch and gestural interaction, using our eye gaze as a novel way to interact with computing systems has not yet been sufficiently explored. This is especially the case for combining eye gaze with additional input modalities for richer, more flexible, and more convenient interaction in various application contexts. Whether standing in front of a wall-sized display, sitting on a couch looking at a television screen, or wearing augmented see-through glasses, we signal interest by implicitly looking towards objects that attract our attention. While traditional mouse input works excellently for pointing tasks in standard desktop environments, gaze-based pointing remains suitable across these diverse user contexts.

This dissertation contributes to a better understanding of gaze-based interaction by highlighting the high potential of gaze as a supporting input. The thesis thoroughly investigates the state of the art of gaze-based interaction and proposes novel ways of using eye gaze for convenient and efficient human-computer interaction. For this, we take advantage of fast, implicit, yet imprecise gaze input for pointing tasks, while another modality (e.g., touch input) is used to confirm or refine the gaze input and to issue additional commands. On the one hand, the investigations focus on combining gaze data with input from modern smartphones (e.g., touch and accelerometer data) to conveniently interact with large distant displays. This offers high flexibility and mobility, while common problems associated with gaze input can be overcome. On the other hand, a combination of gaze and foot input is addressed to leave the hands free for additional manual input.
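The gaze-plus-touch pattern described above can be sketched in a few lines. This is a minimal illustration with hypothetical names (`refine_with_touch`, `select`, the gain and radius values), not the dissertation's actual implementation: imprecise gaze supplies a coarse cursor position, small touch movements nudge it at reduced gain for precision, and lifting the finger confirms the nearest target.

```python
# Hypothetical sketch of gaze-supported pointing: gaze is coarse but fast,
# touch refines and confirms. All names and parameters are illustrative.
from dataclasses import dataclass


@dataclass
class Point:
    x: float
    y: float


def refine_with_touch(gaze: Point, touch_delta: Point, gain: float = 0.25) -> Point:
    """Gaze gives the coarse target; touch movement is applied at reduced
    gain so small finger motions yield precise cursor adjustments."""
    return Point(gaze.x + gain * touch_delta.x, gaze.y + gain * touch_delta.y)


def select(cursor: Point, targets: list, radius: float = 30.0):
    """On touch release, confirm the target nearest the refined cursor,
    provided it lies within the selection radius."""
    nearest = min(targets, key=lambda t: (t.x - cursor.x) ** 2 + (t.y - cursor.y) ** 2)
    if (nearest.x - cursor.x) ** 2 + (nearest.y - cursor.y) ** 2 <= radius ** 2:
        return nearest
    return None


# Example: gaze lands near (400, 300); a small touch drag refines it.
cursor = refine_with_touch(Point(400, 300), Point(40, -20))
print(cursor)  # Point(x=410.0, y=295.0)
```

The reduced gain is the key design choice: gaze alone is too jittery for pixel-accurate pointing, so the second modality compensates without requiring large hand movements.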