3 research outputs found

    ADAMAAS – Towards Smart Glasses for Mobile and Personalized Action Assistance

    Essig K, Strenge B, Schack T. ADAMAAS – Towards Smart Glasses for Mobile and Personalized Action Assistance. In: Proceedings of the 9th ACM International Conference on PErvasive Technologies Related to Assistive Environments. PETRA '16. New York, NY, USA: ACM; 2016: 46:1-46:4.

    Modeling Gaze-Guided Narratives for Outdoor Tourism

    Many outdoor spaces have hidden stories connected with them that can be used to enrich a tourist’s experience. These stories are often related to environmental features which are far from the user and far apart from each other. Therefore, they are difficult to explore by locomotion, but they can be visually explored from a vantage point. Telling a story from a vantage point is challenging, since the system must ensure that the user can identify the relevant features in the environment. Gaze-guided narratives are an interaction concept that helps in such situations by telling a story dynamically, depending on the user’s current and previous gaze on a panorama. This chapter suggests a formal modeling approach for gaze-guided narratives, based on narrative mediation trees. The approach is illustrated with an example from the Swiss saga around ‘Wilhelm Tell’.
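
    The chapter's formalism is not reproduced in this abstract, but the branching idea behind a narrative mediation tree can be sketched as a small data structure: each node carries a narration segment, and the next segment is chosen by the environmental feature the user currently gazes at. The following Python sketch is purely illustrative; the node fields, the feature names ("chapel", "lake"), and the fallback rule are assumptions, not taken from the chapter.

        # A minimal sketch (not the chapter's formalism) of a narrative
        # mediation tree whose branching depends on the current gaze target.
        from dataclasses import dataclass, field

        @dataclass
        class StoryNode:
            """One narration segment plus gaze-dependent continuations."""
            text: str                                     # narration played at this node
            branches: dict = field(default_factory=dict)  # gazed feature -> next StoryNode
            fallback: "StoryNode | None" = None           # used when the gaze matches no branch

        def next_node(node: StoryNode, gazed_feature: str):
            """Advance the narrative based on the feature the user looks at."""
            return node.branches.get(gazed_feature, node.fallback)

        # Hypothetical fragment of the Tell saga, told from a vantage point.
        chapel = StoryNode("Tell leapt ashore near the chapel you are looking at...")
        lake = StoryNode("A storm raged across the lake as Tell escaped...")
        root = StoryNode(
            "From this vantage point you can see the sites of the Tell saga.",
            branches={"chapel": chapel, "lake": lake},
            fallback=lake,
        )

        current = next_node(root, gazed_feature="chapel")
        print(current.text)  # narration continues along the gazed branch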

    Guiding visual search tasks using gaze-contingent auditory feedback

    Losing V, Rottkamp L, Zeunert M, Pfeiffer T. Guiding Visual Search Tasks Using Gaze-Contingent Auditory Feedback. In: Brush AJ, ed. UbiComp'14 Adjunct: Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing Adjunct Publication. New York, NY: ACM Press; 2014: 1093-1102.

    In many applications it is necessary to guide humans' visual attention towards certain points in the environment. This can be to highlight certain attractions in a tourist application for smart glasses, to signal important events to the driver of a car, or to draw the attention of a desktop user to an important message in the user interface. The question we are addressing here is: how can we guide visual attention if we are not able to do it visually? In the presented approach we use gaze-contingent auditory feedback (sonification) to guide visual attention and show that people are able to make use of this guidance to speed up visual search tasks significantly.
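
    The paper's concrete sonification design is not described in this abstract, so the following Python sketch only illustrates one plausible form of gaze-contingent auditory feedback: the tone's pitch rises as the gaze point approaches the search target. The distance-to-frequency mapping and all parameter values (f_min, f_max, max_dist) are assumptions made for illustration, not the authors' method.

        import math

        def gaze_to_frequency(gaze, target, max_dist=2203.0,
                              f_min=220.0, f_max=880.0):
            """Map gaze-to-target distance to a tone frequency in Hz.

            Closer gaze -> higher pitch, nudging the user toward the target.
            gaze, target: (x, y) pixel coordinates; max_dist: e.g. the
            screen diagonal, used to normalize the distance (assumed value).
            """
            dist = math.hypot(gaze[0] - target[0], gaze[1] - target[1])
            closeness = 1.0 - min(dist / max_dist, 1.0)  # 0 = far, 1 = on target
            return f_min + closeness * (f_max - f_min)

        # Each new gaze sample from the eye tracker would retune the feedback tone.
        for gaze in [(100, 100), (800, 500), (955, 530)]:
            freq = gaze_to_frequency(gaze, target=(960, 540))
            print(f"gaze={gaze} -> {freq:.0f} Hz")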