Gestural Interaction with Spatiotemporal Linked Open Data
Exploring complex spatiotemporal data can be very challenging for non-experts. Recently, gestural interaction has emerged as a promising option and has been successfully applied to various domains, including simple map control. In this paper, we investigate whether gestures can enable non-experts to explore and understand complex spatiotemporal phenomena. In this case study, we used large amounts of Linked Open Data about the deforestation of the Brazilian Amazon Rainforest and related ecological, economic and social factors. The results of our study indicate that people of all ages can easily learn gestures and successfully use them to explore the visualized and aggregated spatiotemporal data about the Brazilian Amazon Rainforest.
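The abstract above is built on querying aggregated spatiotemporal Linked Open Data. As a purely illustrative sketch of how such data are typically retrieved, the snippet below issues a SPARQL SELECT query over the standard SPARQL HTTP protocol; the endpoint URL, the ex: vocabulary, and the predicate names (ex:municipality, ex:year, ex:deforestedKm2) are hypothetical placeholders and not the dataset used in the paper.

```python
# Minimal sketch: fetch yearly deforestation figures per municipality from a
# (hypothetical) SPARQL endpoint using only the standard library.
import json
import urllib.parse
import urllib.request

ENDPOINT = "http://example.org/sparql"  # hypothetical endpoint, for illustration only

QUERY = """
PREFIX ex: <http://example.org/amazon#>
SELECT ?municipality ?year ?deforestedKm2 WHERE {
  ?obs ex:municipality  ?municipality ;
       ex:year          ?year ;
       ex:deforestedKm2 ?deforestedKm2 .
}
ORDER BY ?municipality ?year
LIMIT 100
"""

def run_query(endpoint: str, query: str) -> list:
    """Send a SPARQL SELECT query and return the JSON result bindings."""
    url = endpoint + "?" + urllib.parse.urlencode({"query": query})
    req = urllib.request.Request(url, headers={"Accept": "application/sparql-results+json"})
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    return payload["results"]["bindings"]

if __name__ == "__main__":
    for row in run_query(ENDPOINT, QUERY):
        print(row["municipality"]["value"], row["year"]["value"], row["deforestedKm2"]["value"])
```

The returned bindings could then be aggregated by year or region before being visualized and explored through gestures.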
Exploring heritage through time and space: Supporting community reflection on the Highland Clearances
On the two hundredth anniversary of the Kildonan clearances, when people were forcibly removed from their homes, the Timespan Heritage Centre created a programme of community-centred work aimed at challenging preconceptions and encouraging reflection on this important historical process. This paper explores the innovative ways in which virtual world technology has facilitated community engagement, enhanced visualisation and encouraged reflection as part of this programme. An installation in which users navigate through a reconstruction of the pre-clearance Caen township is controlled through natural gestures and presented on a 300-inch, six-megapixel screen. This environment allows users to experience the past in new ways. The platform has value as an effective way for an educator, artist or hobbyist to create large-scale virtual environments using off-the-shelf hardware and open source software. The result is an exhibit that also serves as a platform for experimentation into innovative ways of community co-creation and co-curation.
Bringing user experience empirical data to gesture-control and somatic interaction in virtual reality videogames: an exploratory study with a multimodal interaction prototype
Paper presented at SciTecIn15 - Conferência Ciências e Tecnologias da Interação (Conference on Sciences and Technologies of Interaction), held in Coimbra, 12-13 November 2015. With the emergence of new low-cost gestural interaction devices, various studies have been conducted on multimodal human-computer interaction to improve user experience. We present an exploratory study which analysed the user experience with a multimodal interaction game prototype. As a result, we propose a set of preliminary recommendations for the combined use of such devices and present implications for advancing the multimodal field in human-computer interaction.
Beyond visualization: designing interfaces to contextualize geospatial data
Thesis (S.M.) -- Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2013. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 71-74). The growing sensor data collections about our environment have the potential to drastically change our perception of the fragile world we live in. To make sense of such data, we commonly use visualization techniques, enabling public discourse and analysis. This thesis describes the design and implementation of a series of interactive systems that integrate geospatial sensor data visualization and terrain models with various user interface modalities in an educational context, supporting data analysis and knowledge building through part-digital, part-physical rendering. The main contribution of this thesis is a concrete application scenario and initial prototype of a "Designed Environment" in which we can explore the relationship between the surface of Japan's islands, the tension that originates in the fault lines along the seafloor beneath its east coast, and the resulting natural disasters. The system is able to import geospatial data from a multitude of sources on the "Spatial Web", bringing us one step closer to a tangible "dashboard of the Earth." Samuel Luescher.
Interaction Methods for Smart Glasses: A Survey
Since the launch of Google Glass in 2014, smart glasses have mainly been designed to support micro-interactions. The ultimate goal of their becoming an augmented reality interface has not yet been attained due to an encumbrance of controls. Augmented reality involves superimposing interactive computer graphics images onto physical objects in the real world. This survey reviews current research issues in the area of human-computer interaction for smart glasses. The survey first studies the smart glasses available on the market and then investigates the interaction methods proposed in the wide body of literature. The interaction methods can be classified into hand-held, touch, and touchless input. This paper mainly focuses on touch and touchless input. Touch input can be further divided into on-device and on-body, while touchless input can be classified into hands-free and freehand. Next, we summarize the existing research efforts and trends, in which touch and touchless input are evaluated against a total of eight interaction goals. Finally, we discuss several key design challenges and the possibility of multi-modal input for smart glasses.
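The survey's classification of input methods is essentially a small taxonomy. As a purely illustrative sketch (not code from the survey), it could be modelled as nested enumerations so that, for example, study conditions or papers can be tagged consistently:

```python
# Hypothetical encoding of the survey's input-method taxonomy as enumerations.
from enum import Enum

class InputFamily(Enum):
    HAND_HELD = "hand-held"    # external controller held by the user
    TOUCH = "touch"            # physical contact with a surface
    TOUCHLESS = "touchless"    # no physical contact at all

class TouchInput(Enum):
    ON_DEVICE = "on-device"    # e.g. touching the glasses frame or temple
    ON_BODY = "on-body"        # e.g. touching the user's own palm or forearm

class TouchlessInput(Enum):
    HANDS_FREE = "hands-free"  # e.g. head movement, gaze, voice
    FREEHAND = "freehand"      # mid-air hand gestures

# Example: tagging a study condition with its place in the taxonomy.
condition = (InputFamily.TOUCHLESS, TouchlessInput.FREEHAND)
print(f"{condition[0].value} / {condition[1].value}")
```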
Human-display interaction technology: Emerging remote interfaces for pervasive display environments
We're living in a world where information processing isn't confined to desktop computers; it's being integrated into everyday objects and activities. Pervasive computation is human-centered: it permeates our physical world, helping us achieve goals and fulfill our needs with minimum effort by exploiting natural interaction styles. Remote interaction with screen displays requires a sensor-based, multimodal, touchless approach. For example, by processing user hand gestures, this paradigm removes constraints requiring physical contact and permits natural interaction with tangible digital information. Such touchless interaction can be multimodal, exploiting the visual, auditory, and olfactory senses.
Multimodal, Embodied and Location-Aware Interaction
This work demonstrates the development of mobile, location-aware, eyes-free applications which utilise multiple sensors to provide a continuous, rich and embodied interaction. We bring together ideas from the fields of gesture recognition, continuous multimodal interaction, probability theory and audio interfaces to design and develop location-aware applications and embodied interaction in both a small-scale, egocentric, body-based case and a large-scale, exocentric, 'world-based' case.
BodySpace is a gesture-based application which utilises multiple sensors and pattern recognition to enable the human body to be used as the interface for an application. As an example, we describe the development of a gesture-controlled music player, which functions by placing the device at different parts of the body. We describe a new approach to the segmentation and recognition of gestures for this kind of application and show how simulated physical model-based interaction techniques and the use of real-world constraints can shape the gestural interaction.
GpsTunes is a mobile, multimodal navigation system with inertial control that enables users to actively explore and navigate through an area in an augmented physical space, incorporating and displaying the uncertainty that results from inaccurate sensing and unknown user intention. The system propagates uncertainty via Monte Carlo sampling, and output is displayed both visually and in audio, with audio rendered via granular synthesis. We demonstrate the use of uncertain prediction in the real world and show that appropriately displaying the full distribution of potential future user positions with respect to sites-of-interest can improve the quality of interaction over a simplistic interpretation of the sensed data. We show that this system enables eyes-free navigation around set trajectories or paths unfamiliar to the user, for varying trajectory width and context. We demonstrate the possibility of creating a simulated model of user behaviour, which may be used to gain insight into the user behaviour observed in our field trials. The extension of this application to provide a general mechanism for highly interactive, context-aware applications via density exploration is also presented. AirMessages is an example application enabling users to take an embodied approach to scanning a local area to find messages left in their virtual environment.
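The GpsTunes abstract above mentions propagating positional uncertainty via Monte Carlo sampling and displaying the full distribution of potential future user positions relative to sites-of-interest. The sketch below illustrates that general idea only, not the thesis's actual implementation; the Gaussian noise levels, the constant-velocity motion model, and the function names (sample_future_positions, site_probabilities) are assumptions made for illustration.

```python
# Minimal Monte Carlo sketch: sample many plausible future positions and report
# the fraction of samples that fall within each site-of-interest, rather than
# committing to a single point estimate of where the user will be.
import numpy as np

rng = np.random.default_rng(0)

def sample_future_positions(pos, heading, speed, dt=5.0, n=1000,
                            pos_sigma=8.0, heading_sigma=0.3, speed_sigma=0.4):
    """Propagate sensing and intention uncertainty by sampling (assumed noise model)."""
    headings = rng.normal(heading, heading_sigma, n)            # uncertain direction of travel
    speeds = rng.normal(speed, speed_sigma, n)                  # uncertain walking speed
    noise = rng.normal(0.0, pos_sigma, (n, 2))                  # GPS sensing noise
    steps = np.stack([np.cos(headings), np.sin(headings)], axis=1) * (speeds * dt)[:, None]
    return pos + steps + noise                                  # n plausible positions dt seconds ahead

def site_probabilities(samples, sites, radius=15.0):
    """Fraction of samples landing inside each circular site-of-interest."""
    return {name: float(np.mean(np.linalg.norm(samples - centre, axis=1) < radius))
            for name, centre in sites.items()}

samples = sample_future_positions(pos=np.array([0.0, 0.0]), heading=0.4, speed=1.4)
print(site_probabilities(samples, {"cafe": np.array([6.0, 3.0]),
                                   "gallery": np.array([40.0, -10.0])}))
```

In a system like the one described, such per-site probabilities could then drive the visual and granular-synthesis audio display instead of a single predicted position.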