Ambient Gestures
We present Ambient Gestures, a novel gesture-based system designed to support ubiquitous ‘in the environment’ interactions with everyday computing technology. Hand gestures and audio feedback allow users to control computer applications without reliance on a graphical user interface, and without having to switch from the context of a non-computer task to the context of the computer. The Ambient Gestures system is composed of a vision recognition software application, a set of gestures to be processed by a scripting application, and a navigation and selection application that is controlled by the gestures. This system allows us to explore gestures as the primary means of interaction within a multimodal, multimedia environment. In this paper we describe the Ambient Gestures system, define the gestures and the interactions that can be achieved in this environment, and present a formative study of the system. We conclude with a discussion of our findings and future applications of Ambient Gestures in ubiquitous computing.
Toward a model of computational attention based on expressive behavior: applications to cultural heritage scenarios
Our project goals consisted of developing attention-based analysis of human expressive behavior and implementing real-time algorithms in EyesWeb XMI in order to improve the naturalness of human-computer interaction and context-based monitoring of human behavior. To this aim, a perceptual model that mimics human attentional processes was developed for expressivity analysis and modeled by entropy. Museum scenarios were selected as an ecological test-bed to elaborate three experiments that focus on visitor profiling and visitor flow regulation.
GlobalFestival: Evaluating Real World Interaction on a Spherical Display
Spherical displays present compelling opportunities for interaction in public spaces. However, there is little research into how touch interaction should control a spherical surface or how these displays are used in real world settings. This paper presents an in-the-wild deployment of an application for a spherical display called GlobalFestival that utilises two different touch interaction techniques. The first version of the application allows users to spin and tilt content on the display, while the second version only allows spinning the content. During the 4-day deployment, we collected overhead video data and on-display interaction logs. The analysis brings together quantitative and qualitative methods to understand how users approach and move around the display, how on-screen interaction compares in the two versions of the application, and how the display supports social interaction given its novel form factor.
Enter the Circle: Blending Spherical Displays and Playful Embedded Interaction in Public Spaces
Public displays are used in a variety of contexts, from utility-driven information displays to playful entertainment displays. Spherical displays offer new opportunities for interaction in public spaces, allowing users to face each other during interaction and explore content from a variety of angles and perspectives. This paper presents a playful installation that places a spherical display at the centre of a playful environment embedded with interactive elements. The installation, called Enter the Circle, involves eight chair-sized boxes filled with interactive lights that can be controlled by touching the spherical display. The boxes are placed in a ring around the display, and passers-by must “enter the circle” to explore and play with the installation. We evaluated this installation in a pedestrianized walkway for three hours over an evening, collecting on-screen logs and video data. This paper presents a novel evaluation of a spherical display in a public space, discusses an experimental design concept that blends displays with embedded interaction, and analyses real world interaction with the installation.
Understanding Public Evaluation: Quantifying Experimenter Intervention
Public evaluations are popular because some research questions can only be answered by turning “to the wild.” Different approaches place experimenters in different roles during deployment, which has implications for the kinds of data that can be collected and the potential bias introduced by the experimenter. This paper expands our understanding of how experimenter roles impact public evaluations and provides an empirical basis to consider different evaluation approaches. We completed an evaluation of a playful gesture-controlled display – not to understand interaction at the display but to compare different evaluation approaches. The conditions placed the experimenter in three roles, steward observer, overt observer, and covert observer, to measure the effect of experimenter presence and analyse the strengths and weaknesses of each approach.
Intimate interfaces in action: assessing the usability and subtlety of emg-based motionless gestures
Mobile communication devices, such as mobile phones and networked personal digital assistants (PDAs), allow users to be constantly connected and communicate anywhere and at any time, often resulting in personal and private communication taking place in public spaces. This private–public contrast can be problematic. As a remedy, we promote intimate interfaces: interfaces that allow subtle and minimal mobile interaction, without disruption of the surrounding environment. In particular, motionless gestures sensed through the electromyographic (EMG) signal have been proposed as a solution to allow subtle input in a mobile context. In this paper we present an expansion of the work on EMG-based motionless gestures including (1) a novel study of their usability in a mobile context for controlling a realistic, multimodal interface and (2) a formal assessment of how noticeable they are to informed observers. Experimental results confirm that subtle gestures can be profitably used within a multimodal interface and that it is difficult for observers to guess when someone is performing a gesture, confirming the hypothesis of subtlety.
VisualYzARt Project – The role in education
The VisualYzARt project intends to develop research on mobile platforms, web and social scenarios in order to bring augmented reality and natural interaction to the general public, aiming to study and validate the adequacy of the YVision platform in various fields of activity such as digital arts, design, education, culture and leisure. The VisualYzARt project members analysed the components available in the YVision platform and are defining new ones that allow the creation of applications for a chosen activity, effectively adding a new language to the YVision domain. In this paper we present the role of the Instituto Politécnico de Santarém, which falls into the field of education. VisualYzARt is funded by QREN – Sistema de Incentivos à Investigação e Desenvolvimento Tecnológico (SI I&DT), Project n.º 23201 - VisualYzARt (from January 2013 to December 2014). Partners: YDreams Portugal; Instituto Politécnico de Santarém - Gabinete de e-Learning; Universidade de Coimbra - Centro de Informática e Sistemas; Instituto Politécnico de Leiria - Centro de Investigação em Informática e Comunicações; Universidade Católica do Porto - Centro de Investigação em Ciência e Tecnologia das Artes.