The passive operating mode of the linear optical gesture sensor
The study evaluates the influence of natural light conditions on the
effectiveness of the linear optical gesture sensor, operating in the presence of
ambient light only (passive mode). The orientation of the device relative to
the light source was varied in order to verify the sensitivity of the sensor.
A criterion for differentiating between two states, "possible gesture" and
"no gesture", was proposed. Additionally, different light conditions and
candidate features relevant to the decision of switching between the passive
and active modes of the device were investigated. The criterion
was evaluated based on the specificity and sensitivity analysis of the binary
ambient light condition classifier. The elaborated classifier predicts ambient
light conditions with an accuracy of 85.15%. Once the light conditions are
known, the hand pose can be detected. The hand-pose classifier trained on data
obtained in the passive mode under favorable light conditions achieved an
accuracy of 98.76%. It was also shown that the passive operating mode of the
linear gesture sensor reduces the total energy consumption by 93.34%, resulting
in a current draw of 0.132 mA. It was concluded that the linear optical sensor
can be used efficiently in various lighting conditions.

Comment: 10 pages, 14 figures
RGBD Datasets: Past, Present and Future
Since the launch of the Microsoft Kinect, scores of RGBD datasets have been
released. These have propelled advances in areas from reconstruction to gesture
recognition. In this paper we explore the field, reviewing datasets across
eight categories: semantics, object pose estimation, camera tracking, scene
reconstruction, object tracking, human actions, faces and identification. By
extracting relevant information in each category we help researchers to find
appropriate data for their needs, and we consider which datasets have succeeded
in driving computer vision forward and why.
Finally, we examine the future of RGBD datasets. We identify key areas which
are currently underexplored, and suggest that future directions may include
synthetic data and dense reconstructions of static and dynamic scenes.

Comment: 8 pages excluding references (CVPR style)
Machine Understanding of Human Behavior
A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next-generation computing, which we will call human computing, should be about anticipatory user interfaces that are human-centered, built for humans based on human models. They should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, how far we are from enabling computers to understand human behavior.
Wrist-worn pervasive gaze interaction
This paper addresses gaze interaction for smart home control, conducted from a wrist-worn unit. First, we asked ten people to enact the gaze movements they would propose for, e.g., opening a door or adjusting the room temperature. On the basis of their suggestions we built and tested different versions of a prototype applying off-screen stroke input. Command prompts were given to twenty participants by text or arrow displays. The success rate achieved by the end of their first encounter with the system was 46% on average; it took them 1.28 seconds to connect with the system and 1.29 seconds to make a correct selection. Their subjective evaluations were positive with regard to the speed of the interaction. We conclude that gaze gesture input seems feasible for fast and brief remote control of smart home technology, provided that the robustness of tracking is improved.
Towards the design of effective freehand gestural interaction for interactive TV
As interactive devices become pervasive, people are beginning to look for more advanced interaction with televisions in the living room. Interactive television has the potential to offer a very engaging experience. However, common user tasks such as menu selection and text input remain challenging with such systems, and little work has been done on understanding and supporting the effective design of freehand interaction with a TV in the domestic environment. In this paper, we report two studies investigating freehand gestural interaction with a consumer-level sensor that is suitable for TV use scenarios. In the first study, we investigate a range of design factors for tiled-layout menu selection, including wearable feedback, push gesture depth, target size, and position in motor space. The results show that tactile and audio feedback have no significant effect on user performance and preference, and these results inform potential designs for high selection performance. In the second study, we investigate using freehand gestures for the common TV user task of text input. We design and evaluate two virtual keyboard layouts and three freehand selection methods. Results show that ease of use and error tolerance can both be achieved using a text entry method utilizing a dual-circle layout and an expanding-target selection technique. Finally, we propose design guidelines for effective, usable, and comfortable freehand gestural interaction for interactive TV based on the findings.
Brave New GES World: A Systematic Literature Review of Gestures and Referents in Gesture Elicitation Studies
How to determine highly effective and intuitive gesture sets for interactive systems tailored to end users' preferences? A substantial body of knowledge is available on this topic, among which gesture elicitation studies stand out distinctively. In these studies, end users are invited to propose gestures for specific referents, which are the functions to control for an interactive system. The vast majority of gesture elicitation studies conclude with a consensus gesture set identified following a process of consensus or agreement analysis. However, the information about specific gesture sets determined for specific applications is scattered across a wide landscape of disconnected scientific publications, which makes it difficult for researchers and practitioners to effectively harness this body of knowledge. To address this challenge, we conducted a systematic literature review and examined a corpus of N=267 studies encompassing a total of 187,265 gestures elicited from 6,659 participants for 4,106 referents. To understand similarities in users' gesture preferences within this extensive dataset, we analyzed a sample of 2,304 gestures extracted from the studies identified in our literature review. Our approach consisted of (i) identifying the context of use represented by end users, devices, platforms, and gesture sensing technology, (ii) categorizing the referents, (iii) classifying the gestures elicited for those referents, and (iv) cataloging the gestures based on their representation and implementation modalities. Drawing from the findings of this review, we propose guidelines for conducting future end-user gesture elicitation studies.
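The consensus or agreement analysis described in this abstract is commonly quantified with an agreement rate over the gestures proposed for each referent. The sketch below uses the Vatavu-Wobbrock agreement-rate formula as one plausible instance (the abstract does not name a specific formula, so this choice is an assumption), with made-up gesture proposals.

```python
# Minimal sketch of agreement-rate analysis for a gesture elicitation study.
# Formula assumed: AR(r) = sum_i |P_i|*(|P_i|-1) / (|P|*(|P|-1)), where the
# P_i are groups of identical gesture proposals for referent r and P is the
# full set of proposals. The example proposals are invented for illustration.
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate for one referent, given one gesture label per participant."""
    n = len(proposals)
    if n < 2:
        return 0.0  # agreement is undefined for fewer than two proposals
    groups = Counter(proposals)
    # Count ordered pairs of participants who proposed the same gesture,
    # normalized by the total number of ordered pairs.
    return sum(k * (k - 1) for k in groups.values()) / (n * (n - 1))

# Example: six participants propose gestures for a hypothetical "open door"
# referent; three agree on "swipe", two on "push", one proposes "wave".
print(agreement_rate(["swipe", "swipe", "swipe", "push", "push", "wave"]))
# groups swipe=3, push=2, wave=1 -> (3*2 + 2*1 + 0) / (6*5) = 8/30
```

Computing this rate per referent and averaging across referents is one way a study arrives at a consensus gesture set: referents with high agreement yield a clear winning gesture, while low-agreement referents call for designer judgment.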