A human computer interactions framework for biometric user identification
Computer-assisted functionalities and services have saturated our world, becoming such an integral part of our daily activities that we hardly notice them. In this study we focus on enhancements in Human-Computer Interaction (HCI) that can be achieved by natural user recognition embedded in the employed interaction models. Natural identification among humans is mostly based on biometric characteristics representing what we are (face, body outlook, voice, etc.) and how we behave (gait, gestures, posture, etc.). Following this observation, we investigate different approaches and methods for adapting existing biometric identification methods and technologies to the needs of evolving natural human-computer interfaces.
A Review on Human-Computer Interaction and Intelligent Robots
In the field of artificial intelligence, human-computer interaction (HCI) technology and its related intelligent robot technologies are essential and interesting topics of research. These technologies study and attempt to build a natural HCI environment from the perspectives of software algorithms and hardware systems. The purpose of this research is to provide an overview of HCI and intelligent robots. This research highlights the existing technologies of listening, speaking, reading, writing, and other senses, which are widely used in human interaction. Based on these same technologies, this research introduces some intelligent robot systems and platforms. This paper also forecasts some vital challenges of researching HCI and intelligent robots. The authors hope that this work will help researchers in the field to acquire the necessary information and technologies to further conduct more advanced research.
Bacteria Hunt: Evaluating multi-paradigm BCI interaction
The multimodal, multi-paradigm brain-computer interfacing (BCI) game Bacteria Hunt was used to evaluate two aspects of BCI interaction in a gaming context. One goal was to examine the effect of feedback on the user's ability to manipulate their mental state of relaxation. This was done by having one condition in which the subject played the game with real feedback, and another with sham feedback. The feedback did not seem to affect the game experience (such as sense of control and tension) or the objective indicators of relaxation, alpha activity and heart rate. The results are discussed with regard to clinical neurofeedback studies. The second goal was to look into possible interactions between the two BCI paradigms used in the game: steady-state visually-evoked potentials (SSVEP) as an indicator of concentration, and alpha activity as a measure of relaxation. SSVEP stimulation activates the cortex and can thus block the alpha rhythm. Despite this effect, subjects were able to keep their alpha power up, in compliance with the instructed relaxation task. In addition to the main goals, a new SSVEP detection algorithm was developed and evaluated.
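The abstract above uses alpha-band power as the objective indicator of relaxation. As a minimal sketch of how alpha power (8-12 Hz) can be estimated from an EEG segment via a periodogram, the function below is an illustrative assumption, not the game's actual pipeline; the synthetic signals at the end stand in for real electrode data:

```python
import numpy as np

def alpha_band_power(eeg, fs, band=(8.0, 12.0)):
    """Estimate mean spectral power in the alpha band of a 1-D EEG segment.

    eeg : 1-D array of samples; fs : sampling rate in Hz.
    Computes a simple periodogram and averages the power of the
    frequency bins falling inside `band`.
    """
    n = len(eeg)
    # Remove DC offset before the FFT so the 0 Hz bin does not dominate.
    spectrum = np.abs(np.fft.rfft(eeg - eeg.mean())) ** 2 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].mean()

# Synthetic check: a 10 Hz sinusoid (inside the alpha band) should show
# far more alpha power than a 25 Hz sinusoid (outside the band).
fs = 256
t = np.arange(0, 4, 1.0 / fs)
relaxed = np.sin(2 * np.pi * 10 * t)
busy = np.sin(2 * np.pi * 25 * t)
```

In practice a windowed estimator such as Welch's method over multiple epochs would be preferred over a raw periodogram, but the band-masking step is the same.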
Subjective Annotations for Vision-Based Attention Level Estimation
Attention level estimation systems have high potential in many use cases, such as human-robot interaction, driver modeling, and smart home systems, since being able to measure a person's attention level opens the possibility of natural interaction between humans and computers. The topic of estimating a human's visual focus of attention has recently been actively addressed in the field of HCI. However, most of these previous works do not consider attention as a subjective, cognitive attentive state. New research within the field also faces the problem of the lack of annotated datasets regarding attention level in a given context. The novelty of our work is two-fold: first, we introduce a new annotation framework that tackles the subjective nature of attention level and use it to annotate more than 100,000 images with three attention levels, and second, we introduce a novel method to estimate attention levels, relying purely on geometric features extracted from RGB and depth images, and evaluate it with a deep learning fusion framework. The system achieves an overall accuracy of 80.02%. Our framework and attention level annotations are made publicly available. Comment: 14th International Conference on Computer Vision Theory and Applications
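The abstract above maps geometric features from RGB and depth images to three discrete attention levels. As a toy sketch of that feature-to-level mapping, the function below uses hypothetical head-pose and gaze features with made-up thresholds; the paper's actual method is a trained deep learning fusion model, not a hand-tuned rule:

```python
import math

def attention_level(yaw_deg, pitch_deg, gaze_offset_deg):
    """Toy heuristic mapping geometric head/gaze features to one of
    three attention levels: 2 = high, 1 = medium, 0 = low.

    Feature names (head yaw/pitch, gaze offset from the target) and
    the thresholds below are illustrative assumptions, not values
    from the paper.
    """
    # Combine head deviation from frontal with gaze offset into one score.
    deviation = math.sqrt(yaw_deg ** 2 + pitch_deg ** 2) + gaze_offset_deg
    if deviation < 15:
        return 2  # looking almost straight at the target
    if deviation < 40:
        return 1  # moderately turned away
    return 0      # clearly disengaged
```

A learned classifier replaces these thresholds in the actual system, but the input representation, purely geometric quantities rather than raw pixels, is the point the abstract emphasizes.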
Windows Vista media center controlled by speech
Master's project report in Informatics Engineering (Engenharia Informática), presented to the Universidade de Lisboa through the Faculdade de Ciências, 200
Towards disappearing user interfaces for ubiquitous computing: human enhancement from sixth sense to super senses
The electronic enhancement of human senses becomes possible when pervasive computers interact unnoticeably with humans in Ubiquitous Computing. Designing computer user interfaces towards "disappearing" forces interaction with humans through a content-driven rather than a menu-driven approach, thus meeting the emerging requirement for a huge number of non-technical users to interface intuitively with billions of computers in the Internet of Things. Learning to use particular applications in Ubiquitous Computing is either too slow or sometimes impossible, so user interfaces must be designed naturally enough to facilitate intuitive human behaviours. Although humans from different racial, cultural, and ethnic backgrounds share the same physiological sensory system, their perception of the same stimuli outside the human body can differ. A novel taxonomy for Disappearing User Interfaces (DUIs) to stimulate human senses and to capture human responses is proposed. Furthermore, applications of DUIs are reviewed, and DUIs with sensor and data fusion to simulate the Sixth Sense are explored. Enhancement of human senses through DUIs and Context Awareness is discussed as the groundwork enabling smarter wearable devices for interfacing with human emotional memories.
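The abstract above mentions sensor and data fusion as the mechanism behind a simulated "Sixth Sense". As a minimal sketch of one common fusion scheme, a weighted average of normalized sensor readings, the function below uses hypothetical sensor names and weights that are illustrative assumptions, not the paper's taxonomy:

```python
def fuse_sensors(readings, weights):
    """Toy weighted-average fusion of normalized sensor readings.

    readings : dict mapping sensor name -> value in [0, 1]
    weights  : dict mapping sensor name -> non-negative weight
    Returns one combined context score in [0, 1].
    """
    total = sum(weights[name] for name in readings)
    return sum(readings[name] * weights[name] for name in readings) / total

# Hypothetical wearable sensors feeding a single awareness score.
score = fuse_sensors(
    {"camera_motion": 1.0, "microphone_level": 0.0},
    {"camera_motion": 1.0, "microphone_level": 1.0},
)
```

Real DUI systems would add per-sensor confidence estimation and temporal smoothing, but the core idea of collapsing heterogeneous inputs into one context signal is the same.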
Designing Computer Agents with Facial Personality to Improve Human-Machine Collaboration
The development of computer agents to enhance human-computer interfaces is an evolving field of study. This study examined whether people perceive personality in static digital faces that portray expressions of emotion, and whether the digital faces would influence human performance on a simple human-machine collaborative task. The first experiment measured user perception of personality based on the emotional expression in two sets of five static digital faces. The results from this first phase revealed that participants provided different ratings of the Big-Five personality model sub-traits based on the emotional expression of a static digital face, indicating a perception of personality based on expression. The second experiment measured how faces with identified personality traits influence decision making in a simple collaborative task. The results revealed that the different faces did not have a significant impact on performance criteria. Results from this study indicated some isolated differences related to gender and nationality.