Toward a model of computational attention based on expressive behavior: applications to cultural heritage scenarios
Our project goals were the development of attention-based analysis of human expressive behavior and the implementation of real-time algorithms in EyesWeb XMI, in order to improve the naturalness of human-computer interaction and context-based monitoring of human behavior. To this aim, a perceptual model that mimics human attentional processes was developed for expressivity analysis and modeled via entropy. Museum scenarios were selected as an ecological test-bed for three experiments focusing on visitor profiling and visitor flow regulation
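The abstract models expressivity via entropy but gives no formula. A minimal sketch, assuming the measure is Shannon entropy over a histogram of a scalar expressivity feature (the feature, bin count, and value range are illustrative assumptions, not details from the project):

```python
from collections import Counter
from math import log2

def shannon_entropy(samples, num_bins=8, lo=0.0, hi=1.0):
    """Shannon entropy (bits) of a histogram over quantized feature values.

    `samples` are scalar expressivity features (e.g. normalized movement
    energy per frame); the binning scheme is an assumption of this sketch.
    """
    width = (hi - lo) / num_bins
    bins = Counter(min(int((s - lo) / width), num_bins - 1) for s in samples)
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in bins.values())

# Widely varying motion spreads mass across bins (entropy near log2(8) = 3);
# constant motion concentrates it in one bin (entropy 0).
varied = [i / 100 for i in range(100)]
constant = [0.5] * 100
print(shannon_entropy(varied))    # close to 3.0
print(shannon_entropy(constant))  # 0.0
```

High entropy would then flag varied, expressive movement, and low entropy monotonous behavior, which is one plausible way such a measure could drive attention.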
Modular Customizable ROS-Based Framework for Rapid Development of Social Robots
Developing socially competent robots requires tight integration of robotics,
computer vision, speech processing, and web technologies. We present the
Socially-interactive Robot Software platform (SROS), an open-source framework
addressing this need through a modular layered architecture. SROS bridges the
Robot Operating System (ROS) layer for mobility with web and Android interface
layers using standard messaging and APIs. Specialized perceptual and
interactive skills are implemented as ROS services for reusable deployment on
any robot. This facilitates rapid prototyping of collaborative behaviors that
synchronize perception with physical actuation. We experimentally validated
core SROS technologies including computer vision, speech processing, and GPT-2
autocomplete speech implemented as plug-and-play ROS services. Modularity is
demonstrated through the successful integration of an additional ROS package,
without changes to hardware or software platforms. The capabilities enabled
confirm SROS's effectiveness in developing socially interactive robots through
synchronized cross-domain interaction. Through demonstrations showing
synchronized multimodal behaviors on an example platform, we illustrate how the
SROS architectural approach addresses shortcomings of previous work by lowering
barriers for researchers to advance the state-of-the-art in adaptive,
collaborative, customizable human-robot systems through novel applications
integrating perceptual and social abilities
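The abstract's key architectural idea is that skills register as services and plug in without changes to the core platform. A framework-free sketch of that registry pattern in plain Python (the service names and dict-based request/response shape are hypothetical, not the actual ROS or SROS API):

```python
from typing import Callable, Dict

class ServiceRegistry:
    """Plug-and-play skill registry, loosely mirroring how a layered
    platform like SROS exposes perceptual skills as callable services."""

    def __init__(self) -> None:
        self._services: Dict[str, Callable[[dict], dict]] = {}

    def register(self, name: str, handler: Callable[[dict], dict]) -> None:
        # New skills plug in by name; the core needs no modification.
        self._services[name] = handler

    def call(self, name: str, request: dict) -> dict:
        if name not in self._services:
            raise KeyError(f"no such service: {name}")
        return self._services[name](request)

registry = ServiceRegistry()
# A toy stand-in for an autocomplete-speech skill.
registry.register("speech/autocomplete",
                  lambda req: {"completion": req["prefix"] + " world"})
print(registry.call("speech/autocomplete", {"prefix": "hello"}))
# {'completion': 'hello world'}
```

In actual ROS, the same decoupling comes from typed service definitions and a name-resolution graph; the sketch only shows why adding a package requires no platform changes.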
User interfaces for anyone anywhere
In a global context of multimodal man-machine interaction, we approach a wide spectrum of fields, such as software engineering, intelligent communication and speech dialogues. This paper presents technological aspects of the shift from traditional desktop interfaces to more expressive, natural, flexible and portable ones, with which more people, in a greater number of situations, will be able to interact with computers. Speech appears to be one of the best forms of interaction, especially for supporting non-skilled users. Modalities such as speech tend to be very relevant to accessing information in our future society, in which mobile devices will play a preponderant role. Therefore, we place an emphasis on verbal communication in open environments (Java/XML) using software agent technology.
Fundação para a Ciência e a Tecnologia – PRAXIS XXI /BD/20095/99; Germany. Ministry of Science and Education – EMBASSI – 01IL90
A Model for Synthesizing a Combined Verbal and Nonverbal Behavior Based on Personality Traits in Human-Robot Interaction
In Human-Robot Interaction (HRI) scenarios, an intelligent robot should be able to synthesize an appropriate behavior adapted to the human's profile (i.e., personality). Recent research studies have discussed the effect of personality traits on human verbal and nonverbal behaviors. The dynamic characteristics of the gestures and postures generated during nonverbal communication can differ according to personality traits, which can similarly influence the verbal content of human speech. This research tries to map human verbal behavior to a corresponding combined verbal and nonverbal robot behavior based on the extraversion-introversion personality dimension. We explore the human-robot personality matching aspect and the similarity attraction principle, in addition to the different effects on interaction of the adapted combined robot behavior, expressed through speech and gestures, and the adapted speech-only robot behavior. Experiments with the humanoid NAO robot are reported
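One way such an adaptation could be wired up is a mapping from an extraversion score to combined speech and gesture parameters. A minimal sketch; the parameter names, ranges and linear mappings below are illustrative assumptions, not values from the cited study:

```python
def adapt_behavior(extraversion: float) -> dict:
    """Map an extraversion score in [0, 1] to robot behavior parameters.

    Extraverted profiles get faster speech and wider gestures, reflecting
    the similarity-attraction adaptation described in the abstract.
    The numeric ranges are assumptions of this sketch.
    """
    e = max(0.0, min(1.0, extraversion))  # clamp to [0, 1]
    return {
        "speech_rate_wpm": 120 + 60 * e,     # 120 wpm (introvert) .. 180 wpm
        "gesture_amplitude": 0.3 + 0.7 * e,  # normalized gesture extent
        "utterance_style": "expansive" if e >= 0.5 else "reserved",
    }

print(adapt_behavior(0.9))  # extraverted profile
print(adapt_behavior(0.2))  # introverted profile
```

A matched condition would feed the human's own estimated score into `adapt_behavior`; a speech-only condition would use `speech_rate_wpm` and `utterance_style` while ignoring the gesture parameter.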
Interactive voice response system and eye-tracking interface in assistive technology for disabled
Abstract. The development of ICT has been very fast in the last few decades, and it is important that everyone can benefit from this progress. It is essential for user interface design to keep up with this progress and to ensure the usability and accessibility of new innovations. The purpose of this academic literature review has been to study the basics of multimodal interaction, with an emphasis on multimodal assistive technology for disabled people. From the various modalities, interactive voice response and eye-tracking were chosen for analysis. The motivation for this work is to study how technology can be harnessed to assist disabled people in daily life
User-centred design of flexible hypermedia for a mobile guide: Reflections on the hyperaudio experience
A user-centred design approach involves end-users from the very beginning. Considering users at the early stages compels designers to think in terms of utility and usability, and helps base the system on what is actually needed. This paper discusses the case of HyperAudio, a context-sensitive, adaptive and mobile guide to museums developed in the late 90s. User requirements were collected via a survey to understand visitors' profiles and visit styles in natural science museums. The knowledge acquired supported the specification of system requirements, helping define the user model, data structure and adaptive behaviour of the system. User requirements guided the design decisions on what could be implemented using simple adaptable triggers and what instead needed more sophisticated adaptive techniques, a fundamental choice when all the computation must be done on a PDA. Graphical and interactive environments for developing and testing complex adaptive systems are discussed as a further step towards an iterative design that treats user interaction as a central point. The paper discusses how such an environment allows designers and developers to experiment with different system behaviours and to test them widely under realistic conditions by simulating the actual context evolving over time. The understanding gained in HyperAudio is then considered in the perspective of the developments that followed that first experience: our findings seem still valid despite the time that has passed