8 research outputs found

    Demo of Gaze Controlled Flying

    Get PDF
Development of a control paradigm for unmanned aerial vehicles (UAVs) is a new challenge for HCI. This demo explores how to use gaze as input for locomotion in 3D: a low-cost drone is controlled by tracking the user's point of regard (gaze) on a live video stream from the UAV.
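As an illustration of how such a mapping might work (the demo's actual control law is not specified in the abstract), the sketch below converts a normalized gaze point on the video feed into velocity commands; the dead zone, speed cap, and function names are illustrative assumptions.

```python
# Hypothetical gaze-to-locomotion mapping for a gaze-controlled drone.
# The dead zone and speed cap below are assumed values, not the demo's.

DEAD_ZONE = 0.15   # normalized radius around the screen centre with no motion
MAX_SPEED = 0.5    # m/s, a conservative cap for a low-cost drone

def gaze_to_velocity(gaze_x: float, gaze_y: float):
    """Map a normalized gaze point (0..1, 0..1) on the video feed to
    lateral and vertical velocity commands centred on (0.5, 0.5)."""
    dx = gaze_x - 0.5          # look right -> move right
    dy = 0.5 - gaze_y          # look up -> climb (screen y grows downward)
    if (dx * dx + dy * dy) ** 0.5 < DEAD_ZONE:
        return 0.0, 0.0        # gaze near the centre: hover in place
    return 2 * MAX_SPEED * dx, 2 * MAX_SPEED * dy
```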

    Immersive Teleoperation of the Eye Gaze of Social Robots Assessing Gaze-Contingent Control of Vergence, Yaw and Pitch of Robotic Eyes

    Get PDF
This paper presents a new teleoperation system, called stereo gaze-contingent steering (SGCS), able to seamlessly control the vergence, yaw and pitch of the eyes of a humanoid robot (here an iCub robot) from the actual gaze direction of a remote pilot. The video streams captured by the cameras embedded in the mobile eyes of the iCub are fed into an HTC Vive head-mounted display equipped with an SMI binocular eye-tracker. The SGCS achieves an effective coupling between the eye-tracked gaze of the pilot and the robot's eye movements. SGCS both ensures a faithful reproduction of the pilot's eye movements, a prerequisite for the readability of the robot's gaze patterns by its interlocutor, and maintains the pilot's oculomotor visual cues, which avoids fatigue and sickness due to sensorimotor conflicts. We assess the precision of this servo-control by asking several pilots to gaze towards known objects positioned in the remote environment. We demonstrate that vergence can be controlled with a precision similar to that of the eyes' azimuth and elevation. This system opens the way for robot-mediated human interactions in the personal space, notably when objects in the shared working space are involved.
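The abstract does not give the servo-control equations; the following is a rough geometric sketch of how a binocular fixation point relates to vergence, yaw, and pitch angles. The head-centred frame and the interocular distance are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Rough geometry linking a 3D fixation point to yaw, pitch, and
# vergence angles for a pair of robot eyes. The head-centred frame
# (x right, y up, z forward, metres) and the interocular distance are
# assumptions for illustration.

IOD = 0.068  # interocular distance in metres (assumed)

def gaze_angles(fixation):
    """Return (yaw, pitch, vergence) in degrees for a fixation point."""
    x, y, z = fixation
    yaw = np.arctan2(x, z)                 # azimuth of the cyclopean gaze
    pitch = np.arctan2(y, np.hypot(x, z))  # elevation
    d = np.linalg.norm(fixation)           # distance to the fixation point
    vergence = 2 * np.arctan2(IOD / 2, d)  # angle between the two eye axes
    return np.degrees([yaw, pitch, vergence])

# Example: an object 50 cm ahead and 10 cm to the right.
print(gaze_angles(np.array([0.1, 0.0, 0.5])))
```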

    Evaluation of head-free eye tracking as an input device for air traffic control

    Get PDF
The purpose of this study was to investigate the possibility of integrating a free-head-motion eye-tracking system as an input device in air traffic control (ATC) activity. Sixteen participants used an eye tracker to select targets displayed on a screen as quickly and accurately as possible. We assessed the impact of the presence of visual feedback about gaze position and of the method of target selection on selection performance, under different difficulty levels induced by variations in target size and target-to-target separation. We consider the combined use of gaze dwell-time selection and continuous eye-gaze feedback to be the best condition, as it fits naturally with gaze displacement over the ATC display and frees the controller's hands, despite a small cost in selection speed. In addition, target size had a greater impact on accuracy and selection time than target distance. These findings provide guidelines for a possible further implementation of eye tracking in everyday ATC activity.
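For readers unfamiliar with dwell-time selection, the minimal sketch below shows the mechanism in outline; the 800 ms threshold and the class interface are assumptions, not the study's parameters.

```python
import time

# Minimal dwell-time selection: a target is selected after the gaze
# rests on it continuously for DWELL_S seconds. The threshold and the
# interface are illustrative assumptions.

DWELL_S = 0.8  # assumed dwell threshold in seconds

class DwellSelector:
    def __init__(self):
        self.current = None   # target currently under the gaze
        self.since = 0.0      # when the gaze arrived on it

    def update(self, target_id):
        """Call once per gaze sample with the target under the gaze
        (or None). Returns the target id when the dwell completes."""
        now = time.monotonic()
        if target_id != self.current:
            self.current, self.since = target_id, now   # gaze moved on
            return None
        if target_id is not None and now - self.since >= DWELL_S:
            self.since = now   # re-arm so selection fires only once
            return target_id
        return None
```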

    Multimodality with Eye tracking and Haptics: A New Horizon for Serious Games?

    Get PDF
The goal of this review is to illustrate the emerging use of multimodal virtual reality that can benefit learning-based games. The review begins with an introduction to multimodal virtual reality in serious games and a brief discussion of why the cognitive processes involved in learning and training are enhanced in immersive virtual environments. We first outline studies that have used eye tracking and haptic feedback independently in serious games, and then review some innovative applications that have combined eye tracking and haptic devices to provide applicable multimodal frameworks for learning-based games. Finally, some general conclusions are identified and clarified in order to advance the current understanding of multimodal serious game production and to explore possible areas for new applications.

U2XECS: Usability and User Experience Evaluation of Conversational Systems

    Get PDF
Advisor: Natasha Malveira Costa Valentim. Master's dissertation, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 26/11/2020. Includes references: p. 111-121. Concentration area: Computer Science.
Abstract: Due to the increased use of technologies in recent years, new forms of interaction are present in society. Voice-based interaction, characteristic of Conversational Systems (CSs), is an example of these new interaction forms. Amazon Alexa, Siri, Google Assistant, Amazon Frame, Amazon Echo devices, and Google Home are examples of CSs that use the user's voice to perform tasks. CSs have aroused interest from both industry and academia, receiving investment and featuring in research in Human-Computer Interaction (HCI) and Software Engineering (SE). Like any other system, CSs must provide a good experience and meet the needs of their users. In this sense, the evaluation of Usability and User eXperience (UX) is seen as an essential step that contributes to verifying the quality of CSs. In a Usability evaluation, attributes regarding the system's behavioral goals, such as effectiveness, efficiency, and user satisfaction, are usually verified. In a UX evaluation, attributes related to the user's feelings, such as emotion and motivation, are usually considered. However, through two Systematic Mapping Studies (SMSs), it was identified that the technologies used to evaluate the Usability and/or UX of CSs were generic and could evaluate any kind of software. In addition, some Usability or UX evaluation questionnaires for conversational interfaces were identified; however, these technologies consider only one quality aspect, Usability or UX. The SMSs also identified that some researchers use questionnaires developed for their own studies without going through an empirical evaluation process. Therefore, this work provides a joint Usability and UX evaluation technology specific to CSs: U2XECS (Usability and User eXperience Evaluation of Conversational Systems). U2XECS is a questionnaire-based evaluation technology that provides Usability and UX statements specific to evaluating CSs; its goal is to guide researchers and developers in identifying improvements and user perceptions in these systems. Besides the SMSs and the proposed technology, three studies carried out during the elaboration and refinement of the technology are presented: an exploratory study, a survey, and a feasibility study. The results showed positive points of U2XECS related to ease of use, usefulness, and intention to use. Opportunities for improvement were also identified, such as ambiguous statements and changes to the structure and size of the questionnaire. Keywords: Usability Evaluation. User Experience Evaluation. Voice-Based Interaction. Conversational Systems.

    "Moving to the centre": A gaze-driven remote camera control for teleoperation

    No full text
Conventional control interfaces such as joysticks, switches, and wheels predominate in teleoperation. However, operators often have to control multiple complex devices simultaneously, for example a rock breaker and a remote camera at the same time in mining teleoperation. This overloads the operator's manual control capability, increases workload, and reduces productivity. We present a novel gaze-driven remote camera control with an implemented prototype, which follows a simple and natural design principle: "Whatever you look at on the screen, it moves to the centre!" A user study with a modeled hands-busy task was conducted, comparing the performance of the gaze-driven control and traditional joystick control through both objective and subjective measures. The experimental results clearly show that the gaze-driven control significantly outperformed the conventional joystick control.
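A minimal sketch of the stated design principle might look as follows; the field-of-view values and the gain are assumptions, and the prototype's actual camera interface is not described in the abstract.

```python
# Sketch of the "move to the centre" principle: turn the gaze point's
# offset from the screen centre into pan/tilt increments for the remote
# camera. Field-of-view values and the gain are assumed.

H_FOV_DEG = 60.0   # assumed horizontal field of view of the camera
V_FOV_DEG = 40.0   # assumed vertical field of view
GAIN = 0.5         # fraction of the offset corrected per command

def recentre(gaze_x, gaze_y, width, height):
    """Given a gaze point in pixels on a width x height image, return
    (pan, tilt) in degrees that move the looked-at scene point toward
    the image centre."""
    pan = GAIN * (gaze_x / width - 0.5) * H_FOV_DEG
    tilt = -GAIN * (gaze_y / height - 0.5) * V_FOV_DEG  # image y grows downward
    return pan, tilt
```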

    Modeling the Human Visuo-Motor System for Remote-Control Operation

    Get PDF
University of Minnesota Ph.D. dissertation. 2018. Major: Computer Science. Advisors: Nikolaos Papanikolopoulos, Berenice Mettler. 1 computer file (PDF); 172 pages.
Successful operation of a teleoperated miniature rotorcraft relies on capabilities including guidance, trajectory following, feedback control, and environmental perception. For many operating scenarios, fragile automation systems are unable to provide adequate performance. In contrast, human-in-the-loop systems demonstrate an ability to adapt to changing and complex environments, stability in control response, high-level goal selection and planning, and the ability to perceive and process large amounts of information. Modeling the perceptual processes of the human operator provides the foundation necessary for a systems-based approach to the design of control and display systems used by remotely operated vehicles. In this work we consider flight tasks for remotely controlled miniature rotorcraft operating in indoor environments. Operation of agile robotic systems in three-dimensional spaces requires a detailed understanding of the perceptual aspects of the problem as well as knowledge of the task and models of the operator response. When modeling the human-in-the-loop, the dynamics of the vehicle, the environment, and human perception-action are tightly coupled in space and time, and the dynamic response of the overall system emerges from the interplay of perception and action. The main questions to be answered in this work are: i) what approach does the human operator implement when generating a control and guidance response? ii) how is information about the vehicle and environment extracted by the human? iii) can the gaze patterns of the pilot be decoded to provide information for estimation and control? This work differs from existing research by focusing on fast-acting dynamic systems in multiple dimensions and by investigating how gaze can be exploited to provide action-relevant information. To study human-in-the-loop systems, the development and integration of the experimental infrastructure is described. Using this infrastructure, a theoretical framework for computational modeling of the human pilot's perception-action is proposed and verified experimentally. The benefits of the human visuo-motor model are demonstrated through application examples in which the perceptual and control functions of a teleoperation system are augmented to reduce workload and provide a more natural human-machine interface.
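As a toy illustration of the human-in-the-loop idea (the dissertation's visuo-motor models are far richer), one can treat the pilot's gaze point as a noisy estimate of the intended goal and close a simple proportional-derivative loop around it; the gains and loop rate below are assumptions.

```python
import numpy as np

# Toy perception-action loop: a gaze-derived goal drives a simple
# proportional-derivative controller. Gains and the 50 Hz rate are
# illustrative assumptions, not the dissertation's identified models.

KP, KD, DT = 2.0, 2.0, 0.02  # assumed gains and timestep (50 Hz)

def step(pos, vel, gaze_goal):
    """One control step: accelerate toward the gaze-derived goal,
    damping with the current velocity."""
    acc = KP * (gaze_goal - pos) - KD * vel
    vel = vel + acc * DT
    pos = pos + vel * DT
    return pos, vel

pos, vel = np.zeros(3), np.zeros(3)
goal = np.array([1.0, 0.5, 1.5])   # e.g. a fixated waypoint
for _ in range(500):               # simulate 10 seconds
    pos, vel = step(pos, vel, goal)
print(np.round(pos, 3))            # settles near the goal
```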

    Multimodal interactions in virtual environments using eye tracking and gesture control.

    Get PDF
Multimodal interactions provide users with more natural ways to interact with virtual environments than traditional input methods. An emerging approach is gaze-modulated pointing, which enables users to select and manipulate virtual content conveniently through a combination of gaze and other hand-control techniques/pointing devices, in this thesis mid-air gestures. To establish a synergy between the two modalities and evaluate the affordance of this novel multimodal interaction technique, it is important to understand their behavioural patterns and relationship, as well as any possible perceptual conflicts and interactive ambiguities. More specifically, evidence shows that eye movements lead hand movements, but the question remains whether this leading relationship is similar when interacting using a pointing device. Moreover, as gaze-modulated pointing uses different sensors to track and detect user behaviours, its performance relies on users' perception of the exact spatial mapping between the virtual space and the physical space. This raises an underexplored issue: whether gaze can introduce misalignment of the spatial mapping and lead to user misperception and interactive errors. Furthermore, the accuracy of eye tracking and mid-air gesture control is not yet comparable with that of traditional pointing techniques (e.g., the mouse). This may cause pointing ambiguity when fine-grained interactions are required, such as selecting in a dense virtual scene where proximity and occlusion are prone to occur. This thesis addresses these concerns through experimental studies and theoretical analysis involving paradigm design, the development of interactive prototypes, and user studies for the verification of assumptions, comparisons, and evaluations. Substantial data sets were obtained and analysed from each experiment. The results conform to and extend previous empirical findings that gaze leads pointing-device movements in most cases, both spatially and temporally. The studies verified that gaze does introduce spatial misperception, and three methods (Scaling, Magnet, and Dual-gaze) were proposed and shown to reduce the impact of this perceptual conflict, with Magnet and Dual-gaze delivering better performance than Scaling. In addition, a coarse-to-fine solution is proposed and evaluated to compensate for the degradation introduced by eye-tracking inaccuracy; it uses a gaze cone to detect ambiguity, followed by a gaze probe for decluttering. The results show that this solution can enhance interaction accuracy but requires a compromise on efficiency. These findings can inform a more robust multimodal interface design for interactions within virtual environments supported by both eye tracking and mid-air gesture control. This work also opens up a technical pathway for the design of future multimodal interaction techniques, which starts from a derivation of naturally correlated behavioural patterns and then considers whether the design of the interaction technique can maintain perceptual constancy and whether any ambiguity among the integrated modalities will be introduced.
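The abstract does not define the gaze cone formally; a minimal sketch of the coarse ambiguity-detection stage might look as follows, with the 3-degree cone radius assumed to cover eye-tracking error.

```python
import numpy as np

# Sketch of the coarse stage of the coarse-to-fine solution: a gaze
# cone flags ambiguity when several objects fall within an angular
# radius of the gaze ray. The 3-degree radius is an assumed value.

CONE_DEG = 3.0  # assumed angular radius covering eye-tracking error

def ambiguous_targets(eye_pos, gaze_dir, objects):
    """Return the names of objects whose direction from the eye lies
    within the cone around the gaze ray; more than one hit means a
    finer disambiguation step (e.g. a gaze probe) is needed."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    hits = []
    for name, pos in objects.items():
        v = pos - eye_pos
        cos_a = np.dot(v, gaze_dir) / np.linalg.norm(v)
        if np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))) <= CONE_DEG:
            hits.append(name)
    return hits
```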