
    Servo control of an assistive robotic arm using an artificial stereo vision system and an eye tracker

    The recent increased interest in the use of serial robots to assist individuals with severe upper limb disabilities has brought up an important issue: the design of the right human-computer interaction (HCI). So far, assistive robotic arms (ARAs) have typically been controlled with a joystick, which is not a suitable option for users with a severe upper limb disability. This master's thesis presents a novel solution to this problem, composed of two main components. The first is a stereo vision system that informs the ARA of the contents of its workspace; the ARA must be aware of what is present in its workspace so that it can avoid unwanted objects on its way to grasp the object of interest. The second component is the HCI itself, for which an eye tracker is used, since the eyes often remain functional even in patients with severe upper limb disabilities. However, low-cost, commercially available eye trackers are mainly designed for 2D applications with a screen, which is not intuitive for the user, who must constantly watch a 2D reproduction of the scene on a screen instead of the 3D scene itself. In other words, the eye tracker needs to be made viable in a 3D environment without the use of a screen, which is what this thesis achieves. A stereo vision system, an eye tracker, and an ARA are the main components of the developed system, named PoGARA, short for Point of Gaze Assistive Robotic Arm. Using PoGARA, the user was able to reach and grasp an object in 80% of the trials, with an average time of 13.7 seconds without obstacles, 15.3 seconds with one obstacle, and 16.3 seconds with two obstacles.
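    As a rough illustration of how a gaze target might be recovered in 3D without a screen, the sketch below intersects a gaze ray with a stereo depth map. It is a minimal sketch under assumed pinhole-camera geometry, not the actual PoGARA pipeline; the function name, intrinsics, and ray-marching step are illustrative assumptions.

```python
# Minimal sketch (assumed geometry, NOT the PoGARA implementation):
# march along the eye tracker's gaze ray and stop where it meets the
# depth map produced by the stereo camera, yielding a 3D grasp target
# in the stereo camera frame.
import numpy as np

def gaze_target_3d(origin, direction, depth_map, fx, fy, cx, cy,
                   step=0.01, max_range=2.0):
    """origin/direction: gaze ray in the camera frame (metres);
    depth_map: HxW metric depth from stereo matching;
    fx, fy, cx, cy: pinhole intrinsics of the reference camera."""
    direction = direction / np.linalg.norm(direction)
    t = step
    while t < max_range:
        p = origin + t * direction           # candidate point on the ray
        u = int(fx * p[0] / p[2] + cx)       # project into the image
        v = int(fy * p[1] / p[2] + cy)
        if (0 <= v < depth_map.shape[0] and 0 <= u < depth_map.shape[1]
                and depth_map[v, u] <= p[2]):  # ray reached the surface
            return p
        t += step
    return None  # gaze ray leaves the workspace without hitting anything
```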

    A high speed Tri-Vision system for automotive applications

    Purpose: Cameras are excellent ways of non-invasively monitoring the interior and exterior of vehicles. In particular, high-speed stereovision and multivision systems are important for transport applications such as driver eye tracking or collision avoidance. This paper addresses the synchronisation problem that arises when multivision camera systems are used to capture the high-speed motion common in such applications. Methods: An experimental, high-speed tri-vision camera system intended for real-time driver eye-blink and saccade measurement was designed, developed, implemented and tested using prototype, ultra-high dynamic range, automotive-grade image sensors specifically developed by E2V (formerly Atmel) Grenoble SA as part of the European FP6 project SENSATION (advanced sensor development for attention, stress, vigilance and sleep/wakefulness monitoring). Results: The developed system can sustain frame rates of 59.8 Hz at the full stereovision resolution of 1280 × 480, but this can reach 750 Hz when a 10 kpixel region of interest (ROI) is used, with a maximum global shutter speed of 1/48000 s and a shutter efficiency of 99.7%. The data can be reliably transmitted uncompressed over 5 metres of standard copper Camera-Link® cable. The synchronisation error between the left and right stereo images is less than 100 ps, and this has been verified both electrically and optically. Synchronisation is automatically established at boot-up and maintained during resolution changes. A third camera in the set can be configured independently. The dynamic range of the 10-bit sensors exceeds 123 dB, with a spectral sensitivity extending well into the infra-red range. Conclusion: The system was subjected to a comprehensive testing protocol, which confirms that the salient requirements for the driver monitoring application are adequately met and, in some respects, exceeded. The synchronisation technique presented may also benefit several other automotive stereovision applications, including near- and far-field obstacle detection and collision avoidance, road condition monitoring and others. Partially funded by the EU FP6 through the IST-507231 SENSATION project. Peer-reviewed.
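    The quoted throughput figures can be sanity-checked with a first-order readout model, sketched below. The model is an assumption, not the paper's timing analysis: it simply scales frame rate with the number of pixels read, and the gap between its small-ROI prediction and the quoted 750 Hz shows how the per-row and per-frame overheads it ignores cap real sensors.

```python
# Back-of-the-envelope readout model (assumed, not from the paper):
# sustained pixel throughput inferred from the full-resolution figure.
PIXEL_RATE = 1280 * 480 * 59.8    # ~36.7 Mpixel/s at 1280 x 480, 59.8 Hz

def est_frame_rate(width, height):
    """Frame rate if readout time scaled only with pixels read."""
    return PIXEL_RATE / (width * height)

print(est_frame_rate(1280, 480))  # 59.8 Hz by construction
print(est_frame_rate(100, 100))   # ~3673 Hz for a 10 kpixel ROI -- well
                                  # above the measured 750 Hz, so per-row
                                  # and per-frame overheads dominate there
```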

    Adaptive Real-Time Image Processing for Human-Computer Interaction


    Attractive, Informative, and Communicative Robot System on Guide Plate as an Attendant with Awareness of User’s Gaze

    In this paper, we introduce an interactive guide plate system that adopts a gaze-communicative stuffed-toy robot and a gaze-interactive display board. A stuffed-toy robot attached to the system naturally provides anthropomorphic guidance corresponding to the user's gaze orientation. The guidance is presented through a) gaze-communicative behaviors of the stuffed-toy robot, which uses joint attention and eye-contact reactions to virtually express its own mind, in conjunction with b) vocal guidance and c) projection on the guide plate. We adopted our image-based remote gaze-tracking method to detect the user's gaze orientation. The results of both empirical studies, with subjective/objective evaluations, and observations from our demonstration experiments in a semi-public space show i) the total operation of the system, ii) the elicitation of the user's interest by the gaze behaviors of the robot, and iii) the effectiveness of the gaze-communicative guidance adopting the anthropomorphic robot.

    Direction Estimation Model for Gaze Controlled Systems

    Gaze detection requires estimating the positions of, and the relation between, the user's pupil and the corneal glint. This position is mapped into the region of interest using different edge detectors, which detect the glint coordinates and, from them, the gaze direction. In this paper, a Gaze Direction Estimation (GDE) model is proposed for a comparative analysis of two standard edge detectors, Canny and Sobel, for automatically detecting the glint, its coordinates and, subsequently, the gaze direction. The results indicate a fairly good percentage of cases in which the correct glint coordinates, and subsequently the correct gaze-direction quadrants, were estimated. These results can further be used to improve the accuracy and performance of eye-gaze-based systems.
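    To make the glint-to-quadrant idea concrete, here is an illustrative OpenCV sketch; it is not the paper's GDE model. It finds a candidate glint with a Canny edge pass and derives a coarse gaze-direction quadrant from the glint's offset relative to the pupil centre; the thresholds and the brightest-edge-pixel heuristic are assumptions for demonstration.

```python
# Illustrative glint detection with Canny (assumed thresholds; the
# paper's GDE model may differ). The corneal glint is a small bright
# reflection, so its rim produces strong edges.
import cv2
import numpy as np

def glint_and_quadrant(eye_gray, pupil_center):
    edges = cv2.Canny(eye_gray, 100, 200)
    ys, xs = np.nonzero(edges)
    if xs.size == 0:
        return None, None
    idx = np.argmax(eye_gray[ys, xs])        # brightest edge pixel ~ glint
    glint = (int(xs[idx]), int(ys[idx]))
    dx = glint[0] - pupil_center[0]
    dy = glint[1] - pupil_center[1]
    quadrant = ("upper" if dy < 0 else "lower") + "-" + \
               ("left" if dx < 0 else "right")
    return glint, quadrant
```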

    A State of the Art Overview on Biosignal-based User-Adaptive Video Conferencing Systems

    Video conferencing systems are widely used by distributed teams since they support flexible work arrangements. However, they have negative impacts on users, such as a lack of eye contact or Zoom fatigue. Adaptive interventions in video conferences based on user behavior offer interesting solutions to these challenges, for example by alerting users who appear tired. Specifically, biosignals measured by sensors such as microphones or eye trackers are a promising basis for adaptive interventions. To provide an overview of current biosignal-based user-adaptive video conferencing systems, we conducted a systematic literature review and identified 24 publications. We summarize the existing knowledge in a morphological box and outline further research directions. A focus on bio-optical signals is evident: current adaptations target audience feedback, expression understanding, and eye gaze, mostly through image and representation modifications. For future work, we recommend including further biosignals and addressing more diverse problems by investigating the adaptation capabilities of further software elements.

    Depth Perception, Cueing, and Control

    Humans rely on a variety of visual cues to inform them of the depth or range of a particular object or feature. Some cues are provided by physiological mechanisms, others by pictorial cues that are interpreted psychologically, and still others by the relative motions of objects or features induced by observer (or vehicle) motion. These cues provide different levels of information (ordinal, relative, absolute) and salience depending on depth, task, and interaction with other cues. Display technologies used for head-down and head-up displays, as well as out-the-window displays, have differing capabilities for providing depth-cueing information to the observer/operator. In addition to the technology, the display content and its source (camera sensor versus computer rendering) provide varying degrees of cue information. Additionally, most displays create some degree of cue conflict. In this paper, visual depth cues and their interactions will be discussed, as well as display technology and content and related artifacts. Lastly, the role of depth cueing in performing closed-loop control tasks will be discussed.
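    As a concrete example of one absolute cue, binocular disparity falls off with distance in a way that is simple to quantify. The sketch below uses standard stereo geometry, not material from the paper: for interocular separation b and fixation distance Z, the vergence angle is 2·atan(b/2Z) ≈ b/Z radians, so disparity sensitivity drops roughly with the square of distance.

```python
# Standard stereo geometry (textbook relation, not from the paper):
# angular disparity available to the observer at various distances.
import math

def vergence_rad(b_m, z_m):
    """Vergence angle for interocular separation b_m at distance z_m."""
    return 2 * math.atan(b_m / (2 * z_m))

for z in (0.5, 1.0, 2.0, 10.0):   # metres; b = 6.5 cm is a typical value
    arcmin = math.degrees(vergence_rad(0.065, z)) * 60
    print(f"{z:5.1f} m -> {arcmin:6.1f} arcmin")
```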

    Multimodality with Eye tracking and Haptics: A New Horizon for Serious Games?

    The goal of this review is to illustrate the emerging use of multimodal virtual reality that can benefit learning-based games. The review begins with an introduction to multimodal virtual reality in serious games, and we provide a brief discussion of why the cognitive processes involved in learning and training are enhanced in immersive virtual environments. We first outline studies that have used eye tracking and haptic feedback independently in serious games, and then review some innovative applications that have combined eye tracking and haptic devices to provide applicable multimodal frameworks for learning-based games. Finally, some general conclusions are identified and clarified in order to advance current understanding of multimodal serious game production and to explore possible areas for new applications.