Mobile robot teleoperation through eye-gaze (TeleGaze)
In most teleoperation applications the human operator must monitor the status of the robot and issue control commands for the whole duration of the operation. With a vision-based feedback system, monitoring the robot requires the operator to watch a continuous stream of images displayed on an interaction screen. The operator's eyes are therefore fully engaged in monitoring, and the hands in controlling. Since the eyes are engaged in monitoring anyway, inputs from the operator's gaze can also be used to aid controlling. This frees the operator's hands, partially or fully, from controlling, so they can perform other necessary tasks. The challenge, however, lies in distinguishing gaze inputs intended for controlling from those that are merely monitoring. In mobile robot teleoperation, controlling consists mainly of issuing locomotion commands to drive the robot, while monitoring means watching where the robot goes and looking for obstacles on its route. Interestingly, there exists a strong correlation between humans' gazing behaviours and their moving intentions. This correlation is exploited in this thesis to investigate novel means of mobile robot teleoperation through eye-gaze, named TeleGaze for short.
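The control/monitoring distinction described above can be sketched in code. The sketch below is an illustrative assumption, not the thesis's actual implementation: it maps gaze coordinates on the interaction screen to named regions (outer bands issue locomotion commands, the centre is pure monitoring) and uses a dwell-time threshold so that brief monitoring glances at the screen edges do not trigger commands. All region names, sizes, and thresholds are hypothetical.

```python
# Hypothetical sketch of gaze-to-command mapping for TeleGaze-style control.
# Screen dimensions, the margin width, and the dwell threshold are
# illustrative assumptions, not parameters from the thesis.

SCREEN_W, SCREEN_H = 640, 480
MARGIN = 0.2          # outer 20% band of the screen acts as command regions
DWELL_FRAMES = 15     # gaze must rest this many frames before a command fires


def region_of(x, y, w=SCREEN_W, h=SCREEN_H, margin=MARGIN):
    """Map a gaze coordinate to a named screen region."""
    if y < h * margin:
        return "forward"
    if y > h * (1 - margin):
        return "backward"
    if x < w * margin:
        return "turn_left"
    if x > w * (1 - margin):
        return "turn_right"
    return "monitor"  # central area: the operator is just watching the feed


class DwellFilter:
    """Suppress commands until the gaze dwells in one region long enough,
    so ordinary monitoring glances are not mistaken for control inputs."""

    def __init__(self, threshold=DWELL_FRAMES):
        self.threshold = threshold
        self.current = None
        self.count = 0

    def update(self, x, y):
        r = region_of(x, y)
        if r == self.current:
            self.count += 1
        else:
            self.current, self.count = r, 1
        if r != "monitor" and self.count >= self.threshold:
            return r  # dwell satisfied: issue the locomotion command
        return None   # otherwise treat the gaze as monitoring
```

For example, with a threshold of three frames, two consecutive gaze samples near the top of the screen are still treated as monitoring; only the third returns `"forward"`.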
Human-Robotics Interface for the Interaction with Cognitive and Emotional Human Domains
For a human-robot interface it is important to have a good model of how the human subject operates. Since such a model is difficult to obtain, the robotic interface must instead observe the subject's behaviour accurately while interacting with them. We present here a new human-robot interface for active interaction with the cognitive and emotional human domains. Because eye movements convey a great deal of information about a subject's cognitive and emotional state, we have designed a new human-robot interface that uses a video-based Eye-Tracker (ET) to observe the subject's line of gaze. Since we are also interested in using our interface for studying and treating depression, it can deliver stimulating inputs to the subject through both a Transcranial Magnetic Stimulator (TMS) and a visual stimulus. The latter elicits the subject's emotions and consists of a set of pictures of facial expressions, shown according to a novel visualization protocol called Memory-Guided Filtering (MGF), whose effectiveness has been verified through extensive experiments. We also present the application of our human-robot interface to preliminary studies of new cognitive rehabilitation strategies for depression.