    Multimodal Observation and Interpretation of Subjects Engaged in Problem Solving

    In this paper we present the first results of a pilot experiment in the capture and interpretation of multimodal signals of human experts engaged in solving challenging chess problems. Our goal is to investigate the extent to which observations of eye-gaze, posture, emotion and other physiological signals can be used to model the cognitive state of subjects, and to explore the integration of multiple sensor modalities to improve the reliability of detection of human displays of awareness and emotion. We observed chess players engaged in problems of increasing difficulty while recording their behavior. Such recordings can be used to estimate a participant's awareness of the current situation and to predict their ability to respond effectively to challenging situations. Results show that a multimodal approach is more accurate than a unimodal one. By combining body posture, visual attention and emotion, the multimodal approach reaches up to 93% accuracy when determining a player's chess expertise, whereas the unimodal approach reaches 86%. Finally, this experiment validates the use of our equipment as a general and reproducible tool for the study of participants engaged in screen-based interaction and/or problem solving.
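    The abstract does not detail how the modalities are fused; the following is a minimal sketch, assuming per-modality feature vectors are simply concatenated and passed to an off-the-shelf classifier. The feature dimensions, the random-forest choice, and the synthetic data are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of feature-level fusion for expertise classification.
# Feature contents and the classifier choice are assumptions, not the
# authors' implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def fuse_features(gaze, posture, emotion):
    """Concatenate per-modality feature vectors into one sample."""
    return np.concatenate([gaze, posture, emotion])

# Hypothetical per-trial feature vectors (e.g. fixation statistics,
# torso-lean measures, valence/arousal scores) and expertise labels.
rng = np.random.default_rng(0)
X = np.array([fuse_features(rng.normal(size=4),   # gaze features
                            rng.normal(size=3),   # posture features
                            rng.normal(size=2))   # emotion features
              for _ in range(40)])
y = rng.integers(0, 2, size=40)  # 1 = expert, 0 = intermediate player

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("fused-modality CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

    Dropping one of the modality arguments from fuse_features gives the corresponding unimodal baseline for comparison.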

    A wearable general-purpose solution for Human-Swarm Interaction

    Swarms of robots will revolutionize many industrial applications, from targeted material delivery to precision farming. Controlling the motion and behavior of these swarms presents unique challenges for human operators, who cannot yet effectively convey their high-level intentions to a group of robots in practical applications. This work proposes a new human-swarm interface based on novel wearable gesture-control and haptic-feedback devices. It seeks to combine a wearable gesture-recognition device that can detect high-level intentions, a portable device that can detect Cartesian information and finger movements, and a wearable advanced haptic device that can provide real-time feedback. This project is the first to envisage a wearable Human-Swarm Interaction (HSI) interface that separates the input and feedback components of the classical control loop (input, output, feedback), and the first of its kind suitable for both indoor and outdoor environments.
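    The interface is described only at the system level; the sketch below illustrates one way recognized gestures might be mapped to high-level swarm commands with a haptic acknowledgement. The gesture labels, command vocabulary, and callback signatures are hypothetical and are not the devices or protocol proposed in this work.

```python
# Sketch of gesture-to-swarm-command mapping with haptic confirmation.
# Gesture names, command names, and callbacks are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SwarmCommand:
    action: str             # e.g. "disperse", "aggregate", "follow"
    magnitude: float = 1.0  # could be scaled by hand speed or finger spread

GESTURE_TO_COMMAND = {
    "open_palm": "disperse",
    "closed_fist": "aggregate",
    "point": "follow",
}

def on_gesture(label: str, magnitude: float,
               send: Callable[[SwarmCommand], None],
               haptic: Callable[[float], None]) -> None:
    """Translate a recognized gesture into a high-level swarm command
    and acknowledge it with a short haptic pulse."""
    action = GESTURE_TO_COMMAND.get(label)
    if action is None:
        return  # unrecognized gesture: no command, no feedback
    send(SwarmCommand(action, magnitude))
    haptic(0.2)  # 200 ms confirmation pulse

# Example wiring with stand-in transport and feedback functions.
on_gesture("open_palm", 0.8,
           send=lambda cmd: print("broadcast:", cmd),
           haptic=lambda s: print(f"haptic pulse {s:.1f} s"))
```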

    Facilitating Human-Robot Collaboration Using a Mixed-Reality Projection System

    Human-robot collaboration can be a challenging exercise, especially when both the human and the robot want to work simultaneously on a given task. It becomes difficult for the human to understand the intentions of the robot and vice versa. To overcome this problem, a novel approach using the concept of mixed reality has been proposed, which uses the surrounding space as the canvas to augment projected information on and around 3D objects. A vision-based tracking algorithm precisely detects the pose and state of the 3D objects, and human-skeleton tracking is performed to create a system that is both human-aware and context-aware. Additionally, the system can warn humans about the intentions of the robot, thereby creating a safer environment to work in. An easy-to-use and universal visual language has been created which could form the basis for interaction in various human-robot collaborations in manufacturing industries. An objective and subjective user study was conducted to test the hypothesis that using this system to execute a human-robot collaborative task would result in higher performance compared to traditional methods such as printed instructions and mobile devices. Multiple measuring tools were devised to analyze the data, which led to the conclusion that the proposed mixed-reality projection system does improve the human-robot team's efficiency and effectiveness and hence will be a better alternative in the future.
    Dissertation/Thesis: Masters Thesis, Computer Science, 201
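    The human-aware behaviour is not spelled out in the abstract; the sketch below shows one plausible form of it, in which a warning cue is projected whenever the robot's next target falls within a safety radius of a tracked skeleton joint. The radius, joint format, and cue names are assumptions for illustration only.

```python
# Sketch of a human-aware projection decision: warn if the robot's next
# target is close to any tracked skeleton joint. Threshold and cue names
# are hypothetical, not the thesis's visual language.
import math

SAFETY_RADIUS_M = 0.5  # assumed warning threshold, in metres

def nearest_joint_distance(target_xyz, skeleton_joints):
    """Euclidean distance from the robot's next target to the closest joint."""
    return min(math.dist(target_xyz, joint) for joint in skeleton_joints)

def select_projection(target_xyz, skeleton_joints):
    """Choose which cue the projector should render around the target."""
    if nearest_joint_distance(target_xyz, skeleton_joints) < SAFETY_RADIUS_M:
        return {"cue": "warning", "anchor": target_xyz}   # robot intends to move here
    return {"cue": "next_step", "anchor": target_xyz}     # normal instruction overlay

# Example: tracked hand and elbow positions (metres), robot target near the hand.
joints = [(0.40, 0.10, 0.90), (0.60, -0.20, 1.10)]
print(select_projection((0.45, 0.05, 0.95), joints))
```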

    Iconic gestures for robot avatars, recognition and integration with speech

    © 2016 Bremner and Leonards. Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realized remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have previously been shown to enhance social presence and operator salience. We present a motion-tracking-based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we conducted a user study that investigated whether robot-produced iconic gestures are comprehensible and are integrated with speech. Outcomes for robot-performed gestures were compared directly to those for gestures produced by a human actor, using a within-participant experimental design. We show that iconic gestures produced by a tele-operated robot are understood by participants when presented alone almost as well as when produced by a human. More importantly, we show that gestures are integrated with speech when presented as part of a multi-modal communication equally well for human and robot performances.

    Effects of input modality and expertise on workload and video game performance

    A recent trend in consumer and military electronics has been to give operators the option of controlling a system via novel control methods, the most prevalent and available of which is vocal control. Vocal control allows a system to be controlled by speaking commands rather than manually inputting them, with implications not only for increased productivity but also for improving safety and assisting disabled users. Past research has examined the potential costs and benefits of this novel control scheme with varying results. The purpose of this study was to further examine the relationship between modality of input, operator workload, and expertise. The results indicated that vocal control may not be an ideal method of input in all situations, as participants experienced significantly higher workload than those in the manual condition. Additionally, expertise may be more specific than previously thought, as participants in the vocal condition performed nearly identically at the task regardless of gaming expertise. The findings of this study suggest that vocal control should be further examined as an effective method of user input, especially with regard to expertise and training effects.

    Computational Humor 2012: extended abstracts of the (3rd international) Workshop on Computational Humor

    Direct Visual Servoing for Grasping Using Depth Maps

    Visual servoing is useful for many applications, such as tracking objects, controlling the position of end-effectors, and grasping, and it has found use in both industrial and academic settings. It remains a challenging task in robotics, and considerable research has sought to improve servoing methods, for the grasping application in particular. Our goal is to use visual servoing to control the end-effector of a robotic arm, bringing it to a grasping position for the object of interest. Obtaining depth information has always been a major challenge for visual servoing, yet it is necessary. Depth was either assumed to be available from a 3D model or was estimated using stereo vision or other methods; this process is computationally expensive, and the results may be inaccurate because of sensitivity to environmental conditions. Depth maps have recently become more widely used by researchers, as they provide an easy, fast and cheap way to capture depth information. This solves the problem of estimating the needed 3D information, but the algorithms developed so far are only successful when starting from small initial errors. An effective position controller capable of reaching the target location from large initial errors is needed. The thesis presented here uses Kinect depth maps to directly control a robotic arm, bringing it to a grasping location specified by a target image. The algorithm consists of a two-phase controller: the first phase is a feature-based approach that provides a coarse alignment with the target image, leaving relatively small errors; the second phase minimizes the difference between the current and target depth maps. This controller allows the system to achieve minimal steady-state errors in translation and rotation starting from a relatively small initial error. To test the system's effectiveness, several experiments were conducted. The experimental setup consists of the Barrett WAM robotic arm with a Microsoft Kinect camera mounted on it in an eye-in-hand configuration. A goal scene captured from the grasping position is provided to the system, whose controller drives the arm to the target position from any initial condition. Our system outperforms previous work on this subject and functions successfully even with large initial errors, which is achieved by preceding the main control algorithm with a coarse image alignment via feature-based control. Automating the system further by automatically detecting the best grasping position and making that location the robot's target would be a logical extension to improve and complete this work.
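    As a rough illustration of the two-phase structure described above, the toy simulation below first reduces a feature-space error for coarse alignment and then minimizes the squared difference between current and target depth maps by numerical gradient descent on a scalar pose. The rendering function, gains, and thresholds are stand-ins; the actual controller operates on full Kinect depth maps and a multi-axis arm, not this one-dimensional toy.

```python
# Toy two-phase servoing loop: coarse feature-based alignment, then direct
# depth-map error minimization. All quantities here are illustrative
# placeholders, not the thesis's control law.
import numpy as np

def render_depth(pose, size=64):
    """Toy stand-in for a depth map whose appearance shifts with camera pose."""
    x = np.linspace(0.0, 1.0, size)
    return 1.0 + 0.5 * np.sin(2.0 * np.pi * (x + pose))

target_pose = 0.30
target_feature = np.array([target_pose])   # phase-1 target (feature space)
target_depth = render_depth(target_pose)   # phase-2 target (depth map)

def depth_cost(p):
    """Mean squared difference between current and target depth maps."""
    return float(np.mean((render_depth(p) - target_depth) ** 2))

pose = 0.90  # large initial error, the case the two-phase scheme addresses

# Phase 1: coarse feature-based alignment (proportional control on feature error).
while abs(pose - target_feature[0]) > 0.05:
    pose += 0.5 * (target_feature[0] - pose)

# Phase 2: direct depth-map error minimization (finite-difference gradient step).
while depth_cost(pose) > 1e-6:
    eps = 1e-4
    grad = (depth_cost(pose + eps) - depth_cost(pose - eps)) / (2.0 * eps)
    pose -= 0.05 * grad

print(f"final pose {pose:.4f}, target {target_pose:.4f}")
```

    The hand-over from phase 1 to phase 2 happens once the feature error drops below a threshold, mirroring the coarse-then-fine strategy the abstract credits for handling large initial errors.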