141 research outputs found

    Mechatronic design of the Twente humanoid head

    This paper describes the mechatronic design of the Twente humanoid head, which has been realized with the purpose of providing a research platform for human-machine interaction. The design features a fast, four-degree-of-freedom neck with a long range of motion, and a vision system with three degrees of freedom, mimicking the eyes. To achieve fast target tracking, two degrees of freedom in the neck are combined in a differential drive, resulting in a low moving mass and the possibility to use powerful actuators. The performance of the neck has been optimized by minimizing backlash in the mechanisms and by using gravity compensation. The vision system is based on a saliency algorithm that uses the camera images to determine where the humanoid head should look, i.e., the focus of attention, computed according to biological studies. The motion control algorithm receives, as input, the output of the vision algorithm and controls the humanoid head to focus on and follow the target point. The control architecture exploits the redundancy of the system to show human-like motions while looking at a target. The head has a translucent plastic cover, onto which an internal LED system projects the mouth and the eyebrows, realizing human-like facial expressions.
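The differential-drive idea in the abstract above can be sketched as a simple kinematic mapping. This is an illustrative model, not the authors' implementation: the symmetric 0.5 gear ratio and the angle conventions are assumptions.

```python
# Illustrative kinematics of a differential drive for the lower neck:
# two stationary motors jointly produce tilt and pan, so neither motor
# adds to the moving mass. The symmetric 1:1 differential (0.5 factor)
# and sign conventions are assumptions for the example.
def differential_to_joint(theta_a, theta_b):
    """Map two motor angles to an assumed (tilt, pan) joint pair."""
    tilt = 0.5 * (theta_a + theta_b)
    pan = 0.5 * (theta_a - theta_b)
    return tilt, pan

def joint_to_differential(tilt, pan):
    """Inverse mapping: motor angles for a desired (tilt, pan) pose."""
    return tilt + pan, tilt - pan
```

Because both motors contribute to both axes, each axis can draw on the combined torque of the two actuators, which is the property the abstract highlights.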

    The Twente humanoid head

    This video shows the results of the project on the mechatronic development of the Twente humanoid head. The mechanical structure consists of a neck with four degrees of freedom (DOFs) and two eyes (a stereo pair system) which tilt on a common axis and rotate sideways freely, providing three more DOFs. The motion control algorithm is designed to receive, as an input, the output of a biologically inspired vision processing algorithm and to exploit the redundancy of the joints for the realization of the movements. The expressions of the humanoid head are implemented by projecting light from the internal part of the translucent plastic cover.

    Motion control of the Twente humanoid head

    In this work, we present the design and the realization of the motion control algorithm implemented in the Twente humanoid head, a seven-degrees-of-freedom (DOF) robotic system. The aim of the project is to have a humanoid head that can serve as a research platform for human-machine interaction purposes. The head should not only be able to perceive its environment and track objects, but also be able to move in a human-like way, i.e., to reproduce the motions of human beings and to mimic human expressions.

    Mechatronic design of a fast and long range 4 degrees of freedom humanoid neck

    This paper describes the mechatronic design of a humanoid neck. To research human-machine interaction, the head and neck combination should approach human behavior as closely as possible. We present a novel humanoid neck concept that is both fast and has a long range of motion in 4 degrees of freedom (DOFs). This enables the head to track fast objects, and the neck design is suitable for mimicking expressions. The humanoid neck features a differential drive design for the lower 2 DOFs, resulting in a low moving mass and the ability to use strong actuators. The performance of the neck has been optimized by minimizing backlash in the mechanisms and by using gravity compensation. Two cameras in the head are used for scanning and interaction with the environment.
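The gravity compensation mentioned above can be illustrated as a feedforward torque term. The mass and lever-arm values below are invented for the sketch and do not come from the paper.

```python
import math

# Feedforward gravity compensation for a single tilt joint (illustrative):
# the actuator adds the torque that gravity exerts on the head mass, so
# the feedback controller only has to handle the remaining dynamics.
# mass (kg) and arm (m) are made-up example values, not the real head's.
def gravity_torque(theta, mass=1.2, arm=0.08, g=9.81):
    """Holding torque (N*m) at tilt angle theta (rad from horizontal)."""
    return mass * g * arm * math.cos(theta)
```

At the upright pose (theta = pi/2) the required torque vanishes, and it is largest when the head leans fully forward, which is where uncompensated backlash and actuator load would otherwise be worst.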

    Vision based motion control for a humanoid head

    This paper describes the design of a motion control algorithm for a humanoid robotic head, which consists of a neck with four degrees of freedom and two eyes (a stereo pair system) that tilt on a common axis and rotate sideways freely. The kinematic and dynamic properties of the head are analyzed and modeled using screw theory. The motion control algorithm is designed to receive, as an input, the output of a vision processing algorithm and to exploit the redundancy of the system for the realization of the movements. This algorithm is designed to enable the head to focus on and to follow a target, showing human-like motions. The performance of the control algorithm has been tested in a simulated environment and then experimentally applied to the real humanoid head.
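A common way to "exploit the redundancy of the system", as described above, is pseudoinverse control with a null-space posture term. The sketch below is a generic formulation, not the paper's screw-theory controller; the Jacobian, gains, and rest posture are assumptions.

```python
import numpy as np

def redundant_velocity(J, err, q, q_rest, k_task=1.0, k_post=0.1):
    """Joint velocities that drive the task error to zero while nudging
    the redundant joints toward a rest posture in the task's null space.

    J      : task Jacobian (m x n, with n > m for a redundant head)
    err    : task-space error (e.g. target offset in the image)
    q, q_rest : current and preferred joint configurations
    """
    J_pinv = np.linalg.pinv(J)                 # Moore-Penrose pseudoinverse
    N = np.eye(J.shape[1]) - J_pinv @ J        # projector onto the null space
    return k_task * (J_pinv @ err) + k_post * (N @ (q_rest - q))
```

The null-space term changes the posture without moving the target in the image, which is what allows human-like whole-body head motions while the gaze stays locked on the target.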

    Design and control of the Twente humanoid head

    The Twente humanoid head features a four-degree-of-freedom neck and two eyes that are implemented by using cameras. The cameras tilt on a common axis, but can rotate sideways independently, thus implementing another three degrees of freedom. A vision processing algorithm has been developed that selects interesting targets in the camera images. The image coordinates of the selected target are provided to a motion control algorithm, which controls the head to look at the target. The degrees of freedom and redundancy of the system are controlled such that natural human-like motions are obtained. The head is capable of showing expressions through mouth and eyebrows by means of light projection from the inside of the exterior shell.

    A Spherical Active Joint for Humanoids and Humans

    Both humanoid robotics and prosthetics rely on the possibility of implementing spherical active joints to build dexterous robots and useful prostheses. There are three possible kinematic implementations of spherical joints: serial, parallel, and hybrid, each one with its own advantages and disadvantages. In this letter, we propose a hybrid active spherical joint that combines the advantages of parallel and serial kinematics to replicate some of the features of biological articulations: large workspace, compact size, dynamical behavior, and an overall spherical shape. We compare the workspace of the proposed joint to that of human joints, showing the possibility of an almost-complete coverage by the device workspace, which is limited only by kinematic singularities. A first prototype is developed and preliminarily tested as part of a robotic shoulder joint.

    A methodology for operationalising the robot centric HRI paradigm: enabling robots to leverage sociocontextual cues during human-robot interaction

    University of Technology Sydney, Faculty of Engineering and Information Technology.
    The presence of social robots in society is increasing rapidly as their reach expands into more roles which are useful in our everyday lives. Many of these new roles require them to embody capabilities which were typically not accounted for in traditional Human-Robot Interaction (HRI) paradigms, for example increased agency and the ability to lead interactions and resolve ambiguity in situations of naïvety. The ability of such robots to leverage sociocontextual cues (i.e. non-verbal cues dependent on the social-interaction space and contextual-task space in order to be interpreted) is an important aspect of achieving these goals effectively and in a socially sensitive manner. This thesis presents a methodology which can be drawn on to successfully operationalise a contemporary paradigm of HRI – Kirchner & Alempijevic's Robot Centric HRI paradigm – which frames the interaction between humans and robots as a loop, incorporating additional feedback mechanisms to enable robots to leverage sociocontextual cues. Given the complexities of human behaviour and the dynamics of interaction, this is a non-trivial task. The Robot Centric HRI paradigm and methodology were therefore developed, explored and verified through a series of real-world HRI studies (n_total = 435 = 16 + 24 + 26 + 96 + 189 + 84). Firstly, by drawing on the methodology, it is demonstrated that sociocontextual cues can be successfully leveraged to increase the effectiveness of HRI in both directions of communication between humans and robots via the paradigm. Specifically, cues issued by social robots are shown to be recognisable to people, who generally respond to them in line with human-issued cues. Further, enabling robots to read interaction partners' cues in situ is shown to be highly valuable to HRI, for example by enabling robots to intentionally and effectively issue cues.
    In light of the finding that people will display HHI-predicted sociocontextual cues such as gaze around robots, a novel head yaw estimation framework which showed promise for the HRI space was developed and evaluated. This enables robots to read human-issued gaze cues and mutual attention in situ. Next, it is illustrated that a robot's effectiveness at achieving its goal(s) can be increased by adding to its ability to moderate the cues it issues based on information read from humans (i.e. increased interactivity). Finally, the above findings are shown to generalise to other sociocontextual cues, social robots and application spaces, demonstrating that the developed methodology can be drawn on to successfully operationalise the Robot Centric HRI paradigm, enabling robots to leverage sociocontextual cues to more effectively achieve their goal(s) and meet the requirements of their expanding roles.