28 research outputs found

    Lower body design of the ‘iCub’ a human-baby like crawling robot

    The development of robotic cognition and a greater understanding of human cognition form two of the greatest current challenges of science. Within the RobotCub project, the goal is the development of an embodied robotic child (iCub) with the physical and, ultimately, cognitive abilities of a 2½-year-old human baby. The ultimate goal of this project is to provide the cognition research community with an open, human-like platform for understanding cognitive systems through the study of cognitive development. In this paper, the design of the mechanisms adopted for the lower body, particularly the leg and the waist, is outlined. This is accompanied by a discussion of the actuator group realisation needed to meet the torque requirements while achieving the dimensional and weight specifications. Estimated performance measures of the iCub are presented.
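
    As a rough illustration of the actuator-sizing arithmetic involved in meeting such torque requirements, the minimal sketch below checks whether a candidate motor/gearbox pair can deliver a required joint torque. Every figure is a hypothetical placeholder, not an iCub specification.

```python
# Illustrative actuator-group sizing check: given a required joint torque,
# verify that a candidate motor/gearbox pair can deliver it.
# All numbers below are hypothetical placeholders, not iCub values.

def joint_torque_capacity(motor_torque_nm: float, gear_ratio: float,
                          gearbox_efficiency: float) -> float:
    """Torque available at the joint after the reduction stage."""
    return motor_torque_nm * gear_ratio * gearbox_efficiency

required_torque = 25.0   # N*m, assumed peak joint requirement
motor_torque = 0.40      # N*m, assumed continuous motor rating
gear_ratio = 100.0       # assumed harmonic-drive reduction
efficiency = 0.75        # assumed gearbox efficiency

capacity = joint_torque_capacity(motor_torque, gear_ratio, efficiency)
print(f"capacity {capacity:.1f} N*m vs required {required_torque:.1f} N*m -> "
      f"{'OK' if capacity >= required_torque else 'undersized'}")
```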

    What makes a social robot good at interacting with humans?

    This paper discusses the nuances of a social robot, how and why social robots are becoming increasingly significant, and what they are currently being used for. It also reflects on the current design of social robots as a means of interaction with humans and offers potential answers to several important questions around the future design of these robots. The specific questions explored are: “Do social robots need to look like living creatures that already exist in the world for humans to interact well with them?”; “Do social robots need to have animated faces for humans to interact well with them?”; “Do social robots need to have the ability to speak a coherent human language for humans to interact well with them?”; and “Do social robots need to have the capability to make physical gestures for humans to interact well with them?”. The paper reviews both verbal and nonverbal social and conversational cues that could be incorporated into the design of social robots, and briefly discusses the emotional bonds that may be built between humans and robots. Facets surrounding human acceptance of social robots, as well as ethical and moral concerns, are also discussed.

    Control of legged locomotion using dynamical systems: design methods and adaptive frequency oscillators

    Legged robots have gained increased attention in recent decades since they offer a promising technology for many applications in unstructured environments where the use of wheeled robots is clearly limited. Such applications include exploration and rescue tasks where human intervention is difficult (e.g. after a natural disaster) or impossible (e.g. on radioactive sites), and the emerging domain of assistive robotics, where robots should be able to meaningfully and efficiently interact with humans in their environment (e.g. climbing stairs). Moreover, the technology developed for walking machines can help in designing new rehabilitation devices for disabled persons, such as active prostheses. However, the control of agile legged locomotion is a challenging problem that is not yet solved in a satisfactory manner. Taking inspiration from the neural control of locomotion in animals, in this thesis we develop controllers for legged locomotion. These controllers are based on the concept of Central Pattern Generators (CPGs), neural networks located in the spine of vertebrates that generate the rhythmic patterns controlling locomotion. The use of a strong mathematical framework, namely dynamical systems theory, allows one to build general design methodologies for such controllers. The original contributions of this thesis are organized along three main axes. The first is a study of biological locomotion, more specifically of crawling human infants. Comparisons of the detailed kinematics and gait patterns of crawling infants with those of other quadruped mammals show many similarities. This is quite surprising, since infant morphology is not well suited to quadruped locomotion. In a second part, we use some of these findings as inspiration for the design of our locomotion controllers and try to provide a systematic design methodology for CPGs. Specifically, we design an oscillator to independently control the swing and stance durations during locomotion; then, using insights from dynamical systems theory, we construct generic networks supporting different gaits; and finally we integrate sensory feedback into the system. Experiments on three different simulated quadruped robots show the effectiveness of the approach. The third axis of research focuses on dynamical systems theory, more specifically on the development of an adaptive mechanism that allows oscillators to learn the frequency of any periodic signal. Interestingly, this mechanism is generic enough to work with a large class of oscillators. An extensive mathematical analysis is provided in order to understand the fundamental properties of this mechanism. An extension to pools of adaptive frequency oscillators with a negative feedback loop is then used to build programmable CPGs, i.e. CPGs that can encode any periodic pattern as a structurally stable limit cycle. We use the system to control the locomotion of a humanoid robot, and we also show applications of this system to signal processing.
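
    The frequency-adaptation mechanism described above is well documented in the dynamical-systems literature on adaptive frequency Hopf oscillators. The following minimal sketch (with illustrative gains and target frequency, not values from the thesis) shows the core idea: the oscillator state (x, y) settles on a limit cycle while its intrinsic frequency omega is driven toward the frequency of a periodic teaching signal F(t).

```python
import numpy as np

# Minimal sketch of an adaptive-frequency Hopf oscillator: omega adapts
# toward the frequency of the periodic input F(t). Gains and the target
# frequency below are illustrative choices, not values from the thesis.

gamma, mu, eps = 8.0, 1.0, 0.9      # convergence gain, radius^2, coupling
dt, T = 1e-3, 200.0                 # Euler step and simulation horizon (s)

x, y, omega = 1.0, 0.0, 20.0        # start away from the target frequency
omega_target = 30.0                 # rad/s, frequency of the teaching signal

for step in range(int(T / dt)):
    t = step * dt
    F = np.sin(omega_target * t)            # periodic teaching signal
    r2 = x * x + y * y
    dx = gamma * (mu - r2) * x - omega * y + eps * F
    dy = gamma * (mu - r2) * y + omega * x
    domega = -eps * F * y / np.sqrt(r2)     # frequency adaptation rule
    x, y, omega = x + dt * dx, y + dt * dy, omega + dt * domega

print(f"learned frequency: {omega:.2f} rad/s (target {omega_target} rad/s)")
```

    Convergence speed depends on the coupling eps and on the initial frequency gap; notably, the adaptation rule does not depend on the details of the oscillator, which is the genericity property mentioned in the abstract.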

    Towards a Cognitive Architecture for Socially Adaptive Human-Robot Interaction

    People have a natural predisposition to interact in an adaptive manner with others, instinctively changing their actions, tone and speech according to the perceived needs of their peers. Moreover, we are not only capable of registering the affective and cognitive state of our partners; over a prolonged period of interaction we also learn which behaviours are the most appropriate and well suited for each of them individually. This universal trait, which we share regardless of our different personalities, is referred to as social adaptation (adaptability). Humans are always capable of adapting to others, although our personalities may influence the speed and efficacy of the adaptation. This means that in our everyday lives we are accustomed to taking part in complex and personalized interactions with our peers. Carrying this ability to personalize over to human-robot interaction (HRI) is highly desirable, since it would provide user-personalized interaction, a crucial element in many HRI scenarios: interactions with older adults, assistive or rehabilitative robotics, child-robot interaction (CRI), and many others. For a social robot to be able to recreate this same kind of rich, human-like interaction, it should be aware of our needs and affective states and be capable of continuously adapting its behaviour to them. Equipping a robot with these functionalities, however, is not a straightforward task. A robust approach is to implement a framework that supports social awareness and adaptation: the robot needs to be equipped with basic cognitive functionalities that allow it to learn how to select the behaviours that maximize the pleasantness of the interaction for its peers, while being guided by an internal motivation system that provides autonomy to its decision-making process. The goal of this research was threefold: to design a cognitive architecture supporting social HRI and implement it on a robotic platform; to study how an adaptive framework of this kind functions when tested in HRI studies with users; and to explore how including adaptability and personalization in a cognitive framework actually affects users: does it bring additional richness to the human-robot interaction, as hypothesized, or does it instead only add uncertainty and unpredictability that would not be accepted by the robot's human peers? This thesis covers the work done on developing a cognitive framework for human-robot interaction; analyzes the various challenges of implementing the cognitive functionalities, porting the framework to several robotic platforms and testing potential validation scenarios; and finally presents the user studies performed with the iCub and MiRo robotic platforms, focused on understanding how a cognitive framework behaves in a free-form HRI context and whether humans can perceive and appreciate the adaptivity of the robot. In summary, this thesis approaches the complex field of cognitive HRI and attempts to shed some light on how cognition and adaptation develop, on both the human and the robot side, in an HRI scenario.
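
    To make the adaptation idea concrete, here is a deliberately small sketch of a behaviour-selection loop in which the robot keeps a running estimate of how pleasant each behaviour is for the current user and gradually favours the best ones. The behaviour names, the epsilon-greedy rule and the simulated feedback are illustrative assumptions; the architecture described in the thesis is far richer (internal motivation system, affective-state perception, etc.).

```python
import random

# Toy adaptation loop: estimate per-behaviour "pleasantness" from user
# feedback and favour the behaviours with the highest estimates.
# Behaviour names and the feedback model are hypothetical placeholders.

BEHAVIOURS = ["greet", "tell_joke", "play_sound", "approach"]
estimates = {b: 0.0 for b in BEHAVIOURS}    # estimated pleasantness
counts = {b: 0 for b in BEHAVIOURS}
epsilon = 0.2                               # exploration rate

def user_feedback(behaviour: str) -> float:
    """Stand-in for the perceived user reaction, on a 0..1 scale."""
    preference = {"greet": 0.5, "tell_joke": 0.8,
                  "play_sound": 0.3, "approach": 0.4}
    return min(1.0, max(0.0, random.gauss(preference[behaviour], 0.1)))

for _ in range(500):
    if random.random() < epsilon:           # explore a random behaviour
        choice = random.choice(BEHAVIOURS)
    else:                                   # exploit the current best one
        choice = max(estimates, key=estimates.get)
    reward = user_feedback(choice)
    counts[choice] += 1                     # incremental mean update
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print("behaviour learned as preferred:", max(estimates, key=estimates.get))
```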

    Towards adaptive and autonomous humanoid robots: from vision to actions

    Although robotics research has seen advances over the last decades, robots are still not in widespread use outside industrial applications. Yet a range of proposed scenarios have robots working together with, helping and coexisting with humans in daily life. In all of these, a clear need arises to deal with a more unstructured, changing environment. I herein present a system that aims to overcome the limitations of highly complex robotic systems in terms of autonomy and adaptation. The main focus of this research is to investigate the use of visual feedback for improving the reaching and grasping capabilities of complex robots. To facilitate this, an integration of computer vision and machine learning techniques is employed. From a robot vision point of view, combining domain knowledge from both image processing and machine learning can expand the capabilities of robots. I present a novel framework called Cartesian Genetic Programming for Image Processing (CGP-IP). CGP-IP can be trained to detect objects in incoming camera streams and has been successfully demonstrated on many different problem domains. The approach is fast, scalable and robust, and requires only a small training set (it was tested with 5 to 10 images per experiment). Additionally, it generates human-readable programs that can be further customized and tuned. While CGP-IP is a supervised learning technique, I show an integration on the iCub that allows for the autonomous learning of object detection and identification. Finally, this dissertation includes two proofs of concept that integrate the motion and action sides. The first is reactive reaching and grasping: it allows the robot to avoid obstacles detected in the visual stream while reaching for the intended target object. Furthermore, the integration enables us to use the robot in non-static environments, i.e. the reaching is adapted on the fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. The second integration highlights the capabilities of these frameworks by improving visual detection through object manipulation actions.
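
    As a toy illustration of the Cartesian Genetic Programming flavour of such a framework, the sketch below evolves a small chain of image operations with a (1+4) evolutionary strategy so that the evolved filter reproduces a target binary mask. The function set, genome encoding and fitness are simplified assumptions; CGP-IP's actual (OpenCV-based) function set and parameters are much larger.

```python
import random
import numpy as np

# Toy CGP-for-image-processing sketch: a genome of nodes, each applying an
# image operation to earlier node outputs, is evolved to match a target
# mask. Everything here is a simplified stand-in for the real CGP-IP setup.

FUNCS = [
    lambda a, b: np.clip(a + b, 0, 1),          # add
    lambda a, b: np.clip(a - b, 0, 1),          # subtract
    lambda a, b: np.maximum(a, b),              # max
    lambda a, b: (a > b.mean()).astype(float),  # threshold at mean(b)
]

def random_gene(pos):
    # Each node: (function index, two connections to earlier nodes).
    return (random.randrange(len(FUNCS)),
            random.randrange(pos), random.randrange(pos))

def evaluate(genome, img):
    values = [img]                   # node 0 is the input image
    for f, i, j in genome:
        values.append(FUNCS[f](values[i], values[j]))
    return values[-1]                # last node is the program output

def fitness(genome, img, target):
    return np.abs(evaluate(genome, img) - target).mean()

img = np.random.rand(32, 32)
target = (img > 0.5).astype(float)              # mask we want to recover

parent = [random_gene(pos) for pos in range(1, 9)]
for _ in range(200):                            # (1+4)-ES generations
    children = []
    for _ in range(4):
        child = list(parent)
        k = random.randrange(len(child))        # single point mutation
        child[k] = random_gene(k + 1)
        children.append(child)
    parent = min(children + [parent],
                 key=lambda g: fitness(g, img, target))

print("best pixel error:", fitness(parent, img, target))
```

    A genome this small stays human-readable once evolved, which mirrors the point made in the abstract about generating programs that can be further customized and tuned.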

    Improving Dynamics Estimations and Low Level Torque Control Through Inertial Sensing

    In 1996, professors J. Edward Colgate and Michael Peshkin invented cobots: robotic equipment safe enough to interact with human workers. Twenty years later, collaborative robots are in high demand in the packaging industry and have already been widely adopted by companies facing difficulties in meeting customer demand. Meanwhile, cobots are still making their way into environments where value-added tasks require more complex interactions between robots and human operators. For other applications, such as a rescue mission in a disaster scenario, robots have to deal with highly dynamic environments and uneven terrain. All these applications require robust, fine and fast control of the interaction forces, especially in the case of locomotion on uneven terrain in an environment where unexpected events can occur. In under-actuated systems, which is typically the case for mobile robots, such interaction forces can only be modulated through the control of internal joint torques. For that purpose, efficient low-level joint torque control is a critical requirement, and it motivated the research presented here. This thesis presents a thorough model analysis of a typical low-level joint actuation sub-system, powered by a brushless DC motor and suitable for torque control. It then proposes improved procedures for the identification of model parameters, which is particularly challenging in the case of coupled joints, with a view to improving their control. Along with these procedures, it proposes novel methods for the calibration of inertial sensors, as well as the use of such sensors in the estimation of joint torques.
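
    As a small illustration of the kind of parameter identification involved, the sketch below fits a simple torque-transmission model, tau = k_t*i - tau_c*sign(w) - b*w, to logged current/velocity/torque data by linear least squares. The model form, signal names and synthetic measurements are illustrative assumptions, not the thesis's actual actuator model or data.

```python
import numpy as np

# Fit a simple joint torque model to (synthetic) logged data:
#   tau = k_t * i - tau_c * sign(w) - b * w
# where k_t is the torque constant, tau_c Coulomb friction, b viscous
# friction. The "measurements" below are generated, not real data.

rng = np.random.default_rng(0)
n = 2000
i_meas = rng.uniform(-5, 5, n)            # motor current (A)
w_meas = rng.uniform(-3, 3, n)            # joint velocity (rad/s)

k_t_true, tau_c_true, b_true = 0.11, 0.35, 0.02
tau_meas = (k_t_true * i_meas
            - tau_c_true * np.sign(w_meas)
            - b_true * w_meas
            + rng.normal(0, 0.01, n))     # measured torque (N*m) + noise

# Linear regression: tau = [i, -sign(w), -w] @ [k_t, tau_c, b]
A = np.column_stack([i_meas, -np.sign(w_meas), -w_meas])
params, *_ = np.linalg.lstsq(A, tau_meas, rcond=None)
print("identified k_t, tau_c, b:", np.round(params, 3))
```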