Lower body design of the ‘iCub’, a human-baby-like crawling robot
The development of robotic cognition and a greater understanding of human cognition form two of the current greatest challenges of science. Within the RobotCub project the goal is the development of an embodied robotic child (iCub) with the physical and ultimately cognitive abilities of a 2½-year-old human baby. The ultimate goal of this project is to provide the cognition research community with an open human-like platform for understanding cognitive systems through the study of cognitive development. In this paper the design of the mechanisms adopted for the lower body, and particularly for the leg and the waist, is outlined. This is accompanied by a discussion of the actuator group realisation needed to meet the torque requirements while achieving the dimensional and weight specifications. Estimated performance measures of the iCub are presented.
Human resources management in educational institutions and establishments as a new socio-economic standard
The aim of the present paper is to propose that adopting a framework of biological development is suitable for the construction of artificial systems. We will argue that a developmental approach provides unique insights into how to build highly complex and adaptable artificial systems. To illustrate our point, we will use as an example the acquisition of goal-directed reaching. In the initial part of the paper we will outline (a) how mechanisms of biological development can be adapted to the artificial world, and (b) how this artificial development differs from traditional engineering approaches to robotics. An experiment performed on an artificial system initially controlled by motor reflexes is presented, showing the acquisition of visuo-motor maps for ballistic control of reaching without explicit knowledge of the system's kinematic parameters.
Event-driven visual attention for the humanoid robot iCub.
Fast reaction to sudden and potentially interesting stimuli is a crucial feature for safe and reliable interaction with the environment. Here we present a biologically inspired attention system developed for the humanoid robot iCub. It is based on input from unconventional event-driven vision sensors and an efficient computational method. The resulting system shows low latency and fast determination of the location of the focus of attention. Its performance is benchmarked against a state-of-the-art artificial attention system used in robotics. Results show that the proposed system is two orders of magnitude faster than the benchmark in selecting a new stimulus to attend.
Gaze Stabilization for Humanoid Robots: a Comprehensive Framework
6 pages, appears in 2014 IEEE-RAS International Conference on Humanoid Robots

Gaze stabilization is an important requisite for humanoid robots. Previous work on this topic has focused on the integration of inertial and visual information. Little attention has been given to a third component: the knowledge that the robot has about its own movement. In this work we propose a comprehensive framework for gaze stabilization in a humanoid robot. We focus on the problem of compensating for disturbances induced in the cameras by the self-generated movements of the robot. We employ two separate signals for stabilization: (1) an anticipatory term obtained from the velocity commands sent to the joints while the robot moves autonomously; (2) a feedback term from the on-board gyroscope, which compensates for unpredicted external disturbances. We first provide the mathematical formulation to derive the forward and differential kinematics of the fixation point of the stereo system. We then test our method on the iCub robot. We show that the stabilization consistently reduces the residual optical flow during the movement of the robot and in the presence of external disturbances. We also demonstrate that proper integration of the neck DoF is crucial to achieve correct stabilization.
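The two-signal scheme described in the abstract can be sketched in a few lines; the function name, the gain, and the use of a camera Jacobian here are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def stabilization_command(J_neck, qdot_cmd, gyro_rate, k_fb=1.0):
    """Hypothetical sketch of gaze stabilization: return an eye velocity
    that cancels camera rotation. The anticipatory term predicts the
    camera angular velocity induced by the commanded neck joint
    velocities; the feedback term absorbs unpredicted disturbances
    measured by the gyroscope, scaled by a gain k_fb."""
    feedforward = J_neck @ qdot_cmd   # predicted self-induced camera motion
    feedback = k_fb * gyro_rate       # measured residual disturbance
    return -(feedforward + feedback)  # counter-rotate the eyes
```

The anticipatory term acts before the disturbance is even measurable, which is why the abstract stresses using the robot's knowledge of its own movement rather than relying on the gyroscope alone.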
Controlled Tactile Exploration and Haptic Object Recognition
In this paper we propose a novel method for in-hand object recognition. The method is composed of a grasp stabilization controller and two exploratory behaviours to capture the shape and the softness of an object. Grasp stabilization plays an important role in recognizing objects. First, it prevents the object from slipping and facilitates the exploration of the object. Second, reaching a stable and repeatable position adds robustness to the learning algorithm and increases invariance with respect to the way in which the robot grasps the object. The stable poses are estimated using a Gaussian mixture model (GMM). We present experimental results showing that using our method the classifier can successfully distinguish 30 objects. We also compare our method with a benchmark experiment, in which the grasp stabilization is disabled. We show, with statistical significance, that our method outperforms the benchmark method.
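Once the GMM over stable poses has been fitted offline, driving the hand to a repeatable configuration reduces to picking the nearest mixture mean. A minimal sketch of that inference step, with hypothetical names and the means assumed already learned:

```python
import numpy as np

def closest_stable_pose(encoder_reading, mixture_means):
    """Select the pre-learned stable hand pose (a GMM mean, fitted
    offline) closest to the current finger-encoder reading."""
    dists = np.linalg.norm(mixture_means - encoder_reading, axis=1)
    return mixture_means[np.argmin(dists)]
```

Snapping to a learned stable pose is what gives the classifier its invariance: however the object was first grasped, exploration always starts from one of a few repeatable configurations.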
Prioritized motion-force control of constrained fully-actuated robots: "Task Space Inverse Dynamics"
Pre-print submitted to "Robotics and Autonomous Systems"

We present a new framework for prioritized multi-task motion-force control of fully-actuated robots. This work is built on a careful review and comparison of the state of the art. Some control frameworks are not optimal, that is, they do not find the optimal solution for the secondary tasks. Other frameworks are optimal, but they tackle the control problem at the kinematic level, hence they neglect the robot dynamics and do not allow for force control. Still other frameworks are optimal and consider force control, but they are computationally less efficient than ours. Our final claim is that, for fully-actuated robots, computing the operational-space inverse dynamics is equivalent to computing the inverse kinematics (at the acceleration level) and then the joint-space inverse dynamics. Thanks to this fact, our control framework can efficiently compute the optimal solution by decoupling the kinematics and dynamics of the robot. We take into account motion and force control, soft and rigid contacts, and free and constrained robots. Tests in simulation validate our control framework, comparing it with other state-of-the-art equivalent frameworks and showing remarkable improvements in optimality and efficiency.
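The decoupling claim can be illustrated numerically; this is a generic sketch of the two-step computation, not the paper's prioritized solver, and all symbols are illustrative (dJdq stands for the drift term (dJ/dt)·q̇):

```python
import numpy as np

def tsid_torques(M, h, J, dJdq, xddot_des):
    """Two-step sketch of operational-space inverse dynamics for a
    fully-actuated robot:
      1) differential inverse kinematics at the acceleration level:
         qddot = J^+ (xddot* - (dJ/dt) qdot)
      2) joint-space inverse dynamics:
         tau = M qddot + h   (h collects gravity/Coriolis terms)"""
    qddot = np.linalg.pinv(J) @ (xddot_des - dJdq)  # step 1: kinematics
    tau = M @ qddot + h                             # step 2: dynamics
    return tau, qddot
```

Because step 1 involves only the Jacobian and step 2 only the joint-space model, the kinematic and dynamic computations never mix, which is the source of the efficiency gain the abstract claims.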
A Flexible and Robust Large Scale Capacitive Tactile System for Robots
IEEE Sensors Journal, Vol. 13, Issue 10, 2013

Capacitive technology allows building sensors that are small, compact, and have high sensitivity. For this reason it has been widely adopted in robotics. In a previous work we presented a compliant skin system based on capacitive technology, consisting of triangular modules interconnected to form a system of sensors that can be deployed on non-flat surfaces. This solution has been successfully adopted to cover various humanoid robots. The main limitation of this and all approaches based on capacitive technology is that they require embedding a deformable dielectric layer (usually made of an elastomer) covered by a conductive layer. This complicates the production process considerably, introduces hysteresis, and limits the durability of the sensors due to ageing and mechanical stress. In this paper we describe a novel solution in which the dielectric is made from a thin layer of 3D fabric, glued to conductive and protective layers using techniques adopted from the clothing industry. As such, the sensor is easier to produce and has better mechanical properties. Furthermore, the sensor proposed in this paper embeds transducers for thermal compensation of the pressure measurements. We report an experimental analysis demonstrating that the sensor has good sensitivity and resolution. Remarkably, we show that the sensor has very low hysteresis and effectively allows compensating for drifts due to temperature variations.
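The thermal compensation mentioned at the end of the abstract can be sketched as subtracting a calibrated drift model from the raw reading; the linear model and the constants `c0` and `alpha` are hypothetical stand-ins for whatever calibration the embedded transducers support:

```python
def compensate_pressure(raw_capacitance, temperature, c0, alpha):
    """Remove a linear temperature-induced drift from a raw capacitive
    reading. c0 is the baseline capacitance at 0 degrees and alpha the
    drift per degree; both are hypothetical calibration constants."""
    return raw_capacitance - (c0 + alpha * temperature)
```

With the embedded temperature transducer co-located with each taxel, this correction can run per-module, so ambient temperature changes do not masquerade as contact pressure.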
Modeling speech imitation and ecological learning of auditory-motor maps.
Classical models of speech consider an antero-posterior distinction between perceptive and productive functions. However, the selective alteration of neural activity in speech motor centers, via transcranial magnetic stimulation, was shown to affect speech discrimination. On the automatic speech recognition (ASR) side, recognition systems have classically relied solely on acoustic data, achieving rather good performance in optimal listening conditions. The limitations of current ASR become evident chiefly in realistic use of such systems. These limitations can be partly reduced by normalization strategies that minimize inter-speaker variability, either by explicitly removing speakers' peculiarities or by adapting different speakers to a reference model. In this paper we aim at modeling a motor-based imitation learning mechanism in ASR. We tested the utility of a speaker normalization strategy that uses motor representations of speech and compared it with strategies that ignore the motor domain. Specifically, we first trained a regressor, through state-of-the-art machine learning techniques, to build an auditory-motor mapping, in a sense mimicking a human learner who tries to reproduce utterances produced by other speakers. This mapping takes the speech acoustics of a speaker to the motor plans of a reference speaker. Since only speech acoustics are available during recognition, the mapping is necessary to "recover" motor information. Subsequently, in a phone classification task, we tested the system on either one of the speakers used during training or a new one. Results show that in both cases the motor-based speaker normalization strategy slightly but significantly outperforms all strategies where only acoustics is taken into account.
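The auditory-motor mapping is a regression problem; a minimal sketch using ridge regression as an illustrative stand-in for the paper's (unspecified) learning machinery, with all names and the regularizer hypothetical:

```python
import numpy as np

def fit_auditory_motor_map(acoustic, motor, lam=1e-3):
    """Ridge regression from a speaker's acoustic feature rows to the
    reference speaker's motor-plan rows. Returns a weight matrix W so
    that motor features are recovered at test time as acoustic_new @ W,
    when only acoustics are observable."""
    d = acoustic.shape[1]
    W = np.linalg.solve(acoustic.T @ acoustic + lam * np.eye(d),
                        acoustic.T @ motor)
    return W
```

At recognition time the classifier sees the recovered motor representation instead of (or alongside) raw acoustics, which is how the normalization strategy folds the motor domain back in.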
Incremental Robot Learning of New Objects with Fixed Update Time
8 pages, 3 figures

We consider object recognition in the context of lifelong learning, where a robotic agent learns to discriminate between a growing number of object classes as it accumulates experience about the environment. We propose an incremental variant of the Regularized Least Squares for Classification (RLSC) algorithm, and exploit its structure to seamlessly add new classes to the learned model. The presented algorithm addresses the problem of having an unbalanced proportion of training examples per class, which occurs when new objects are presented to the system for the first time. We evaluate our algorithm on both a machine learning benchmark dataset and two challenging object recognition tasks in a robotic setting. Empirical evidence shows that our approach achieves comparable or higher classification performance than its batch counterpart when classes are unbalanced, while being significantly faster.
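The structure being exploited is that the RLSC solution depends on the data only through the accumulators XᵀX and XᵀY. A minimal sketch of an incremental variant built on that observation (not the paper's exact algorithm, which also rebalances classes; class names and targets here are assumptions):

```python
import numpy as np

class IncrementalRLSC:
    """Sketch of incremental Regularized Least Squares for Classification.
    Keeps A = X^T X + lam*I and B = X^T Y; each new example updates both
    in time independent of how many examples were seen, and a new class
    simply appends a column to B."""
    def __init__(self, dim, lam=1e-3):
        self.A = lam * np.eye(dim)
        self.B = np.zeros((dim, 0))  # one target column per known class

    def partial_fit(self, x, label):
        if label >= self.B.shape[1]:  # first time this class is seen
            extra = label + 1 - self.B.shape[1]
            self.B = np.hstack([self.B, np.zeros((self.B.shape[0], extra))])
        self.A += np.outer(x, x)
        self.B[:, label] += x         # +1 target for the true class

    def predict(self, x):
        W = np.linalg.solve(self.A, self.B)  # dim x n_classes weights
        return int(np.argmax(x @ W))
```

Because the per-example update touches only fixed-size accumulators, the update time stays constant as experience grows, matching the "fixed update time" in the title.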
