58 research outputs found

    Towards adaptive and autonomous humanoid robots: from vision to actions

    Although robotics research has advanced considerably over the last decades, robots are still not in widespread use outside industrial applications. Yet a range of proposed scenarios have robots working together with, helping, and coexisting with humans in daily life. All of these raise a clear need to deal with a more unstructured, changing environment. I herein present a system that aims to overcome the limitations of highly complex robotic systems in terms of autonomy and adaptation. The main focus of the research is the use of visual feedback to improve the reaching and grasping capabilities of complex robots. To facilitate this, computer vision and machine learning techniques are integrated. From a robot vision point of view, combining domain knowledge from image processing with machine learning techniques can expand the capabilities of robots. I present a novel framework called Cartesian Genetic Programming for Image Processing (CGP-IP). CGP-IP can be trained to detect objects in incoming camera streams and has been successfully demonstrated on many different problem domains. The approach requires only a small training set (it was tested with 5 to 10 images per experiment) and is fast, scalable, and robust. Additionally, it generates human-readable programs that can be further customized and tuned. While CGP-IP is a supervised learning technique, I show an integration on the iCub that allows for the autonomous learning of object detection and identification. Finally, this dissertation includes two proof-of-concept integrations of the motion and action sides. First, reactive reaching and grasping is shown: the robot avoids obstacles detected in the visual stream while reaching for the intended target object. This integration also enables the robot to operate in non-static environments, i.e. the reach is adapted on-the-fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. The second integration highlights the capabilities of these frameworks by improving visual detection through object manipulation actions
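The abstract does not reproduce CGP-IP's actual function set or encoding; as a rough illustration of the underlying Cartesian Genetic Programming idea, the following sketch evolves a small graph of image operations with a (1+4) strategy. The toy function set, list-of-lists image representation, and all parameters are illustrative placeholders, not those of CGP-IP.

```python
import random

# Toy function set operating on "images" (2D lists of floats in [0, 1]);
# the real CGP-IP uses a large OpenCV-based function set.
def op_add(a, b):       return [[min(1.0, x + y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]
def op_invert(a, b):    return [[1.0 - x for x in ra] for ra in a]          # ignores b
def op_threshold(a, b): return [[1.0 if x > 0.5 else 0.0 for x in ra] for ra in a]

FUNCS = [op_add, op_invert, op_threshold]

def random_genotype(n_nodes, n_inputs=1):
    # Each node: (function index, input index 1, input index 2);
    # a node may only connect to earlier nodes or the program inputs.
    genes = []
    for i in range(n_nodes):
        limit = n_inputs + i
        genes.append((random.randrange(len(FUNCS)),
                      random.randrange(limit), random.randrange(limit)))
    return genes

def evaluate(genes, image):
    values = [image]                       # slot 0 holds the program input
    for f_idx, a, b in genes:
        values.append(FUNCS[f_idx](values[a], values[b]))
    return values[-1]                      # last node is the output filter

def fitness(genes, image, target):
    # Pixel-wise L1 error between the evolved filter's output and the target.
    out = evaluate(genes, image)
    return sum(abs(o - t) for ro, rt in zip(out, target) for o, t in zip(ro, rt))

def evolve(image, target, n_nodes=8, generations=200, seed=0):
    # (1+4) evolutionary strategy typical of CGP: mutate the parent,
    # keep any child at least as good as the parent.
    random.seed(seed)
    parent = random_genotype(n_nodes)
    best = fitness(parent, image, target)
    for _ in range(generations):
        for _ in range(4):
            child = list(parent)
            i = random.randrange(n_nodes)
            limit = 1 + i
            child[i] = (random.randrange(len(FUNCS)),
                        random.randrange(limit), random.randrange(limit))
            f = fitness(child, image, target)
            if f <= best:
                parent, best = child, f
    return parent, best
```

With a handful of training images per task, as in the thesis, such a search only has to fit a few graphs, which is one reason CGP-style approaches can get by with very small training sets.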

    Development of Cognitive Capabilities in Humanoid Robots

    Building intelligent systems with a human level of competence is the ultimate grand challenge for science and technology in general, and for the computational intelligence community in particular. Recent theories in autonomous cognitive systems have focused on the close integration (grounding) of communication with perception, categorisation and action. Cognitive systems are essential for integrated multi-platform systems that are capable of sensing and communicating. This thesis presents a cognitive system for a humanoid robot that integrates abilities such as object detection and recognition, merged with natural language understanding and refined motor controls. The work includes three studies: (1) the generic manipulation of objects using the NMFT algorithm, successfully testing the extension of the NMFT to control robot behaviour; (2) the development of a robotic simulator; (3) robotic simulation experiments showing that a humanoid robot is able to acquire complex behavioural, cognitive and linguistic skills through individual and social learning. The robot learns to handle and manipulate objects autonomously, to cooperate with human users, and to adapt its abilities to changes in internal and environmental conditions. The model and the experimental results reported in this thesis emphasise the importance of embodied cognition, i.e. the physical interaction between the humanoid robot's body and the environment

    The State of Lifelong Learning in Service Robots: Current Bottlenecks in Object Perception and Manipulation

    Service robots are appearing more and more in our daily lives. Their development combines multiple fields of research, from object perception to object manipulation, and the state of the art continues to improve the coupling between the two. This coupling is necessary for service robots not only to perform various tasks in a reasonable amount of time but also to continually adapt to new environments and safely interact with non-expert human users. Nowadays, robots are able to recognize various objects and quickly plan a collision-free trajectory to grasp a target object in predefined settings. In most cases, however, they rely on large amounts of training data, so their knowledge is fixed after the training phase, and any change in the environment requires complicated, time-consuming, and expensive robot re-programming by human experts. Such approaches are therefore still too rigid for real-life applications in unstructured environments, where a significant portion of the environment is unknown and cannot be directly sensed or controlled. In such environments, no matter how extensive the training data used for batch learning, a robot will always face new objects. Therefore, apart from batch learning, the robot should be able to continually learn about new object categories and grasp affordances from very few training examples on-site. Moreover, apart from robot self-learning, non-expert users could interactively guide the process of experience acquisition by teaching new concepts or by correcting insufficient or erroneous concepts. In this way, the robot constantly learns how to help humans in everyday tasks, gaining more and more experience without the need for re-programming
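The open-ended, few-shot learning-and-correction loop described above can be illustrated with a minimal instance-based learner; this is a generic sketch of the paradigm, not any specific method from the survey.

```python
import math

class OpenEndedLearner:
    """Instance-based category learner: new categories can be added at any
    time from a handful of examples, and a user correction simply stores
    the misclassified feature vector under the right label."""

    def __init__(self):
        self.memory = {}                 # label -> list of feature vectors

    def teach(self, label, features):
        # On-site teaching: one example is enough to introduce a category.
        self.memory.setdefault(label, []).append(list(features))

    def classify(self, features):
        # Nearest-neighbour match against everything learned so far.
        best_label, best_dist = None, math.inf
        for label, examples in self.memory.items():
            for ex in examples:
                d = math.dist(features, ex)
                if d < best_dist:
                    best_label, best_dist = label, d
        return best_label

    def correct(self, features, true_label):
        # Interactive correction by a non-expert user: the erroneous case
        # becomes a new training example for the right category.
        self.teach(true_label, features)
```

Because knowledge lives in the example memory rather than in fixed trained weights, the robot's repertoire is never frozen after a training phase, which is the property the survey argues batch-trained systems lack.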

    Learning to reach and reaching to learn: a unified approach to path planning and reactive control through reinforcement learning

    The next generation of intelligent robots will need to be able to plan reaches: not just ballistic point-to-point reaches, but reaches around things such as the edge of a table, a nearby human, or any other known object in the robot's workspace. Planning reaches may seem easy to us humans, because we do it so intuitively, but it has proven to be a challenging problem, which continues to limit the versatility of what robots can do today. In this document, I propose a novel intrinsically motivated reinforcement learning (RL) system that draws on both path/motion planning and reactive control. Through reinforcement learning, it tightly integrates these two previously disparate approaches to robotics. The RL system is evaluated on a task as yet unsolved by roboticists in practice: putting the palm of the iCub humanoid robot on arbitrary target objects in its workspace, starting from arbitrary initial configurations. Such motions can be generated by planning, or searching the configuration space, but this typically results in some kind of trajectory that must then be tracked by a separate controller, and such an approach offers a brittle runtime solution because it is inflexible. Purely reactive systems are robust to many problems that render a planned trajectory infeasible, but lacking the capacity to search, they tend to get stuck behind constraints, and therefore do not replace motion planners. The planner/controller proposed here is novel in that it deliberately plans reaches without the need to track trajectories. Instead, reaches are composed of sequences of reactive motion primitives, implemented by my Modular Behavioral Environment (MoBeE), which provides (fictitious) force control with reactive collision avoidance by way of a realtime kinematic/geometric model of the robot and its workspace.
    Thus, to the best of my knowledge, mine is the first reach-planning approach to simultaneously offer the best of both the path/motion planning and reactive control approaches. By controlling the real, physical robot directly, and feeling the influence of the constraints imposed by MoBeE, the proposed system learns a stochastic model of the iCub's configuration space. This model is then exploited as a multiple-query path planner to find sensible pre-reach poses from which to initiate reaching actions. Experiments show that the system can autonomously find practical reaches to target objects in the workspace and offers excellent robustness to changes in the workspace configuration as well as noise in the robot's sensory-motor apparatus
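The "learn a stochastic model of the configuration space, then exploit it as a planner" idea can be sketched in tabular form; the states, primitive names, and value-iteration planner below are illustrative stand-ins, not the thesis's actual MoBeE-based system.

```python
class ReachPlanner:
    """Toy analogue of a learned stochastic configuration-space model:
    states are poses, actions are reactive motion primitives, and
    transition probabilities are estimated from experienced outcomes."""

    def __init__(self, states, actions):
        self.states, self.actions = states, actions
        self.counts = {}                 # (state, action) -> {next_state: count}

    def record(self, s, a, s_next):
        # One experienced transition: primitive a was tried in pose s.
        hist = self.counts.setdefault((s, a), {})
        hist[s_next] = hist.get(s_next, 0) + 1

    def transition_probs(self, s, a):
        hist = self.counts.get((s, a), {})
        total = sum(hist.values())
        return {sn: c / total for sn, c in hist.items()} if total else {}

    def plan(self, goal, gamma=0.9, iters=50):
        # Value iteration over the learned model; the policy maps each
        # pose to the primitive most likely to lead toward the goal.
        V = {s: (1.0 if s == goal else 0.0) for s in self.states}
        for _ in range(iters):
            for s in self.states:
                if s == goal:
                    continue
                V[s] = max((sum(p * gamma * V[sn]
                                for sn, p in self.transition_probs(s, a).items())
                            for a in self.actions), default=0.0)
        policy = {s: max(self.actions,
                         key=lambda a: sum(p * V[sn]
                                           for sn, p in self.transition_probs(s, a).items()))
                  for s in self.states}
        return V, policy
```

The sketch keeps the key structural point: no trajectory is ever tracked; planning only selects which primitive to trigger from each experienced pose.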

    Robotic manipulation for the shoe-packaging process

    This paper presents the integration of a robotic system in a human-centered environment, as can be found in the shoe manufacturing industry. Fashion footwear is nowadays mainly handcrafted due to the large number of small production tasks. Therefore, the introduction of intelligent robotic systems in this industry may help automate and improve the manual production steps, such as polishing, cleaning, packaging, and visual inspection. Due to the high complexity of the manual tasks in shoe production, cooperative robotic systems (which can work in collaboration with humans) are required. Thus, the focus of the robot lies on grasping, collision detection, and avoidance, as well as on considering human intervention to supervise the work being performed. For this research, the robot has been equipped with a Kinect camera and a wrist force/torque sensor so that it is able to detect human interaction and the dynamic environment in order to modify the robot's behavior. To illustrate the applicability of the proposed approach, this work presents the experimental results obtained for two actual platforms, located at different research laboratories, that share similarities in their morphology, sensor equipment and actuation system. This work has been partly supported by the Ministerio de Economia y Competitividad of the Spanish Government (Key No.: 0201603139 of the Invest in Spain program and Grant No. RTC-2016-5408-6) and by the Deutscher Akademischer Austauschdienst (DAAD) of the German Government (Projekt-ID 54368155).
    Gracia Calandin, LI.; Perez-Vidal, C.; Mronga, D.; Paco, JD.; Azorin, J.; Gea, JD. (2017). Robotic manipulation for the shoe-packaging process. The International Journal of Advanced Manufacturing Technology 92(1-4):1053-1067. https://doi.org/10.1007/s00170-017-0212-6
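The interaction-detection behaviour described in the abstract (wrist force/torque readings used to modify the robot's behaviour) can be illustrated with a minimal sketch; the threshold value and mode names below are hypothetical, not taken from the paper.

```python
# Illustrative threshold on the contact force magnitude (newtons);
# the paper's actual detection logic and tuning are not reproduced here.
CONTACT_FORCE_N = 15.0

def detect_contact(wrench):
    # wrench = (fx, fy, fz, tx, ty, tz) from the wrist force/torque sensor;
    # contact is flagged when the force magnitude exceeds the threshold.
    fx, fy, fz = wrench[:3]
    return (fx**2 + fy**2 + fz**2) ** 0.5 > CONTACT_FORCE_N

def behaviour(wrench):
    # Human contact switches the robot from its nominal packaging task
    # to a compliant, supervision-friendly mode.
    return "compliant" if detect_contact(wrench) else "execute_task"
```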

    Simulation Tools for the Study of the Interaction between Communication and Action in Cognitive Robots

    In this thesis I report the development of FARSA (Framework for Autonomous Robotics Simulation and Analysis), a simulation tool for the study of the interaction between language and action in cognitive robots and, more generally, for experiments in embodied cognitive science. Before presenting the tool, I describe a series of experiments involving simulated humanoid robots that acquire their behavioural and language skills autonomously through a trial-and-error adaptive process, in which random variations of the free parameters of the robots' controller are retained or discarded on the basis of their effect on the overall behaviour exhibited by the robot in interaction with the environment. More specifically, the first series of experiments shows how the availability of linguistic stimuli provided by a caretaker, indicating the elementary actions that need to be carried out in order to accomplish a certain complex action, facilitates the acquisition of the required behavioural capacity. The second series of experiments shows how a robot trained to comprehend a set of command phrases by executing the corresponding appropriate behaviour can generalize its knowledge by comprehending new, never-experienced sentences and by producing new appropriate actions. Together with their scientific relevance, these experiments provide a series of requirements that have been taken into account during the development of FARSA. The objective of this project is to reduce the complexity barrier that currently discourages some of the researchers interested in the study of behaviour and cognition from initiating experimental activity in this area. FARSA is the only available tool that provides an integrated framework for carrying out experiments of this type, i.e. it is the only tool that provides ready-to-use integrated components for defining the characteristics of the robots and of the environment, of the robots' controller, and of the adaptive process. Overall, this enables users to quickly set up experiments, including complex ones, and to quickly start collecting results

    Multi-contact tactile exploration and interaction with unknown objects

    Humans rely on the sense of touch in almost every aspect of daily life, whether tying shoelaces, placing fingertips on a computer keyboard or finding keys inside a bag. With robots moving into human-centered environments, tactile exploration becomes more and more important, as vision may easily be occluded by obstacles or fail under varying illumination conditions. Traditional approaches mostly rely on position control for manipulating objects and are tailored to single grippers and known objects. New sensors make it possible to extend control to tackle previously unsolved problems: handling unknown objects and discovering local features on their surfaces. This thesis tackles the problem of controlling a robot that makes multiple contacts with an unknown environment. Generating and keeping multiple contact points on different parts of the robot's fingers during exploration is an essential feature that distinguishes our work from other haptic exploration work in the literature, where contacts are usually limited to one or more fingertips. In the first part of this thesis, we address the problem of exploring partially known surfaces and objects for modeling and identification. In multiple scenarios, control and exploration strategies are developed to compliantly follow the surface or contour of an object with robotic fingers. Whereas the methods developed in the first part of this thesis perform well on objects of limited size and shape variation, the second part is devoted to the development of a controller that maximizes contact with unknown surfaces of any shape and size. Maximizing contact allows the robot to gather information more rapidly and also to create stable grasps. To this end, we develop an algorithm based on the task-space formulation to handle the torque control of an actively compliant robot while respecting constraints, particularly on contact forces.
    We also develop a strategy to maximize the surface in contact given only the current state of contact, i.e. without prior information about the object or surface. In the third part of the thesis, an additional application of the developed hand controller is explored: the problem of autonomous grasping using only tactile data. The arm motion is generated according to search and grasping strategies implemented with dynamical systems (DS). We extend existing approaches to locally modulate DS so as to enable sensing-based modulation, changing the dynamics of the motion depending on task progress. This allows fast and autonomous object localization and grasping to be generated in one flexible framework. We also apply this algorithm to teach a robot how to react to collisions in order to navigate between obstacles while reaching

    Evolution of Grasping Behaviour in Anthropomorphic Robotic Arms with Embodied Neural Controllers

    The work reported in this thesis focuses on synthesising, through an automatic design process based on artificial evolution, neural controllers for anthropomorphic robots that are able to manipulate objects. The use of evolutionary robotics makes it possible to reduce the characteristics and parameters specified by the designer to a minimum, and the robot's skills evolve as it interacts with the environment. The primary objective of these experiments is to investigate whether neural controllers that regulate the state of the motors on the basis of current and previously experienced sensory states (i.e. without relying on an inverse model) can enable the robots to solve such complex tasks. Another objective is to investigate whether the evolutionary robotics approach can be successfully applied to scenarios significantly more complex than those to which it is typically applied (in terms of the complexity of the robot's morphology, the size of the neural controller, and the complexity of the task). The obtained results indicate that skills such as reaching, grasping, and discriminating among objects can be accomplished without the need to learn precise inverse internal models of the arm/hand structure. This also supports the hypothesis that the human central nervous system (CNS) does not necessarily have internal models of the limbs (not excluding the possibility that it possesses such models for other purposes), but can act by shifting the equilibrium points/cycles of the underlying musculoskeletal system. Consequently, the resulting controllers for such fundamental skills are less complex, and the learning of more complex behaviours becomes easier to design because the underlying controller of the arm/hand structure is less complex. Moreover, the obtained results show how the evolved robots exploit sensory-motor coordination in order to accomplish their tasks
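The retain-or-discard evolutionary loop underlying this and the FARSA experiments can be sketched as a hill climb over the free parameters of a direct sensor-to-motor mapping; the toy one-joint "arm", the two-weight controller, and all constants below are illustrative assumptions, not the thesis's setup.

```python
import random

def controller(weights, sensors):
    # Direct sensor-to-motor mapping: no inverse model of the arm is used.
    return sum(w * s for w, s in zip(weights, sensors))

def trial(weights, target=0.5, steps=50, dt=0.1):
    # Toy 1-DOF "arm": the motor command moves the joint; fitness is the
    # negative final distance between the joint and the target posture.
    q = 0.0
    for _ in range(steps):
        sensors = [target - q, 1.0]       # proprioceptive error + bias
        q += dt * controller(weights, sensors)
    return -abs(target - q)

def evolve(generations=300, sigma=0.2, seed=1):
    # Random variations of the controller's free parameters are retained
    # or discarded based on their effect on the behaviour in the trial.
    random.seed(seed)
    weights = [random.uniform(-1, 1) for _ in range(2)]
    best = trial(weights)
    for _ in range(generations):
        candidate = [w + random.gauss(0, sigma) for w in weights]
        f = trial(candidate)
        if f >= best:
            weights, best = candidate, f
    return weights, best
```

Note that a good solution here is simply a positive feedback gain on the proprioceptive error, i.e. an equilibrium-point-style controller rather than an inverse model, echoing the thesis's argument about controller simplicity.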