23 research outputs found

    Advances in flexible manipulation through the application of AI-based techniques

    282 p.
    Picking up and placing objects are two fundamental operations in almost any robotic application. The industrial robots currently used in pick-and-place applications are characterised by their efficiency at simple, repetitive tasks. However, these systems are very rigid, operate in fully controlled environments, and are very costly to reprogram for other tasks. Today, there are tasks in various industrial settings (for example, order preparation in a logistics environment) that require flexible object manipulation and that, by their very nature, have not yet been automated. The main bottlenecks hindering automation are the diversity of the objects to be manipulated, the robots' lack of dexterity, and the uncertainty of uncontrolled, dynamic environments. Artificial intelligence (AI) plays an increasingly important role in robotics, providing robots with the intelligence needed to perform complex tasks. Moreover, AI makes it possible to learn complex behaviours from real experience, significantly reducing the cost of programming. Given the limitations of current robotic object-manipulation systems, the main goal of this work is to increase the flexibility of manipulation systems using AI-based algorithms, providing the capabilities needed to adapt to dynamic environments without reprogramming.

    Domain Randomization for Visual sim-to-real Object Pose Estimation

    Data collection is a major bottleneck in vision-based robot grasping and manipulation applications. Data is not always available in confidential or high-risk areas, which constrains experiments that require deep learning models. Synthetic data can instead be created in simulation and then transferred to the real world using the Domain Randomization (DR) technique, which introduces random variations in the data by modifying physical parameters such as colour, texture, background, lighting, and orientation. The aim is to bridge the gap between the simulated and real environments: once the model is transferred to the real world, it perceives real data as just another variation of the simulation. The project develops a framework for applying the DR technique to non-primitive shapes and for using a synthetic dataset to train machine learning models whose results are comparable to those trained on real data. DR was applied to five industrial parts used in diesel engine assembly through the Blender rendering software. 3D CAD models were used in the simulator, and modifications to the physical dynamics were applied. Texture synthesis was explored using image-based, procedural, and Physically Based Rendering (PBR) techniques. To wrap the textures onto custom objects, the UV unwrapping method was used. The dataset was created in Gazebo using a segmentation camera, and object detection algorithms were trained on it. Testing then covered bounding-box, segmentation, and keypoint detection using multiple datasets: single-class, multi-class with random textures, and multi-class with metallic textures. Distractor objects and lighting conditions were randomised in the data. The detection and segmentation results showed that the model transferred efficiently to the real-world setting; however, some fuel lines produced false detections due to their similar curvature.
    It was found that object orientation and illumination played a critical role in DR transfer to the real world. The study illustrates that training neural networks entirely on synthetic data is fast and beneficial, and that machine vision can be used to automate key industrial processes.
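    The core of DR is sampling a fresh set of scene parameters for every rendered training image. The sketch below illustrates that sampling step only; the parameter names, ranges, and counts are illustrative assumptions, not values from the thesis, and the actual rendering (Blender/Gazebo) is outside its scope.

    ```python
    import random
    from dataclasses import dataclass

    @dataclass
    class DomainParams:
        """One randomised configuration of the simulated scene."""
        hue_shift: float        # colour perturbation, in degrees
        light_intensity: float  # relative lamp strength
        texture_id: int         # index into a bank of synthesised textures
        yaw_deg: float          # object orientation about the vertical axis
        n_distractors: int      # distractor objects dropped into the scene

    def sample_domain(rng: random.Random, n_textures: int = 50) -> DomainParams:
        """Draw a random scene configuration; call once per rendered image."""
        return DomainParams(
            hue_shift=rng.uniform(-30.0, 30.0),
            light_intensity=rng.uniform(0.3, 2.0),
            texture_id=rng.randrange(n_textures),
            yaw_deg=rng.uniform(0.0, 360.0),
            n_distractors=rng.randint(0, 5),
        )

    if __name__ == "__main__":
        rng = random.Random(0)
        for _ in range(3):
            print(sample_domain(rng))
    ```

    Each sampled configuration would then be passed to the renderer, so that no two training images share the same colour, lighting, pose, and clutter combination.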

    From visuomotor control to latent space planning for robot manipulation

    Deep visuomotor control is emerging as an active research area for robot manipulation. Recent advances in learning sensory and motor systems in an end-to-end manner have achieved remarkable performance across a range of complex tasks. Nevertheless, a few limitations restrict visuomotor control from being more widely adopted as the de facto choice when facing a manipulation task on a real robotic platform. First, imitation learning-based visuomotor control approaches tend to suffer from the inability to recover from out-of-distribution states caused by compounding errors. Second, the lack of versatility in task definition limits skill generalisability. Finally, the training data acquisition process and domain transfer are often impractical. In this thesis, individual solutions are proposed to address each of these issues. In the first part, we find policy uncertainty to be an effective indicator of potential failure cases, in which the robot is stuck in out-of-distribution states. On this basis, we introduce a novel uncertainty-based approach to detect potential failure cases and a recovery strategy based on action-conditioned uncertainty predictions. Then, we propose to employ visual dynamics approximation in our model architecture to capture the motion of the robot arm instead of the static scene background, making it possible to learn versatile skill primitives. In the second part, taking inspiration from recent progress in latent space planning, we propose a gradient-based optimisation method operating within the latent space of a deep generative model for motion planning. Our approach bypasses the traditional computational challenges encountered by established planning algorithms, can specify novel constraints easily, and can handle multiple constraints simultaneously. Moreover, the training data comes from simple random motor-babbling of kinematically feasible robot states.
    Our real-world experiments further illustrate that our latent space planning approach can handle both open- and closed-loop planning in challenging environments such as heavily cluttered or dynamic scenes. This yields, to our knowledge, the first closed-loop motion planning algorithm that can incorporate novel custom constraints, and lays the foundation for more complex manipulation tasks.
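    The idea of planning by gradient descent in a generative model's latent space can be sketched in a few lines. The decoder below is a deliberately toy linear map standing in for the thesis's deep generative model; the cost, dimensions, and learning rate are illustrative assumptions, chosen only to show the optimisation loop structure.

    ```python
    import numpy as np

    # Toy stand-in for a trained decoder g: latent z -> robot state x.
    # A fixed linear map keeps the sketch self-contained and convex.
    W = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0],
                  [0.0, -1.0]])

    def decode(z):
        """Map a latent code to a (toy) robot state."""
        return W @ z

    def cost(z, goal):
        """Task cost: squared distance of the decoded state from the goal."""
        return 0.5 * np.sum((decode(z) - goal) ** 2)

    def grad(z, goal):
        """Analytic gradient of the cost w.r.t. the latent code."""
        return W.T @ (decode(z) - goal)

    def plan_in_latent_space(goal, steps=200, lr=0.1):
        """Gradient descent on z; extra constraints could enter as penalty terms."""
        z = np.zeros(2)
        for _ in range(steps):
            z -= lr * grad(z, goal)
        return z

    goal = decode(np.array([1.0, -0.5]))   # a reachable goal state
    z_star = plan_in_latent_space(goal)
    print(np.allclose(decode(z_star), goal, atol=1e-6))  # True
    ```

    The appeal of this formulation is that additional constraints (e.g. obstacle avoidance) can be added as extra differentiable penalty terms on the decoded state, and the same loop handles them all simultaneously.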

    Generative neural data synthesis for autonomous systems

    A significant number of Machine Learning methods for automation currently rely on data-hungry training techniques. The lack of accessible training data often represents an insurmountable obstacle, especially in the fields of robotics and automation, where acquiring new data can be far from trivial. Additional data acquisition is not only often expensive and time-consuming, but occasionally is not even an option. Furthermore, real-world applications sometimes raise commercial sensitivity issues associated with distributing the raw data. This doctoral thesis explores bypassing the aforementioned difficulties by synthesising new realistic and diverse datasets using Generative Adversarial Networks (GANs). The success of this approach is demonstrated empirically through solving a variety of case-specific data-hungry problems, via the application of novel GAN-based techniques and architectures. Specifically, it starts by exploring the use of GANs for realistic simulation of extremely high-dimensional underwater acoustic imagery, for the purpose of training both teleoperators and autonomous target recognition systems. We have developed a method capable of generating realistic sonar data of any chosen dimension using image-translation GANs with a Markov principle. Following this, we apply GAN-based models to robot behavioural repertoire generation, which enables a robot manipulator to successfully overcome unforeseen impediments, such as unknown sets of obstacles and randomly broken joints. Finally, we consider dynamical system identification for articulated robot arms. We show how using diversity-driven GAN models to generate exploratory trajectories allows dynamic parameters to be identified more efficiently and accurately than with conventional optimisation approaches. Together, these results show that GANs have the potential to benefit a variety of robotics learning problems where training data is currently a bottleneck.
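    All of the applications above rest on the same adversarial objective: a discriminator D scores real samples as 1 and generated samples as 0, while the generator G tries to fool it. A minimal numpy sketch of the standard (non-saturating) GAN losses, independent of any particular architecture in the thesis:

    ```python
    import numpy as np

    def bce(pred, target):
        """Binary cross-entropy, the building block of the GAN objective."""
        eps = 1e-12
        pred = np.clip(pred, eps, 1.0 - eps)  # guard against log(0)
        return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

    def discriminator_loss(d_real, d_fake):
        """D is rewarded for scoring real samples as 1 and fakes as 0."""
        return bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

    def generator_loss(d_fake):
        """Non-saturating G loss: G is rewarded when D scores its samples as 1."""
        return bce(d_fake, np.ones_like(d_fake))

    # At the theoretical equilibrium D outputs 0.5 everywhere, giving
    # a discriminator loss of 2*ln(2).
    d = np.full(8, 0.5)
    print(round(discriminator_loss(d, d), 4))  # 1.3863
    ```

    Training alternates gradient steps on these two losses; the diversity-driven and image-translation variants used in the thesis modify the generator's inputs and add auxiliary terms, but keep this adversarial core.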