
    CasIL: Cognizing and Imitating Skills via a Dual Cognition-Action Architecture

    Enabling robots to effectively imitate expert skills in long-horizon tasks such as locomotion and manipulation poses a long-standing challenge. Existing imitation learning (IL) approaches for robots still grapple with sub-optimal performance in complex tasks. In this paper, we consider how this challenge can be addressed with human cognitive priors. Heuristically, we extend the usual notion of action to a dual Cognition (high-level)-Action (low-level) architecture by introducing intuitive human cognitive priors, and propose a novel skill IL framework through human-robot interaction, called Cognition-Action-based Skill Imitation Learning (CasIL), which enables a robotic agent to effectively cognize and imitate critical skills from raw visual demonstrations. CasIL enables both cognition and action imitation, while high-level skill cognition explicitly guides low-level primitive actions, providing robustness and reliability to the entire skill IL process. We evaluated our method on the MuJoCo and RLBench benchmarks, as well as on obstacle avoidance and point-goal navigation tasks for quadrupedal robot locomotion. Experimental results show that CasIL consistently achieves competitive and robust skill imitation compared to existing approaches across a variety of long-horizon robotic tasks.
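    To illustrate the dual-level idea, the sketch below pairs a high-level cognition module that predicts a skill embedding with a low-level action module conditioned on that embedding, and trains both against expert labels. It is a minimal, hypothetical rendering (module names, sizes, and losses are assumptions), not the CasIL implementation described in the paper.

```python
# Minimal sketch of a dual cognition-action policy (illustrative only;
# module names, sizes, and losses are assumptions, not the CasIL method).
import torch
import torch.nn as nn

class CognitionModule(nn.Module):
    """High-level: maps an encoded observation to a skill embedding."""
    def __init__(self, obs_dim=128, skill_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, skill_dim))

    def forward(self, obs):
        return self.net(obs)

class ActionModule(nn.Module):
    """Low-level: maps observation + skill embedding to primitive actions."""
    def __init__(self, obs_dim=128, skill_dim=16, act_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim + skill_dim, 64), nn.ReLU(),
                                 nn.Linear(64, act_dim))

    def forward(self, obs, skill):
        return self.net(torch.cat([obs, skill], dim=-1))

# Both levels are imitated jointly, so the high-level skill prediction
# explicitly conditions the low-level action prediction.
cognition, action = CognitionModule(), ActionModule()
obs = torch.randn(32, 128)            # batch of encoded observations
expert_skill = torch.randn(32, 16)    # (assumed) expert skill labels
expert_action = torch.randn(32, 8)    # expert primitive actions
skill = cognition(obs)
loss = nn.functional.mse_loss(skill, expert_skill) \
     + nn.functional.mse_loss(action(obs, skill), expert_action)
loss.backward()
```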

    Learning for a robot: deep reinforcement learning, imitation learning, transfer learning

    Dexterous manipulation is an important part of realizing robot intelligence, yet current manipulators can only perform simple tasks such as sorting and packing in structured environments. In view of this problem, this paper presents a state-of-the-art survey on intelligent robots capable of autonomous decision-making and learning. The paper first reviews the main achievements in robotics research, which were largely based on breakthroughs in automatic control and mechanical hardware. With the evolution of artificial intelligence, much research has made further progress in adaptive and robust control. The survey reveals that the latest research in deep learning and reinforcement learning has paved the way for robots to perform highly complex tasks. Furthermore, deep reinforcement learning, imitation learning, and transfer learning in robot control are discussed in detail. Finally, major achievements based on these methods are summarized and analyzed thoroughly, and future research challenges are proposed.

    Bimanual robotic manipulation based on potential fields

    Dual manipulation is a natural skill for humans but not so easy to achieve for a robot. The presence of two end effectors implies the need to consider the temporal and spatial constraints they generate while moving together. Consequently, synchronization between the arms is required to perform coordinated actions (e.g., lifting a box) and to avoid self-collisions between the manipulators. Moreover, the challenges increase in dynamic environments, where the arms must respond quickly to changes in the position of obstacles or target objects. To meet these demands, approaches such as optimization-based motion planners and imitation learning can be employed, but they have limitations such as high computational cost or the need for large datasets. Sampling-based motion planners can be a viable solution thanks to their speed and low computational cost, but in their basic implementation the environment is assumed to be static. An alternative approach relies on improved Artificial Potential Fields (APF). They are intuitive, have low computational cost, and, most importantly, can be used in dynamic environments. However, they lack the precision needed for manipulation actions, and dynamic goals are not considered. This thesis proposes a system for bimanual robotic manipulation based on a combination of improved Artificial Potential Fields (APF) and the sampling-based motion planner RRTConnect. The basic idea is to use improved APF to bring the end effectors near their target goals while reacting to changes in the surrounding environment; only then is RRTConnect triggered to perform the manipulation task. In this way, it is possible to take advantage of the strengths of both methods. To improve this system, the APF have been extended to consider dynamic goals, and a self-collision avoidance system has been developed. The conducted experiments demonstrate that the proposed system adeptly responds to changes in the position of obstacles and target objects. Moreover, the self-collision avoidance system enables faster dual-manipulation routines compared to sequential arm movements.
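    The two-stage idea, a reactive potential-field phase to approach the target followed by a sampling-based planner for the final manipulation, can be sketched as below. The gains, influence radius, switching distance, and the plan_rrt_connect stub are illustrative assumptions, not the thesis implementation.

```python
# Illustrative sketch of the APF -> RRT-Connect hand-off (gains, thresholds,
# and the planner stub are assumptions, not the thesis implementation).
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, rho0=0.3, step=0.05):
    """One potential-field step: attractive pull toward a (possibly moving)
    goal plus repulsive push away from obstacles within radius rho0."""
    force = k_att * (goal - pos)                      # attractive term
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < rho0:                           # inside influence region
            force += k_rep * (1.0 / d - 1.0 / rho0) / d**2 * (diff / d)
    return pos + step * force / (np.linalg.norm(force) + 1e-9)

def plan_rrt_connect(start, goal):
    """Placeholder for the sampling-based planner that finishes the
    manipulation once the end effector is close to its target."""
    return [start, goal]                              # stub: straight segment

def reach_and_manipulate(pos, goal, obstacles, switch_dist=0.1, max_iters=500):
    for _ in range(max_iters):
        if np.linalg.norm(goal - pos) < switch_dist:  # close enough: switch
            return plan_rrt_connect(pos, goal)
        pos = apf_step(pos, goal, obstacles)          # reactive approach phase
    return None

path = reach_and_manipulate(np.zeros(3), np.array([1.0, 0.5, 0.2]),
                            obstacles=[np.array([0.5, 0.3, 0.1])])
```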

    CNN-based classifier as an offline Trigger for the CREDO experiment

    Marcin Piekarczyk, Olaf Bar, Łukasz Bibrzycki, Michał Niedźwiecki, Krzysztof Rzecki, Sławomir Stuglik, Thomas Andersen, Nikolay M. Budnev, David E. Alvarez-Castillo, Kévin Almeida Cheminant, Dariusz Góra, Alok C. Gupta, Bohdan Hnatyk, Piotr Homola, Robert Kamiński, Marcin Kasztelan, Marek Knap, Péter Kovács, Matías Rosas, Oleksandr Sushchov, Katarzyna Smelcerz, Karel Smolek, Jarosław Stasielak, Tadeusz Wibig, Krzysztof W. Woźniak, Jilberto Zamora-Saa
    Gamification is known to enhance users' participation in education and research projects that follow the citizen science paradigm. The Cosmic Ray Extremely Distributed Observatory (CREDO) experiment is designed for the large-scale study of various radiation forms that continuously reach the Earth from space, collectively known as cosmic rays. The CREDO Detector app relies on a network of involved users and is now working worldwide across phones and other CMOS-sensor-equipped devices. To broaden the user base and activate current users, CREDO makes extensive use of gamification solutions such as the periodic Particle Hunters Competition. However, an adverse effect of gamification is that the number of artefacts, i.e., signals unrelated to cosmic-ray detection or openly related to cheating, increases substantially. To tag the artefacts appearing in the CREDO database, we propose a method based on machine learning. The approach involves training a Convolutional Neural Network (CNN) to recognise the morphological difference between signals and artefacts. As a result, we obtain a CNN-based trigger that mimics the signal vs. artefact assignments of human annotators as closely as possible. To enhance the method, the input image is adaptively thresholded and then transformed using Daubechies wavelets. In this exploratory study, we use wavelet transforms to amplify distinctive image features. As a result, we obtain a very good recognition ratio of almost 99% for both signals and artefacts. The proposed solution allows eliminating the manual supervision of the competition process.
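    The tagging pipeline described above (adaptive thresholding, a Daubechies wavelet transform to amplify features, then a CNN separating signals from artefacts) might look roughly as follows. The threshold rule, wavelet order, and network shape are assumptions for illustration, not the published configuration.

```python
# Rough sketch of a signal/artefact tagging pipeline (threshold rule,
# wavelet order, and CNN shape are illustrative assumptions).
import numpy as np
import pywt                      # PyWavelets, for the Daubechies transform
import torch
import torch.nn as nn

def preprocess(image):
    """Adaptively threshold a grayscale hit image, then expand it with a
    single-level Daubechies wavelet decomposition."""
    thresh = image.mean() + 2 * image.std()          # assumed adaptive rule
    binary = (image > thresh).astype(np.float32)
    cA, (cH, cV, cD) = pywt.dwt2(binary, 'db2')      # Daubechies-2 transform
    return np.stack([cA, cH, cV, cD]).astype(np.float32)  # 4-channel tensor

class ArtefactCNN(nn.Module):
    """Small CNN deciding signal (cosmic-ray hit) vs. artefact."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4))
        self.head = nn.Linear(16 * 4 * 4, 2)         # 2 classes

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

image = np.random.rand(60, 60).astype(np.float32)     # placeholder hit image
x = torch.from_numpy(preprocess(image)).unsqueeze(0)  # shape (1, 4, H', W')
logits = ArtefactCNN()(x)                             # signal vs. artefact scores
```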

    Development of a mixed reality application to perform feasibility studies on new robotic use cases

    Integrated master's dissertation in Industrial Engineering and Management. Manufacturing companies are trying to affirm their position in the market by introducing new concepts and processes into their production systems. For this purpose, new technologies must be employed to ensure better performance and quality of their processes. Robotics has evolved considerably in recent years, creating new hardware and software technologies to answer the increasing demands of the markets. Collaborative robots are seen as one of the most promising emerging technologies for meeting Industry 4.0 needs. However, the expertise needed to implement these robots is not often found in the small and medium-sized enterprises that represent a large share of existing manufacturing companies. At the same time, mixed reality offers a new and immersive way to test new processes without physically deploying them. To tackle this problem, a mixed reality application is developed end to end, aiming to facilitate research and feasibility studies of new robotic use cases in the pre-study implementation phase. This application serves as a proof of concept and is not developed for the end user. First, the application's requirements are set to answer manufacturing companies' needs, providing two test robots, an intuitive robot placement method, a trajectory modeling and parameterization system, and a results framework. Then the development of the application's functionalities is explained, addressing the previously established requirements. A collision detection system was defined and developed to detect self-collisions and collisions with the environment. Furthermore, a novel process to configure the robot based on imitation learning was developed. In the end, a painting tool was integrated into the robot's 3D model and used for a use-case study of a painting task. The results were then recorded, and the application was assessed against the non-functional requirements. Finally, a qualitative analysis was made to evaluate the fields where this new concept can help manufacturing companies improve the implementation success of new robotic applications.