119 research outputs found

    Automation and Robotics: Latest Achievements, Challenges and Prospects

    This Special Issue presents the latest achievements, challenges and prospects for drives, actuators, sensors, controls and robot navigation with reverse validation, together with applications in industrial automation and robotics. Automation, supported by robotics, can effectively speed up and improve production. The industrialization of complex mechatronic components, especially robots, requires a large number of special processes already in the pre-production stage, provided by modelling and simulation. From the very beginning, this area of research has included drives, process technology, actuators, sensors, control systems and all the connections within mechatronic systems. Automation and robotics form broad, tightly interconnected areas of research. To reduce costs and preparation time in the pre-production stage, complex tasks must be solved through simulation, using standard software products and new technologies that allow, for example, machine vision and other imaging tools to examine new physical contexts, dependencies and connections.

    NeBula: Team CoSTAR's robotic autonomy solution that won phase II of DARPA Subterranean Challenge

    This paper presents and discusses the algorithms, hardware, and software architecture developed by Team CoSTAR (Collaborative SubTerranean Autonomous Robots), competing in the DARPA Subterranean Challenge. Specifically, it presents the techniques used in the Tunnel (2019) and Urban (2020) competitions, where CoSTAR achieved second and first place, respectively. We also discuss CoSTAR's demonstrations in Martian-analog surface and subsurface (lava tube) exploration. The paper introduces our autonomy solution, referred to as NeBula (Networked Belief-aware Perceptual Autonomy). NeBula is an uncertainty-aware framework that aims to enable resilient and modular autonomy by performing reasoning and decision making in the belief space (the space of probability distributions over robot and world states). We discuss various components of the NeBula framework, including (i) geometric and semantic environment mapping, (ii) a multi-modal positioning system, (iii) traversability analysis and local planning, (iv) global motion planning and exploration behavior, (v) risk-aware mission planning, (vi) networking and decentralized reasoning, and (vii) learning-enabled adaptation. We discuss the performance of NeBula on several robot types (e.g., wheeled, legged, flying) in various environments, as well as the specific results and lessons learned from fielding this solution on the challenging courses of the DARPA Subterranean Challenge competition. The work is partially supported by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004), and the Defense Advanced Research Projects Agency (DARPA).
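    The central idea in the abstract above is decision making in the belief space, i.e., over probability distributions of robot and world states rather than single point estimates. The following Python sketch is purely illustrative and is not NeBula's code: it maintains a discrete belief over whether a passage is traversable, updates it with a simple Bayesian rule from assumed sensor likelihoods, and picks the action with the lowest expected risk under the current belief. All names, probabilities, and risk values are assumptions made for the example.

# Minimal, illustrative sketch of belief-space action selection.
# This is NOT NeBula's implementation; states, likelihoods, and risks are assumed.

import numpy as np

# Discrete world states for a single corridor cell: free vs. blocked.
STATES = ["free", "blocked"]

def bayes_update(belief, likelihood):
    """Update a discrete belief with sensor likelihoods P(observation | state)."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

def expected_risk(belief, action_risk):
    """Expected risk of an action: sum over states of P(state) * risk(action, state)."""
    return float(np.dot(belief, action_risk))

# Prior belief: 70% free, 30% blocked (assumed numbers).
belief = np.array([0.7, 0.3])

# A noisy range sensor reports "clear"; assumed likelihoods P(clear | state).
likelihood_clear = np.array([0.9, 0.2])
belief = bayes_update(belief, likelihood_clear)

# Hypothetical risk table: one row per action, columns are states (free, blocked).
actions = {
    "drive_through": np.array([0.0, 1.0]),   # cheap if free, catastrophic if blocked
    "detour":        np.array([0.3, 0.3]),   # constant moderate cost
    "probe_first":   np.array([0.1, 0.4]),   # small cost, reduced downside
}

# Choose the action that minimizes expected risk under the current belief.
best = min(actions, key=lambda a: expected_risk(belief, actions[a]))
print(f"belief={belief.round(3)}, chosen action: {best}")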

    Universal Navigation: Learning a General Navigation Policy for Heterogeneous Robots

    Target-driven visual navigation is a challenging problem that requires a robot to find a goal using only visual inputs. Many researchers have demonstrated promising results using deep reinforcement learning (deep RL) on various robotic platforms, but typical end-to-end learning is known for its poor extrapolation to new scenarios. Learning a navigation policy for a new robot with a new sensor configuration or a new target therefore remains a challenging problem, which could be defined as a 'Universal Navigation' problem. The objective of the proposed research is to find a universal policy that allows the agent to quickly adapt to new sensor configurations or target objects and to navigate successfully in unseen situations. In this project, we design a policy architecture with latent features between the perception and inference networks, and quickly adapt the perception network via meta-learning while freezing the inference network. Our experiments show that our algorithm adapts the learned navigation policy with only three shots for unseen situations with different sensor configurations or different target colors. We also analyze the proposed algorithm by investigating various hyperparameters. A paper based on this work was accepted to the International Conference on Robotics and Automation (ICRA) 2021.
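    The abstract above describes adapting only the perception network while keeping the inference network frozen. The sketch below is a hypothetical PyTorch illustration of that split, not the paper's implementation: it shows the simplest form of the adaptation step, plain gradient fine-tuning of the unfrozen perception module on a few labeled shots, whereas the paper itself relies on meta-learning to make such adaptation fast. All module names, dimensions, and the toy data are assumptions.

# Hypothetical sketch of the perception/inference split described above (not the paper's code):
# the perception network maps raw observations to latent features, the inference network
# maps latent features to actions; at adaptation time the inference network is frozen
# and only the perception network is fine-tuned on a handful of examples ("shots").

import torch
import torch.nn as nn

class PerceptionNet(nn.Module):
    """Maps a sensor observation (e.g., a flattened image) to a latent feature vector."""
    def __init__(self, obs_dim=128, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
    def forward(self, obs):
        return self.net(obs)

class InferenceNet(nn.Module):
    """Maps latent features to action logits; kept frozen during adaptation."""
    def __init__(self, latent_dim=32, num_actions=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, num_actions))
    def forward(self, z):
        return self.net(z)

def adapt_perception(perception, inference, shots, labels, steps=10, lr=1e-3):
    """Few-shot adaptation: update only the perception network on a few labeled shots."""
    for p in inference.parameters():
        p.requires_grad_(False)                      # freeze the inference network
    opt = torch.optim.Adam(perception.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        logits = inference(perception(shots))        # gradients flow only into perception
        loss = loss_fn(logits, labels)
        loss.backward()
        opt.step()
    return perception

# Toy usage with three shots from a "new sensor configuration" (random data here).
perception, inference = PerceptionNet(), InferenceNet()
shots = torch.randn(3, 128)                          # three example observations
labels = torch.tensor([0, 2, 1])                     # assumed correct actions for each shot
adapt_perception(perception, inference, shots, labels)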
    • …