
    Integration of task level planning and diagnosis for an intelligent robot

    The use of robots in the future must go beyond present applications and will depend on the ability of a robot to adapt to a changing environment and to deal with unexpected scenarios (e.g., picking up parts that are not exactly where they were expected to be). The objective of this research is to demonstrate the feasibility of incorporating high-level planning into a robot, enabling it to deal with anomalous situations and minimizing the need for constant human instruction. Heuristics allow the robot to apply information about previous actions toward accomplishing future objectives more efficiently. The system uses a decision network that represents the plan for accomplishing a task, enabling the robot to modify its plan based on the results of previous actions; it thereby serves as a method for minimizing the need for constant human instruction in telerobotics. This paper describes the integration of expert systems and simulation as a valuable tool whose usefulness extends far beyond this project. Simulation can be expected to be used increasingly as both hardware and software improve, and merging an expert system with simulation adds intelligence to the system. A scenario involving a malfunctioning space satellite is described, in which the expert system uses a series of heuristics to guide the robot to the proper location as part of task-level planning. The final part of the paper suggests directions for future research: having shown the feasibility of an expert system embedded in a simulation, it discusses how the system can be integrated with the MSFC graphics system.
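    As a rough illustration of the decision-network idea in the abstract above, the sketch below (with invented action names) shows a plan whose nodes branch on the observed outcome of each action, so the robot falls back to a recovery step instead of waiting for human instruction. It is a minimal sketch, not the paper's system.

```python
# Hypothetical decision-network task plan: each node names an action and
# branches on its outcome, so the results of previous actions determine
# which step is attempted next.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class PlanNode:
    action: Callable[[], bool]          # returns True on success
    on_success: Optional["PlanNode"]    # next step if the action worked
    on_failure: Optional["PlanNode"]    # recovery branch (e.g. re-localize the part)


def execute(node: Optional[PlanNode]) -> bool:
    """Walk the decision network until the plan ends or dead-ends."""
    ok = True
    while node is not None:
        ok = node.action()
        node = node.on_success if ok else node.on_failure
    return ok


def attempt_grasp() -> bool:
    print("attempt grasp at expected location")
    return False                        # pretend the part was not where expected


def rescan_and_grasp() -> bool:
    print("re-scan workspace, update part pose, grasp again")
    return True


recover = PlanNode(action=rescan_and_grasp, on_success=None, on_failure=None)
grasp = PlanNode(action=attempt_grasp, on_success=None, on_failure=recover)
execute(grasp)   # falls back to the recovery branch without human instruction
```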

    Collaborative Goal Tracking of Multiple Mobile Robots Based on Geometric Graph Neural Network

    Multi-robot systems are widely used in spatially distributed tasks, and their collaborative path planning is critical to working efficiency. Many multi-robot collaborative path planning methods have been proposed, but processing the sensory information of neighboring robots at different locations from a local-perception perspective in real environments, so as to make better decisions, remains a major difficulty. To address this problem, this paper proposes a multi-robot collaborative path planning method based on a geometric graph neural network (GeoGNN). GeoGNN introduces the relative positions of neighboring robots into each interaction layer of the graph neural network to better integrate neighbor sensing information. An expert data generation method is designed for single-step robot advancement, and the resulting expert data are generated in ROS to train the network. Experimental results show that the accuracy of the proposed method on the expert data set improves by about 5% over a model based only on a CNN. In ROS simulation path planning tests, the success rate improves by about 4% over the CNN baseline and the flowtime increase is reduced by about 8%, outperforming other graph neural network models.
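    A minimal sketch of the core GeoGNN idea described above: each neighboring robot's relative position is concatenated into the message of a graph-interaction layer before aggregation. The layer structure, dimensions, and aggregation are illustrative assumptions, not the authors' architecture.

```python
# Illustrative message-passing layer that injects each neighbor's relative
# position into the message, in the spirit of the GeoGNN idea above.

import torch
import torch.nn as nn


class RelPosMessageLayer(nn.Module):
    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        # message = MLP([receiver feature, neighbor feature, relative (dx, dy)])
        self.msg = nn.Sequential(nn.Linear(2 * feat_dim + 2, hidden), nn.ReLU())
        self.update = nn.Sequential(nn.Linear(feat_dim + hidden, feat_dim), nn.ReLU())

    def forward(self, x, pos, edge_index):
        # x: (N, feat_dim) robot features, pos: (N, 2) positions,
        # edge_index: (2, E) sender/receiver pairs for neighboring robots
        src, dst = edge_index
        rel_pos = pos[src] - pos[dst]                       # relative position of each neighbor
        m = self.msg(torch.cat([x[dst], x[src], rel_pos], dim=-1))
        agg = torch.zeros(x.size(0), m.size(-1), device=x.device)
        agg.index_add_(0, dst, m)                           # sum messages per receiving robot
        return self.update(torch.cat([x, agg], dim=-1))


x, pos = torch.rand(5, 16), torch.rand(5, 2)
edge_index = torch.tensor([[1, 2, 3], [0, 0, 1]])           # neighbors -> receivers
out = RelPosMessageLayer(feat_dim=16)(x, pos, edge_index)   # (5, 16) updated features
```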

    Learning to View: Decision Transformers for Active Object Detection

    Active perception describes a broad class of techniques that couple planning and perception systems to move the robot so that it gains more information about the environment. In most robotic systems, perception is independent of motion planning; traditional object detection, for example, is passive and operates only on the images it receives. Detection results can be improved, however, if planning is allowed to consume detection signals and move the robot to collect views that maximize detection quality. In this paper, we use reinforcement learning (RL) methods to control the robot in order to obtain images that maximize detection quality. Specifically, we propose using a Decision Transformer with online fine-tuning, which first optimizes the policy with a pre-collected expert dataset and then improves the learned policy by exploring better solutions in the environment. We evaluate the performance of the proposed method on an interactive dataset collected from an indoor scenario simulator. Experimental results demonstrate that our method outperforms all baselines, including the expert policy and pure offline RL methods. We also provide exhaustive analyses of the reward distribution and observation space. Comment: Accepted to ICRA 202
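    The sketch below gives a simplified, return-conditioned flavor of the offline pre-training step described above. It adds return-to-go and observation embeddings per timestep rather than interleaving return, state, and action tokens under a causal mask as the full Decision Transformer does; all module names, dimensions, and data shapes are assumptions.

```python
# Simplified return-conditioned policy, a stand-in for Decision Transformer
# pre-training on expert sequences (followed in the paper by online fine-tuning).

import torch
import torch.nn as nn


class TinyReturnConditionedPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim, d_model=64):
        super().__init__()
        self.embed_rtg = nn.Linear(1, d_model)
        self.embed_obs = nn.Linear(obs_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, act_dim)

    def forward(self, rtg, obs):
        # rtg: (B, T, 1) desired return-to-go, obs: (B, T, obs_dim) view features
        tokens = self.embed_rtg(rtg) + self.embed_obs(obs)
        return self.head(self.encoder(tokens))        # predicted actions, (B, T, act_dim)


model = TinyReturnConditionedPolicy(obs_dim=16, act_dim=4)
rtg, obs, act = torch.rand(2, 8, 1), torch.rand(2, 8, 16), torch.rand(2, 8, 4)
loss = ((model(rtg, obs) - act) ** 2).mean()           # imitate expert actions offline
loss.backward()
```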

    A Language for Rule-based Systems

    Expert systems are proliferating in many situations in which it is important to capture expertise in a computer system. This type of system is useful when human expertise is expensive or difficult to obtain, or when the operating environment is too dangerous for a person. Expert systems are used to address the following categories of problems: interpretation, prediction, diagnosis, design, planning, monitoring, debugging, repair, instruction, and control. [Hayes-Roth] Expert systems have now moved out of the laboratory and are being used in production environments. Herein lies the problem addressed by this research. Expert systems have traditionally been used in a research environment in which the software engineering of the product is not particularly important. Production environments are much more demanding, and the quality necessary to withstand continual use and abuse is not generally built into research-quality expert systems. The problem is further exacerbated when an expert system is to be embedded in an autonomous system for which human interaction is difficult. (For example, an expert system could be used to drive a robot in a hazardous environment; if the expert system fails, it may not be easy for a human to reach the robot for repair.) Quality in these situations is vital.
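    To make the rule-based paradigm concrete, here is a minimal forward-chaining interpreter of the kind such a language ultimately drives: each rule fires when all of its conditions are in working memory and adds its conclusions. The rules and fact names are invented examples, not from the paper.

```python
# Minimal forward-chaining production-rule interpreter (illustrative only).

def run_rules(facts, rules, max_cycles=100):
    facts = set(facts)
    for _ in range(max_cycles):
        fired = False
        for conditions, conclusions in rules:
            if set(conditions) <= facts and not set(conclusions) <= facts:
                facts |= set(conclusions)      # assert the rule's conclusions
                fired = True
        if not fired:
            break                              # quiescence: no rule produced new facts
    return facts


rules = [
    (("motor_hot", "load_high"), ("overload_suspected",)),
    (("overload_suspected",), ("reduce_speed",)),
]
print(run_rules({"motor_hot", "load_high"}, rules))
```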

    Progressive Learning for Physics-informed Neural Motion Planning

    Motion planning (MP) is one of the core robotics problems, requiring fast methods for finding a collision-free robot motion path connecting the given start and goal states. Neural motion planners (NMPs) demonstrate fast computational speed in finding path solutions but require a huge amount of expert trajectories for learning, adding a significant training computational load. In contrast, recent advancements have also led to a physics-informed NMP approach that directly solves the Eikonal equation for motion planning and does not require expert demonstrations for learning. However, experiments show that this physics-informed NMP approach performs poorly in complex environments and lacks scalability to multiple scenarios and high-dimensional real-robot settings. To overcome these limitations, this paper presents a novel and tractable Eikonal equation formulation and introduces a new progressive learning strategy to train neural networks without expert data in complex, cluttered, high-dimensional robot motion planning scenarios. The results demonstrate that our method outperforms state-of-the-art traditional MP, data-driven NMP, and physics-informed NMP methods by a significant margin in terms of computational planning speed, path quality, and success rates. We also show that our approach scales to multiple complex, cluttered scenarios and to a real-robot setup in a narrow-passage environment. The proposed method's videos and code implementations are available at https://github.com/ruiqini/P-NTFields. Comment: Accepted to Robotics: Science and Systems (RSS) 202
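    The sketch below illustrates the general physics-informed training signal mentioned above: a network is penalized for violating the Eikonal equation |∇T| = 1/S, where S is a speed field that is low near obstacles, so no expert trajectories are needed. The network, speed field, and sampling are placeholder assumptions, not the P-NTFields implementation or its specific formulation.

```python
# Generic Eikonal-residual loss for a neural travel-time field (illustrative).

import torch
import torch.nn as nn

# T(start, goal): predicted travel time between two 2-D points
net = nn.Sequential(nn.Linear(4, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))


def speed(x):
    # Assumed obstacle-aware speed field: slower near the origin "obstacle".
    return torch.clamp(x.norm(dim=-1, keepdim=True), 0.1, 1.0)


start = torch.rand(128, 2)
goal = torch.rand(128, 2, requires_grad=True)
T = net(torch.cat([start, goal], dim=-1))
grad_T, = torch.autograd.grad(T.sum(), goal, create_graph=True)

# Penalize deviation from |grad T| = 1 / S(goal); no expert paths required.
residual = grad_T.norm(dim=-1, keepdim=True) - 1.0 / speed(goal)
loss = (residual ** 2).mean()
loss.backward()
```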

    Human-Robot Collaboration in Automotive Assembly

    In the past decades, automation of the automobile production line has significantly increased the efficiency and quality of automotive manufacturing. In the automotive assembly stage, however, most tasks are still accomplished manually by human workers because of the complexity and flexibility of the tasks and the highly dynamic, unstructured workspace. This dissertation aims to improve the level of automation in automotive assembly through human-robot collaboration (HRC). The challenges that have kept automation out of automotive assembly include: the lack of suitable collaborative robotic systems for HRC, especially compact, high-payload mobile manipulators; teaching and learning frameworks that enable robots to learn assembly tasks, and to assist humans in accomplishing them, from human demonstration; and a task-driven high-level robot motion planning framework that lets the trained robot intelligently and adaptively assist humans in automotive assembly tasks. The technical research toward this goal has resulted in several peer-reviewed publications. Achievements include: 1) A novel collaborative lift-assist robot for automotive assembly; 2) Approaches for vision-based robot learning of placing tasks from human demonstrations in assembly; 3) Robot learning of assembly tasks and assistance from human demonstrations using Convolutional Neural Networks (CNN); 4) Robot learning of assembly tasks and assistance from human demonstrations using Task Constraint-Guided Inverse Reinforcement Learning (TC-IRL); 5) Robot learning of assembly tasks from non-expert demonstrations via a Functional Object-Oriented Network (FOON); 6) Multi-model sampling-based motion planning for trajectory optimization with execution consistency in manufacturing contexts. The research demonstrates the feasibility of a parallel mobile manipulator, which introduces novel concepts for industrial mobile manipulators in smart manufacturing. By exploring Robot Learning from Demonstration (RLfD) with both AI-based and model-based approaches, the research also improves robots' learning capabilities on collaborative assembly tasks for both expert and non-expert users. The research on robot motion planning and control in the dissertation improves the safety of, and human trust in, industrial robots in HRC.

    A biologically inspired meta-control navigation system for the Psikharpax rat robot

    A biologically inspired navigation system for the mobile rat-like robot named Psikharpax is presented, allowing for self-localization and autonomous navigation in an initially unknown environment. The ability of parts of the model (e.g., the strategy-selection mechanism) to reproduce rat behavioral data in various maze tasks had previously been validated in simulation, but the capacity of the model to work on a real robot platform had not been tested. This paper presents the implementation, on the Psikharpax robot, of two independent navigation strategies (a place-based planning strategy and a cue-guided taxon strategy) and a strategy-selection meta-controller. We show how our robot can memorize which strategy is optimal in each situation by means of a reinforcement learning algorithm. Moreover, a context detector enables the controller to quickly adapt to changes in the environment, recognized as new contexts, and to restore previously acquired strategy preferences when a previously experienced context is recognized again. This produces adaptivity closer to rat behavioral performance and constitutes a computational proposal for the role of the rat prefrontal cortex in strategy shifting. Such a brain-inspired meta-controller may also provide an advance for learning architectures in robotics.
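    As a toy analogue of the strategy-selection meta-controller described above, the sketch below keeps per-context values for the two navigation strategies and updates them with a simple reinforcement-learning rule, so a previously experienced context restores its learned strategy preference. The context labels, reward, and update rule are stand-in assumptions, not the paper's model.

```python
# Toy per-context strategy selection with a bandit-style value update.

import random
from collections import defaultdict

STRATEGIES = ["place_planning", "cue_guided_taxon"]
q = defaultdict(lambda: {s: 0.0 for s in STRATEGIES})   # Q[context][strategy]


def select(context, epsilon=0.1):
    # Epsilon-greedy choice among strategies for the detected context.
    if random.random() < epsilon:
        return random.choice(STRATEGIES)
    return max(q[context], key=q[context].get)


def update(context, strategy, reward, alpha=0.2):
    # Preferences are stored per context, so recognizing an old context
    # immediately restores its previously acquired strategy preference.
    q[context][strategy] += alpha * (reward - q[context][strategy])


ctx = "maze_A"                       # would come from the context detector
s = select(ctx)
update(ctx, s, reward=1.0 if s == "place_planning" else 0.0)
```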