
    Motion Planning for the On-orbit Grasping of a Non-cooperative Target Satellite with Collision Avoidance

    A method for grasping a tumbling, non-cooperative target is presented, based on nonlinear optimization with collision avoidance. Motion constraints on the robot joints as well as on the end-effector forces are considered. Cost functions of interest address the robustness of the planned solutions during the tracking phase as well as the actuation energy. The method is applied in simulation to different operational scenarios.
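
    Purely as an illustration of posing such a grasp approach as a nonlinear program (not the paper's formulation), the sketch below optimizes point-mass waypoints for a proxy actuation-energy cost under start/goal and keep-out-sphere constraints; the kinematic model, the obstacle, and all numbers are placeholder assumptions.

    # Illustrative sketch only, not the paper's method: grasp-approach planning as a
    # nonlinear program with an "actuation energy" cost and a collision-avoidance
    # constraint. Point-mass waypoints and all constants are placeholder assumptions.
    import numpy as np
    from scipy.optimize import minimize

    N, DT = 20, 0.5                               # waypoints and time step [s]
    START, GOAL = np.array([0.0, 0.0]), np.array([5.0, 3.0])
    OBST, R_SAFE = np.array([2.5, 1.2]), 0.8      # keep-out centre and radius [m]

    def unpack(x):
        return x.reshape(N, 2)

    def cost(x):
        p = unpack(x)
        acc = (p[2:] - 2.0 * p[1:-1] + p[:-2]) / DT**2   # finite-difference accelerations
        return np.sum(acc**2)                            # proxy for actuation energy

    constraints = [
        {"type": "eq", "fun": lambda x: unpack(x)[0] - START},     # start on the chaser
        {"type": "eq", "fun": lambda x: unpack(x)[-1] - GOAL},     # end at the grasp point
        {"type": "ineq",                                           # stay outside the keep-out sphere
         "fun": lambda x: np.linalg.norm(unpack(x) - OBST, axis=1) - R_SAFE},
    ]

    x0 = np.linspace(START, GOAL, N).ravel()      # straight-line initial guess
    sol = minimize(cost, x0, method="SLSQP", constraints=constraints)
    print("converged:", sol.success)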

    On Grasping a Tumbling Debris Object with a Free-Flying Robot

    The grasping and stabilization of a tumbling, non-cooperative target satellite by means of a free-flying robot is a challenging control problem, which has been addressed with increasing degrees of complexity for some 20 years. A novel method for computing robot trajectories for grasping a tumbling target is presented. The problem is solved as a motion planning problem with nonlinear optimization. The resulting solution includes a first maneuver of the Servicer satellite, which carries the robot arm, taking into account typical satellite control inputs. An analysis of the characteristics of the motion of a grasping point on a tumbling body is used to motivate this grasping method, which is argued to be useful for grasping targets of larger size.
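
    The kind of kinematic argument hinted at here can be sketched as follows, under simplifying assumptions of a constant tumble rate and placeholder numbers (this is not the paper's analysis): the inertial velocity of a body-fixed grasp point is v = v_c + w x (R(t) r_b), so the speed the end-effector must match grows with both the tumble rate and the grasp point's offset from the spin axis.

    # Illustrative kinematics only, not the paper's analysis: motion of a body-fixed
    # grasp point on a tumbling target, assuming a constant angular velocity for
    # simplicity (a real tumble is generally time-varying).
    import numpy as np
    from scipy.spatial.transform import Rotation

    w = np.array([0.0, 0.05, 0.10])       # assumed constant tumble rate [rad/s]
    r_b = np.array([1.5, 0.0, 0.5])       # grasp point in the target body frame [m]
    c0, v_c = np.zeros(3), np.zeros(3)    # target centre-of-mass position and velocity

    def grasp_point_state(t):
        """Inertial position and velocity of the grasp point at time t."""
        R = Rotation.from_rotvec(w * t).as_matrix()   # attitude for a constant-rate tumble
        p = c0 + v_c * t + R @ r_b
        v = v_c + np.cross(w, R @ r_b)                # v = v_c + w x (R r_b)
        return p, v

    # Larger targets place the grasp point further from the spin axis, so the
    # end-effector must match a proportionally higher speed.
    for scale in (1.0, 2.0, 4.0):
        speed = np.linalg.norm(np.cross(w, scale * r_b))
        print(f"grasp offset x {scale:.0f}: required tracking speed = {speed:.3f} m/s")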

    Coordinated task manipulation by nonholonomic mobile robots

    Coordinated task manipulation by a group of autonomous mobile robots has received significant research effort in the last decade. Previous studies revealed that one of the main problems in the area is to avoid collisions of the robots with obstacles as well as with other members of the group. Another problem is to come up with a model for successful task manipulation. Significant research effort has accumulated on the definition of forces to generate reference trajectories for each autonomous mobile robot engaged in coordinated behavior. If the mobile robots are nonholonomic, this approach fails to guarantee successful manipulation of the task, since the generated reference trajectories might not satisfy the nonholonomic constraint. In this work, we introduce a novel coordinated task manipulation model inclusive of an online collision avoidance algorithm. The reference trajectory for each autonomous nonholonomic mobile robot is generated online in terms of linear and angular velocity references for the robot; hence these references automatically satisfy the nonholonomic constraint. The generated reference velocities inevitably depend on the nature of the specified coordinated task. Several coordinated task examples, based on a generic task, are presented, and the proposed model is verified through simulations.
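
    The point that (v, w) references automatically respect the nonholonomic constraint can be made concrete with the standard unicycle model, used here as an illustrative stand-in rather than the paper's robot model: x' = v cos(theta), y' = v sin(theta), theta' = w, so any velocity reference integrates to a path satisfying x' sin(theta) - y' cos(theta) = 0.

    # Illustrative only: integrating a (v, w) reference through unicycle kinematics
    # yields a trajectory that is feasible for a nonholonomic robot by construction.
    import numpy as np

    DT = 0.05                                        # integration step [s] (assumed)

    def integrate_unicycle(v_ref, w_ref, pose0=(0.0, 0.0, 0.0)):
        """Roll a (v, w) reference through x' = v cos th, y' = v sin th, th' = w."""
        x, y, th = pose0
        poses = [pose0]
        for v, w in zip(v_ref, w_ref):
            x += v * np.cos(th) * DT
            y += v * np.sin(th) * DT
            th += w * DT
            poses.append((x, y, th))
        return np.array(poses)

    # A constant-curvature arc stands in for whatever the coordination scheme commands.
    T = 200
    traj = integrate_unicycle(v_ref=0.4 * np.ones(T), w_ref=0.2 * np.ones(T))
    xd, yd = np.gradient(traj[:, 0], DT), np.gradient(traj[:, 1], DT)
    violation = xd * np.sin(traj[:, 2]) - yd * np.cos(traj[:, 2])
    print("max constraint violation:", np.abs(violation).max())   # ~0 up to discretisation error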

    Human Motion Trajectory Prediction: A Survey

    With growing numbers of intelligent autonomous systems in human environments, the ability of such systems to perceive, understand and anticipate human behavior becomes increasingly important. Specifically, predicting future positions of dynamic agents and planning considering such predictions are key tasks for self-driving vehicles, service robots and advanced surveillance systems. This paper provides a survey of human motion trajectory prediction. We review, analyze and structure a large selection of work from different communities and propose a taxonomy that categorizes existing methods based on the motion modeling approach and level of contextual information used. We provide an overview of the existing datasets and performance metrics. We discuss limitations of the state of the art and outline directions for further research. Comment: submitted to the International Journal of Robotics Research (IJRR), 37 pages.
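
    For orientation on the simplest end of such a taxonomy, the sketch below implements constant-velocity extrapolation, a common physics-based baseline in trajectory-prediction benchmarks; it is an illustrative example, not a method proposed by the survey, and the sampling interval is an arbitrary assumption.

    # Minimal constant-velocity baseline (illustrative only), often the simplest
    # physics-based motion model compared against in trajectory-prediction work.
    import numpy as np

    def constant_velocity_predict(observed_xy, horizon, dt=0.4):
        """Extrapolate the last observed velocity over `horizon` future steps.

        observed_xy: (T_obs, 2) array of past positions sampled every dt seconds.
        Returns a (horizon, 2) array of predicted positions.
        """
        velocity = (observed_xy[-1] - observed_xy[-2]) / dt       # last-step velocity estimate
        steps = np.arange(1, horizon + 1)[:, None]                # 1..horizon as a column
        return observed_xy[-1] + steps * velocity * dt

    past = np.array([[0.0, 0.0], [0.4, 0.1], [0.8, 0.2], [1.2, 0.3]])   # toy observed track
    print(constant_velocity_predict(past, horizon=3))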

    Social Attention: Modeling Attention in Human Crowds

    Robots that navigate through human crowds need to be able to plan safe, efficient, and human-predictable trajectories. This is a particularly challenging problem, as it requires the robot to predict future human trajectories within a crowd where everyone implicitly cooperates with each other to avoid collisions. Previous approaches to human trajectory prediction have modeled the interactions between humans as a function of proximity. However, proximity is not always the right cue: people in our immediate vicinity moving in the same direction may be less important than people farther away who are on a collision course with us. In this work, we propose Social Attention, a novel trajectory prediction model that captures the relative importance of each person when navigating in the crowd, irrespective of their proximity. We demonstrate the performance of our method against a state-of-the-art approach on two publicly available crowd datasets and analyze the trained attention model to gain a better understanding of which surrounding agents humans attend to when navigating in a crowd.
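
    The contrast with proximity-based weighting can be made concrete with a toy attention computation (not the paper's architecture): each neighbour receives a softmax weight from a score of its relative position and velocity, with a random linear score standing in for learned parameters, so a distant but approaching agent can outweigh a nearby static one.

    # Toy attention over neighbouring agents (not the paper's model): weights come
    # from a learned-style score of the relative state, not from inverse distance.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=4)                      # stand-in for learned scoring parameters

    def attention_weights(rel_states):
        """rel_states: (N, 4) array of [dx, dy, dvx, dvy] for N neighbours."""
        scores = rel_states @ W                 # one scalar score per neighbour
        exp = np.exp(scores - scores.max())     # numerically stable softmax
        return exp / exp.sum()

    neighbours = np.array([
        [0.5, 0.0,  0.0,  0.0],                 # close, but static
        [4.0, 1.0, -1.2, -0.3],                 # far, but approaching
    ])
    print(attention_weights(neighbours))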

    I Can See Your Aim: Estimating User Attention From Gaze For Handheld Robot Collaboration

    This paper explores the estimation of user attention in the setting of a cooperative handheld robot: a robot designed to behave as a handheld tool but that has levels of task knowledge. We use a tool-mounted gaze tracking system which, after modelling via a pilot study, we use as a proxy for estimating the attention of the user. This information is then used for cooperation with users in a task of selecting and engaging with objects on a dynamic screen. Via a video game setup, we test various degrees of robot autonomy, from fully autonomous, where the robot knows what it has to do and acts, to no autonomy, where the user is in full control of the task. Our results measure performance and subjective metrics and show how the attention model benefits the interaction and the preference of users. Comment: this is a corrected version of the one that was published at IROS 201
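
    One simple way to turn a gaze estimate into an attended object, sketched below purely for illustration (the paper derives its attention model from a pilot study rather than from this heuristic), is to pick the on-screen object nearest the gaze point and accept it only after a short dwell time; the sampling rate, dwell time, and acceptance radius are assumed values.

    # Illustrative dwell-time gaze selection, not the paper's calibrated model.
    import numpy as np

    def select_target(gaze_xy_stream, objects_xy, dt=1 / 60, dwell_s=0.3, radius=0.05):
        """gaze_xy_stream: iterable of gaze points; objects_xy: (M, 2) object centres."""
        needed = int(dwell_s / dt)              # frames of fixation required
        current, count = None, 0
        for gaze in gaze_xy_stream:
            dists = np.linalg.norm(objects_xy - np.asarray(gaze), axis=1)
            candidate = int(np.argmin(dists)) if dists.min() < radius else None
            count = count + 1 if candidate == current and candidate is not None else 0
            current = candidate
            if current is not None and count >= needed:
                return current                  # attended object index
        return None

    objects = np.array([[0.2, 0.2], [0.7, 0.5]])      # two on-screen object centres (normalised)
    gaze = [(0.69, 0.51)] * 30                        # half a second of fixation near object 1
    print(select_target(gaze, objects))               # -> 1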

    Autonomous Robots for Active Removal of Orbital Debris

    This paper presents a vision guidance and control method for autonomous robotic capture and stabilization of orbital objects in a time-critical manner. The method takes into account various operational and physical constraints, including ensuring a smooth capture, handling line-of-sight (LOS) obstructions of the target, and staying within the acceleration, force, and torque limits of the robot. Our approach involves the development of an optimal control framework for an eye-to-hand visual servoing method, which integrates two sequential sub-maneuvers: a pre-capturing maneuver and a post-capturing maneuver, aimed at achieving the shortest possible capture time. Integrating both control strategies enables a seamless transition between them, allowing for real-time switching to the appropriate control system. Moreover, both controllers are adaptively tuned through vision feedback to account for the unknown dynamics of the target. The integrated estimation and control architecture also facilitates fault detection and recovery of the visual feedback in situations where the feedback is temporarily obstructed. The experimental results demonstrate the successful execution of pre- and post-capturing operations on a tumbling and drifting target, despite multiple operational constraints.
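
    As a rough illustration of switching between a pre-capture and a post-capture behaviour (a toy proportional scheme, not the paper's optimal-control and visual-servoing formulation), the sketch below tracks the estimated grasp point before capture and blends toward the target's velocity afterwards; the gains and speed limit are assumed values.

    # Toy mode-switching servo loop, not the paper's formulation: before capture,
    # track the visually estimated grasp point; after capture, blend the end-effector
    # velocity toward the target's velocity to damp the residual relative motion.
    import numpy as np

    KP_PRE, ALPHA_POST, V_MAX = 0.8, 0.3, 0.2    # assumed gains and speed limit [m/s]

    def commanded_velocity(ee_pos, ee_vel, grasp_pos, grasp_vel, captured):
        if not captured:
            cmd = grasp_vel + KP_PRE * (grasp_pos - ee_pos)    # feed-forward + P term
        else:
            cmd = ee_vel + ALPHA_POST * (grasp_vel - ee_vel)   # first-order velocity match
        speed = np.linalg.norm(cmd)
        return cmd if speed <= V_MAX else cmd * (V_MAX / speed)   # crude actuator limit

    # Example: pre-capture command toward a grasp point 0.5 m away, drifting at 5 cm/s.
    print(commanded_velocity(np.zeros(3), np.zeros(3),
                             np.array([0.5, 0.0, 0.0]), np.array([0.05, 0.0, 0.0]),
                             captured=False))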