
    A Dynamic Localized Adjustable Force Field Method for Real-time Assistive Non-holonomic Mobile Robotics

    Providing an assistive navigation system that augments rather than usurps user control of a powered wheelchair represents a significant technical challenge. This paper evaluates an assistive collision avoidance method for a powered wheelchair that allows the user to navigate safely whilst maintaining overall governance of the platform's motion. The paper shows that by shaping, switching, and adjusting localized potential fields we are able to negotiate different obstacles while generating a more intuitively natural trajectory, one that does not deviate significantly from the operator-in-the-loop desired trajectory. This method also avoids the local-minima problem and the narrow-corridor and proximity-oscillation behaviours that commonly arise when using potential fields. Furthermore, the localized method enables the robotic platform to pass very close to obstacles, such as when negotiating a narrow passage or doorway.
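
    The core mechanism described above can be illustrated with a short sketch: a repulsive force that is only active inside a small local influence radius is blended with the operator's commanded velocity, so the user retains overall control. The influence radius, gains, and blending rule here are illustrative assumptions, not the paper's actual shaping and switching scheme.

        # Minimal sketch of a localized repulsive potential field blended with the
        # user's commanded velocity. All parameters are illustrative assumptions.
        import numpy as np

        def localized_repulsive_force(robot_pos, obstacles, influence_radius=0.8, gain=1.0):
            """Sum repulsive forces from obstacles inside a local influence radius."""
            force = np.zeros(2)
            for obs in obstacles:
                diff = robot_pos - obs
                dist = np.linalg.norm(diff)
                if 1e-6 < dist < influence_radius:
                    # Classic repulsive-potential gradient, truncated to a local region.
                    magnitude = gain * (1.0 / dist - 1.0 / influence_radius) / dist**2
                    force += magnitude * (diff / dist)
            return force

        def assistive_velocity(user_velocity, robot_pos, obstacles, blend=0.5):
            """Blend the user's desired velocity with the local avoidance force,
            keeping the operator in the loop rather than overriding them."""
            avoidance = localized_repulsive_force(robot_pos, obstacles)
            return (1.0 - blend) * np.asarray(user_velocity) + blend * avoidance

        # Example: the user drives toward +x while an obstacle sits slightly ahead-left.
        cmd = assistive_velocity(user_velocity=[0.5, 0.0],
                                 robot_pos=np.array([0.0, 0.0]),
                                 obstacles=[np.array([0.4, 0.1])])
        print(cmd)

    Because the repulsive term vanishes outside the influence radius, the commanded velocity passes through unchanged in open space, which is what allows the platform to hug obstacles in narrow passages while still deflecting imminent collisions.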

    Learning Multi-Agent Navigation from Human Crowd Data

    The task of safely steering agents amidst static and dynamic obstacles has many applications in robotics, graphics, and traffic engineering. While decentralized solutions are essential for scalability and robustness, achieving globally efficient motions for the entire system of agents is equally important. In a traditional decentralized setting, each agent relies on an underlying local planning algorithm that takes as input a preferred velocity and the current state of the agent's neighborhood, and then computes a new velocity for the next time-step that is collision-free and as close as possible to the preferred one. Typically, each agent promotes a goal-oriented preferred velocity, which can result in myopic behaviors, as actions that are locally optimal for one agent are not necessarily optimal for the global system of agents. In this thesis, we explore a human-inspired approach to efficient multi-agent navigation that allows each agent to intelligently adapt its preferred velocity based on feedback from the environment. Using supervised learning, we investigate different egocentric representations of the local conditions that the agents face and train various deep neural network architectures on extensive collections of human trajectory datasets to learn corresponding life-like velocities. During simulation, we use the learned velocities as high-level preferred-velocity signals passed as input to the agents' underlying local planning algorithm. We evaluate our proposed framework using two state-of-the-art local methods, the ORCA method and the PowerLaw method. Qualitative and quantitative results on a range of scenarios show that adapting the preferred velocity yields more time- and energy-efficient navigation policies, allowing agents to reach their destinations faster than agents simulated with vanilla ORCA and PowerLaw.
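
    The decentralized loop described above can be sketched as follows: an egocentric feature vector of the local neighborhood is fed to a learned model that outputs a preferred velocity, which is then handed to a local planner. The feature layout, model interface, and the stubbed planner step are assumptions for illustration; the thesis's actual networks and the ORCA/PowerLaw implementations are not reproduced here.

        # Minimal sketch of one simulation step for a single agent; the model and
        # planner are placeholders standing in for the learned network and ORCA/PowerLaw.
        import numpy as np

        def egocentric_features(agent_pos, agent_vel, goal, neighbors):
            """Flatten relative neighbor positions/velocities into a fixed-size vector."""
            feats = [goal - agent_pos, agent_vel]
            for n_pos, n_vel in neighbors:
                feats.append(n_pos - agent_pos)
                feats.append(n_vel - agent_vel)
            return np.concatenate(feats)

        def learned_preferred_velocity(model, feats, goal_dir, max_speed=1.5):
            """Query a trained regressor; fall back to the goal direction if no model is given."""
            if model is None:
                return max_speed * goal_dir
            v = model.predict(feats[None, :])[0]   # hypothetical sklearn-style interface
            speed = np.linalg.norm(v)
            return v if speed <= max_speed else v / speed * max_speed

        def local_planner_step(pref_vel, neighbors, dt=0.1):
            """Stand-in for an ORCA/PowerLaw update: simply returns the preferred velocity.
            A real planner would clip it against collision-avoidance constraints."""
            return pref_vel

        # One step: build features, predict a preferred velocity, let the local planner act on it.
        pos, vel, goal = np.zeros(2), np.zeros(2), np.array([5.0, 0.0])
        neighbors = [(np.array([1.0, 0.2]), np.array([-0.5, 0.0]))]
        feats = egocentric_features(pos, vel, goal, neighbors)
        goal_dir = (goal - pos) / np.linalg.norm(goal - pos)
        pref = learned_preferred_velocity(None, feats, goal_dir)
        new_vel = local_planner_step(pref, neighbors)
        pos = pos + new_vel * 0.1

    The design point is the separation of concerns: the learned component only shapes the preferred velocity from local observations, while collision avoidance remains the job of the underlying local planner.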