
    Parallelized Egocentric Fields for Autonomous Navigation

    In this paper, we propose a general framework for local path planning and steering that can be easily extended to perform high-level behaviors. Our framework is based on the concept of affordances: the possible ways an agent can interact with its environment. Each agent perceives the environment through a set of vector and scalar fields that are represented in the agent’s local space. This egocentric property allows us to efficiently compute a local space-time plan and offers better parallel scalability than a global fields approach. We then use these perception fields to compute a fitness measure for every possible action, defined as an affordance field. The action with the optimal value in the affordance field is the agent’s steering decision. We propose an extension to a linear space-time prediction model for dynamic collision avoidance and present our parallelization results on multicore systems. We analyze and evaluate our framework using the comprehensive suite of test cases provided in SteerBench, and demonstrate autonomous virtual pedestrians that perform steering and path planning in unknown environments, along with the emergence of high-level responses to never-before-seen situations.
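    The core selection step described above (score every candidate action against the perception fields, then take the optimum) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the candidate actions, field weights, and the cosine-based goal/obstacle scoring are all invented for the example.

    ```python
    import math

    def affordance_steer(goal_dir, obstacle_dirs, n_actions=16, w_goal=1.0, w_avoid=2.0):
        """Score candidate headings (radians) and return the fittest one.

        Each heading's fitness combines a goal field (reward for pointing
        toward the goal) and an obstacle field (penalty for pointing toward
        any obstacle) -- a toy stand-in for the paper's affordance field.
        """
        best_angle, best_fitness = None, -math.inf
        for i in range(n_actions):
            angle = 2 * math.pi * i / n_actions
            goal_score = math.cos(angle - goal_dir)
            avoid_penalty = sum(max(0.0, math.cos(angle - d)) for d in obstacle_dirs)
            fitness = w_goal * goal_score - w_avoid * avoid_penalty
            if fitness > best_fitness:
                best_angle, best_fitness = angle, fitness
        return best_angle

    # Goal straight ahead, obstacle off to the left: the chosen heading
    # stays on the goal direction.
    heading = affordance_steer(goal_dir=0.0, obstacle_dirs=[math.pi / 2])
    ```

    With the obstacle placed directly on the goal direction instead, the same scoring makes the agent veer sideways, which is the sense in which a steering decision "emerges" from the fields rather than from explicit rules.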

    A neural network-based exploratory learning and motor planning system for co-robots

    Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or "learning by doing," an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degree-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object.
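    The motor-babbling idea above can be sketched in a few lines. This is a deliberately tiny stand-in for the paper's neural network: random commands are issued, (command, outcome) pairs are recorded, and a nearest-neighbor lookup serves as the learned inverse model. The one-dimensional "plant" and all names are invented for illustration.

    ```python
    import random

    def babble(forward_model, n_samples=200, seed=0):
        """Issue random motor commands and record the sensed outcomes."""
        rng = random.Random(seed)
        samples = []
        for _ in range(n_samples):
            command = rng.uniform(-1.0, 1.0)      # e.g. a wheel velocity
            outcome = forward_model(command)      # e.g. sensed displacement
            samples.append((command, outcome))
        return samples

    def inverse_model(samples, target_outcome):
        """Return the babbled command whose outcome was closest to the target."""
        return min(samples, key=lambda s: abs(s[1] - target_outcome))[0]

    # Toy plant: displacement is twice the commanded velocity, so reaching
    # a displacement of 1.0 should recover a command near 0.5.
    samples = babble(lambda u: 2.0 * u)
    cmd = inverse_model(samples, target_outcome=1.0)
    ```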

    Planning Approaches to Constraint-Aware Navigation in Dynamic Environments

    Path planning is a fundamental problem in many areas, ranging from robotics and artificial intelligence to computer graphics and animation. Although there is extensive literature for computing optimal, collision-free paths, there is relatively little work that explores the satisfaction of spatial constraints between objects and agents at the global navigation layer. This paper presents a planning framework that satisfies multiple spatial constraints imposed on the path. The type of constraints specified can include staying behind a building, walking along walls, or avoiding the line of sight of patrolling agents. We introduce two hybrid environment representations that balance computational efficiency and search space density to provide a minimal, yet sufficient, discretization of the search graph for constraint-aware navigation. An extended anytime dynamic planner is used to compute constraint-aware paths, while efficiently repairing solutions to account for varying dynamic constraints or an updating world model. We demonstrate the benefits of our method on challenging navigation problems in complex environments for dynamic agents, using combinations of hard and soft, attracting and repelling constraints defined by both static and moving obstacles.
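    One way to see the hard/soft constraint distinction is in a grid search where hard constraints block cells outright while soft constraints merely add traversal cost, so the planner trades path length against constraint violation. The sketch below uses plain Dijkstra rather than the paper's anytime dynamic planner, and the grid and penalties are invented for the example.

    ```python
    import heapq

    def plan(grid, start, goal, soft_penalty):
        """Dijkstra over a 4-connected grid.

        grid[r][c] == 1 is a hard obstacle (impassable); soft_penalty maps
        cells to extra cost, modeling a repelling soft constraint.
        """
        rows, cols = len(grid), len(grid[0])
        frontier = [(0.0, start, [start])]
        best = {start: 0.0}
        while frontier:
            cost, cell, path = heapq.heappop(frontier)
            if cell == goal:
                return cost, path
            r, c = cell
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    step = 1.0 + soft_penalty.get((nr, nc), 0.0)
                    if cost + step < best.get((nr, nc), float("inf")):
                        best[(nr, nc)] = cost + step
                        heapq.heappush(frontier, (cost + step, (nr, nc), path + [(nr, nc)]))
        return None

    # A heavy soft penalty on the direct cell pushes the path around it,
    # even though the cell itself is traversable.
    grid = [[0, 0, 0],
            [0, 0, 0]]
    cost, path = plan(grid, (0, 0), (0, 2), soft_penalty={(0, 1): 5.0})
    ```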

    A brain-machine interface for assistive robotic control

    Brain-machine interfaces (BMIs) are the only currently viable means of communication for many individuals suffering from locked-in syndrome (LIS) – profound paralysis that results in severely limited or total loss of voluntary motor control. By inferring user intent from task-modulated neurological signals and then translating those intentions into actions, BMIs can enable increased autonomy for LIS patients. Significant effort has been devoted to developing BMIs over the last three decades, but only recently have the combined advances in hardware, software, and methodology provided a setting to realize the translation of this research from the lab into practical, real-world applications. Non-invasive methods, such as those based on the electroencephalogram (EEG), offer the only feasible solution for practical use at the moment, but suffer from limited communication rates and susceptibility to environmental noise. Maximization of the efficacy of each decoded intention, therefore, is critical. This thesis addresses the challenge of implementing a BMI intended for practical use with a focus on an autonomous assistive robot application. First, an adaptive EEG-based BMI strategy is developed that relies upon code-modulated visual evoked potentials (c-VEPs) to infer user intent. As voluntary gaze control is typically not available to LIS patients, c-VEP decoding methods under both gaze-dependent and gaze-independent scenarios are explored. Adaptive decoding strategies in both offline and online task conditions are evaluated, and a novel approach to assess ongoing online BMI performance is introduced. Next, an adaptive neural network-based system for assistive robot control is presented that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. Exploratory learning, or “learning by doing,” is an unsupervised method in which the robot is able to build an internal model for motor planning and coordination based on real-time sensory inputs received during exploration. Finally, a software platform intended for practical BMI application use is developed and evaluated. Using online c-VEP methods, users control a simple 2D cursor control game, a basic augmentative and alternative communication tool, and an assistive robot, both manually and via high-level goal-oriented commands.
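    The template-matching idea underlying c-VEP decoding can be illustrated compactly. In a c-VEP paradigm each target flickers with a circularly shifted copy of one pseudorandom code, and the decoder correlates a recorded epoch against every shifted template, picking the best match. The noiseless "epoch" and plain dot-product correlation below are illustrative simplifications, not the thesis's adaptive decoder.

    ```python
    def correlate(a, b):
        """Plain dot product as a correlation score."""
        return sum(x * y for x, y in zip(a, b))

    def decode(epoch, code, n_targets):
        """Return the index of the shifted template best matching the epoch."""
        shift = len(code) // n_targets
        scores = [correlate(epoch, code[k * shift:] + code[:k * shift])
                  for k in range(n_targets)]
        return max(range(n_targets), key=lambda k: scores[k])

    # 7-bit m-sequence mapped to +/-1: its periodic autocorrelation is 7 at
    # zero lag and -1 at every other lag, which is what makes the shifted
    # templates separable.
    code = [1, 1, 1, -1, -1, 1, -1]
    epoch = code[3:] + code[:3]          # noiseless response of target 3
    target = decode(epoch, code, n_targets=7)
    ```

    Real EEG epochs are noisy averages over repetitions, so the correlation margins are far smaller, but the shift structure of the code is what carries the intent.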

    Modeling flocks with perceptual agents from a dynamicist perspective

    Computational simulations of flocks and crowds have typically been processed by a set of logic or syntactic rules. In recent decades, a new generation of systems has emerged from dynamicist approaches in which the agents and the environment are treated as a pair of dynamical systems coupled informationally and mechanically. Their spontaneous interactions allow them to achieve the desired behavior. The main proposition assumes that the agent does not need a full model or to make inferences before taking actions; rather, the information necessary for any action can be derived from the environment with simple computations and very little internal state. In this paper, we present a simulation framework in which the agents are endowed with a sensing device, an oscillator network as a controller, and actuators to interact with the environment. The perception device is designed as an optic array emulating the principles of the animal retina, which assimilates stimuli resembling optic flow captured from the environment. The controller maps informational variables to action variables in a sensory-motor flow. Our approach is based on the Kuramoto model, which mathematically describes a network of coupled phase oscillators, and on evolutionary algorithms, which have proven capable of synthesizing minimal synchronization strategies based on the dynamical coupling between agents and environment. We carry out a comparative analysis with classical implementations taking into account several criteria. It is concluded that we should consider replacing the metaphor of symbolic information processing by that of sensory-motor coordination in problems of multi-agent organizations.
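    The Kuramoto model referenced above updates each oscillator's phase by its natural frequency plus a sinusoidal pull toward the other phases; with sufficient coupling strength K the population synchronizes. The minimal Euler-step sketch below shows only that core dynamic, not the paper's evolved controller; all parameter values are illustrative.

    ```python
    import math

    def kuramoto_step(phases, omegas, K, dt):
        """One Euler step of d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
        n = len(phases)
        new = []
        for theta, omega in zip(phases, omegas):
            coupling = (K / n) * sum(math.sin(pj - theta) for pj in phases)
            new.append(theta + (omega + coupling) * dt)
        return new

    def order_parameter(phases):
        """|r| in [0, 1]; 1 means the phases are fully synchronized."""
        n = len(phases)
        re = sum(math.cos(t) for t in phases) / n
        im = sum(math.sin(t) for t in phases) / n
        return math.hypot(re, im)

    # Identical natural frequencies and spread-out initial phases: with
    # K = 2 the order parameter climbs toward 1 as the phases lock.
    phases = [0.0, 1.0, 5.0]
    omegas = [1.0, 1.0, 1.0]
    for _ in range(200):
        phases = kuramoto_step(phases, omegas, K=2.0, dt=0.05)
    r = order_parameter(phases)
    ```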

    Optimizing Simulated Crowd Behaviour

    In the context of crowd simulation, there is a diverse set of algorithms that model steering, the ability of an agent to navigate between spatial locations, while avoiding static and dynamic obstacles. The performance of steering approaches, both in terms of quality of results and computational efficiency, depends on internal parameters that are manually tuned to satisfy application-specific requirements. This work investigates the effect that these parameters have on an algorithm's performance. Using three representative steering algorithms and a set of established performance criteria, we perform a number of large-scale optimization experiments that optimize an algorithm's parameters for a range of objectives. For example, our method automatically finds optimal parameters to minimize turbulence at bottlenecks, reduce building evacuation times, produce emergent patterns, and increase the computational efficiency of an algorithm. Our study includes a statistical analysis of the correlations between algorithmic parameters and performance criteria. We also propose using the Pareto Optimal Front as an efficient way of modelling optimal relationships between multiple objectives, and demonstrate its effectiveness by estimating optimal parameters for interactively defined combinations of the associated objectives. The proposed methodologies are general and can be applied to any steering algorithm using any set of performance criteria.
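    The Pareto front used above is the subset of evaluated parameter settings not dominated by any other setting: no other setting is at least as good on every objective and strictly better on one. A small extraction sketch (the objective values are invented; both objectives are minimized):

    ```python
    def dominates(a, b):
        """a dominates b: no worse on all objectives, strictly better on one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(points):
        """Keep every point not dominated by any other evaluated point."""
        return [p for p in points
                if not any(dominates(q, p) for q in points if q != p)]

    # Hypothetical objective pairs: (evacuation time, computational cost).
    results = [(10.0, 5.0), (8.0, 7.0), (9.0, 9.0), (12.0, 4.0)]
    front = pareto_front(results)
    ```

    Here (9.0, 9.0) is dominated by (8.0, 7.0) and drops out, while the remaining three settings represent genuine trade-offs between the two objectives.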

    Authoring Multi-Actor Behaviors in Crowds With Diverse Personalities

    Multi-actor simulation is critical to cinematic content creation, disaster and security simulation, and interactive entertainment. A key challenge is providing an appropriate interface for authoring high-fidelity virtual actors with feature-rich control mechanisms capable of complex interactions with the environment and other actors. In this chapter, we present work that addresses the problem of behavior authoring at three levels: individual and group interactions are conducted in an event-centric manner using parameterized behavior trees, social crowd dynamics are captured using the OCEAN personality model, and a centralized automated planner is used to enforce global narrative constraints on the scale of the entire simulation. We demonstrate the benefits and limitations of each of these approaches and propose the need for a single unifying construct capable of authoring functional, purposeful, autonomous actors which conform to a global narrative in an interactive simulation.
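    The parameterized-behavior-tree idea can be sketched as a tree-returning function whose parameters bind the participating actors, so one authored "event" is reusable across any pair of actors. The node set, actor attributes, and event below are invented for illustration and are far simpler than the chapter's system.

    ```python
    SUCCESS, FAILURE = "success", "failure"

    class Action:
        """Leaf node: runs a callable and reports its status."""
        def __init__(self, fn):
            self.fn = fn
        def tick(self):
            return self.fn()

    class Sequence:
        """Composite node: succeeds only if every child succeeds in order."""
        def __init__(self, *children):
            self.children = children
        def tick(self):
            for child in self.children:
                if child.tick() == FAILURE:
                    return FAILURE
            return SUCCESS

    def converse_event(a, b):
        """Parameterized subtree: actor a approaches and greets actor b."""
        return Sequence(
            Action(lambda: SUCCESS if a["mobile"] else FAILURE),     # approach(b)
            Action(lambda: SUCCESS if b["receptive"] else FAILURE),  # greet(b)
        )

    # The same authored event, instantiated for different actor pairs.
    ok = converse_event({"mobile": True}, {"receptive": True}).tick()
    blocked = converse_event({"mobile": True}, {"receptive": False}).tick()
    ```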

    GPU-Based Dynamic Search on Adaptive Resolution Grids

    This paper presents a GPU-based wave-front propagation technique for multi-agent path planning in extremely large, complex, dynamic environments. Our work proposes an adaptive subdivision of the environment with efficient indexing, update, and neighbor-finding operations on the GPU to address several known limitations in prior work. In particular, an adaptive environment representation reduces the device memory requirements by an order of magnitude, which enables, for the first time, GPU-based goal path planning in truly large-scale environments (> 2048 m²) for hundreds of agents with different targets. We compare our approach to prior work that uses a uniform grid on several challenging navigation benchmarks and report significant memory savings, and up to a 1000X computational speedup.
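    On a uniform grid, wave-front propagation is a breadth-first flood of costs outward from the goal; each agent then follows the descending cost field to its target. The sequential sketch below shows that baseline only; the paper's contribution (the adaptive quadtree-style subdivision and its GPU indexing) is not reproduced here, and the grid is invented for the example.

    ```python
    from collections import deque

    def wavefront(grid, goal):
        """BFS cost-to-goal over a 4-connected grid.

        grid[r][c] == 1 is an obstacle; the returned field holds the number
        of steps to the goal, with -1 marking obstacles/unreachable cells.
        """
        rows, cols = len(grid), len(grid[0])
        dist = [[-1] * cols for _ in range(rows)]
        dist[goal[0]][goal[1]] = 0
        queue = deque([goal])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] == 0 and dist[nr][nc] == -1):
                    dist[nr][nc] = dist[r][c] + 1
                    queue.append((nr, nc))
        return dist

    # A wall forces the wave to wrap around the bottom of the grid.
    grid = [[0, 1, 0],
            [0, 1, 0],
            [0, 0, 0]]
    dist = wavefront(grid, goal=(0, 2))
    ```

    The GPU version expands the whole frontier in parallel each iteration instead of one cell at a time, which is where the reported speedups come from.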