91 research outputs found

    Poster: How to Raise a Robot - Beyond Access Control Constraints in Assistive Humanoid Robots

    Humanoid robots will be able to assist humans in their daily lives, in particular due to their versatile action capabilities. However, while these robots need a certain degree of autonomy to learn and explore, they should also respect various constraints, for access control and beyond. We explore incorporating privacy and security constraints (Activity-Centric Access Control and Deep Learning Based Access Control) with robot task planning approaches (classical symbolic planning and end-to-end learning-based planning). We report preliminary results on their respective trade-offs and conclude that a hybrid approach will most likely be the method of choice.
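
    The abstract combines an access-control decision with a task planner. Below is a minimal sketch of one way such a check could sit between plan generation and execution; the policy format, action names, and the plan_allowed helper are hypothetical illustrations, not the paper's ACAC or DLBAC models.

        # Hypothetical policy: (action, object) pairs mapped to allow/deny decisions
        # for the robot's current activity context.
        POLICY = {
            ("open", "medicine_cabinet"): False,  # denied in this context
            ("open", "fridge"): True,
            ("pick", "cup"): True,
        }

        def plan_allowed(plan, policy):
            """Return True only if every planned (action, object) step is permitted."""
            return all(policy.get(step, False) for step in plan)

        print(plan_allowed([("open", "fridge"), ("pick", "cup")], POLICY))   # True
        print(plan_allowed([("open", "medicine_cabinet")], POLICY))          # False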

    Identifying Critical Regions for Robot Planning Using Convolutional Neural Networks

    In this thesis, a new approach to learning-based planning is presented in which critical regions of an environment, those with low probability measure, are learned from a given set of motion plans. Critical regions are learned using convolutional neural networks (CNNs) to improve sampling processes for motion planning (MP). In addition to an identification network, a new sampling-based motion planner, Learn and Link, is introduced. This planner leverages critical regions to overcome the limitations of uniform sampling while still maintaining the guarantees of correctness inherent to sampling-based algorithms. Learn and Link is evaluated against planners from the Open Motion Planning Library (OMPL) on an extensive suite of challenging navigation planning problems. This work shows that critical areas of an environment are learnable and can be used by Learn and Link to solve MP problems with far less planning time than existing sampling-based planners.
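
    Since the abstract describes biasing a sampler toward CNN-predicted critical regions, the sketch below shows one plausible shape of such a sampler, assuming a trained network that outputs a per-cell criticality map. The critical_map input, the mixing probability, and the biased_sample function are hypothetical stand-ins, not the Learn and Link implementation.

        import numpy as np

        def biased_sample(critical_map, rng, p_critical=0.5):
            """Sample a 2D configuration, mixing uniform sampling with samples
            drawn from CNN-predicted critical regions (hypothetical interface)."""
            h, w = critical_map.shape
            if rng.random() < p_critical:
                # Sample a cell proportionally to its predicted criticality.
                probs = critical_map.ravel() / critical_map.sum()
                idx = rng.choice(h * w, p=probs)
                return np.array([idx // w, idx % w], dtype=float)
            # Otherwise fall back to uniform sampling, preserving the probabilistic
            # completeness of the underlying sampling-based planner.
            return rng.uniform([0, 0], [h, w])

        rng = np.random.default_rng(0)
        critical_map = np.random.default_rng(1).random((64, 64))  # stand-in for CNN output
        print(biased_sample(critical_map, rng))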

    Identifying and Exploiting Features for Effective Plan Retrieval in Case-Based Planning

    Case-based planning can fruitfully exploit knowledge gained by solving a large number of problems, storing the corresponding solutions in a plan library and reusing them for solving similar planning problems in the future. Case-based planning is extremely effective when similar reuse candidates can be efficiently chosen. In this paper, we study an innovative technique based on planning problem features for efficiently retrieving solved planning problems (and their plans) from large plan libraries. A problem feature is a characteristic of the instance that can be automatically derived from the problem specification, domain and search space analyses, and different problem encodings. Since existing planning features are not always able to effectively distinguish between problems within the same planning domain, we introduce a new class of features. An experimental analysis in this paper shows that our feature-based retrieval approach can significantly improve the performance of a state-of-the-art case-based planning system.
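
    The core retrieval step described here, matching a new problem's feature vector against those of solved problems in the library, can be sketched as a nearest-neighbour lookup. The feature values, the library format, and the retrieve_plan function below are hypothetical; the paper's actual features and similarity measure are not reproduced here.

        import numpy as np

        def retrieve_plan(library, query_features):
            """Return the stored plan whose problem features are closest to the
            query's features (Euclidean distance as a stand-in similarity)."""
            best_plan, best_dist = None, float("inf")
            for features, plan in library:
                dist = np.linalg.norm(np.asarray(features) - np.asarray(query_features))
                if dist < best_dist:
                    best_plan, best_dist = plan, dist
            return best_plan

        # Hypothetical library of (feature vector, plan) pairs and a query problem.
        library = [([3, 0.2, 5], ["move a b", "load a"]),
                   ([7, 0.9, 2], ["move b c", "unload c"])]
        print(retrieve_plan(library, [6, 0.8, 2]))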

    Value Iteration Networks on Multiple Levels of Abstraction

    Learning-based methods are promising for planning robot motion without performing the extensive search needed by many non-learning approaches. Recently, Value Iteration Networks (VINs) have received much interest since, in contrast to standard CNN-based architectures, they learn goal-directed behaviors which generalize well to unseen domains. However, VINs are restricted to small and low-dimensional domains, limiting their applicability to real-world planning problems. To address this issue, we propose to extend VINs to representations with multiple levels of abstraction. While the vicinity of the robot is represented in sufficient detail, the representation gets spatially coarser with increasing distance from the robot. The information loss caused by the decreasing resolution is compensated by increasing the number of features representing a cell. We show that our approach is capable of solving significantly larger 2D grid world planning tasks than the original VIN implementation. In contrast to a multiresolution coarse-to-fine VIN implementation which does not employ additional descriptive features, our approach is capable of solving challenging environments, which demonstrates that the proposed method learns to encode useful information in the additional features. As an application to real-world planning tasks, we successfully employ our method to plan omnidirectional driving for a search-and-rescue robot in cluttered terrain.
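
    The multi-level abstraction idea, fine resolution near the robot and coarser resolution farther away, can be illustrated with a simple pyramid of average-pooled crops. This is only a sketch of the representation under assumed grid sizes; the paper's network additionally adds feature channels per cell to compensate for the lost resolution, which is not modelled here.

        import numpy as np

        def multi_level_representation(grid, center, patch=8, levels=3):
            """Crop progressively larger neighbourhoods around the robot and
            downsample each to the same spatial size, so distant areas are
            represented more coarsely."""
            cy, cx = center
            reps = []
            for level in range(levels):
                half = patch * 2 ** level // 2
                window = grid[max(0, cy - half):cy + half, max(0, cx - half):cx + half]
                factor = 2 ** level
                h = (window.shape[0] // factor) * factor
                w = (window.shape[1] // factor) * factor
                # Average pooling by the level's downsampling factor.
                coarse = window[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
                reps.append(coarse)
            return reps

        grid = np.random.default_rng(0).integers(0, 2, (128, 128)).astype(float)
        for rep in multi_level_representation(grid, (64, 64)):
            print(rep.shape)  # each level collapses to the same spatial size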

    Integration of Reinforcement Learning Based Behavior Planning With Sampling Based Motion Planning for Automated Driving

    Reinforcement learning has received high research interest for developing planning approaches in automated driving. Most prior works consider the end-to-end planning task that yields direct control commands and rarely deploy their algorithms on real vehicles. In this work, we propose a method to employ a trained deep reinforcement learning policy for dedicated high-level behavior planning. By populating an abstract objective interface, established motion planning algorithms can be leveraged, which derive smooth and drivable trajectories. Given the current environment model, we propose to use a built-in simulator to predict the traffic scene for a given horizon into the future. The behavior of automated vehicles in mixed traffic is determined by querying the learned policy. To the best of our knowledge, this work is the first to apply deep reinforcement learning in this manner, and as such lacks a state-of-the-art benchmark. Thus, we validate the proposed approach by comparing an idealistic single-shot plan with cyclic replanning through the learned policy. Experiments with a real testing vehicle on proving grounds demonstrate the potential of our approach to shrink the simulation-to-real-world gap of deep reinforcement learning based planning approaches. Additional simulative analyses reveal that more complex multi-agent maneuvers can be managed by employing the cyclic replanning approach.
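
    The split the abstract describes, a learned policy choosing a high-level behavior that populates an objective interface for a conventional motion planner, queried again at every replanning cycle, could look roughly like the sketch below. The behavior set, the rl_policy stub, the objective fields, and the loop are hypothetical placeholders, not the authors' system.

        import random

        BEHAVIORS = ["keep_lane", "change_left", "change_right"]

        def rl_policy(state):
            """Stub for the trained policy: maps the (predicted) scene to a behavior."""
            return random.choice(BEHAVIORS)

        def behavior_to_objective(behavior, state):
            """Populate the abstract objective interface consumed by the motion planner."""
            delta = {"keep_lane": 0, "change_left": 1, "change_right": -1}[behavior]
            return {"target_lane": state["lane"] + delta, "target_speed": state["speed"]}

        def replanning_loop(state, cycles=5):
            """Cyclic replanning: query the policy each cycle and hand a fresh
            objective to a sampling-based trajectory planner."""
            for _ in range(cycles):
                behavior = rl_policy(state)                      # high-level decision
                objective = behavior_to_objective(behavior, state)
                # trajectory = motion_planner.plan(state, objective)  # smooth, drivable trajectory
                state["lane"] = objective["target_lane"]         # pretend the plan was executed
            return state

        print(replanning_loop({"lane": 1, "speed": 25.0}))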

    Protected Area Planning Principles and Strategies

    In this chapter, the challenges of protected area planning are explored by addressing the latter question. The chapter focuses on maintaining protected area values in the face of increasing recreational pressure, although these general concepts and principles can be applied to other threats as well (Machlis and Tichnell 1985). First, the social and political contexts within which such planning occurs are outlined. It is to these complex contexts that an interactive, collaborative-learning-based planning process would seem most appropriate. Next, an overview of eleven principles of visitor management is presented. These principles must be acknowledged and incorporated in any protected area planning system. Following this section, the conditions needed to implement a carrying capacity approach are reviewed; these requisite conditions lead us to conclude that, despite a resurgence of interest, the carrying capacity model does not adequately address the needs of protected area management. The final section briefly outlines the Limits of Acceptable Change planning system, an example of an approach that can incorporate the eleven previously described principles and has a demonstrated capacity to respond to the needs of protected area managers. The ideas in this chapter have been variously presented in Malaysia, Venezuela, Canada, and Puerto Rico (McCool 1996, McCool and Stankey 1992, Stankey and McCool 1993) and have benefited from the positive interactions and feedback received from protected area managers in those countries.

    ASAP: An Automatic Algorithm Selection Approach for Planning

    Despite the advances made in the last decade in automated planning, no planner outperforms all the others in every known benchmark domain. This observation motivates the idea of selecting different planning algorithms for different domains. Moreover, the planners' performances are affected by the structure of the search space, which depends on the encoding of the considered domain. In many domains, the performance of a planner can be improved by exploiting additional knowledge, for instance in the form of macro-operators or entanglements. In this paper we propose ASAP, an automatic Algorithm Selection Approach for Planning that: (i) for a given domain initially learns additional knowledge, in the form of macro-operators and entanglements, which is used for creating different encodings of the given planning domain and problems; (ii) explores the two-dimensional space of available algorithms, defined as encoding–planner pairs; and then (iii) selects the most promising algorithm for optimising either the runtimes or the quality of the solution plans.
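
    Step (iii), picking the most promising encoding–planner pair from measured training performance, can be illustrated with a toy selection rule: prefer the pair that solves the most training problems and break ties by mean runtime. The pair names, runtimes, and scoring rule below are hypothetical and chosen for illustration; ASAP's actual selection criteria may differ.

        # Measured runtimes of each encoding-planner pair on training problems (seconds);
        # unsolved problems are recorded as the timeout.
        TIMEOUT = 300.0
        runtimes = {
            ("original", "lama"):       [12.0, 45.0, TIMEOUT],
            ("macros", "lama"):         [8.0, 30.0, 110.0],
            ("entanglements", "probe"): [9.0, TIMEOUT, TIMEOUT],
        }

        def select_algorithm(runtimes, timeout=TIMEOUT):
            """Pick the pair solving the most problems, then the fastest on average."""
            def score(times):
                solved = [t for t in times if t < timeout]
                return (-len(solved), sum(solved) / max(len(solved), 1))
            return min(runtimes, key=lambda pair: score(runtimes[pair]))

        print(select_algorithm(runtimes))  # -> ('macros', 'lama')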

    An Automatic Algorithm Selection Approach for Planning

    Despite the advances made in the last decade in automated planning, no planner outperforms all the others in every known benchmark domain. This observation motivates the idea of selecting different planning algorithms for different domains. Moreover, the planners' performances are affected by the structure of the search space, which depends on the encoding of the considered domain. In many domains, the performance of a planner can be improved by exploiting additional knowledge, extracted in the form of macro-operators or entanglements. In this paper we propose ASAP, an automatic Algorithm Selection Approach for Planning that: (i) for a given domain initially learns additional knowledge, in the form of macro-operators and entanglements, which is used for creating different encodings of the given planning domain and problems; (ii) explores the two-dimensional space of available algorithms, defined as encoding–planner pairs; and then (iii) selects the most promising algorithm for optimising either the runtimes or the quality of the solution plans.

    Goal-Directed Planning for Habituated Agents by Active Inference Using a Variational Recurrent Neural Network

    It is crucial to ask how agents can achieve goals by generating action plans using only partial models of the world acquired through habituated sensory-motor experiences. Although many existing robotics studies use a forward model framework, there are generalization issues with high degrees of freedom. The current study shows that the predictive coding (PC) and active inference (AIF) frameworks, which employ a generative model, can develop better generalization by learning a prior distribution in a low-dimensional latent state space representing probabilistic structures extracted from well-habituated sensory-motor trajectories. In our proposed model, learning is carried out by inferring optimal latent variables as well as synaptic weights for maximizing the evidence lower bound, while goal-directed planning is accomplished by inferring latent variables for maximizing the estimated lower bound. Our proposed model was evaluated with both simple and complex robotic tasks in simulation, which demonstrated sufficient generalization in learning with limited training data by setting an intermediate value for a regularization coefficient. Furthermore, comparative simulation results show that the proposed model outperforms a conventional forward model in goal-directed planning, due to the learned prior confining the search of motor plans within the range of habituated trajectories.
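
    The planning-by-inference idea, adjusting latent variables to maximize an estimated lower bound that trades goal prediction accuracy against staying close to the learned prior, can be sketched in a few lines. This toy uses a linear decoder and a quadratic penalty as stand-ins for the paper's variational recurrent network and KL term; the decoder weights, learning rate, and regularization value are all hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)
        W = rng.normal(size=(2, 4))   # stand-in for a trained decoder (latent -> observation)
        goal = np.array([1.0, -0.5])
        beta = 0.1                    # regularization weight keeping z near the prior

        # Goal-directed planning as inference: gradient ascent on a crude lower bound
        # J(z) = -0.5*||goal - W z||^2 - 0.5*beta*||z||^2.
        z = np.zeros(4)
        for _ in range(200):
            pred = W @ z
            grad = W.T @ (goal - pred) - beta * z
            z += 0.05 * grad

        print(np.round(W @ z, 3), "vs goal", goal)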