
    Combining Subgoal Graphs with Reinforcement Learning to Build a Rational Pathfinder

    In this paper, we present a hierarchical path-planning framework called SG-RL (subgoal graphs-reinforcement learning) that plans rational paths for agents maneuvering in continuous and uncertain environments. By "rational", we mean (1) efficient path planning that eliminates first-move lag, and (2) collision-free, smooth paths that satisfy the agents' kinematic constraints. SG-RL works on two levels. At the first level, SG-RL uses a geometric path-planning method, Simple Subgoal Graphs (SSG), to efficiently find optimal abstract paths, also called subgoal sequences. At the second level, SG-RL uses a reinforcement-learning method, Least-Squares Policy Iteration (LSPI), to learn near-optimal motion-planning policies that generate kinematically feasible, collision-free trajectories between adjacent subgoals. The first advantage of the proposed method is that SSG removes the sparse-reward and local-minima problems for the RL agent, so LSPI can be used to generate paths in complex environments. The second advantage is that when the environment changes slightly (e.g., unexpected obstacles appear), SG-RL need not reconstruct subgoal graphs or replan subgoal sequences with SSG, because LSPI handles such uncertainty by exploiting its ability to generalize across changes in the environment. Simulation experiments in representative scenarios demonstrate that, compared with existing methods, SG-RL works well on large-scale maps, with relatively low action-switching frequencies and shorter path lengths, and that it copes with small changes in the environment. We further demonstrate that the design of the reward function and the types of training environments are important factors in learning feasible policies.
    Comment: 20 pages
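    A minimal sketch of the two-level loop described above, in Python. All names here (plan_subgoals standing in for SSG, lspi_policy for the learned low-level policy, step, reached) are hypothetical, not the authors' implementation.

        # Hypothetical sketch of the SG-RL two-level loop from the abstract.
        # plan_subgoals stands in for Simple Subgoal Graphs (SSG); lspi_policy
        # for the LSPI-trained motion policy. All names are assumptions.
        def sg_rl_navigate(state, goal, plan_subgoals, lspi_policy, step, reached,
                           max_steps=10_000):
            """Drive the agent through SSG subgoals with the learned policy."""
            subgoals = plan_subgoals(state, goal)  # level 1: abstract path (subgoal sequence)
            for subgoal in subgoals:               # level 2: RL motion between adjacent subgoals
                for _ in range(max_steps):
                    if reached(state, subgoal):
                        break
                    action = lspi_policy(state, subgoal)  # near-optimal action from LSPI
                    state = step(state, action)           # environment transition (may be uncertain)
                else:
                    return None                    # subgoal not reached within budget
            return state

    Because the low-level policy conditions only on the current state and the next subgoal, small environmental changes are absorbed at the second level without replanning at the first, which is the replanning-free property the abstract claims.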

    Fibers and global geometry of functions

    Since the seminal work of Ambrosetti and Prodi, the study of global folds has been enriched by geometric concepts and by extensions accommodating new examples. We present the advantages of considering fibers, a construction dating back to Berger and Podolak's view of the original theorem. A description of folds in terms of properties of fibers gives new perspective on the usual hypotheses in the subject. The text is intended as a guide, outlining arguments and stating results that will be detailed elsewhere.
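    For orientation, the normal form that "global fold" refers to (a standard statement from the literature around the Ambrosetti-Prodi theorem, not a result specific to this paper; the splitting and the letters are our notation):

        % Normal form of a global fold, stated for orientation only; the
        % splitting R x W and the letters t, w are our notation, not the paper's.
        % After diffeomorphic changes of variables in domain and range,
        \[
          F(t, w) = (t^{2}, w), \qquad (t, w) \in \mathbb{R} \times W,
        \]
        % so each point of the range has exactly zero, one, or two preimages.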

    Experimentation in machine discovery

    KEKADA, a system capable of carrying out a complex series of experiments on problems from the history of science, is described. The system incorporates a set of experimentation strategies extracted from traces of the scientists' behavior. It focuses on surprises to constrain its search, and uses its strategies to generate hypotheses and to carry out experiments. Some strategies are domain-independent, whereas others incorporate knowledge of a specific domain. The domain-independent strategies include magnification, determining scope, divide-and-conquer, factor analysis, and relating different anomalous phenomena. KEKADA represents an experiment as a set of independent and dependent entities, together with apparatus variables and a goal; it represents a theory either as a sequence of processes or as abstract hypotheses. KEKADA's response to a particular problem in biochemistry is described. On this and other problems, the system is capable of carrying out a complex series of experiments to refine domain theories. Analysis of the system and its behavior on a number of different problems has established its generality, but it has also revealed why the system would not be a good experimental scientist.
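    The experiment representation described above (independent and dependent entities, apparatus variables, a goal) maps naturally onto a small record type. The Python sketch below is an illustration under our own naming; KEKADA itself was not written in Python, and the surprise test is an assumed simplification.

        from dataclasses import dataclass, field

        # Hypothetical rendering of KEKADA's experiment representation as
        # described in the abstract; names and the surprise rule are ours.
        @dataclass
        class Experiment:
            independents: dict[str, float]   # entities the experimenter sets
            apparatus: dict[str, str]        # apparatus variables (instrument settings)
            goal: str                        # what the experiment is meant to decide
            dependents: dict[str, float] = field(default_factory=dict)  # measured outcomes

            def is_surprise(self, expected: dict[str, float], tol: float = 0.1) -> bool:
                """Flag outcomes that deviate from expectation; surprises are
                what KEKADA uses to constrain its search."""
                return any(abs(self.dependents.get(k, 0.0) - v) > tol * max(abs(v), 1e-9)
                           for k, v in expected.items())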

    Feature-Guided Black-Box Safety Testing of Deep Neural Networks

    Despite the improved accuracy of deep neural networks, the discovery of adversarial examples has raised serious safety concerns. Most existing approaches for crafting adversarial examples require some knowledge of the network at hand (architecture, parameters, etc.). In this paper, we focus on image classifiers and propose a feature-guided black-box approach to testing the safety of deep neural networks that requires no such knowledge. Our algorithm employs object detection techniques such as SIFT (Scale Invariant Feature Transform) to extract features from an image. These features are converted into a mutable saliency distribution, in which high probability is assigned to pixels that affect the composition of the image with respect to the human visual system. We formulate the crafting of adversarial examples as a two-player turn-based stochastic game, in which the first player's objective is to minimise the distance to an adversarial example by manipulating the features, and the second player can be cooperative, adversarial, or random. We show that, theoretically, the two-player game can converge to the optimal strategy, and that the optimal strategy represents a globally minimal adversarial image. For Lipschitz networks, we also identify conditions that provide safety guarantees that no adversarial examples exist. Using Monte Carlo tree search, we gradually explore the game state space in search of adversarial examples. Our experiments show that, despite the black-box setting, manipulations guided by a perception-based saliency distribution are competitive with state-of-the-art methods that rely on white-box saliency matrices or sophisticated optimization procedures. Finally, we show how our method can be used to evaluate the robustness of neural networks in safety-critical applications such as traffic sign recognition in self-driving cars.
    Comment: 35 pages, 5 tables, 23 figures
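    A minimal sketch of the first stage of the pipeline described above: turning SIFT keypoints into a pixel-level saliency distribution from which manipulations can be sampled. Weighting each keypoint by its response and spreading it over a patch of its size is our assumption, and the game-playing and Monte Carlo tree search stages are omitted.

        import numpy as np
        import cv2  # requires opencv-python >= 4.4, where SIFT is in the main module

        def sift_saliency(gray: np.ndarray) -> np.ndarray:
            """Normalised saliency distribution over pixels from SIFT keypoints.

            Sketch only: the response weighting and patch spreading are our
            assumptions, not necessarily the paper's exact construction.
            """
            keypoints = cv2.SIFT_create().detect(gray, None)
            h, w = gray.shape
            saliency = np.full((h, w), 1e-12)  # small floor keeps every pixel reachable
            for kp in keypoints:
                x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
                r = max(int(round(kp.size / 2)), 1)
                saliency[max(y - r, 0):min(y + r + 1, h),
                         max(x - r, 0):min(x + r + 1, w)] += kp.response
            return saliency / saliency.sum()

        # Sampling a pixel to manipulate, proportional to saliency:
        #   idx = np.random.default_rng().choice(h * w, p=sal.ravel())
        #   y, x = divmod(idx, w)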