
    Best and worst case permutations for random online domination of the path

    We study a randomized algorithm for graph domination, by which, according to a uniformly chosen permutation, vertices are revealed and added to the dominating set if not already dominated. We determine the expected size of the dominating set produced by the algorithm for the path graph P_n and use this to derive the expected size for some related families of graphs. We then provide a much-refined analysis of the worst and best cases of this algorithm on P_n and enumerate the permutations for which the algorithm has the worst-possible and best-possible performance. The case of dominating the path graph has connections to previous work of Bouwer and Star, and of Gessel on greedily coloring the path. Comment: 13 pages, 1 figure
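
    The procedure is simple enough to sketch. The Python below is a minimal illustration (not the authors' code) of the randomized online algorithm as the abstract describes it: vertices of P_n are revealed in a uniformly random order and added to the dominating set only if not already dominated. The Monte Carlo loop and the choice n = 10 are arbitrary, included just to estimate the expected set size empirically.

```python
# Minimal sketch of random online domination of the path P_n (vertices 0..n-1).
import random

def online_domination_path(n, rng=random):
    """Return the dominating set produced for one uniformly random permutation."""
    order = list(range(n))
    rng.shuffle(order)                       # uniformly chosen permutation
    in_set = set()
    dominated = [False] * n
    for v in order:
        if not dominated[v]:
            in_set.add(v)
            # v dominates itself and its neighbors on the path
            for u in (v - 1, v, v + 1):
                if 0 <= u < n:
                    dominated[u] = True
    return in_set

# Empirical estimate of the expected dominating-set size for P_10
sizes = [len(online_domination_path(10)) for _ in range(10000)]
print(sum(sizes) / len(sizes))
```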

    Motion Planning Among Dynamic, Decision-Making Agents with Deep Reinforcement Learning

    Robots that navigate among pedestrians use collision avoidance algorithms to enable safe and efficient operation. Recent works present deep reinforcement learning as a framework to model the complex interactions and cooperation among agents. However, they are implemented using key assumptions about other agents' behavior that deviate from reality as the number of agents in the environment increases. This work extends our previous approach to develop an algorithm that learns collision avoidance among a variety of types of dynamic agents without assuming they follow any particular behavior rules. This work also introduces a strategy using a long short-term memory (LSTM) network that enables the algorithm to use observations of an arbitrary number of other agents, instead of previous methods that have a fixed observation size. The proposed algorithm outperforms our previous approach in simulation as the number of agents increases, and it is demonstrated on a fully autonomous robotic vehicle traveling at human walking speed, without the use of a 3D Lidar.
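
    As a rough sketch of the LSTM idea (an assumption about the architecture, not the paper's implementation), the snippet below feeds observations of a variable number of other agents through an LSTM and uses its final hidden state as a fixed-size encoding for a downstream policy network. The observation dimension (7) and hidden size (64) are placeholder values.

```python
# Hypothetical encoder: an LSTM summarizes an arbitrary number of agent observations.
import torch
import torch.nn as nn

class AgentEncoder(nn.Module):
    def __init__(self, obs_dim=7, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)

    def forward(self, other_agents):
        # other_agents: (batch, num_agents, obs_dim); num_agents may vary call to call
        _, (h_n, _) = self.lstm(other_agents)
        return h_n[-1]                       # fixed-size summary: (batch, hidden_dim)

encoder = AgentEncoder()
print(encoder(torch.randn(1, 2, 7)).shape)   # 2 nearby agents -> torch.Size([1, 64])
print(encoder(torch.randn(1, 5, 7)).shape)   # 5 nearby agents -> torch.Size([1, 64])
```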

    Socially Aware Motion Planning with Deep Reinforcement Learning

    For robotic vehicles to navigate safely and efficiently in pedestrian-rich environments, it is important to model subtle human behaviors and navigation rules (e.g., passing on the right). However, while instinctive to humans, socially compliant navigation is still difficult to quantify due to the stochasticity in people's behaviors. Existing works are mostly focused on using feature-matching techniques to describe and imitate human paths, but these often do not generalize well, since the feature values can vary from person to person, and even run to run. This work notes that while it is challenging to directly specify the details of what to do (precise mechanisms of human navigation), it is straightforward to specify what not to do (violations of social norms). Specifically, using deep reinforcement learning, this work develops a time-efficient navigation policy that respects common social norms. The proposed method is shown to enable fully autonomous navigation of a robotic vehicle moving at human walking speed in an environment with many pedestrians. Comment: 8 pages
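
    One way to read the "specify what not to do" idea is as reward shaping: keep the usual goal and collision terms and subtract a small penalty whenever a social norm is violated. The function below is a hypothetical sketch, not the paper's reward; the names and magnitudes (goal_reward, collision_penalty, norm_penalty) and the norm_violations count are all illustrative assumptions.

```python
# Hypothetical reward shaping that penalizes social-norm violations.
def navigation_reward(reached_goal, collided, norm_violations,
                      goal_reward=1.0, collision_penalty=-0.25, norm_penalty=-0.05):
    """norm_violations: how many violations (e.g., passing on the left) were detected this step."""
    reward = 0.0
    if reached_goal:
        reward += goal_reward
    if collided:
        reward += collision_penalty
    reward += norm_penalty * norm_violations   # discourage, rather than prescribe, behavior
    return reward

print(navigation_reward(False, False, 2))      # -0.1: penalized for two norm violations
```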

    Application of an AIS to the problem of through life health management of remotely piloted aircraft

    The operation of RPAS presents a cognitive challenge for the operators (pilots, maintainers, managers, and the wider organization): effectively maintaining their situational awareness of the aircraft and predicting its health state. This has a large impact on their ability to successfully identify faults and manage systems during operations. To overcome these system deficiencies, an asset health management system that integrates more cognitive abilities to aid situational awareness could prove beneficial. This paper outlines an artificial immune system (AIS) approach that could meet these challenges and an experimental method within which to evaluate it.
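
    Since the paper only outlines the AIS approach, the sketch below shows one generic AIS building block, negative-selection anomaly detection over normalized sensor readings, purely as an illustration of the kind of mechanism such a system might use. The detector count, radius, and synthetic data are all assumptions.

```python
# Illustrative negative-selection sketch: detectors cover "non-self" space and
# flag readings that no healthy (self) data resembles.
import random

RADIUS = 0.2

def train_detectors(normal_samples, n_detectors=500, rng=random):
    """Generate random detectors that do not match any healthy (self) sample."""
    dim = len(normal_samples[0])
    detectors = []
    while len(detectors) < n_detectors:
        candidate = [rng.random() for _ in range(dim)]
        dist_to_self = min(
            sum((a - b) ** 2 for a, b in zip(candidate, s)) ** 0.5
            for s in normal_samples
        )
        if dist_to_self > RADIUS:            # keep detectors that lie in non-self space
            detectors.append(candidate)
    return detectors

def is_anomalous(sample, detectors):
    """Flag a reading if any detector lies within RADIUS of it."""
    return any(
        sum((a - b) ** 2 for a, b in zip(sample, d)) ** 0.5 <= RADIUS
        for d in detectors
    )

# Synthetic healthy readings cluster near 0.5 in each normalized dimension
normal = [[0.5 + random.uniform(-0.05, 0.05) for _ in range(3)] for _ in range(100)]
detectors = train_detectors(normal)
print(is_anomalous([0.5, 0.5, 0.5], detectors))   # likely False: looks healthy
print(is_anomalous([0.95, 0.1, 0.9], detectors))  # likely True: unfamiliar signature
```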

    Spatiotemporal studies of black spruce forest soils and implications for the fate of C

    Post-fire storage of carbon (C) in organic-soil horizons was measured in one Canadian and three Alaskan chronosequences in black spruce forests, together spanning stand ages of nearly 200 yrs. We used a simple mass balance model to derive estimates of inputs, losses, and accumulation rates of C on timescales of years to centuries. The model performed well for the surface and total organic soil layers and raised questions about resolving the dynamics of deeper organic soils. C accumulation in all study areas is on the order of 20–40 g C/m²/yr for stand ages up to ∼200 yrs. Much larger fluxes, both positive and negative, are detected using incremental changes in soil C stocks and by other studies using eddy covariance methods for CO₂. This difference suggests that over the course of stand replacement, about 80% of all net primary production (NPP) is returned to the atmosphere within a fire cycle, while about 20% of NPP enters the organic soil layers and becomes available for stabilization or for loss via decomposition, leaching, or combustion. Shifts toward more frequent and more severe burning and degradation of deep organic horizons would likely result in an acceleration of the carbon cycle, with greater CO₂ emissions from these systems overall.
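
    For concreteness, one simple form such a mass balance can take (an assumed form, not necessarily the study's model) is dC/dt = I − kC, which gives C(t) = (I/k)(1 − e^(−kt)) for a stock starting at zero after fire, where I is the annual C input and k a first-order loss rate. The sketch below uses illustrative parameter values, not the study's fitted ones; they happen to give a mean accumulation rate inside the reported 20–40 g C/m²/yr range.

```python
# Toy first-order mass balance for post-fire organic-soil carbon accumulation.
import math

def soil_carbon_stock(t, input_rate, loss_rate):
    """C stock (g C/m2) at stand age t (yr): (I/k) * (1 - exp(-k*t)), with C(0) = 0."""
    return (input_rate / loss_rate) * (1.0 - math.exp(-loss_rate * t))

# Illustrative numbers only: I = 50 g C/m2/yr, k = 0.005 /yr, stand age 200 yr
t, I, k = 200.0, 50.0, 0.005
stock = soil_carbon_stock(t, I, k)
print(f"stock at {t:.0f} yr: {stock:.0f} g C/m2, mean rate: {stock / t:.1f} g C/m2/yr")
```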

    Large-scale global optimization of ultra-high dimensional non-convex landscapes based on generative neural networks

    We present a metaheuristic algorithm for non-convex optimization, based on the training of a deep generative network, which enables effective searching within continuous, ultra-high-dimensional landscapes. During network training, populations of sampled local gradients are utilized within a customized loss function to evolve the network output distribution function towards one peaked at high-performing optima. The deep network architecture is tailored to support progressive growth over the course of training, which allows the algorithm to manage the curse of dimensionality characteristic of high-dimensional landscapes. We apply our concept to a range of standard optimization problems with dimensions as high as one thousand and show that our method performs better, with fewer function evaluations, compared to state-of-the-art algorithm benchmarks. We also discuss the role of deep network over-parameterization, loss function engineering, and proper network architecture selection in optimization, and why the required batch size of sampled local gradients is independent of problem dimension. These concepts form the foundation for a new class of algorithms that utilize customizable and expressive deep generative networks to solve non-convex optimization problems.
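
    A toy version of the idea (an assumption-laden sketch, not the authors' implementation) is shown below: a small generator network maps latent noise to candidate solutions, the objective's local gradients are sampled at those points, and a custom loss that weights low-objective samples more heavily shifts the generator's output distribution toward good optima. The network sizes, softmax weighting, learning rate, and the ||x||^2 test objective are all placeholder choices.

```python
# Toy generative-network optimizer: train a generator so that its samples
# migrate toward minima of the objective, guided by sampled local gradients.
import torch
import torch.nn as nn

dim = 10                                      # problem dimension (toy size)
objective = lambda x: (x ** 2).sum(dim=1)     # stand-in landscape to minimize

gen = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, dim))
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

for step in range(500):
    z = torch.randn(128, 16)                  # batch of latent samples
    x = gen(z)                                # candidate solutions

    # Sample local gradients of the objective at the generated points
    x_eval = x.detach().requires_grad_(True)
    f = objective(x_eval)
    (grad,) = torch.autograd.grad(f.sum(), x_eval)

    # Custom loss: move outputs along the sampled descent directions,
    # weighting low-objective (high-performing) samples more strongly
    weights = torch.softmax(-f.detach(), dim=0)
    loss = (weights * (x * grad).sum(dim=1)).sum()

    opt.zero_grad()
    loss.backward()
    opt.step()

# Objective of fresh samples; should be far smaller than at initialization
print(objective(gen(torch.randn(256, 16))).mean().item())
```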