142 research outputs found

    Indiana Wildlife Disease News, Volume 1, Issue 3 -- July 2006

    Special points of interest:
    • Special Issue on rabies
    • Oral rabies vaccination zone
    • Update - avian influenza surveillance in wild birds in Indiana
    • Indiana Rabies Task Force
    • An update on wildlife disease in Indiana and surrounding states
    Inside this issue: Oral Rabies Vaccination; Mechanics of a Rabies Infection; Rabies in Indiana; Submitting Animals for Testing; AI Update; Avian Botulism; Canine Distemper; Indiana Rabies Task Force

    Improving games AI performance using grouped hierarchical level of detail

    Computer games are increasingly making use of large environments; however, these are often only sparsely populated with autonomous agents. This is, in part, due to the computational cost of implementing behaviour functions for large numbers of agents. In this paper we present an optimisation based on level of detail which reduces the overhead of modelling group behaviours, and facilitates the population of an expansive game world. We consider an environment which is inhabited by many distinct groups of agents. Each group itself comprises individual agents, which are organised using a hierarchical tree structure. Expanding and collapsing nodes within each tree allows the efficient dynamic abstraction of individuals, depending on their proximity to the player. Each branching level represents a different level of detail, and the system is designed to trade off computational performance against behavioural fidelity in a way which is both efficient and seamless to the player. We have developed an implementation of this technique, and used it to evaluate the associated performance benefits. Our experiments indicate a significant potential reduction in processing time, with the update for the entire AI system taking less than 1% of the time required for the same number of agents without optimisation.
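
    As a rough illustration of the expand/collapse scheme the abstract describes, the sketch below uses a per-depth distance threshold to decide whether a group node updates its members individually or as a single cheap aggregate. All names (GroupNode, Agent, EXPAND_RADII) and the threshold values are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of distance-driven expand/collapse in a group hierarchy.
# GroupNode, Agent, and EXPAND_RADII are assumed names for illustration only;
# the paper's actual data structures and behaviour functions are not shown here.
import math

EXPAND_RADII = [200.0, 100.0, 50.0]  # per-depth expansion thresholds (assumed values)

class Agent:
    def __init__(self, pos):
        self.pos = pos

    def update(self, dt):
        pass  # full individual behaviour would run here

class GroupNode:
    """A node in the group hierarchy: either expands into its children
    (higher fidelity) or updates itself as one aggregate (lower cost)."""

    def __init__(self, pos, children=None):
        self.pos = pos
        self.children = children or []  # sub-groups and/or Agents

    def update(self, player_pos, dt, depth=0):
        dist = math.dist(self.pos, player_pos)
        radius = EXPAND_RADII[min(depth, len(EXPAND_RADII) - 1)]
        if self.children and dist < radius:
            # Expanded: recurse so groups near the player get per-member updates.
            for child in self.children:
                if isinstance(child, GroupNode):
                    child.update(player_pos, dt, depth + 1)
                else:
                    child.update(dt)
        else:
            # Collapsed: treat the whole subtree as one coarse agent.
            pass  # cheap aggregate behaviour (e.g. move the group centroid)

# Example: three groups of 100 agents; only groups near the player expand fully.
world = [GroupNode((i * 150.0, 0.0),
                   [Agent((i * 150.0 + j, 0.0)) for j in range(100)])
         for i in range(3)]
for group in world:
    group.update(player_pos=(0.0, 0.0), dt=0.016)
```

    Expanding only the subtrees near the player ties the per-frame cost to the locally visible population rather than the total agent count, which is the performance/fidelity trade-off the abstract describes.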

    Novel Exploration Techniques (NETs) for Malaria Policy Interventions

    The task of decision-making under uncertainty is daunting, especially for problems which have significant complexity. Healthcare policy makers across the globe are facing problems under challenging constraints, with limited tools to help them make data-driven decisions. In this work we frame the process of finding an optimal malaria policy as a stochastic multi-armed bandit problem, and implement three agent-based strategies to explore the policy space. We apply Gaussian Process regression to the findings of each agent, both for comparison and to account for stochastic results from simulating the spread of malaria in a fixed population. The generated policy spaces are compared with published results to provide a direct reference against human expert decisions for the same simulated population. Our novel approach provides a powerful resource for policy makers, and a platform which can be readily extended to capture future, more nuanced policy spaces. Comment: Under review.
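
    As a rough sketch of framing policy search as a multi-armed bandit, the snippet below runs a plain epsilon-greedy agent over a small discrete policy space, with a noisy, made-up reward function standing in for the malaria simulator. The paper's three agent-based strategies and its Gaussian Process regression step are not reproduced; every name here (simulate_outcome, epsilon_greedy) is an illustrative assumption.

```python
# Toy epsilon-greedy bandit over a discrete set of candidate policies.
# simulate_outcome is a made-up stand-in for a stochastic malaria simulation;
# the paper's actual agents and GP regression are not reproduced here.
import random

def simulate_outcome(policy_index, rng):
    # Placeholder: noisy reward whose mean differs per policy.
    true_means = [0.2, 0.5, 0.35, 0.8, 0.6]
    return rng.gauss(true_means[policy_index], 0.1)

def epsilon_greedy(n_policies=5, n_trials=500, eps=0.1, seed=0):
    rng = random.Random(seed)
    counts = [0] * n_policies
    means = [0.0] * n_policies
    for _ in range(n_trials):
        if rng.random() < eps or all(c == 0 for c in counts):
            arm = rng.randrange(n_policies)                        # explore
        else:
            arm = max(range(n_policies), key=lambda i: means[i])   # exploit
        reward = simulate_outcome(arm, rng)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]          # running mean
    return means, counts

estimates, pulls = epsilon_greedy()
print("estimated policy values:", [round(m, 2) for m in estimates])
print("pulls per policy:", pulls)
```

    In the paper's setting each "arm" would be a candidate intervention policy and the reward would come from the malaria simulation; smoothing the noisy per-arm estimates is where a Gaussian Process regression could then be applied.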

    Adversarial Imitation Learning from Incomplete Demonstrations

    Imitation learning aims to derive a mapping from states to actions, i.e., a policy, from expert demonstrations. Existing methods for imitation learning typically require all actions in the demonstrations to be fully available, which is hard to ensure in real applications. Though algorithms for learning with unobservable actions have been proposed, they focus solely on state information and overlook the fact that the action sequence could still be partially available and provide useful information for deriving a policy. In this paper, we propose a novel algorithm called Action-Guided Adversarial Imitation Learning (AGAIL) that learns a policy from demonstrations with incomplete action sequences, i.e., incomplete demonstrations. The core idea of AGAIL is to separate demonstrations into state and action trajectories, and to train a policy with state trajectories while using actions as auxiliary information to guide the training whenever applicable. Built upon Generative Adversarial Imitation Learning, AGAIL has three components: a generator, a discriminator, and a guide. The generator learns a policy with rewards provided by the discriminator, which tries to distinguish state distributions between demonstrations and samples generated by the policy. The guide provides additional rewards to the generator when demonstrated actions for specific states are available. We compare AGAIL to other methods on benchmark tasks and show that AGAIL consistently delivers comparable performance to the state-of-the-art methods even when the action sequence in demonstrations is only partially available. Comment: Accepted to the International Joint Conference on Artificial Intelligence (IJCAI-19).
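
    The sketch below illustrates one way the reward composition described above could look: an adversarial reward from a discriminator, plus a guide bonus added only when a demonstrated action is available for the state. The discriminator and the similarity measure are stand-in stubs, and all names are assumptions rather than the paper's actual models or training loop.

```python
# Illustrative AGAIL-style reward shaping: adversarial term from a discriminator,
# plus a guide bonus only where a demonstrated action exists for the state.
# discriminator and guide_similarity are made-up stubs, not the paper's networks.
import numpy as np

def discriminator(state, action):
    # Stub: pretend probability that (state, action) came from the expert.
    return 1.0 / (1.0 + np.exp(-(state.sum() + action.sum())))

def guide_similarity(action, demo_action):
    # Stub: larger when the policy action is close to the demonstrated one.
    return float(np.exp(-np.linalg.norm(action - demo_action)))

def agail_style_reward(state, action, demo_action=None, guide_weight=0.5):
    d = discriminator(state, action)
    reward = -np.log(1.0 - d + 1e-8)            # GAIL-style adversarial reward
    if demo_action is not None:                 # guide term, only when available
        reward += guide_weight * guide_similarity(action, demo_action)
    return reward

state = np.array([0.1, -0.2])
action = np.array([0.3])
print(agail_style_reward(state, action))                               # state-only reward
print(agail_style_reward(state, action, demo_action=np.array([0.25]))) # with guide bonus
```

    A policy-gradient learner could then maximise this combined reward, falling back to the purely adversarial signal on states whose demonstrated actions are missing.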

    Top-k Multiclass SVM

    Class ambiguity is typical in image classification problems with a large number of classes. When classes are difficult to discriminate, it makes sense to allow k guesses and evaluate classifiers based on the top-k error instead of the standard zero-one loss. We propose the top-k multiclass SVM as a direct method to optimize for top-k performance. Our generalization of the well-known multiclass SVM is based on a tight convex upper bound of the top-k error. We propose a fast optimization scheme based on an efficient projection onto the top-k simplex, which is of independent interest. Experiments on five datasets show consistent improvements in top-k accuracy compared to various baselines. Comment: NIPS 2015.
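
    To make the evaluation criterion concrete, the snippet below computes the top-k zero-one error: a sample counts as correct if the true class appears among the k highest-scoring classes. It shows only the metric; the paper's top-k hinge loss and its projection onto the top-k simplex are not reproduced here.

```python
# Top-k error: a prediction is correct if the true class is among the k
# highest-scoring classes. The random scores below are placeholder data only.
import numpy as np

def top_k_error(scores, labels, k=5):
    """scores: (n_samples, n_classes) array; labels: (n_samples,) integer classes."""
    # Indices of the k largest scores per row (order within the k is irrelevant).
    topk = np.argpartition(scores, -k, axis=1)[:, -k:]
    hits = (topk == labels[:, None]).any(axis=1)
    return 1.0 - hits.mean()

rng = np.random.default_rng(0)
scores = rng.normal(size=(100, 1000))     # e.g. a 1000-class problem
labels = rng.integers(0, 1000, size=100)
print("top-1 error:", top_k_error(scores, labels, k=1))
print("top-5 error:", top_k_error(scores, labels, k=5))
```

    With many visually similar classes, the top-5 error is typically far below the top-1 error, which is the regime a top-k loss is designed to exploit.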

    Preliminary design specification for the LANDSAT Imagery Verification and Extraction System (LIVES)

    There are no author-identified significant results in this report.