
    Human-Machine Collaborative Optimization via Apprenticeship Scheduling

    Coordinating agents to complete a set of tasks with intercoupled temporal and resource constraints is computationally challenging, yet human domain experts can solve these difficult scheduling problems using paradigms learned through years of apprenticeship. A process for manually codifying this domain knowledge within a computational framework is necessary to scale beyond the "single-expert, single-trainee" apprenticeship model. However, human domain experts often have difficulty describing their decision-making processes, making the codification of this knowledge laborious. We propose a new approach for capturing domain-expert heuristics through a pairwise ranking formulation. Our approach is model-free and does not require enumerating or iterating through a large state space. We empirically demonstrate that this approach accurately learns multifaceted heuristics on a synthetic data set incorporating job-shop scheduling and vehicle routing problems, as well as on two real-world data sets consisting of demonstrations of experts solving a weapon-to-target assignment problem and a hospital resource allocation problem. We also demonstrate that policies learned from human scheduling demonstrations via apprenticeship learning can substantially improve the efficiency of a branch-and-bound search for an optimal schedule. We employ this human-machine collaborative optimization technique on a variant of the weapon-to-target assignment problem. We demonstrate that this technique generates solutions substantially superior to those produced by human domain experts, at a rate up to 9.5 times faster than an optimization approach, and can be applied to optimally solve problems twice as complex as those solved by a human demonstrator.
    Comment: Portions of this paper were published in the Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) in 2016 and in the Proceedings of Robotics: Science and Systems (RSS) in 2016. The paper consists of 50 pages with 11 figures and 4 tables.
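
    As a rough illustration of the pairwise-ranking idea above (not the authors' implementation), each scheduling decision point can be converted into difference vectors between the expert-chosen task and every alternative, and a standard classifier trained on those differences. The features, toy data, and logistic-regression choice below are assumptions for illustration.

```python
# Hypothetical sketch of a pairwise-ranking scheduling policy.
# Features, data, and model choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pairwise_examples(decision_points):
    """Each decision point: (features_per_candidate, index chosen by the expert)."""
    X, y = [], []
    for feats, chosen in decision_points:
        for j, f in enumerate(feats):
            if j == chosen:
                continue
            X.append(feats[chosen] - f); y.append(1)   # expert choice preferred
            X.append(f - feats[chosen]); y.append(0)   # alternative not preferred
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
# Toy demonstrations: 50 decision points, 5 candidate tasks, 4 features each.
w_true = np.array([1.0, -0.5, 2.0, 0.0])               # hidden "expert heuristic"
demos = []
for _ in range(50):
    feats = rng.normal(size=(5, 4))
    demos.append((feats, int(np.argmax(feats @ w_true))))

X, y = pairwise_examples(demos)
model = LogisticRegression().fit(X, y)

def schedule_next(candidate_feats):
    """Pick the candidate that wins the most pairwise comparisons."""
    n = len(candidate_feats)
    wins = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                diff = (candidate_feats[i] - candidate_feats[j]).reshape(1, -1)
                wins[i] += model.predict_proba(diff)[0, 1]
    return int(np.argmax(wins))

print(schedule_next(rng.normal(size=(5, 4))))
```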

    Physically Embedded Genetic Algorithm Learning in Multi-Robot Scenarios: The PEGA algorithm

    We present experiments in which a group of autonomous mobile robots learn to perform fundamental sensor-motor tasks through a collaborative learning process. Behavioural strategies, i.e. motor responses to sensory stimuli, are encoded by means of genetic strings stored on the individual robots and adapted through a genetic algorithm (Mitchell, 1998) executed by the entire robot collective: robots communicate their own strings and corresponding fitness to each other, and then execute a genetic algorithm to improve their individual behavioural strategy. The robots acquired three different sensor-motor competences, as well as the ability to select one of two, or one of three, behaviours depending on context ("behaviour management"). Results show that fitness indeed increases with increasing learning time, and analysis of the acquired behavioural strategies demonstrates that they are effective in accomplishing the desired task.
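
    A minimal sketch of such an embedded genetic-algorithm loop, under the simplifying assumptions that each robot carries a single bit-string genome, fitness is a placeholder, and selection among broadcast genomes is fitness-proportional; none of these encodings are taken from the paper.

```python
# Illustrative sketch of a physically embedded GA: each robot keeps one genome,
# broadcasts it with its measured fitness, and breeds with a received genome.
# Genome encoding, fitness function, and operators are assumptions for illustration.
import random

GENOME_LEN = 16

def evaluate(genome):
    # Placeholder fitness; on a real robot this would come from task performance.
    return sum(genome)

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

class Robot:
    def __init__(self):
        self.genome = [random.randint(0, 1) for _ in range(GENOME_LEN)]
        self.fitness = evaluate(self.genome)

    def breed(self, received):
        # 'received' holds the (genome, fitness) pairs broadcast by the other robots.
        genomes, fits = zip(*received)
        mate = random.choices(genomes, weights=[f + 1 for f in fits])[0]  # +1 avoids all-zero weights
        child = mutate(crossover(self.genome, mate))
        if evaluate(child) >= self.fitness:        # keep the child only if it is at least as fit
            self.genome, self.fitness = child, evaluate(child)

robots = [Robot() for _ in range(5)]
for generation in range(20):
    broadcast = [(r.genome, r.fitness) for r in robots]
    for i, r in enumerate(robots):
        r.breed(broadcast[:i] + broadcast[i + 1:])
print([r.fitness for r in robots])
```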

    Towards engineering ontologies for cognitive profiling of agents on the semantic web

    Research shows that most agent-based collaborations suffer from a lack of flexibility. This is because most agent-based applications assume pre-defined knowledge of agents’ capabilities and/or neglect basic cognitive and interactional requirements in multi-agent collaboration. The highlight of this paper is that it draws on cognitive models (inspired by cognitive science and HCI) to propose architectural and knowledge-based requirements that allow agents to structure ontological models for cognitive profiling. Such profiling increases cognitive awareness between agents, which in turn promotes flexibility, reusability and predictability of agent behavior, and thus contributes towards minimizing the cognitive overload incurred on humans. The semantic web is used as an action-mediating space, where a shared knowledge base in the form of ontological models provides affordances for improving cognitive awareness.
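
    Purely as a loose illustration (the class and property names below are invented, not the paper's ontology), a cognitive profile could be published as a small RDF model that other agents query before delegating work, for example with rdflib.

```python
# Hypothetical sketch of a cognitive-profile fragment using rdflib.
# All class/property names (CognitiveProfile, hasCapability, ...) are invented
# for illustration; the paper's actual ontological models are not reproduced here.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

PROF = Namespace("http://example.org/cognitive-profile#")

g = Graph()
g.bind("prof", PROF)

# Minimal schema: an agent has a profile listing capabilities and interaction needs.
g.add((PROF.CognitiveProfile, RDF.type, RDFS.Class))
g.add((PROF.hasCapability, RDF.type, RDF.Property))
g.add((PROF.requiresConfirmation, RDF.type, RDF.Property))

# One agent instance describing itself.
g.add((PROF.agent42, RDF.type, PROF.CognitiveProfile))
g.add((PROF.agent42, PROF.hasCapability, Literal("route-planning")))
g.add((PROF.agent42, PROF.hasCapability, Literal("natural-language-summaries")))
g.add((PROF.agent42, PROF.requiresConfirmation, Literal(True)))

# Another agent could query the profile before delegating a task.
capable = list(g.subjects(PROF.hasCapability, Literal("route-planning")))
print(capable)
print(g.serialize(format="turtle"))
```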

    Collaborative Deep Reinforcement Learning for Joint Object Search

    We examine the problem of joint top-down active search of multiple objects under interaction, e.g., a person riding a bicycle or cups held by the table. Such objects under interaction often provide contextual cues to each other that facilitate more efficient search. By treating each detector as an agent, we present the first collaborative multi-agent deep reinforcement learning algorithm to learn the optimal policy for joint active object localization, which effectively exploits such beneficial contextual information. We learn inter-agent communication through cross connections with gates between the Q-networks, facilitated by a novel multi-agent deep Q-learning algorithm with joint exploitation sampling. We verify our proposed method on multiple object detection benchmarks. Not only does our model help to improve the performance of state-of-the-art active localization models, it also reveals interesting co-detection patterns that are intuitively interpretable.
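
    A hedged sketch of what a gated cross connection between two Q-networks might look like; the layer sizes, gating form, and two-agent setup below are assumptions, not the authors' architecture.

```python
# Illustrative sketch (not the paper's model): two per-object Q-networks exchange
# hidden features through a learned gate, so each agent's action values can be
# conditioned on its partner's current view. Dimensions are arbitrary assumptions.
import torch
import torch.nn as nn

class GatedQNet(nn.Module):
    def __init__(self, obs_dim=32, hidden=64, n_actions=9):
        super().__init__()
        self.encode = nn.Linear(obs_dim, hidden)
        self.cross = nn.Linear(hidden, hidden)      # transforms the partner's features
        self.gate = nn.Linear(2 * hidden, hidden)   # decides how much to let through
        self.q_head = nn.Linear(hidden, n_actions)

    def forward(self, own_obs, partner_hidden):
        h = torch.relu(self.encode(own_obs))
        g = torch.sigmoid(self.gate(torch.cat([h, partner_hidden], dim=-1)))
        h = h + g * torch.relu(self.cross(partner_hidden))  # gated cross connection
        return self.q_head(h), h

agent_a, agent_b = GatedQNet(), GatedQNet()
obs_a, obs_b = torch.randn(1, 32), torch.randn(1, 32)

# First pass encodes each agent's own observation; second pass exchanges features.
with torch.no_grad():
    _, h_a = agent_a(obs_a, torch.zeros(1, 64))
    _, h_b = agent_b(obs_b, torch.zeros(1, 64))
    q_a, _ = agent_a(obs_a, h_b)
    q_b, _ = agent_b(obs_b, h_a)
print(q_a.argmax(dim=-1).item(), q_b.argmax(dim=-1).item())
```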