    Incremental planning to control a blackboard-based problem solver

    To control problem-solving activity, a planner must resolve uncertainty about which specific long-term goals (solutions) to pursue and about which sequences of actions will best achieve those goals. A planner is described that abstracts the problem-solving state to recognize possible competing and compatible solutions and to roughly predict the importance and expense of developing them. With this information, the planner plans sequences of problem-solving activities that most efficiently resolve its uncertainty about which of the possible solutions to work toward. The planner details actions only for the near future, because the results of those actions will influence how (and whether) the plan should be pursued. As problem solving proceeds, the planner incrementally adds new details to the plan, and monitors and repairs the plan to ensure it achieves its goals whenever possible. Experiments illustrate how these mechanisms significantly improve problem-solving decisions and reduce overall computation. Current research directions are briefly discussed, including how these mechanisms can improve a problem solver's real-time response and enhance cooperation in a distributed problem-solving network.
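    As a concrete illustration of the control loop the abstract describes, here is a minimal Python sketch of an incremental plan-monitor-repair cycle. The `blackboard` interface, the `Candidate` rating, and every other name in it are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch of an incremental planning loop for a blackboard-based
# problem solver. All class, method, and function names are illustrative
# assumptions, not the paper's interface.

from dataclasses import dataclass


@dataclass
class Candidate:
    """A possible long-term solution recognized in the abstracted state."""
    name: str
    importance: float  # rough prediction of the solution's value
    expense: float     # rough prediction of the cost to develop it

    @property
    def rating(self) -> float:
        # Prefer candidates that resolve the most uncertainty per unit cost.
        return self.importance / max(self.expense, 1e-9)


def incremental_planning_loop(blackboard, horizon: int = 3):
    """Plan a few near-term actions, then execute, monitor, and repair."""
    while not blackboard.solved():
        # Abstract the problem-solving state to recognize competing and
        # compatible candidate solutions.
        candidates = blackboard.abstract_candidates()
        best = max(candidates, key=lambda c: c.rating)

        # Detail actions only for the near future; later results decide
        # how (and whether) the plan continues.
        plan = blackboard.detail_actions(best, horizon)

        for action in plan:
            result = blackboard.execute(action)
            # Monitor: if a result invalidates the plan, stop executing it
            # and fall through to incremental replanning (the repair step).
            if not blackboard.still_valid(plan, result):
                break
```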

    Exploring the Accuracy of the North American Mesoscale Model during Low-Level Jet Influenced Convection in Iowa

    This study analyzed low-level jet (LLJ) influenced overnight convection cases over Iowa. There are two main regimes for LLJ development over the Great Plains: one occurs when there is an upper-level trough in the western United States, while the other is dominated by an upper-level anticyclone. Forecasts from the twelve-kilometer North American Mesoscale model (NAM) were analyzed for accuracy in each regime and overall. The variables examined were the LLJ peak magnitude, timing, and location, and the total rainfall produced in Iowa from 0000 UTC to 1200 UTC on the day of an event. Although weak underforecasting of the LLJ magnitude was found in both regimes, there were no significant shortfalls in magnitude, timing, or location for either regime. However, the model runs significantly underforecasted the magnitude and area of rainfall: all but one run produced a rainfall maximum that was underforecasted, in both LLJ regimes.
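    The study's verification approach, comparing forecast and observed values of each variable, reduces to a simple bias computation. The Python sketch below is a hedged illustration only: the per-case numbers and the bias definition are invented placeholders, not the study's data or method.

```python
# Illustrative sketch of the kind of forecast-versus-observation comparison
# the study describes: NAM-forecast vs. observed LLJ peak magnitude and
# 0000-1200 UTC rainfall. All values here are invented placeholders.

import numpy as np


def bias(forecast: np.ndarray, observed: np.ndarray) -> float:
    """Mean forecast-minus-observed error; negative => underforecasting."""
    return float(np.mean(forecast - observed))


# Hypothetical per-case values (m/s for jet peak, mm for rainfall maxima).
nam_jet_peak = np.array([22.0, 18.5, 25.0])
obs_jet_peak = np.array([23.5, 19.0, 26.0])

nam_rain_max = np.array([18.0, 25.0, 12.0])
obs_rain_max = np.array([35.0, 48.0, 20.0])

print(f"LLJ peak bias: {bias(nam_jet_peak, obs_jet_peak):+.1f} m/s")
print(f"Rainfall bias: {bias(nam_rain_max, obs_rain_max):+.1f} mm")
# A small negative jet bias alongside a large negative rainfall bias would
# match the pattern the study reports: weak jet underforecasting but
# significant rainfall underforecasting.
```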

    Asynchronous Partial Overlay: A New Algorithm for Solving Distributed Constraint Satisfaction Problems

    Distributed Constraint Satisfaction (DCSP) has long been considered an important problem in multi-agent systems research, because many real-world problems can be represented as constraint satisfaction problems and these problems often present themselves in a distributed form. In this article, we present a new complete, distributed algorithm called Asynchronous Partial Overlay (APO) for solving DCSPs that is based on a cooperative mediation process. The primary ideas behind this algorithm are that agents, when acting as mediators, centralize small, relevant portions of the DCSP, that these centralized subproblems overlap, and that agents increase the size of their subproblems along critical paths within the DCSP as the problem solving unfolds. We present empirical evidence showing that APO outperforms other known, complete DCSP techniques.
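    The mediation step at the heart of APO can be sketched compactly: a mediating agent centralizes a small set of variables and the constraints among them, solves that subproblem exhaustively, and, if no consistent assignment exists, would grow the subproblem along a critical path. The Python sketch below illustrates the concept under assumed data structures; it is not the published APO pseudocode.

```python
# Conceptual sketch of cooperative mediation: a mediator centralizes a
# small, overlapping portion of the DCSP and solves it by enumeration.
# Data structures and names are assumptions made for this illustration.

from itertools import product


def mediate(variables, domains, constraints):
    """Centralized search over the mediator's local subproblem.

    variables:   list of variable names the mediator has centralized
    domains:     dict mapping each variable to its candidate values
    constraints: list of (var_a, var_b, predicate) binary constraints
    """
    for assignment in product(*(domains[v] for v in variables)):
        values = dict(zip(variables, assignment))
        if all(pred(values[a], values[b]) for a, b, pred in constraints):
            return values  # consistent assignment for the subproblem
    # No local solution: APO would grow the subproblem along a critical
    # path (pulling in more variables) and mediate again.
    return None


# Example: a three-variable not-equal subproblem centralized by a mediator.
neq = lambda x, y: x != y
solution = mediate(
    ["x1", "x2", "x3"],
    {"x1": [0, 1], "x2": [0, 1], "x3": [0, 1, 2]},
    [("x1", "x2", neq), ("x2", "x3", neq), ("x1", "x3", neq)],
)
print(solution)  # {'x1': 0, 'x2': 1, 'x3': 2}
```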

    On Inoculation Tuberculosis Originating from the Skin (Zur Impftuberculose von der Haut aus)

    Direct Emulation of Control Structures by a Parallel Micro-Computer

    A Cooperative Mediation-Based Protocol for Dynamic Distributed Resource Allocation

    Multi-agent Hierarchical Reinforcement Learning with Dynamic Termination

    In a multi-agent system, an agent's optimal policy will typically depend on the policies chosen by others, so a key issue in multi-agent systems research is predicting the behaviours of others and responding promptly to changes in such behaviours. One obvious possibility is for each agent to broadcast its current intention, for example the currently executed option in a hierarchical reinforcement learning framework. However, this approach results in inflexibility if options have an extended duration and are dynamic. While adjusting the executed option at each step improves flexibility from a single-agent perspective, frequent changes in options can induce inconsistency between an agent's actual behaviour and its broadcast intention. To balance flexibility and predictability, we propose a dynamic termination Bellman equation that allows agents to flexibly terminate their options. We evaluate the model empirically on a set of multi-agent pursuit and taxi tasks, and show that our agents learn to adapt flexibly across scenarios that require different termination behaviours.
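    To make the flexibility/predictability trade-off concrete, the sketch below shows a tabular Q-learner that, at every step, chooses between continuing and terminating its current option, with a small switching cost standing in for the predictability pressure. The penalty and update rule are illustrative assumptions, not the paper's dynamic termination Bellman equation.

```python
# Hedged sketch: Q-learning over "continue vs. terminate the current
# option" decisions, with an assumed cost for switching so that the
# agent's broadcast intention stays predictable. Illustration only.

import random
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.95
SWITCH_COST = 0.05  # discourages over-frequent option termination

Q = defaultdict(float)  # key: (state, current_option, decision)


def choose(state, option, epsilon=0.1):
    """Decide whether to 'continue' the current option or 'terminate' it."""
    if random.random() < epsilon:
        return random.choice(["continue", "terminate"])
    return max(["continue", "terminate"], key=lambda d: Q[(state, option, d)])


def update(state, option, decision, reward, next_state, next_option):
    """One Q-learning step with an assumed penalty for switching options."""
    if decision == "terminate":
        reward -= SWITCH_COST  # terminating makes the agent less predictable
    best_next = max(Q[(next_state, next_option, d)]
                    for d in ("continue", "terminate"))
    key = (state, option, decision)
    Q[key] += ALPHA * (reward + GAMMA * best_next - Q[key])
```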