
    Priority-Based Human Resource Allocation in Business Processes

    In Business Process Management Systems, human resource management typically covers two steps: resource assignment at design time and resource allocation at run time. Although concepts like role-based assignment often yield several potential performers for an activity, there is a lack of mechanisms for prioritizing them, e.g., according to their skills or current workload. In this paper, we address this research gap. More specifically, we introduce an approach for defining resource preferences, grounded on a validated, generic user preference model initially developed for semantic web services. Furthermore, we present an implementation of the approach that demonstrates its feasibility.
    Keywords: preference modeling, preference resolution, priority-based allocation, priority ranking, RAL, resource allocation, SOUP
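The prioritization idea can be sketched as follows. This is a hypothetical illustration of ranking candidate performers by skills and workload, not the paper's SOUP-based implementation; all names and weights are assumptions.

```python
# Hypothetical sketch: rank the candidate performers of an activity by a
# weighted preference score combining skill match and current workload.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    skills: set
    workload: int  # number of currently allocated work items

def priority_ranking(candidates, required_skills, skill_weight=1.0, load_weight=0.5):
    """Return candidates sorted best-first by preference score."""
    def score(r):
        # Fraction of required skills the resource covers, penalized by load.
        skill_match = len(r.skills & required_skills) / max(len(required_skills), 1)
        return skill_weight * skill_match - load_weight * r.workload
    return sorted(candidates, key=score, reverse=True)

pool = [
    Resource("alice", {"sql", "reporting"}, workload=3),
    Resource("bob", {"sql"}, workload=0),
]
ranked = priority_ranking(pool, required_skills={"sql", "reporting"})
```

In this toy scoring, a lightly loaded partial match can outrank a fully skilled but busy resource; real preference resolution would weigh such trade-offs according to the user preference model.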

    Interpreting an action from what we perceive and what we expect

    In update logic as studied by Baltag, Moss, Solecki and van Benthem, little attention is paid to the interpretation of an action by an agent, which is simply assumed to depend on the situation. This is actually a complex issue that nevertheless complies with some logical dynamics. In this paper, we tackle this topic. We also deal with actions that change propositional facts of the situation. In parallel, we propose a formalism to accurately represent an agent's epistemic state based on hyperreal numbers. In that respect, we use infinitesimals to express what would surprise the agents (and by how much) by contradicting their beliefs. We also use a subjective probability to model the notion of belief. It turns out that our probabilistic update mechanism satisfies the AGM postulates of belief revision.
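The use of infinitesimals can be illustrated with a simplified sketch, not the paper's formalism: represent each world's probability as c·ε^k, where ε is an infinitesimal and the order k measures how surprising the world is. Conditioning on an observation keeps the compatible worlds and renormalizes by the lowest surprise order, so evidence that contradicted prior beliefs promotes previously infinitesimal worlds to standard probability.

```python
# Illustrative sketch (assumed representation, not the paper's): a belief
# state maps worlds to (coefficient, epsilon-order) pairs, i.e. c * eps^k.
def condition(beliefs, consistent):
    """beliefs: {world: (coeff, order)}; consistent: worlds compatible
    with the observation. Returns the revised belief state."""
    kept = {w: ck for w, ck in beliefs.items() if w in consistent}
    min_order = min(k for _, k in kept.values())
    total = sum(c for c, k in kept.values() if k == min_order)
    # Worlds at the minimal surprise order get standard probability;
    # more surprising worlds stay infinitesimally likely.
    return {w: (c / total, k - min_order) for w, (c, k) in kept.items()}

# The agent firmly believes "sunny"; observing otherwise forces revision.
prior = {"sunny": (1.0, 0), "rain": (0.5, 1), "snow": (0.5, 1)}
posterior = condition(prior, consistent={"rain", "snow"})
```

After the surprising observation, the formerly infinitesimal worlds "rain" and "snow" are promoted to standard probability 0.5 each, which is the kind of behavior AGM-style revision requires.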

    Human–agent collaboration for disaster response

    In the aftermath of major disasters, first responders are typically overwhelmed with large numbers of spatially distributed search and rescue tasks, each with its own requirements. Moreover, responders have to operate in highly uncertain and dynamic environments where new tasks may appear and hazards may be spreading across the disaster space. Hence, rescue missions may need to be re-planned as new information comes in, tasks are completed, or new hazards are discovered. Finding an optimal allocation of resources to complete all the tasks is a major computational challenge. In this paper, we use decision-theoretic techniques to solve the task allocation problem posed by emergency response planning and then deploy our solution as part of an agent-based planning tool in real-world field trials. By so doing, we are able to study the interactional issues that arise when humans are guided by an agent. Specifically, we develop an algorithm based on a multi-agent Markov decision process representation of the task allocation problem and show that it outperforms standard baseline solutions. We then integrate the algorithm into a planning agent that responds to requests for tasks from participants in a mixed-reality location-based game, called AtomicOrchid, that simulates disaster response settings in the real world. We then run a number of trials of our planning agent and compare it against a purely human-driven system. Our analysis of these trials shows that human commanders adapt to the planning agent by taking on a more supervisory role, and that providing humans with the flexibility of requesting plans from the agent allows them to perform more tasks more efficiently than using purely human interactions to allocate tasks. We also discuss how such flexibility could lead to poor performance if left unchecked.
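The paper's algorithm solves a multi-agent MDP; as a minimal illustration of the allocation problem itself, here is the kind of greedy baseline such a solver is compared against, repeatedly assigning the responder-task pair with the shortest travel distance. All coordinates and names are assumptions for the example.

```python
# Minimal illustration (not the paper's MMDP solver): a myopic greedy
# baseline that matches responders to spatially distributed tasks by
# nearest-pair distance; re-running it implements simple re-planning.
import math

def greedy_allocate(responders, tasks):
    """responders: {name: (x, y)}; tasks: {name: (x, y)}.
    Returns {task: responder} chosen by nearest-pair greedy matching."""
    assignment = {}
    free = dict(responders)
    pending = dict(tasks)
    while free and pending:
        # Pick the globally closest (responder, task) pair still available.
        r, t = min(
            ((r, t) for r in free for t in pending),
            key=lambda rt: math.dist(free[rt[0]], pending[rt[1]]),
        )
        assignment[t] = r
        del free[r], pending[t]
    return assignment

plan = greedy_allocate(
    responders={"r1": (0, 0), "r2": (10, 0)},
    tasks={"t1": (1, 0), "t2": (9, 0)},
)
```

A myopic matcher like this ignores task deadlines, hazards, and future task arrivals, which is precisely what a decision-theoretic MMDP formulation is designed to account for.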

    Can bounded and self-interested agents be teammates? Application to planning in ad hoc teams

    Planning for ad hoc teamwork is challenging because it involves agents collaborating without any prior coordination or communication. The focus is on principled methods for a single agent to cooperate with others. This motivates investigating the ad hoc teamwork problem in the context of self-interested decision-making frameworks. Agents engaged in individual decision making in multiagent settings face the task of having to reason about other agents' actions, which may in turn involve reasoning about others. An established approximation that operationalizes this approach is to bound the infinite nesting from below by introducing level 0 models. For the purposes of this study, individual, self-interested decision making in multiagent settings is modeled using interactive dynamic influence diagrams (I-DIDs). These are graphical models with the benefit that they naturally offer a factored representation of the problem, allowing agents to ascribe dynamic models to others and reason about them. We demonstrate that an implication of bounded, finitely-nested reasoning by a self-interested agent is that it may not obtain optimal team solutions in cooperative settings when it is part of a team. We address this limitation by including models at level 0 whose solutions involve reinforcement learning. We show how the learning is integrated into planning in the context of I-DIDs. This facilitates optimal teammate behavior, and we demonstrate its applicability to ad hoc teamwork on several problem domains and configurations.
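The level-0 idea can be sketched as follows: instead of fixing a static model of the teammate at the bottom of the nesting, the lowest-level model learns its policy by reinforcement learning. The tabular Q-learning routine below is a generic sketch under assumed interfaces, not the paper's I-DID integration.

```python
# Sketch of a learning level-0 model: tabular Q-learning over a small
# environment supplied as step(s, a) -> (next_state, reward, done).
import random

def q_learn(states, actions, step, episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    """Returns a greedy policy {state: action} after epsilon-greedy training."""
    Q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = states[0]
        for _ in range(100):  # cap episode length
            if random.random() < eps:
                a = random.choice(actions)          # explore
            else:
                a = max(actions, key=lambda b: Q[(s, b)])  # exploit
            s2, r, done = step(s, a)
            best_next = 0.0 if done else max(Q[(s2, b)] for b in actions)
            # Standard temporal-difference update.
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            if done:
                break
            s = s2
    return {s: max(actions, key=lambda b: Q[(s, b)]) for s in states}
```

In the paper's setting, a policy learned this way would stand in for the fixed level-0 model, so the higher-level planner ascribes adaptive rather than static behavior to its teammate.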