
    An Online Decision-Theoretic Pipeline for Responder Dispatch

    The problem of dispatching emergency responders to service traffic accidents, fires, distress calls, and crimes plagues urban areas across the globe. While such problems have been studied extensively, most approaches are offline. Such methodologies fail to capture the dynamically changing environments in which critical emergency response occurs and therefore fail to be implemented in practice. Any holistic approach towards creating a pipeline for effective emergency response must also address the challenges it subsumes: predicting when and where incidents happen and understanding the changing environmental dynamics. We describe a system that deals with all of these problems in an online manner, meaning that the models are updated with streaming data sources. We highlight why such an approach is crucial to the effectiveness of emergency response, and we present an algorithmic framework that can compute promising actions for a given decision-theoretic model for responder dispatch. We argue that carefully crafted heuristic measures can balance the trade-off between computational time and solution quality, and we highlight why such an approach is more scalable and tractable than traditional approaches. We also present an online mechanism for incident prediction, as well as an approach based on recurrent neural networks for learning and predicting environmental features that affect responder dispatch. We compare our methodology with the prior state of the art and with dispatch strategies used in the field; the comparison shows that our approach reduces response time while drastically reducing computational time. Comment: Appeared in ICCPS 201
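
    The abstract does not spell out the heuristic measures it refers to, so the following is only a minimal, hypothetical sketch of what a myopic decision-theoretic dispatch rule can look like: score each idle responder by its travel time to the new incident plus a penalty for the predicted incident rate of the region it would leave uncovered, and send the cheapest one. All names (travel_time, busy, incident_rate, coverage_weight) are illustrative assumptions, not the paper's model.

        import numpy as np

        def dispatch(travel_time: np.ndarray, busy: np.ndarray,
                     incident_rate: np.ndarray, coverage_weight: float = 1.0) -> int:
            """Pick a responder for a new incident, or return -1 if none are idle.

            travel_time[r]   -- minutes for responder r to reach the incident
            busy[r]          -- 1 if responder r is already serving an incident
            incident_rate[r] -- predicted incident rate in the region r covers
            """
            best, best_score = -1, np.inf
            for r in range(len(travel_time)):
                if busy[r]:
                    continue
                # expected cost = response time + penalty for leaving r's region uncovered
                score = travel_time[r] + coverage_weight * incident_rate[r]
                if score < best_score:
                    best, best_score = r, score
            return best

        # Unit 0 is closest, but it covers a hot region; with a high coverage weight
        # the heuristic dispatches unit 1 instead (unit 2 is busy).
        print(dispatch(np.array([4.0, 7.0, 3.0]), np.array([0, 0, 1]),
                       np.array([0.8, 0.1, 0.2]), coverage_weight=5.0))   # -> 1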

    Artificial Intelligence for Emergency Response

    Emergency response management (ERM) is a challenge faced by communities across the globe. First responders must attend to various incidents, such as fires, traffic accidents, and medical emergencies, and they must respond quickly to minimize the risk to human life. Consequently, considerable attention has been devoted to studying emergency incidents and response over the last several decades. In particular, data-driven models help reduce human and financial loss and improve design codes, traffic regulations, and safety measures. This tutorial paper explores four sub-problems within emergency response: incident prediction, incident detection, resource allocation, and resource dispatch. We present mathematical formulations and broad solution frameworks for each of these problems. We also share open-source (synthetic) data from a large metropolitan area in the USA for future work on data-driven emergency response. Comment: This is a pre-print of a book chapter to appear in Vorobeychik, Yevgeniy and Mukhopadhyay, Ayan (Eds.) (2023), Artificial Intelligence and Society, ACM Press.
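
    As a concrete (and purely illustrative) instance of the incident-prediction sub-problem, one common baseline formulation treats incidents in each spatial cell as a homogeneous Poisson process. The sketch below estimates per-cell rates from synthetic counts and converts them into the probability of at least one incident in the next hour; the numbers and variable names are assumptions for illustration, not the shared dataset or the chapter's exact formulation.

        import numpy as np

        historical_counts = np.array([12, 3, 45, 7])   # incidents per cell over the window
        observation_hours = 24.0 * 30                  # 30 days of history

        rate_per_hour = historical_counts / observation_hours      # MLE of lambda for each cell
        p_next_hour = 1.0 - np.exp(-rate_per_hour * 1.0)           # P(N >= 1) in a 1-hour window

        for cell, p in enumerate(p_next_hour):
            print(f"cell {cell}: lambda={rate_per_hour[cell]:.4f}/hr, P(incident next hour)={p:.3f}")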

    Analyzing the Impact of Blood Transfusion Kits and Triage Misclassification Errors for Military Medical Evacuation Dispatching Policies via Approximate Dynamic Programming

    Members of the armed forces rely heavily on an effective and efficient medical evacuation (MEDEVAC) process for evacuating casualties from the battlefield to medical treatment facilities (MTFs) during combat operations. This thesis examines the MEDEVAC dispatching problem and seeks to determine an optimal policy for dispatching a MEDEVAC unit, if any, when a 9-line MEDEVAC request arrives, taking into account triage classification errors and the possibility of having blood transfusion kits on board select MEDEVAC units. A discounted, infinite-horizon, continuous-time Markov decision process (MDP) model is formulated to examine this problem and to compare the generated dispatching policies to the myopic policy of sending the closest available unit. We utilize an approximate dynamic programming (ADP) technique that leverages a random forest value function approximation within an approximate policy iteration algorithmic framework to develop high-quality policies for both a small-scale problem instance and a large-scale problem instance that cannot be solved to optimality. A representative planning scenario involving joint combat operations in South Korea is developed and used to investigate the differences between the various policies. Results from the analysis indicate that applying ADP techniques can improve current practice by as much as 29% with regard to a life-saving performance metric. This research is of particular interest to the military medical community and can inform the procedures of future military MEDEVAC operations.
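
    The sketch below illustrates the general shape of the technique named above, approximate policy iteration with a random-forest value-function approximation, on a deliberately tiny, made-up dispatch MDP. It is not the thesis's MEDEVAC model; every name (ToyDispatchMDP, features, rollout lengths, and so on) is a hypothetical stand-in.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)

        class ToyDispatchMDP:
            """Toy stand-in: state = busy flags of 3 units, action = unit to send (or -1)."""
            n_units, gamma = 3, 0.95

            def step(self, state, action):
                state, reward = state.copy(), 0.0
                if action >= 0 and state[action] == 0:          # dispatch an idle unit
                    reward = 1.0 - 0.2 * action                 # lower-indexed units are "closer"
                    state[action] = 1
                state = np.where(rng.random(self.n_units) < 0.3, 0, state)  # units finish service
                return state, reward

        def features(state, action):
            return np.concatenate([state, [action]])

        mdp, vfa = ToyDispatchMDP(), None
        actions = list(range(ToyDispatchMDP.n_units)) + [-1]
        for _ in range(3):                                      # approximate policy iteration loop
            X, y = [], []
            for _ in range(100):                                # Monte Carlo policy evaluation
                state = rng.integers(0, 2, mdp.n_units)
                ret, discount, first = 0.0, 1.0, None
                for t in range(20):
                    if vfa is None or rng.random() < 0.1:       # exploration
                        action = int(rng.choice(actions))
                    else:                                       # greedy w.r.t. current approximation
                        q = vfa.predict([features(state, a) for a in actions])
                        action = actions[int(np.argmax(q))]
                    if t == 0:
                        first = features(state, action)
                    state, r = mdp.step(state, action)
                    ret += discount * r
                    discount *= mdp.gamma
                X.append(first)
                y.append(ret)
            # Policy improvement: refit the random forest on the sampled returns
            vfa = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

        print(vfa.predict([features(np.zeros(3, dtype=int), a) for a in actions]))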

    Emergent Incident Response for Unmanned Warehouses with Multi-agent Systems*

    Unmanned warehouses are an important part of logistics, and improving their operational efficiency can effectively enhance overall service efficiency. However, because unmanned warehouse systems are complex and susceptible to errors, incidents may occur during their operation, most often in inbound and outbound operations, which can decrease operational efficiency. Hence, it is crucial to improve the response to such incidents. This paper proposes a collaborative optimization algorithm for emergent incident response based on Safe-MADDPG. To meet safety requirements during emergent incident response, we investigated the intrinsic hidden relationships between the various factors involved. By incorporating the constraints on agents during the emergent incident response process and the constraints that the dynamic warehouse environment imposes on agents, the algorithm reduces safety risks and avoids chain accidents; this enables an unmanned system to complete emergent incident response tasks and achieve its optimization objectives: (1) minimizing the losses caused by emergent incidents, and (2) maximizing the operational efficiency of inbound and outbound operations during the response process. A series of experiments conducted in a simulated unmanned warehouse scenario demonstrates the effectiveness of the proposed method. Comment: 13 pages, 7 figures
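
    Safe-MADDPG itself is not spelled out in the abstract; the sketch below only illustrates the constrained-RL idea that "safe" policy methods of this kind typically build on, namely maximizing task reward subject to a budget on expected safety cost via a Lagrangian dual variable. The toy reward/cost functions, step sizes, and budget are invented for illustration and have nothing to do with the warehouse model.

        # Toy primal-dual update: maximize reward(p) subject to cost(p) <= cost_budget.
        cost_budget, lmbda, eps = 0.2, 0.0, 1e-3

        def evaluate(p):
            """Stand-in for rollouts: a larger parameter gains reward but raises safety cost."""
            return 1.0 - (p - 0.8) ** 2, 0.5 * p          # (reward, cost)

        def lagrangian(p, lmbda):
            reward, cost = evaluate(p)
            return reward - lmbda * (cost - cost_budget)

        p = 0.9
        for _ in range(2000):
            # Primal ascent on the Lagrangian via a finite-difference gradient
            grad = (lagrangian(p + eps, lmbda) - lagrangian(p - eps, lmbda)) / (2 * eps)
            p += 0.01 * grad
            # Dual ascent: raise lambda while the safety constraint is violated
            lmbda = max(0.0, lmbda + 0.05 * (evaluate(p)[1] - cost_budget))

        print(f"parameter={p:.3f}, safety cost={evaluate(p)[1]:.3f} (budget {cost_budget})")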

    Decision Making in Non-Stationary Environments with Policy-Augmented Search

    Sequential decision-making under uncertainty is present in many important problems. Two popular approaches for tackling such problems are reinforcement learning and online search (e.g., Monte Carlo tree search). While the former learns a policy by interacting with the environment (typically done before execution), the latter uses a generative model of the environment to sample promising action trajectories at decision time. Decision-making is particularly challenging in non-stationary environments, where the environment in which an agent operates can change over time. Both approaches have shortcomings in such settings -- on the one hand, policies learned before execution become stale when the environment changes and relearning takes both time and computational effort. Online search, on the other hand, can return sub-optimal actions when there are limitations on allowed runtime. In this paper, we introduce Policy-Augmented Monte Carlo Tree Search (PA-MCTS), which combines action-value estimates from an out-of-date policy with an online search using an up-to-date model of the environment. We prove theoretical results showing conditions under which PA-MCTS selects the one-step optimal action and also bound the error accrued while following PA-MCTS as a policy. We compare and contrast our approach with AlphaZero, another hybrid planning approach, and Deep Q-Learning on several OpenAI Gym environments. Through extensive experiments, we show that under non-stationary settings with limited time constraints, PA-MCTS outperforms these baselines. Comment: Extended Abstract accepted for presentation at AAMAS 202
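
    The following is a minimal sketch of the core action-selection idea described above, assuming the stale policy's action-value estimates and the fresh search estimates are blended with a simple convex weight alpha. The function name, the weighting form, and the toy numbers are illustrative assumptions rather than the paper's exact algorithm, which also includes the tree search itself and theoretical conditions on the weighting.

        import numpy as np

        def pa_mcts_action(q_policy: np.ndarray, q_search: np.ndarray, alpha: float) -> int:
            """Return argmax_a of alpha * Q_policy(s, a) + (1 - alpha) * Q_search(s, a).

            alpha near 1 trusts the pre-trained (possibly stale) policy;
            alpha near 0 trusts the online search run in the up-to-date model.
            """
            return int(np.argmax(alpha * q_policy + (1.0 - alpha) * q_search))

        # The stale policy prefers action 0, but the search in the changed
        # environment prefers action 2; a low alpha defers to the search.
        q_policy = np.array([1.0, 0.4, 0.2])
        q_search = np.array([0.1, 0.3, 0.9])
        print(pa_mcts_action(q_policy, q_search, alpha=0.3))   # -> 2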

    BNAIC 2008: Proceedings of BNAIC 2008, the twentieth Belgian-Dutch Artificial Intelligence Conference
