
    Helping humans and agents avoid undesirable consequences with models of intervention

    When working in an unfamiliar online environment, it can be helpful to have an observer that can intervene and guide a user toward a desirable outcome while avoiding undesirable outcomes or frustration. The Intervention Problem is deciding when to intervene in order to help a user. It is similar to, but distinct from, plan recognition, because the observer must not only recognize the user's intended goals but also decide when to intervene to help the user. In this dissertation, we formalize a family of intervention problems that addresses two sub-problems: (1) the Intervention Recognition Problem and (2) the Intervention Recovery Problem.

    The Intervention Recognition Problem views the environment as a state transition system in which an agent (or a human user) executes actions that change the environment from one state to the next in order to achieve a desirable outcome. Some states in the environment are undesirable, the user lacks the ability to recognize them, and the intervening agent wants to help the user avoid them. We model the environment as a classical planning problem and discuss three intervention models, each addressing different dimensions of the Intervention Recognition Problem: the actors in the environment, the information hidden from the intervening agent, and the type of and noise in the observations.

    The first model, Intervention by Recognizing Actions Enabling Multiple Undesirable Consequences, is motivated by a study in which we observed how home computer users practice cyber-security and unwittingly take actions that put their online safety at risk. The model is defined for an environment with three agents: the user, the attacker, and the intervening agent. The intervening agent helps the user reach a desirable goal that is hidden from it by recognizing critical actions that enable multiple undesirable consequences. We treat recognizing critical actions as a multi-factor decision problem over three domain-independent metrics: certainty, timeliness, and desirability. The three metrics capture the trade-off between the safety and the freedom of the observed agent when selecting critical actions for intervention (a toy scoring sketch follows below).

    In the second model, Intervention as Classical Planning, the intervening agent observes a user and a competitor attempting to achieve different goals in the same environment. A key difference from the first model is that the intervening agent knows both the user's desirable goal and the undesirable state. The intervening agent exploits the classical planning representation of the environment and uses automated planning to project the possible outcomes, both exactly and approximately. To recognize when intervention is required, the observer analyzes the plan suffixes leading to the user's desirable goal and to the undesirable state, and uses machine learning to learn the differences between plans that achieve the desirable goal and plans that reach the undesirable state. As in the first model, learning the differences between safe and unsafe plans lets the intervening agent balance blocking unsafe actions against permitting the actions the user needs, preserving some freedom.
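To make the first model's multi-factor decision concrete, here is a minimal sketch assuming a simple weighted score; the weights, threshold, action names, and combination rule are illustrative assumptions, not the dissertation's actual formulation.

```python
from dataclasses import dataclass

@dataclass
class CandidateAction:
    name: str
    certainty: float     # confidence this action enables an undesirable consequence (0..1)
    timeliness: float    # how soon the consequence would follow without intervention (0..1)
    desirability: float  # how acceptable the projected outcome is for the user (0..1)

def intervention_score(a: CandidateAction,
                       w_cert: float = 0.5,
                       w_time: float = 0.3,
                       w_desir: float = 0.2) -> float:
    """Higher score = stronger case for intervening on this action.

    Low desirability raises the score; the weights trade the user's
    freedom (fewer interruptions) against safety (earlier intervention).
    """
    return w_cert * a.certainty + w_time * a.timeliness + w_desir * (1.0 - a.desirability)

observed = [
    CandidateAction("open-email-attachment", certainty=0.9, timeliness=0.8, desirability=0.1),
    CandidateAction("browse-news-site", certainty=0.2, timeliness=0.1, desirability=0.9),
]

THRESHOLD = 0.5  # tune to shift the safety/freedom balance
for a in observed:
    if intervention_score(a) >= THRESHOLD:
        print(f"intervene on {a.name} (score={intervention_score(a):.2f})")
```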
    In the third model, Human-aware Intervention, we assume that the user is a human solving a cognitively engaging planning task. Unlike an automated planner, human users cannot systematically apply heuristics to search for the best solution; they often make mistakes and spend time exploring the search space of the planning problem. The complication this adds to the Intervention Recognition Problem is that deciding to intervene by analyzing plan suffixes generated by an automated planner is no longer feasible. Using a cognitively engaging puzzle-solving task (Rush Hour), we study how human users solve the puzzle as a planning task and develop the Human-aware Intervention model, which combines automated planning and machine learning. The intervening agent uses a domain-specific feature set better suited to human behavior to decide in real time whether to intervene with the human user. Our experiments on benchmark planning domains and human subject studies show that the three intervention recognition models outperform existing plan recognition algorithms in predicting when intervention is required.

    Our solution to the Intervention Recovery Problem goes beyond typical preventative measures to help the human user recover from intervention. We propose the Interactive Human-aware Intervention, in which a human user solves a cognitively engaging planning task with the assistance of an agent that implements the Human-aware Intervention. Unlike typical preventive measures, where the agent executes actions that modify the domain so that the undesirable plan cannot progress (e.g., blocking an action), our approach interactively guides the human user toward the solution of the planning task by revealing information about the remaining planning task. We evaluate the Interactive Human-aware Intervention using both subjective and objective measures in a human subject study.
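The third model's real-time decision could look roughly like the sketch below, which trains an off-the-shelf classifier on hand-crafted behavioral features. The feature set, training data, and choice of RandomForestClassifier are invented for illustration; the dissertation's actual features and model are not specified here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-move features observed from the human solver:
# [moves_made, est_distance_to_goal, states_revisited, seconds_since_last_move]
X_train = np.array([
    [3, 5.0, 0, 2.1],    # productive exploration -> no intervention
    [20, 9.0, 6, 14.5],  # stuck, revisiting states -> intervene
    [5, 4.0, 1, 3.0],
    [18, 8.0, 5, 11.0],
])
y_train = np.array([0, 1, 0, 1])  # 1 = intervention was warranted

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

def should_intervene(features: list) -> bool:
    """Called after each observed move; returns True to interrupt the user."""
    return bool(clf.predict(np.array([features]))[0])

print(should_intervene([22, 9.5, 7, 16.0]))  # likely True: user appears stuck
```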

    A simplistic approach to keyhole plan recognition

    When applying plan recognition to Human-Computer Interaction, one must cope with users exhibiting a large amount of reactive behaviour: users who change tasks, or change strategies for achieving tasks. Most current approaches to keyhole plan recognition do not address this problem. We describe an application domain for plan recognition where users exhibit reactive rather than plan-based behaviour, and where existing approaches to plan recognition do not perform well. To enable plan recognition in this domain, we have developed an extremely simplistic mechanism for keyhole plan recognition, "intention guessing". The algorithm is based on descriptions of observable behaviour and is able to recognize certain instances of plan failures, suboptimal plans, and erroneous actions. At run-time, the algorithm keeps track of only a limited number of the most recent actions, which makes it "forgetful". This property makes the algorithm suitable for domains where users frequently change strategies.
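A minimal sketch of such a "forgetful" recognizer, assuming behaviours are given as flat action patterns; the patterns, window size, and matching rule below are invented placeholders, not the paper's algorithm.

```python
from collections import deque

# Invented behaviour descriptions: each maps a name to observable actions.
BEHAVIOURS = {
    "print-document": ["open-file", "select-printer", "click-print"],
    "share-document": ["open-file", "attach-to-email", "send-email"],
}

WINDOW = 3  # only this many recent actions are remembered ("forgetful")
recent = deque(maxlen=WINDOW)

def guess_intention(action):
    """Record an observed action; return the best-matching behaviour, if any."""
    recent.append(action)
    best, best_overlap = None, 0
    for name, pattern in BEHAVIOURS.items():
        overlap = sum(1 for a in recent if a in pattern)
        if overlap > best_overlap:
            best, best_overlap = name, overlap
    return best

for a in ["open-file", "attach-to-email", "send-email"]:
    print(a, "->", guess_intention(a))
```

Because the deque discards actions older than the window, an abandoned strategy stops influencing the guess after a few moves, which is what makes the approach tolerant of reactive, strategy-switching users.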

    Smartphone sensing platform for emergency management

    The increasingly sophisticated sensors supported by modern smartphones open up novel research opportunities, such as mobile phone sensing. One of the most challenging of these research areas is context-aware activity recognition. The SmartRescue project takes advantage of smartphone sensing, processing, and communication capabilities to monitor hazards and track people in a disaster. The goal is to help crisis managers and members of the public with early hazard detection and prediction, and with devising risk-minimizing evacuation plans when disaster strikes. In this paper we suggest a novel smartphone-based communication framework that uses machine learning techniques to intelligently process sensor readings into useful information for crisis responders. Core to the framework is a content-based publish-subscribe mechanism that allows flexible sharing of sensor data and computation results. We also evaluate a preliminary implementation of the platform, involving a smartphone app that reads and shares mobile phone sensor data for activity recognition. Comment: 11th International Conference on Information Systems for Crisis Response and Management, ISCRAM 2014 (2014).
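A minimal sketch of a content-based publish-subscribe bus of this kind, where subscribers register predicates over readings rather than topic names; the class and reading layout are assumptions for illustration, not the SmartRescue API.

```python
from typing import Callable

Reading = dict  # e.g. {"sensor": "accelerometer", "phone": "p17", "value": 9.8}

class ContentBus:
    """Content-based pub/sub: delivery is decided by predicates, not topics."""

    def __init__(self) -> None:
        self._subs: list = []  # (predicate, handler) pairs

    def subscribe(self, predicate: Callable, handler: Callable) -> None:
        self._subs.append((predicate, handler))

    def publish(self, reading: Reading) -> None:
        for predicate, handler in self._subs:
            if predicate(reading):
                handler(reading)

bus = ContentBus()
# A crisis responder only wants high accelerometer readings (possible falls).
bus.subscribe(lambda r: r["sensor"] == "accelerometer" and r["value"] > 15,
              lambda r: print("alert:", r))

bus.publish({"sensor": "accelerometer", "phone": "p17", "value": 22.4})  # delivered
bus.publish({"sensor": "gps", "phone": "p17", "value": (58.9, 5.7)})     # ignored
```

Matching on content rather than fixed topics is what lets new responders express interests (say, readings from one phone, or values above a threshold) without the publishing phones changing anything.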

    Subject benchmark statement: forensic science: draft for consultation
