
    Sampling based approaches for minimizing regret in uncertain Markov Decision Problems (MDPs)

    National Research Foundation (NRF) Singapore under Singapore-MIT Alliance for Research and Technology (SMART) Center for Future Mobility

    Towards a science of security games

    Security is a critical concern around the world. In many domains, from counter-terrorism to sustainability, limited security resources prevent complete security coverage at all times. Instead, these limited resources must be scheduled (or allocated, or deployed) while simultaneously taking into account the importance of different targets, the responses of the adversaries to the security posture, and the potential uncertainties in adversary payoffs and observations. Computational game theory can help generate such security schedules. Indeed, casting the problem as a Stackelberg game, we have developed new algorithms that are now deployed over multiple years in multiple applications for scheduling of security resources. These applications are leading to real-world, use-inspired research in the emerging research area of “security games”. The research challenges posed by these applications include scaling up security games to real-world-sized problems, handling multiple types of uncertainty, and dealing with the bounded rationality of human adversaries.
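    The Stackelberg setup described in this abstract can be illustrated with a toy instance. The sketch below is not one of the deployed algorithms: it simply brute-forces a coarse grid of coverage probabilities for a hypothetical three-target game (all payoffs invented), with the attacker best-responding to the defender's mixed strategy and breaking ties in the defender's favor (the strong Stackelberg convention):

```python
from itertools import product

# Hypothetical 3-target game; each tuple is (defender payoff if covered,
# defender payoff if uncovered, attacker payoff if covered, attacker payoff
# if uncovered) when that target is attacked. All numbers are illustrative.
targets = [(0, -10, -1, 5), (0, -5, -1, 3), (0, -3, -1, 1)]
budget = 1.0  # one defender resource, split probabilistically across targets

def utilities(cov):
    """Expected (defender, attacker) utility per target under coverage cov."""
    return [(c * dc + (1 - c) * du, c * ac + (1 - c) * au)
            for c, (dc, du, ac, au) in zip(cov, targets)]

best = None
steps = [i / 20 for i in range(21)]        # coarse grid of coverage levels
for cov in product(steps, repeat=len(targets)):
    if sum(cov) > budget + 1e-9:
        continue                           # infeasible allocation
    us = utilities(cov)
    amax = max(a for _, a in us)           # attacker's best-response value
    # Strong Stackelberg: among the attacker's best responses, assume the
    # tie is broken in the defender's favor.
    d = max(du for du, au in us if abs(au - amax) < 1e-9)
    if best is None or d > best[0]:
        best = (d, cov)

print(best)  # (defender utility, coverage vector) at the grid optimum
```

    A real solver would replace the grid search with linear programming, but the example shows the essential structure: the defender commits to a mixed strategy first, and the attacker observes it and best-responds.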

    Learning bounded optimal behavior using Markov decision processes

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2007. Includes bibliographical references (p. 171-175). Creating agents that behave rationally in the real world is one goal of Artificial Intelligence. A rational agent is one that takes, at each point in time, the optimal action such that its expected utility is maximized. However, to determine the optimal action the agent may need to engage in lengthy deliberations or computations. The effect of computation is generally not explicitly considered when performing deliberations. In reality, spending too much time in deliberation may yield high-quality plans that do not satisfy the natural timing constraints of a problem, making them effectively useless. Enforcing shortened deliberation times may yield timely plans, but these may be of diminished utility. These two cases suggest the possibility of optimizing an agent's deliberation process. This thesis proposes a framework for generating metalevel controllers that select computational actions to perform by optimally trading off their benefit against their cost. The metalevel optimization problem is posed within a Markov Decision Process framework and is solved off-line to determine a policy for carrying out computations. Once the optimal policy is determined, it serves efficiently as an online metalevel controller that selects computational actions conditioned on the current state of computation. Solving for the exact policy of the metalevel optimization problem becomes computationally intractable as problem size grows. A learning approach that takes advantage of the problem structure is proposed to generate approximate policies, which are shown to perform well relative to optimal policies. Metalevel policies are generated for two types of problem scenarios, distinguished by the representation of the cost of computation.
    In the first case, the cost of computation is explicitly defined as part of the problem description. In the second case, it is implicit in the timing constraints of the problem. Results are presented to validate the beneficial effects of metalevel planning over traditional methods when the cost of computation has a significant effect on the utility of a plan. by Hon Fai Vuong. Ph.D.
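    The metalevel idea, solving an MDP whose actions are computations, can be sketched with a toy instance of the first scenario (explicit computation cost). Everything below — the quality levels, improvement probability, and per-step cost — is an invented example, not the thesis's actual model:

```python
# Toy metalevel MDP: states are plan-quality levels 0..N. Each "compute"
# step costs COST and raises quality by one with probability P_IMPROVE;
# "act" stops deliberating and yields the current quality as utility.
N, COST, P_IMPROVE, GAMMA = 5, 0.4, 0.7, 1.0

V = [0.0] * (N + 1)
policy = ["act"] * (N + 1)
for _ in range(200):                       # value iteration to convergence
    newV = V[:]
    for q in range(N + 1):
        act = float(q)                     # stop and execute the current plan
        nxt = min(q + 1, N)
        compute = -COST + GAMMA * (P_IMPROVE * V[nxt] +
                                   (1 - P_IMPROVE) * V[q])
        newV[q] = max(act, compute)
        policy[q] = "act" if act >= compute else "compute"
    if max(abs(a - b) for a, b in zip(V, newV)) < 1e-9:
        V = newV
        break
    V = newV

print(policy)  # per-state choice of computational action vs. acting now
```

    The solved policy is exactly the kind of offline artifact the abstract describes: at run time the controller just looks up the current computation state and reads off whether to keep deliberating or to commit to the plan.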

    Decision-theoretic planning of clinical patient management

    When a doctor is treating a patient, he is constantly facing decisions. From the externally visible signs and symptoms he must establish a hypothesis of what might be wrong with the patient; then he must decide whether additional diagnostic procedures are required to verify this hypothesis, whether therapeutic action is necessary, and which post-therapeutic trajectory is to be followed. All these bedside decisions are related to each other, and the whole task of clinical patient management can therefore be regarded as a form of planning. In Artificial Intelligence, planning is traditionally studied for situations that are highly predictable. An important characteristic of medical decisions, however, is that they often must be made under conditions of uncertainty; this is due to errors in the results of diagnostic tests, limitations in medical knowledge, and unpredictability of the future course of disease. Decision making under uncertainty is traditionally studied in the field of decision theory; in this thesis, we investigate the problem of clinical patient management as action planning using decision-theoretic principles, or decision-theoretic planning for short.
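    The bedside trade-off described here — whether an imperfect diagnostic test is worth running before acting — is, at its core, an expected-utility calculation with a Bayesian update. The numbers below (prior, test error rates, utilities) are purely illustrative, not from the thesis:

```python
# Hypothetical decision: treat now, or first run an imperfect test?
P_DISEASE = 0.3                   # prior probability of disease
SENS, SPEC = 0.9, 0.8             # test sensitivity and specificity
U = {("treat", True): 80, ("treat", False): 60,   # utility(action, diseased)
     ("wait",  True): 20, ("wait",  False): 100}

def eu(p):
    """Expected utility of the best action given P(disease) = p."""
    return max(p * U[(a, True)] + (1 - p) * U[(a, False)]
               for a in ("treat", "wait"))

# Without testing: act directly on the prior.
eu_no_test = eu(P_DISEASE)

# With testing: Bayes-update on each possible outcome, then act optimally.
p_pos = SENS * P_DISEASE + (1 - SPEC) * (1 - P_DISEASE)
post_pos = SENS * P_DISEASE / p_pos
post_neg = (1 - SENS) * P_DISEASE / (1 - p_pos)
eu_test = p_pos * eu(post_pos) + (1 - p_pos) * eu(post_neg)

print(eu_test - eu_no_test)  # expected value of the diagnostic information
```

    When the difference is positive, the test is worth ordering (ignoring its own cost and risk); chaining such decisions over time is what turns the problem into the planning task the thesis studies.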

    Life Cycle Evaluation under Uncertain Environmental Policies Using a Ship-Centric Markov Decision Process Framework.

    A novel design evaluation framework is offered to improve early stage design decisions relating to environmental policy change and similar non-technical disturbances. The goal of this research is to overcome the traditional treatment of policy as a static, external constraint and to address in early stage design the potential disruptions to performance posed by regulatory policy change. While a designer’s primary purpose is not to affect policy, it is the responsibility of the designer to be cognizant of how policy can change, of how to assess the implications of a policy change, and of how to deliver performance despite change. This research addresses a present need for a rigorous means to keep strategic pace with policy evolution. Use of a Markov Decision Process (MDP) framework serves as a unifying foundation for incorporating temporal activities into early stage design considerations. The framework employs probabilistic methods via a state-based structure to holistically address policy uncertainty. Presented research enables exploration of the performance of a design solution through time in the face of environmental instabilities and identifies decisions necessary to negotiate path dependencies. The outcome of this research is an advanced framework for addressing life cycle management needs that arise due to policy change, as judged from a life cycle cost perspective. Original metrics for evaluating decision paths provide insight into how the timing, location, and confluence of disturbances impact design decisions. Development of the metrics is driven by a desire to communicate the design-specific characteristics of a strategic response to policy change. Quantifying the amount and type of uncertainty present, changeability afforded, and life cycle changes exercised offer points of comparison among individual design solutions. The knowledge gained from path-centric measurements enables an enhanced ability to characterize design lock-in. 
    Principles and metrics borne out of the design evaluation framework are validated through two ship design examples related to ballast water treatment and carbon emissions. Ph.D., Naval Architecture & Marine Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/96130/1/ndniese_1.pd
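    The state-based treatment of policy uncertainty can be sketched as a small finite-horizon MDP. The regimes, costs, and transition probability below are invented for illustration and are not taken from the dissertation; they only show how "retrofit early vs. comply late" becomes a path-dependent decision through time:

```python
from functools import lru_cache

# Toy life-cycle model: each year a "lax" regulation may tighten to "strict";
# the designer can retrofit early (cheaper) or be forced to comply late
# (more expensive). All costs and probabilities are illustrative.
YEARS, P_TIGHTEN = 10, 0.15
RETROFIT, LATE_COMPLY, OPERATE = 2.0, 5.0, 0.1

@lru_cache(maxsize=None)
def cost(year, regime, compliant):
    """Minimal expected life-cycle cost from this state onward."""
    if year == YEARS:
        return 0.0
    if regime == "strict" and not compliant:
        # Non-compliance under a strict regime forces the expensive fix.
        return LATE_COMPLY + cost(year + 1, "strict", True)

    def step(next_compliant, extra):
        if regime == "lax":                # regulation may tighten next year
            return extra + OPERATE + (
                P_TIGHTEN * cost(year + 1, "strict", next_compliant)
                + (1 - P_TIGHTEN) * cost(year + 1, "lax", next_compliant))
        return extra + OPERATE + cost(year + 1, "strict", next_compliant)

    options = [step(compliant, 0.0)]       # wait and see
    if not compliant:
        options.append(step(True, RETROFIT))  # retrofit proactively
    return min(options)

print(round(cost(0, "lax", False), 3))  # expected cost of the optimal policy
```

    Backward induction over these states is the mechanism by which the framework "explores the performance of a design solution through time": the timing of the regime change, not just its eventual occurrence, drives the optimal decision path.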