Learning cost-sensitive diagnostic policies from data
In its simplest form, the process of diagnosis is a decision-making process in which the diagnostician performs a sequence of tests culminating in a diagnostic decision. For example, a physician might perform a series of simple measurements (body temperature, weight, etc.) and laboratory measurements (white blood count, CT scan, MRI scan, etc.) in order to determine the disease of the patient. A diagnostic policy is a complete description of the decision-making actions of a diagnostician under all possible circumstances. This dissertation studies the problem of learning diagnostic policies from training examples. An optimal diagnostic policy is one that minimizes the expected total cost of diagnosing a patient, where the cost is composed of two components: (a) measurement costs (the costs of performing various diagnostic tests) and (b) misdiagnosis costs (the costs incurred when the patient is incorrectly diagnosed). The optimal policy must perform diagnostic tests until further measurements do not reduce the expected total cost of diagnosis. The dissertation investigates two families of algorithms for learning diagnostic policies: greedy methods and methods based on the AO* algorithm for systematic search. Previous work in supervised learning constructed greedy diagnostic policies that either ignored all costs or considered only measurement costs or only misdiagnosis costs. This research recognizes the practical importance of costs incurred by performing measurements and by making incorrect diagnoses and studies the tradeoff between them. This dissertation develops improved greedy methods. It also introduces a new family of learning algorithms based on systematic search. Systematic search has previously been regarded as computationally infeasible for learning diagnostic policies. However, this dissertation describes an admissible heuristic for AO* that enables it to prune large parts of the search space. In addition, the dissertation shows that policies with better performance on an independent test set are learned when the AO* method is regularized in order to reduce overfitting. Experimental studies on benchmark data sets show that in most cases the systematic search methods produce better diagnostic policies than the greedy methods. Hence, these AO*-based methods are recommended for learning diagnostic policies that seek to minimize the expected total cost of diagnosis.
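As a concrete illustration of this cost structure, here is a minimal Python sketch (all names are illustrative, not from the dissertation) that estimates the expected total cost of a given policy on labeled examples:

def expected_total_cost(policy, examples, measurement_cost, misdiagnosis_cost):
    # policy(observed) returns either a test name to perform next, or
    # ("diagnose", disease) to stop and commit to a diagnosis.
    # examples is a list of (attributes, true_disease) pairs;
    # measurement_cost maps test -> cost, and misdiagnosis_cost maps
    # (predicted, actual) -> cost (zero when predicted == actual).
    total = 0.0
    for attributes, true_disease in examples:
        observed = {}
        while True:
            action = policy(observed)
            if isinstance(action, tuple) and action[0] == "diagnose":
                total += misdiagnosis_cost[(action[1], true_disease)]
                break
            total += measurement_cost[action]      # (a) pay to measure
            observed[action] = attributes[action]  # reveal the test result
    return total / len(examples)

An optimal policy minimizes this quantity in expectation over the true distribution of patients; the learning problem arises because only a finite training sample is available.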
AO* revisited
This paper revisits the AO* algorithm introduced by Martelli and Montanari [1] and made popular by Nilsson [2]. The paper's main contributions are: (1) proving that the value of a node monotonically increases as the AO* search progresses, (2) proving that selective updates of the value function in the AO* algorithm are correct, and (3) providing guidance to researchers interested in the AO* implementation. (1) and (2) are proven under the assumption that the heuristic used by AO* is consistent. The paper also reviews the use of AO* for solving Markov Decision Processes (MDPs) and Partially Observable Markov Decision Processes (POMDPs).
Keywords: AND/OR graph, MDP, consistent heuristic, heuristic search, AO* implementation, admissible heuristic, POMDP
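For readers who have not seen AO*, the value backup at its core can be sketched as follows (Python; the node attributes and the full top-down recursion are illustrative simplifications, not the paper's implementation). With a consistent, admissible (optimistic) heuristic h, each such backup can only raise a node's value, which is the monotonicity property of contribution (1):

def backup(node, h):
    # Unexpanded frontier nodes keep their optimistic heuristic estimate.
    if not node.expanded:
        return h(node)
    if node.kind == "OR":    # choice node: pick the cheapest action
        node.value = min(backup(a, h) for a in node.children)
    else:                    # AND node: pay the action's cost, then take the
                             # expectation over its stochastic outcomes
        node.value = node.cost + sum(p * backup(c, h) for p, c in node.outcomes)
    return node.value

The actual algorithm does not recompute the whole graph like this; it performs selective bottom-up updates along the ancestors of the newly expanded node, and contribution (2) is precisely that these selective updates are correct.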
A POMDP approximation algorithm that anticipates the need to observe
This paper introduces the even-odd POMDP, an approximation to POMDPs (Partially Observable Markov Decision Problems) in which the world is assumed to be fully observable every other time step. This approximation works well for problems with a delayed need to observe. The even-odd POMDP can be converted into an equivalent MDP, the 2MDP, whose value function, V*_2MDP, can be combined online with a 2-step lookahead search to provide a good POMDP policy. We prove that this gives an approximation to the POMDP's optimal value function that is at least as good as methods based on the optimal value function of the underlying MDP. We present experimental evidence that the method finds a good policy for a POMDP with 10,000 states and observations.
Keywords: Partially Observable Markov Decision Problems, Even-odd POMDP, POMDP
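A minimal sketch of the online policy described above, assuming hypothetical belief, observation, and reward models and a precomputed value function v_2mdp (none of these names come from the paper):

def lookahead(belief, actions, depth, observe, v_2mdp, reward):
    # belief maps state -> probability; returns (value, best_action).
    if depth == 0:
        # Leaf: evaluate the belief with the even-odd value function,
        # averaged over the states the agent might actually be in.
        return sum(p * v_2mdp(s) for s, p in belief.items()), None
    best_value, best_action = float("-inf"), None
    for a in actions:
        # Expected immediate reward plus expected value over observations.
        value = sum(p * reward(s, a) for s, p in belief.items())
        for prob_o, next_belief in observe(belief, a):
            v, _ = lookahead(next_belief, actions, depth - 1,
                             observe, v_2mdp, reward)
            value += prob_o * v
        if value > best_value:
            best_value, best_action = value, a
    return best_value, best_action

Calling lookahead(current_belief, actions, 2, observe, v_2mdp, reward) at each time step yields the 2-step lookahead policy described in the abstract.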
Integrating learning from examples into the search for diagnostic policies
This paper studies the problem of learning diagnostic policies from training examples. A diagnostic policy is a complete description of the decision-making actions of a diagnostician (i.e., tests followed by a diagnostic decision) for all possible combinations of test results. An optimal diagnostic policy is one that minimizes the expected total cost of diagnosing a patient, where the cost is the sum of two components: (a) measurement costs (the costs of performing various diagnostic tests) and (b) misdiagnosis costs (the costs incurred when the patient is incorrectly diagnosed). In most diagnostic settings, there is a tradeoff between these two kinds of costs. A diagnostic policy that minimizes measurement costs usually performs fewer tests and tends to make more diagnostic errors, which are expensive. Conversely, a policy that minimizes misdiagnosis costs usually makes more measurements. This paper formalizes diagnostic decision making as a Markov Decision Process (MDP). It then presents a range of algorithms for solving this MDP. These algorithms can be divided into methods based on systematic search and methods based on greedy search. The paper introduces a new family of systematic algorithms based on the AO* algorithm. To make AO* efficient, the paper describes an admissible heuristic that enables AO* to prune large parts of the search space. The paper also introduces several greedy algorithms including some improvements over previously-published methods. The paper then addresses the question of learning diagnostic policies from examples. When the probabilities of diseases and test results are computed from training data, there is a great danger of overfitting. The paper introduces a range of regularization methods to reduce overfitting. An interesting aspect of these regularizers is that they are integrated into the search algorithms rather than being isolated in a separate learning step prior to searching for a good diagnostic policy. Finally, the paper compares the proposed methods on five benchmark diagnostic data sets. The studies show that in most cases the systematic search methods produce better diagnostic policies than the greedy methods. In addition, the studies show that for training sets of realistic size, the systematic search algorithms are practical on today's desktop computers. Hence, these AO*-based methods are recommended for learning diagnostic policies that seek to minimize the expected total cost of diagnosis.
Keywords: diagnostic policy, AO*, Markov decision process, diagnostic decision making
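One standard regularizer of the kind the paper integrates into the search is a Laplace correction of the probability estimates; a minimal Python sketch (the function name and the alpha parameter are illustrative, not the paper's API):

from collections import Counter

def laplace_probs(results, outcomes, alpha=1.0):
    # Estimate P(outcome) for a test from the training examples reaching a
    # node, with add-alpha smoothing so that rare or unseen outcomes never
    # receive probability zero (a common source of overfit policies).
    counts = Counter(results)
    n, k = len(results), len(outcomes)
    return {o: (counts[o] + alpha) / (n + alpha * k) for o in outcomes}

Because such corrections are applied wherever the search algorithm needs a probability, the regularization is woven into the policy search itself rather than performed as a separate preprocessing step.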
Two heuristics for solving POMDPs having a delayed need to observe
A common heuristic for solving Partially Observable Markov Decision Problems (POMDPs) is to first solve the underlying Markov Decision Process (MDP) and then construct a POMDP policy by performing a fixed-depth lookahead search in the POMDP and evaluating the leaf nodes using the MDP value function. A problem with this approximation is that it does not account for the need to choose actions in order to gain information about the state of the world, particularly when those observation actions are needed at some point in the future. This paper proposes two heuristics that are better than the MDP approximation in POMDPs where there is a delayed need to observe. The first approximation, introduced in [2], is the even-odd POMDP, in which the world is assumed to be fully observable every other time step. The even-odd POMDP can be converted into an equivalent MDP, the even-MDP, whose value function captures some of the sensing costs of the original POMDP. An online policy consisting of a 2-step lookahead search combined with the value function of the even-MDP gives an approximation to the POMDP's value function that is at least as good as the method based on the value function of the underlying MDP. The second POMDP approximation is applicable to a special kind of POMDP, which we call the Cost Observable Markov Decision Problem (COMDP). In a COMDP, the actions are partitioned into those that change the state of the world and those that are pure observation actions. For such problems, we describe the chain-MDP algorithm, which in many cases is able to capture more of the sensing costs than the even-odd POMDP approximation. We prove that both heuristics compute value functions that are upper bounded by (i.e., better than) the value function of the underlying MDP and, in the case of the even-MDP, also lower bounded by the POMDP's optimal value function. We show cases where the chain-MDP online policy is better than, equal to, or worse than the even-MDP online policy.
Keywords: Cost Observable Markov Decision Problem, POMDP, COMDP, Partially Observable Markov Decision Problems
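In shorthand (notation mine, not the paper's), the bounds stated in this abstract read:

    V*_POMDP  <=  V_even-MDP  <=  V_MDP        and        V_chain-MDP  <=  V_MDP

that is, both approximate value functions are tighter than the underlying-MDP value function, and the even-MDP value is additionally sandwiched against the POMDP optimum.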