102 research outputs found
Learning cost-sensitive diagnostic policies from data
In its simplest form, the process of diagnosis is a decision-making process in which the diagnostician performs a sequence of tests culminating in a diagnostic decision. For example, a physician might perform a series of simple measurements (body temperature, weight, etc.) and laboratory measurements (white blood count, CT scan, MRI scan, etc.) in order to determine the disease of the patient. A diagnostic policy is a complete description of the decision-making actions of a diagnostician under all possible circumstances. This dissertation studies the problem of learning diagnostic policies from training examples. An optimal diagnostic policy is one that minimizes the expected total cost of diagnosing a patient, where the cost is composed of two components: (a) measurement costs (the costs of performing various diagnostic tests) and (b) misdiagnosis costs (the costs incurred when the patient is incorrectly diagnosed). The optimal policy must perform diagnostic tests until further measurements do not reduce the expected total cost of diagnosis. The dissertation investigates two families of algorithms for learning diagnostic policies: greedy methods and methods based on the AO* algorithm for systematic search. Previous work in supervised learning constructed greedy diagnostic policies that either ignored all costs or considered only measurement costs or only misdiagnosis costs. This research recognizes the practical importance of costs incurred by performing measurements and by making incorrect diagnoses and studies the tradeoff between them. This dissertation develops improved greedy methods. It also introduces a new family of learning algorithms based on systematic search. Systematic search has previously been regarded as computationally infeasible for learning diagnostic policies. However, this dissertation describes an admissible heuristic for AO* that enables it to prune large parts of the search space.
In addition, the dissertation shows that policies with better performance on an independent test set are learned when the AO* method is regularized in order to reduce overfitting. Experimental studies on benchmark data sets show that in most cases the systematic search methods produce better diagnostic policies than the greedy methods. Hence, these AO*-based methods are recommended for learning diagnostic policies that seek to minimize the expected total cost of diagnosis.
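The cost model this abstract describes can be made concrete with a small sketch: a diagnostic policy as a decision tree whose internal nodes perform a priced test and whose leaves commit to a diagnosis. The test names, costs, and diseases below are hypothetical illustrations, not taken from the dissertation.

```python
from dataclasses import dataclass
from typing import Dict, Union

@dataclass
class Diagnose:
    disease: str                       # leaf: commit to a diagnosis

@dataclass
class Test:
    name: str
    cost: float                        # measurement cost of this test
    branches: Dict[str, "Policy"]      # test outcome -> subpolicy

Policy = Union[Diagnose, Test]

def total_cost(policy: Policy, patient: Dict[str, str],
               true_disease: str, misdiagnosis_cost: float) -> float:
    """Total cost of diagnosing one patient: the measurement costs along
    the path the policy takes, plus a penalty if the final diagnosis is
    wrong. An optimal policy minimizes the expectation of this quantity."""
    if isinstance(policy, Diagnose):
        return 0.0 if policy.disease == true_disease else misdiagnosis_cost
    outcome = patient[policy.name]
    return policy.cost + total_cost(policy.branches[outcome],
                                    patient, true_disease, misdiagnosis_cost)

# Hypothetical two-test policy: measure temperature; if high, run a blood test.
policy = Test("temperature", cost=1.0, branches={
    "high": Test("white_blood_count", cost=5.0, branches={
        "elevated": Diagnose("infection"),
        "normal": Diagnose("flu"),
    }),
    "normal": Diagnose("healthy"),
})

patient = {"temperature": "high", "white_blood_count": "elevated"}
print(total_cost(policy, patient, "infection", misdiagnosis_cost=100.0))  # 6.0
```

Averaging `total_cost` over a distribution of patients gives the expected total cost that both the greedy and the AO*-based learners try to minimize.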
AO* revisited
Preprint submitted to Elsevier Science, 30 June 2004. This paper revisits the AO* algorithm introduced by Martelli and Montanari [1] and made popular by Nilsson [2]. The paper's main contributions are: (1) proving that the value of a node monotonically increases as the AO* search progresses, (2) proving that selective updates of the value function in the AO* algorithm are correct, and (3) providing guidance to researchers interested in implementing AO*. Contributions (1) and (2) are proven under the assumption that the heuristic used by AO* is consistent. The paper also reviews the use of AO* for solving Markov Decision Processes (MDPs) and Partially Observable Markov Decision Processes (POMDPs). Keywords: AND/OR graph, MDP, consistent heuristic, heuristic search, AO* implementation, admissible heuristic, POMDP
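The monotone value update at the heart of this result can be illustrated with a toy sketch (an assumed minimal representation, not the paper's implementation): an OR node chooses among connectors, each connector pays an edge cost plus the sum of the values of its AND set of children, and node values start at an admissible heuristic and never decrease under backup when the heuristic is consistent.

```python
class Node:
    def __init__(self, name, h=0.0):
        self.name = name
        self.value = h          # admissible lower bound on the true cost
        self.connectors = []    # list of (edge_cost, [child nodes])
        self.best = None        # connector currently marked as best

    def add(self, edge_cost, children):
        self.connectors.append((edge_cost, children))

def backup(node):
    """Recompute values bottom-up; the max() enforces the monotone
    (non-decreasing) update that holds under a consistent heuristic."""
    if not node.connectors:     # terminal node: value is final
        return node.value
    costs = [(c + sum(backup(ch) for ch in kids), (c, kids))
             for c, kids in node.connectors]
    best_cost, node.best = min(costs, key=lambda t: t[0])
    node.value = max(node.value, best_cost)
    return node.value

# Two candidate solutions: cost 2 + a(3) = 5, or cost 1 + b(2) + c(3) = 6.
a, b, c = Node("a", h=3.0), Node("b", h=2.0), Node("c", h=3.0)
root = Node("root")
root.add(2.0, [a])
root.add(1.0, [b, c])
print(backup(root))  # 5.0
```

Real AO* interleaves these backups with selective expansion of the current best partial solution graph, propagating updated values only to the ancestors of the expanded node; this sketch just shows why the backed-up value can only grow.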
Ultra-deep optical cooling of coupled nuclear spin-spin and quadrupole reservoirs in a GaAs/(Al,Ga)As quantum well
The physics of interacting nuclear spins in solids is well interpreted within the nuclear spin temperature concept. A common approach to cooling the nuclear spin system is adiabatic demagnetization of the initial, optically created, nuclear spin polarization. Here, the selective cooling of 75As spins by optical pumping followed by adiabatic demagnetization in the rotating frame is realized in a nominally undoped GaAs/(Al,Ga)As quantum well. The lowest nuclear spin temperature achieved is 0.54 μK. The rotation of the 6 kG-strong Overhauser field at the 75As Larmor frequency of 5.5 MHz is evidenced by the dynamic Hanle effect. Despite the presence of the quadrupole-induced nuclear spin splitting, it is shown that the rotating 75As magnetization is uniquely determined by the spin temperature of the coupled spin-spin and quadrupole reservoirs. The dependence of the heat capacity of these reservoirs on the direction of the external magnetic field with respect to the crystal and structure axes is investigated.
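For orientation, the spin-temperature description invoked above reduces, in thermal equilibrium, to a Curie law: the mean nuclear polarization, and hence the Overhauser field it produces, scales inversely with the nuclear spin temperature. In standard textbook notation (not taken from the paper), for spin $I$ with gyromagnetic ratio $\gamma_N$ in a field $B$ at nuclear spin temperature $\Theta_N$:

```latex
\langle I_z \rangle \;=\; \frac{I(I+1)}{3}\,\frac{\hbar \gamma_N B}{k_B \Theta_N},
\qquad
B_N \;\propto\; \langle I_z \rangle .
```

This is why cooling the coupled reservoirs to sub-microkelvin $\Theta_N$ is what makes a kilogauss-scale Overhauser field $B_N$ observable.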
Approximation algorithms for solving cost observable Markov decision processes
"The specific problem addressed in this proposal is the development of good approximation algorithms for solving problems that have partial observability. The model we propose associates costs with obtaining information about the current state. We want to predict when and how much it is necessary to observe. We want to use our Cost Observable Markov Decision Process (COMDP) model to find good solutions for real-world problems ..."--Problem definition
A pooled analysis of mortality in patients with COPD receiving triple therapy versus dual bronchodilation
Peer reviewed. Postprint.
A POMDP approximation algorithm that anticipates the need to observe
This paper introduces the even-odd POMDP, an approximation to POMDPs in which the world is assumed to be fully observable every other time step. The even-odd POMDP can be converted into an equivalent MDP, the 2MDP, whose value function, V*[subscript 2MDP], can be combined online with a 2-step lookahead search to provide a good POMDP policy. We prove that this gives an approximation to the POMDP's optimal value function that is at least as good as methods based on the optimal value function of the underlying MDP. We present experimental evidence that the method gives better policies, and we show that it can find a good policy for a POMDP with 10,000 states and observations. Keywords: Partially Observable Markov Decision Problem, even-odd POMDP, POMDP
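The online policy this abstract describes can be sketched generically (the POMDP interface below is an assumed minimal one, not the paper's code): search a fixed number of steps over actions and observations in belief space, and evaluate leaf beliefs with the optimal value function of a fully observable model.

```python
def leaf_value(belief, v_mdp):
    # Leaf evaluation: pretend the state becomes fully observable here on,
    # so the leaf is worth the belief-weighted average of MDP state values.
    return sum(p * v_mdp[s] for s, p in belief.items())

def belief_update(belief, a, o, T, O):
    """Bayes filter. Returns (P(o | belief, a), posterior belief)."""
    post = {}
    for s, p in belief.items():
        for s2, pt in T[s][a].items():
            post[s2] = post.get(s2, 0.0) + p * pt * O[s2][a].get(o, 0.0)
    z = sum(post.values())
    return (z, {s: q / z for s, q in post.items()}) if z > 0 else (0.0, {})

def lookahead(belief, depth, actions, observations, T, O, R, v_mdp, gamma=0.95):
    """Value of the best action from `belief` after `depth` search steps."""
    if depth == 0:
        return leaf_value(belief, v_mdp)
    best = float("-inf")
    for a in actions:
        q = sum(p * R[s][a] for s, p in belief.items())   # expected reward
        for o in observations:
            p_o, post = belief_update(belief, a, o, T, O)
            if p_o > 0.0:
                q += gamma * p_o * lookahead(post, depth - 1, actions,
                                             observations, T, O, R, v_mdp,
                                             gamma)
        best = max(best, q)
    return best
```

With the plain MDP's optimal value function as `v_mdp`, this is the baseline MDP approximation; the paper's improvement is to plug in V*[subscript 2MDP] instead at `depth=2`, so the leaf values already account for the observability pattern of the even-odd POMDP.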
A POMDP approximation algorithm that anticipates the need to observe
This paper introduces the even-odd POMDP, an approximation to POMDPs (Partially Observable Markov Decision Problems) in which the world is assumed to be fully observable every other time step. This approximation works well for problems with a delayed need to observe. The even-odd POMDP can be converted into an equivalent MDP, the 2MDP, whose value function, V*[subscript 2MDP], can be combined online with a 2-step lookahead search to provide a good POMDP policy. We prove that this gives an approximation to the POMDP's optimal value function that is at least as good as methods based on the optimal value function of the underlying MDP. We present experimental evidence that the method finds a good policy for a POMDP with 10,000 states and observations. Keywords: Partially Observable Markov Decision Problems, even-odd POMDP, POMDP
Two heuristics for solving POMDPs having a delayed need to observe
A common heuristic for solving Partially Observable Markov Decision Problems (POMDPs) is to first solve the underlying Markov Decision Process (MDP) and then construct a POMDP policy by performing a fixed-depth lookahead search in the POMDP, evaluating the leaf nodes using the MDP value function. A problem with this approximation is that it does not account for the need to choose actions in order to gain information about the state of the world, particularly when those observation actions are needed at some point in the future. This paper proposes two heuristics that are better than the MDP approximation in POMDPs where there is a delayed need to observe. The first approximation, introduced in [2], is the even-odd POMDP, in which the world is assumed to be fully observable every other time step. The even-odd POMDP can be converted into an equivalent MDP, the even-MDP, whose value function captures some of the sensing costs of the original POMDP. An online policy consisting of a 2-step lookahead search combined with the value function of the even-MDP gives an approximation to the POMDP's value function that is at least as good as the method based on the value function of the underlying MDP. The second POMDP approximation is applicable to a special kind of POMDP which we call the Cost Observable Markov Decision Problem (COMDP). In a COMDP, the actions are partitioned into those that change the state of the world and those that are pure observation actions. For such problems, we describe the chain-MDP algorithm, which in many cases is able to capture more of the sensing costs than the even-odd POMDP approximation. We prove that both heuristics compute value functions that are upper bounded by (i.e., better than) the value function of the underlying MDP and, in the case of the even-MDP, also lower bounded by the POMDP's optimal value function.
We show cases where the chain-MDP online policy is better than, equal to, or worse than the even-MDP online policy. Keywords: Cost Observable Markov Decision Problem, POMDP, COMDP, Partially Observable Markov Decision Problems