30 research outputs found
AO* revisited
Preprint submitted to Elsevier Science, 30 June 2004. This paper revisits the AO* algorithm introduced by Martelli and Montanari [1] and made popular by Nilsson [2]. The paper's main contributions are: (1) proving that the value of a node monotonically increases as the AO* search progresses, (2) proving that selective updates of the value function in the AO* algorithm are correct, and (3) providing guidance to researchers interested in implementing AO*. Contributions (1) and (2) are proven under the assumption that the heuristic used by AO* is consistent. The paper also reviews the use of AO* for solving Markov Decision Processes (MDPs) and Partially Observable Markov Decision Processes (POMDPs).
Keywords: AND/OR graph, MDP, consistent heuristic, heuristic search, AO* implementation, admissible heuristic, POMDP
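The monotonicity result can be illustrated with a minimal sketch of the value backup AO* performs on an AND/OR graph. The graph encoding, unit edge costs, and node names below are illustrative assumptions, not the paper's implementation: an OR node takes the cheapest child, an AND node must solve all children, and with a consistent heuristic each backed-up value can only increase from its initial heuristic estimate.

```python
# Minimal sketch of the AO* value backup on an AND/OR graph (illustrative
# graph encoding and unit edge costs; not the paper's implementation).

def backup(node, graph, value, heuristic):
    """Recompute the value of `node` from its children's current values."""
    kind, children = graph[node]
    if not children:                      # terminal node: heuristic is exact
        return heuristic[node]
    child_vals = [value.get(c, heuristic[c]) for c in children]
    if kind == "OR":
        return 1 + min(child_vals)        # choose the cheapest alternative
    else:                                 # "AND": every subproblem required
        return 1 + sum(child_vals)

# Tiny AND/OR graph: the root is an OR choice between two AND subproblems.
graph = {
    "root": ("OR", ["a", "b"]),
    "a": ("AND", ["t1", "t2"]),
    "b": ("AND", ["t3"]),
    "t1": ("LEAF", []), "t2": ("LEAF", []), "t3": ("LEAF", []),
}
heuristic = {"root": 0, "a": 0, "b": 0, "t1": 1, "t2": 1, "t3": 3}

value = {}
# Bottom-up sweep; values never drop below their heuristic estimates.
for n in ["t1", "t2", "t3", "a", "b", "root"]:
    value[n] = backup(n, graph, value, heuristic)
```

Here the root's value rises from its heuristic estimate of 0 to 4 once its subproblems are expanded, mirroring the monotone-increase property proved in the paper.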
What's on TV? Detecting age-related neurodegenerative eye disease using eye movement scanpaths
Purpose: We test the hypothesis that age-related neurodegenerative eye disease can be detected by examining patterns of eye movement recorded whilst a person naturally watches a movie.
Methods: Thirty-two elderly people with healthy vision (median age: 70, interquartile range [IQR] 64–75 years) and 44 patients with a clinical diagnosis of glaucoma (median age: 69, IQR 63–77 years) had standard vision examinations including automated perimetry. Disease severity was measured using a standard clinical measure (visual field mean deviation; MD). All study participants viewed three unmodified TV and film clips on a computer setup incorporating the Eyelink 1000 eyetracker (SR Research, Ontario, Canada). Eye movement scanpaths were plotted using novel methods that first filtered the data and then generated saccade density maps. Maps were then subjected to a feature extraction analysis using kernel principal component analysis (KPCA). Features from the KPCA were then classified using a standard machine-learning classifier trained and tested by a 10-fold cross-validation, which was repeated 100 times to estimate the confidence interval (CI) of classification sensitivity and specificity.
Results: Patients had a range of disease severity from early to advanced (median [IQR] right eye and left eye MD was −7 [−13 to −5] dB and −9 [−15 to −4] dB, respectively). Average sensitivity for correctly identifying a glaucoma patient at a fixed specificity of 90% was 79% (95% CI: 58–86%). The area under the Receiver Operating Characteristic curve was 0.84 (95% CI: 0.82–0.87).
Conclusions: Large volumes of data from scanpaths of eye movements recorded whilst people freely watch TV-type films can be processed into maps that contain a signature of vision loss. In this proof-of-principle study we have demonstrated that a group of patients with age-related neurodegenerative eye disease can be reasonably well separated from a group of healthy peers by considering these eye movement signatures alone.
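The headline figure, sensitivity at a fixed specificity of 90%, is an operating point read off from the distribution of classifier scores. A minimal sketch of how such a point is computed follows; the scores below are synthetic illustrations, not study data, and higher scores are assumed to be more disease-like.

```python
# Sketch: sensitivity at a fixed specificity, computed from classifier
# scores (synthetic numbers; higher score = more disease-like by assumption).

def sensitivity_at_specificity(healthy_scores, patient_scores, specificity):
    """Choose the threshold at which `specificity` of healthy controls score
    at or below it, then measure the fraction of patients scoring above it."""
    ranked = sorted(healthy_scores)
    idx = max(0, int(specificity * len(ranked)) - 1)
    cut = ranked[idx]
    true_positives = sum(s > cut for s in patient_scores)
    return true_positives / len(patient_scores)

healthy = [0.1, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.9]
patients = [0.3, 0.6, 0.65, 0.7, 0.8, 0.85, 0.9, 0.95]

sens = sensitivity_at_specificity(healthy, patients, 0.90)
```

With these toy numbers the threshold lands at 0.55 (9 of 10 controls at or below it), and 7 of 8 patients exceed it, giving a sensitivity of 87.5%.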
A POMDP approximation algorithm that anticipates the need to observe
This paper introduces the even-odd POMDP, an approximation to POMDPs in which the world is assumed to be fully observable every other time step. The even-odd POMDP can be converted into an equivalent MDP, the 2MDP, whose value function, V*[subscript 2MDP], can be combined online with a 2-step lookahead search to provide a good POMDP policy. We prove that this gives an approximation to the POMDP's optimal value function that is at least as good as methods based on the optimal value function of the underlying MDP. We present experimental evidence that the method gives better policies, and we show that it can find a good policy for a POMDP with 10,000 states and observations.
Keywords: Partially Observable Markov Decision Problem, even-odd POMDP, POMDP
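The online part of the method, a fixed-depth lookahead over belief states with a value function at the leaves, can be sketched on a tiger-style toy POMDP. The problem, its rewards, and the constant leaf value below are my own illustrative assumptions; the paper's V*[subscript 2MDP] is replaced here by the underlying-MDP value, which values every belief at +10 regardless of uncertainty.

```python
# Sketch: 2-step lookahead over beliefs with a leaf value function, on a
# tiger-style toy POMDP (models and numbers illustrative, not the paper's).
P_HEAR = 0.85        # probability "listen" reports the correct state

def belief_update(b_left, obs_left):
    """Bayes update of P(tiger is left) after a listen observation."""
    p_obs = P_HEAR * b_left + (1 - P_HEAR) * (1 - b_left) if obs_left else \
            (1 - P_HEAR) * b_left + P_HEAR * (1 - b_left)
    num = (P_HEAR if obs_left else 1 - P_HEAR) * b_left
    return num / p_obs, p_obs

def q_value(b_left, action, depth, v_leaf):
    if action == "open-left":
        return -100 * b_left + 10 * (1 - b_left)   # terminal: wrong door -100
    if action == "open-right":
        return 10 * b_left - 100 * (1 - b_left)    # terminal: right door +10
    # "listen": pay -1, then branch on the two possible observations
    total = -1.0
    for obs_left in (True, False):
        b2, p = belief_update(b_left, obs_left)
        if depth <= 1:
            total += p * v_leaf(b2)                # evaluate leaf belief
        else:
            total += p * max(q_value(b2, a, depth - 1, v_leaf)
                             for a in ("open-left", "open-right", "listen"))
    return total

v_mdp = lambda b: 10.0   # underlying MDP: fully observable, open correct door

best = max(("open-left", "open-right", "listen"),
           key=lambda a: q_value(0.5, a, 2, v_mdp))
```

At the uniform belief, opening either door has expected value −45, while the lookahead values listening at 8, so the online policy gathers information first, which is the behavior a pure one-step MDP approximation fails to produce.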
Two heuristics for solving POMDPs having a delayed need to observe
A common heuristic for solving Partially Observable Markov Decision Problems (POMDPs) is to first solve the underlying Markov Decision Process (MDP) and then construct a POMDP policy by performing a fixed-depth lookahead search in the POMDP, evaluating the leaf nodes using the MDP value function. A problem with this approximation is that it does not account for the need to choose actions in order to gain information about the state of the world, particularly when those observation actions are needed at some point in the future. This paper proposes two heuristics that are better than the MDP approximation in POMDPs where there is a delayed need to observe. The first approximation, introduced in [2], is the even-odd POMDP, in which the world is assumed to be fully observable every other time step. The even-odd POMDP can be converted into an equivalent MDP, the even-MDP, whose value function captures some of the sensing costs of the original POMDP. An online policy consisting of a 2-step lookahead search combined with the value function of the even-MDP gives an approximation to the POMDP's value function that is at least as good as the method based on the value function of the underlying MDP. The second POMDP approximation is applicable to a special kind of POMDP which we call the Cost Observable Markov Decision Problem (COMDP). In a COMDP, the actions are partitioned into those that change the state of the world and those that are pure observation actions. For such problems we describe the chain-MDP algorithm, which in many cases is able to capture more of the sensing costs than the even-odd POMDP approximation. We prove that both heuristics compute value functions that are upper bounded by (i.e., better than) the value function of the underlying MDP and, in the case of the even-MDP, also lower bounded by the POMDP's optimal value function.
We show cases where the chain-MDP online policy is better than, equal to, or worse than the even-MDP online policy.
Keywords: Cost Observable Markov Decision Problem, COMDP, Partially Observable Markov Decision Problem, POMDP
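The baseline that both heuristics improve on is "first solve the underlying MDP", which is standard value iteration. A minimal sketch on a toy MDP follows; the two-state model, its names, and its numbers are illustrative assumptions, and the even-MDP and chain-MDP constructions themselves are not reproduced here.

```python
# Sketch: value iteration on a toy underlying MDP (illustrative model; the
# even-MDP / chain-MDP constructions from the paper are not reproduced).

def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-8):
    """P[s][a] -> list of (prob, next_state); R[s][a] -> immediate reward."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                       for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best                    # in-place (Gauss-Seidel) update
        if delta < tol:
            return V

states = ["cold", "hot"]
actions = ["wait", "work"]
P = {"cold": {"wait": [(1.0, "cold")], "work": [(0.8, "hot"), (0.2, "cold")]},
     "hot":  {"wait": [(0.5, "cold"), (0.5, "hot")], "work": [(1.0, "hot")]}}
R = {"cold": {"wait": 0.0, "work": 1.0},
     "hot":  {"wait": 0.0, "work": 2.0}}

V = value_iteration(states, actions, P, R)
```

The resulting V is exactly the MDP value function whose use for leaf evaluation the abstract criticizes: it assumes the state will be known, so it carries no sensing costs, which is what the even-MDP and chain-MDP value functions add back.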
Integrating learning from examples into the search for diagnostic policies
This paper studies the problem of learning diagnostic policies from training examples. A diagnostic policy is a complete description of the decision-making actions of a diagnostician (i.e., tests followed by a diagnostic decision) for all possible combinations of test results. An optimal diagnostic policy is one that minimizes the expected total cost of diagnosing a patient, where the cost is the sum of two components: (a) measurement costs (the costs of performing various diagnostic tests) and (b) misdiagnosis costs (the costs incurred when the patient is incorrectly diagnosed). In most diagnostic settings, there is a tradeoff between these two kinds of costs. A diagnostic policy that minimizes measurement costs usually performs fewer tests and tends to make more diagnostic errors, which are expensive. Conversely, a policy that minimizes misdiagnosis costs usually makes more measurements. This paper formalizes diagnostic decision making as a Markov Decision Process (MDP). It then presents a range of algorithms for solving this MDP. These algorithms can be divided into methods based on systematic search and methods based on greedy search. The paper introduces a new family of systematic algorithms based on the AO* algorithm. To make AO* efficient, the paper describes an admissible heuristic that enables AO* to prune large parts of the search space. The paper also introduces several greedy algorithms including some improvements over previously-published methods. The paper then addresses the question of learning diagnostic policies from examples. When the probabilities of diseases and test results are computed from training data, there is a great danger of overfitting. The paper introduces a range of regularization methods to reduce overfitting. An interesting aspect of these regularizers is that they are integrated into the search algorithms rather than being isolated in a separate learning step prior to searching for a good diagnostic policy. 
Finally, the paper compares the proposed methods on five benchmark diagnostic data sets. The studies show that in most cases the systematic search methods produce better diagnostic policies than the greedy methods. In addition, the studies show that for training sets of realistic size, the systematic search algorithms are practical on today's desktop computers. Hence, these AO*-based methods are recommended for learning diagnostic policies that seek to minimize the expected total cost of diagnosis.
Keywords: diagnostic policy, AO*, Markov decision process, diagnostic decision making
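The measurement-cost versus misdiagnosis-cost tradeoff at the heart of this formulation can be made concrete with a one-test sketch. The disease prior, test characteristics, and costs below are illustrative assumptions, not values from the paper: a policy either diagnoses from the prior alone, or pays for one test and diagnoses from the posterior.

```python
# Sketch: expected total cost of two one-test diagnostic policies
# (prior, costs, and test accuracy are illustrative assumptions).
P_DISEASE = 0.3          # prior probability the patient has the disease
TEST_COST = 5.0          # cost of performing the diagnostic test
MISS_COST = 100.0        # cost of any misdiagnosis
SENS, SPEC = 0.9, 0.8    # test sensitivity and specificity

def cost_no_test():
    """Diagnose from the prior alone: incur the cheaper expected error."""
    return MISS_COST * min(P_DISEASE, 1 - P_DISEASE)

def cost_with_test():
    """Pay for the test, then diagnose optimally from the posterior."""
    p_pos = SENS * P_DISEASE + (1 - SPEC) * (1 - P_DISEASE)
    post_pos = SENS * P_DISEASE / p_pos
    post_neg = (1 - SENS) * P_DISEASE / (1 - p_pos)
    # Expected misdiagnosis probability after an optimal final decision.
    err = (p_pos * min(post_pos, 1 - post_pos)
           + (1 - p_pos) * min(post_neg, 1 - post_neg))
    return TEST_COST + MISS_COST * err
```

With these numbers, diagnosing immediately costs 30 in expectation while testing first costs 22, so the optimal policy pays for the measurement; raising TEST_COST or lowering MISS_COST flips the comparison, which is exactly the tradeoff the search over policies navigates.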
Scattering of dipole-mode vector solitons: Theory and experiment
We study, both theoretically and experimentally, the scattering properties of optical dipole-mode vector solitons, i.e. radially asymmetric composite self-trapped optical beams. First, we analyze the soliton collisions in an isotropic two-component model with a saturable nonlinearity and demonstrate that in many cases the scattering dynamics of the dipole-mode solitons allows us to classify them as "molecules of light": extremely robust spatially localized objects which survive a wide range of interactions and display many properties of composite states with a rotational degree of freedom. Next, we study the composite solitons in an anisotropic nonlinear model that describes photorefractive nonlinearities, and also present a number of experimental verifications of our analysis.
Comment: 8 pages + 4 pages of figures
Unique and contrasting structures of homoleptic lanthanum(III) and cerium(III) 3,5-dimethylpyrazolates
Homoleptic [La(Me(2)pz)(3)](n) (Me(2)pz = 3,5-dimethylpyrazolato) is a mu-eta(2):eta(5)-Me(2)pz coordination polymer with 12-coordinate La atoms in an unusual eta(5):eta(5) Me(2)pz sandwich, whilst the cerium congener forms a molecular tetrametallic cage [Ce-4(Me(2)pz)(12)] featuring six different Me(2)pz coordination modes.
Pruning improves heuristic search for cost-sensitive learning
This paper addresses cost-sensitive classification in the setting where there are costs for measuring each attribute as well as costs for misclassification errors. We show how to formulate this as a Markov Decision Process in which the transition model is learned from the training data. Specifically, we assume a set of training examples in which all attributes and the true class have been measured. We describe a learning algorithm based on the AO* heuristic search procedure that searches for the classification policy with minimum expected cost. We provide an admissible heuristic for AO* that substantially reduces the number of nodes that need to be expanded, particularly when attribute measurement costs are high. To further prune the search space, we introduce a statistical pruning heuristic based on the principle that if the values of two policies are statistically indistinguishable on the training data, then we can prune one of the policies from the AO* search space. Experiments with realistic and synthetic data demonstrate that these heuristics can substantially reduce the memory needed for AO* search without significantly affecting the quality of the learned policy. Hence, these heuristics expand the range of cost-sensitive learning problems for which AO* is feasible.
Keywords: cost-sensitive learning, Markov decision process, statistical pruning heuristic, AO*
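The statistical pruning idea can be sketched as a paired comparison of two candidate policies' per-example costs: if the confidence interval on their mean cost difference contains zero, one of them can be pruned. The normal approximation, the 95% threshold, and the cost numbers below are illustrative assumptions, not the paper's exact test.

```python
# Sketch of the statistical-pruning test: prune one of two candidate policies
# when their mean costs on the training data are statistically
# indistinguishable (normal approximation; threshold illustrative).
import statistics

def indistinguishable(costs_a, costs_b, z=1.96):
    """True if the paired mean cost difference lies within ~95% noise bounds."""
    diffs = [a - b for a, b in zip(costs_a, costs_b)]
    mean = statistics.mean(diffs)
    se = statistics.stdev(diffs) / len(diffs) ** 0.5
    return abs(mean) <= z * se    # zero lies inside the confidence interval

# Per-example costs of two candidate policies on the same training examples.
policy_a = [10.0, 12.0, 9.0, 11.0, 10.5, 9.5, 10.0, 11.5]
policy_b = [10.5, 11.5, 9.5, 10.5, 10.0, 10.0, 10.5, 11.0]

prune_one = indistinguishable(policy_a, policy_b)
```

Pairing on the same examples is what makes the comparison tight: the per-example differences cancel most of the shared variance, so policies that genuinely differ are kept apart while near-ties are pruned, shrinking the AO* search space.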