Modeling a Sensor to Improve its Efficacy
Robots rely on sensors to provide them with information about their
surroundings. However, high-quality sensors can be extremely expensive and
cost-prohibitive. Thus, many robotic systems must make do with lower-quality
sensors. Here we demonstrate via a case study how modeling a sensor can improve
its efficacy when employed within a Bayesian inferential framework. As a test
bed we employ a robotic arm that is designed to autonomously take its own
measurements using an inexpensive LEGO light sensor to estimate the position
and radius of a white circle on a black field. The light sensor integrates the
light arriving from a spatially distributed region within its field of view
weighted by its Spatial Sensitivity Function (SSF). We demonstrate that by
incorporating an accurate model of the light sensor SSF into the likelihood
function of a Bayesian inference engine, an autonomous system can make improved
inferences about its surroundings. The method presented here is data-based,
fairly general, and designed with plug-and-play use in mind, so that it can be
applied to similar problems.
Comment: 18 pages, 8 figures, submitted to the special issue of "Sensors for
Robotics"
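The paper itself gives no code; the sketch below is a minimal, hypothetical illustration of how an SSF could enter a Bayesian likelihood function. The discretised `scene`, the Gaussian-shaped `ssf`, and the noise scale `sigma` are all assumptions made for the example, not values from the paper.

```python
import numpy as np

def ssf_weighted_intensity(ssf, scene):
    # Predicted sensor reading: scene intensity integrated over the field
    # of view, weighted (and normalised) by the spatial sensitivity function.
    return np.sum(ssf * scene) / np.sum(ssf)

def log_likelihood(measured, scene, ssf, sigma=0.05):
    # Gaussian noise model centred on the SSF-weighted prediction.
    predicted = ssf_weighted_intensity(ssf, scene)
    return -0.5 * ((measured - predicted) / sigma) ** 2

# Hypothetical 5x5 patch of the field: one bright (white) pixel on black.
scene = np.zeros((5, 5))
scene[2, 2] = 1.0

# Hypothetical Gaussian-shaped SSF centred in the field of view.
ii, jj = np.mgrid[0:5, 0:5]
ssf = np.exp(-((ii - 2.0) ** 2 + (jj - 2.0) ** 2) / 2.0)
```

An inference engine would evaluate `log_likelihood` for candidate circle parameters (position, radius) that generate the `scene`, so the accuracy of the SSF model directly shapes the posterior.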
Maximum Joint Entropy and Information-Based Collaboration of Automated Learning Machines
We are working to develop automated intelligent agents, which can act and
react as learning machines with minimal human intervention. To accomplish this,
an intelligent agent is viewed as a question-asking machine, which is designed
by coupling the processes of inference and inquiry to form a model-based
learning unit. In order to select maximally-informative queries, the
intelligent agent needs to be able to compute the relevance of a question. This
is accomplished by employing the inquiry calculus, which is dual to the
probability calculus, and extends information theory by explicitly requiring
context. Here, we consider the interaction between two question-asking
intelligent agents, and note that there is a potential information redundancy
with respect to the two questions that the agents may choose to pose. We show
that the information redundancy is minimized by maximizing the joint entropy of
the questions, which simultaneously maximizes the relevance of each question
while minimizing the mutual information between them. Maximum joint entropy is
therefore an important principle of information-based collaboration, which
enables intelligent agents to learn together efficiently.
Comment: 8 pages, 1 figure, to appear in the proceedings of MaxEnt 2011 held
in Waterloo, Canada
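The central identity, H(Q1, Q2) = H(Q1) + H(Q2) - I(Q1; Q2), can be checked numerically. The sketch below is not from the paper; it contrasts two hypothetical joint answer distributions, one with independent (redundancy-free) questions and one with fully redundant questions, to show that maximum joint entropy coincides with zero mutual information.

```python
import numpy as np

def entropy(p):
    # Shannon entropy in bits, ignoring zero-probability outcomes.
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def joint_analysis(p_xy):
    # Joint entropy, marginal entropies, and mutual information of a joint pmf.
    h_xy = entropy(p_xy.ravel())
    h_x = entropy(p_xy.sum(axis=1))
    h_y = entropy(p_xy.sum(axis=0))
    i_xy = h_x + h_y - h_xy
    return h_xy, h_x, h_y, i_xy

# Independent questions: joint entropy is maximal, mutual information is zero.
p_indep = np.outer([0.5, 0.5], [0.5, 0.5])

# Fully redundant questions: answers always agree, so one question is wasted.
p_redund = np.array([[0.5, 0.0],
                     [0.0, 0.5]])
```

For `p_indep` the joint entropy is 2 bits with zero mutual information; for `p_redund` it drops to 1 bit while the mutual information rises to 1 bit, illustrating the redundancy the agents should avoid.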
An Entropy-based Approach to Improve Clinic Performance and Patient Satisfaction
The patient scheduling problem in outpatient clinics has been studied extensively in the literature, with several mathematical, simulation-based, and heuristic solutions. The factors that influence a clinic's decision to follow a specific scheduling method depend on patient arrival patterns and the expected encounter time. A significant number of small clinics use Bailey's rule, or an adaptation of it, for patient scheduling because of its simplicity and because they lack the resources to invest in a complex scheduling software system. Often there are competing factors that a scheduler or decision maker must weigh. These include maximizing clinical resource utilization from an economic standpoint versus minimizing patient waiting time from a patient satisfaction/quality-of-care standpoint. Additional parameters that make the scheduling problem challenging are variability in patient arrival times, no-shows, variability in patient-physician encounter times, emergency patients, and several related factors. This research studies the patient scheduling problem in an outpatient clinic using entropy as a common measure to classify the dominating factors that contribute towards the intended clinic performance criteria and patient satisfaction criteria. The goal is to provide an effective and insightful method for studying the outpatient clinic scheduling problem that can benefit both the clinic and the patients.
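The abstract does not specify how the entropy measure is computed; as a purely hypothetical illustration, the sketch below applies empirical Shannon entropy to two made-up, binned waiting-time samples. Higher entropy here indicates a more dispersed, less predictable distribution of waits, the kind of factor the proposed measure could classify.

```python
import math
from collections import Counter

def shannon_entropy(samples):
    # Empirical Shannon entropy (bits) of a discretised sample set.
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical waiting times (minutes, binned) under two scheduling rules:
waits_rule_a = [0, 5, 5, 10, 10, 10, 15, 20, 30, 45]    # widely spread waits
waits_rule_b = [10, 10, 10, 10, 10, 15, 15, 15, 15, 15]  # concentrated waits
```

Under this measure, `waits_rule_a` scores a higher entropy than `waits_rule_b`, flagging it as the less predictable schedule from the patient's point of view.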
Nested Sampling Methods
Nested sampling (NS) computes parameter posterior distributions and makes
Bayesian model comparison computationally feasible. Its strengths include the
unsupervised navigation of complex, potentially multi-modal posteriors and a
well-defined termination point. A systematic literature review of nested
sampling algorithms and variants is presented. We focus on complete algorithms,
including solutions to likelihood-restricted prior sampling, parallelisation,
termination and diagnostics. The relation between number of live points,
dimensionality and computational cost is studied for two complete algorithms. A
new formulation of NS is presented, which casts the parameter space exploration
as a search on a tree. Previously published ways of obtaining robust error
estimates and dynamic variations of the number of live points are presented as
special cases of this formulation. A new on-line diagnostic test is presented
based on previous insertion rank order work. The survey of nested sampling
methods concludes with outlooks for future research.
Comment: Updated version incorporating constructive input from four(!)
positive reports (two referees, assistant editor and editor). The open-source
UltraNest package and astrostatistics tutorials can be found at
https://johannesbuchner.github.io/UltraNest
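The core NS loop the review surveys can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the UltraNest implementation: it uses naive rejection for likelihood-restricted prior sampling (workable only in low dimensions, where real variants use MCMC or region samplers), a fixed iteration count instead of a convergence-based termination criterion, and it omits the final live-point contribution to the evidence. The test problem at the bottom is hypothetical.

```python
import math
import random

def logaddexp(a, b):
    # Numerically stable log(exp(a) + exp(b)).
    if a == -math.inf:
        return b
    hi, lo = (a, b) if a >= b else (b, a)
    return hi + math.log1p(math.exp(lo - hi))

def nested_sampling(log_likelihood, prior_sample, n_live=100, n_iter=500):
    # Minimal NS loop: evolve n_live prior samples under a rising
    # likelihood threshold, accumulating the evidence
    #   Z = sum_i L_i * (X_{i-1} - X_i),  with X_i ~ exp(-i / n_live).
    live = [prior_sample() for _ in range(n_live)]
    log_z = -math.inf
    log_x = 0.0  # log of the prior volume still enclosed
    log_shell = math.log(1.0 - math.exp(-1.0 / n_live))  # per-step shrinkage
    for _ in range(n_iter):
        # Discard the worst live point, crediting it with the shell volume.
        worst = min(live, key=log_likelihood)
        log_z = logaddexp(log_z, log_likelihood(worst) + log_x + log_shell)
        log_x -= 1.0 / n_live
        # Likelihood-restricted prior sampling by naive rejection; real
        # implementations replace this with MCMC or region-based samplers.
        live.remove(worst)
        while True:
            candidate = prior_sample()
            if log_likelihood(candidate) > log_likelihood(worst):
                live.append(candidate)
                break
    # A full implementation would also add the remaining live-point mass
    # and apply a termination criterion instead of a fixed n_iter.
    return log_z

# Hypothetical test problem: uniform prior on [0, 1], narrow Gaussian likelihood.
log_l = lambda x: -0.5 * ((x - 0.5) / 0.1) ** 2
random.seed(1)
log_z_estimate = nested_sampling(log_l, random.random, n_live=50, n_iter=300)
```

For this problem the true evidence is approximately 0.1·sqrt(2π) ≈ 0.251 (log Z ≈ -1.38), and the estimate carries a statistical error of roughly sqrt(H / n_live), which is why the number of live points trades off against accuracy and cost.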