Reliable Decision-Making with Imprecise Models
The rapid growth in the deployment of autonomous systems across various sectors has generated considerable interest in how these systems can operate reliably in large, stochastic, and unstructured environments. Despite recent advances in artificial intelligence and machine learning, it is challenging to assure that autonomous systems will operate reliably in the open world. One cause of unreliable behavior is the imprecision of the model used for decision-making. Due to practical challenges in data collection and precise model specification, autonomous systems often operate based on models that do not represent all the details of the environment. Even if the system has access to a comprehensive decision-making model that accounts for every detail of the environment and every scenario the agent may encounter, it may be intractable to solve this complex model optimally. Consequently, this complex, high-fidelity model may be simplified to accelerate planning, introducing imprecision. Reasoning with such imprecise models affects the reliability of autonomous systems. A system's actions may sometimes produce unexpected, undesirable consequences, which are often identified only after deployment. How can we design autonomous systems that operate reliably in the presence of uncertainty and model imprecision?
This dissertation presents solutions to three classes of model imprecision in a Markov decision process, along with an analysis of the conditions under which bounded performance can be guaranteed. First, an adaptive outcome selection approach is introduced to devise risk-aware reduced models of the environment that efficiently balance the trade-off between model simplicity and fidelity, accelerating planning in resource-constrained settings. Second, a framework is introduced that extends the stochastic shortest path formulation to problems with imperfect information about the goal state during planning, along with two solution approaches. Finally, two complementary approaches are presented to minimize the negative side effects of agent actions. The techniques presented in this dissertation enable an autonomous system to detect and mitigate undesirable behavior without redesigning the model entirely.
SUPPORTING ENGINEERING DESIGN OF ADDITIVELY MANUFACTURED MEDICAL DEVICES WITH KNOWLEDGE MANAGEMENT THROUGH ONTOLOGIES
Medical environments pose a substantial challenge for engineering designers. They combine significant knowledge demands with large investments in new product development and severe consequences in the case of design failure. Engineering designers must contend with an often-chaotic environment to which they have limited access and familiarity, a user base that is difficult to engage and highly diverse in many attributes, and a market structure that often pits stakeholders against one another. As medical care in general moves towards personalized models, and surgical tools towards less invasive options, emerging additive manufacturing technologies offer significant potential for the design of highly innovative medical devices. At the same time, however, these same technologies also introduce yet more challenges to the design process.
This dissertation presents a knowledge-based approach to addressing the existing and emerging challenges of medical device design. The approach captures knowledge in a suite of modular ontologies modeling the domains that must be considered in medical device design, including clinical context, human factors, regulation, enterprise, and manufacturability. Together these ontologies support design ideation, knowledge capture, and design verification. They are subsequently used to formulate a comprehensive knowledge framework for medical device design and to enable an innovative design process. Case studies analyzing the design of surgical tools in several medical specialties are used to assess the capabilities of this approach.
Spectral anonymization of data
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 87-96).
Data anonymization is the process of conditioning a dataset such that no sensitive information can be learned about any specific individual, while valid scientific analysis can nevertheless be performed on it. It is not sufficient to simply remove identifying information, because the remaining data may be enough to infer the individual source of a record (a reidentification disclosure) or to otherwise learn sensitive information about a person (a predictive disclosure). The only known way to prevent these disclosures is to remove additional information from the dataset. Dozens of anonymization methods have been proposed over the past few decades; most work by perturbing or suppressing variable values. None have been successful at simultaneously providing perfect privacy protection and allowing perfectly accurate scientific analysis. This dissertation makes the new observation that the anonymizing operations do not need to be made in the original basis of the dataset. Operating in a different, judiciously chosen basis can improve privacy protection, analytic utility, and computational efficiency. I use the term 'spectral anonymization' to refer to anonymizing in a spectral basis, such as the basis provided by the data's eigenvectors. Additionally, I propose new measures of reidentification and prediction risk that are more generally applicable and more informative than existing measures. I also propose a measure of analytic utility that assesses the preservation of the multivariate probability distribution. Finally, I propose the demanding reference standard of nonparticipation in the study to define adequate privacy protection.
I give three examples of spectral anonymization in practice. The first example improves basic cell swapping from a weak algorithm to one competitive with state-of-the-art methods merely by a change of basis. The second example demonstrates avoiding the curse of dimensionality in microaggregation. The third describes a powerful algorithm that reduces computational disclosure risk to the same level as that of nonparticipants and preserves at least 4th-order interactions in the multivariate distribution. No previously reported algorithm has achieved this combination of results.
by Thomas Anton Lasko. Ph.D.
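The change-of-basis idea behind the first example can be sketched in a few lines. This is a toy illustration, not the dissertation's algorithm: it projects the data onto its principal axes via SVD, independently permutes (swaps) each spectral coordinate across records, and maps back, so the swapping happens in the spectral basis rather than on the original variables. The function name and data are invented for the sketch.

```python
import numpy as np

def spectral_swap(X, seed=None):
    """Toy cell swapping performed in the data's spectral basis.

    Project records onto the principal axes (SVD of the centred data),
    independently permute each spectral coordinate across records, and
    map back to the original basis. Linkage across a record's values is
    broken, while each spectral coordinate's distribution is preserved.
    """
    rng = np.random.default_rng(seed)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    scores = U * s                          # spectral coordinates of each record
    for j in range(scores.shape[1]):        # swap within each spectral column
        scores[:, j] = rng.permutation(scores[:, j])
    return scores @ Vt + mean               # back to the original basis

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
Xa = spectral_swap(X, seed=1)
# Column permutations preserve each column's sum, so per-variable means
# survive the round trip exactly (up to floating point).
```

Because each permutation preserves its column's marginal distribution, moments of the original variables are approximately retained while record-level linkage is destroyed.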
Statistical methods for NHS incident reporting data
The National Reporting and Learning System (NRLS) is the English and Welsh NHS’ national repository of incident reports from healthcare. It aims to capture details of incident reports at national level and to facilitate clinical review and learning to improve patient safety. These incident reports range from minor ‘near-misses’ to critical incidents that may lead to severe harm or death. NRLS data are currently reported as crude counts and proportions, but their major use is clinical review of the free-text descriptions of incidents. There are few well-developed quantitative analysis approaches for NRLS, and this thesis investigates such methods. A literature review revealed a wealth of clinical detail, but also systematic constraints of NRLS’ structure, including non-mandatory reporting, missing data, and misclassification. Summary statistics for reports from 2010/11 – 2016/17 supported this and suggested NRLS was not suitable for statistical modelling in isolation. Modelling methods were advanced by creating a hybrid dataset using other sources of hospital casemix data from Hospital Episode Statistics (HES). A theoretical model was established, based on ‘exposure’ variables (using casemix proxies) and ‘culture’ as a random effect. The initial modelling approach examined Poisson regression, mixture, and multilevel models. Overdispersion was significant, generated mainly by clustering and aggregation in the hybrid dataset, but models were chosen to reflect these structures. Further modelling approaches were examined, using Generalized Additive Models to smooth predictor variables, regression tree-based models including Random Forests, and Artificial Neural Networks. Models were also extended to examine a subset of death and severe harm incidents, exploring how sparse counts affect models. Text mining techniques were examined for analysis of incident descriptions and showed how term frequency might be used.
Terms were used to generate latent topic models, used in turn to predict the harm level of incidents. Model outputs were used to create a ‘Standardised Incident Reporting Ratio’ (SIRR), cast in the mould of current regulatory frameworks using process control techniques such as funnel plots and cusum charts. A prototype online reporting tool was developed to allow NHS organisations to examine their SIRRs, provide supporting analyses, and link data points back to individual incident reports.
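The shape of such a standardised ratio with funnel-plot limits can be sketched as follows. This is a generic illustration, not the thesis' specification: the SIRR is taken as observed reports over model-expected reports (analogous to a standardised mortality ratio), with approximate 95% limits from a normal approximation to a Poisson numerator; the counts are invented.

```python
import math

def sirr(observed, expected):
    """Standardised Incident Reporting Ratio: observed incident reports
    divided by the count expected from the casemix model."""
    return observed / expected

def funnel_limits(expected, z=1.96):
    """Approximate funnel-plot control limits for a ratio whose numerator
    is treated as Poisson with mean `expected` (normal approximation).
    Limits narrow towards 1 as `expected` grows, giving the funnel shape."""
    half_width = z * math.sqrt(expected) / expected
    return 1 - half_width, 1 + half_width

obs, exp_reports = 130, 100            # hypothetical organisation-level counts
ratio = sirr(obs, exp_reports)         # 1.3
lo, hi = funnel_limits(exp_reports)    # roughly (0.804, 1.196)
flagged = ratio < lo or ratio > hi     # True: outside the 95% funnel
```

Plotting each organisation's ratio against its expected count, with these limits overlaid, yields the familiar funnel used in NHS regulatory reporting.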
Untangling hotel industry’s inefficiency: An SFA approach applied to a renowned Portuguese hotel chain
The present paper explores the technical efficiency of four hotels from the Teixeira Duarte Group, a renowned Portuguese hotel chain. An efficiency ranking is established for these four hotel units, located in Portugal, using Stochastic Frontier Analysis. This methodology makes it possible to discriminate between measurement error and systematic inefficiency in the estimation process, enabling investigation of the main causes of inefficiency. Several suggestions for efficiency improvement are offered for each hotel studied.
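The decomposition that lets SFA separate noise from inefficiency can be illustrated with simulated data. This sketch is not the paper's estimation; the coefficients, sample size, and variances are invented purely to show the structure of the composed error: a symmetric noise term and a one-sided inefficiency term.

```python
import numpy as np

# Stochastic frontier sketch: ln y = b0 + b1*ln x + v - u, where
# v ~ N(0, sigma_v) is symmetric measurement error and u >= 0
# (half-normal) is systematic inefficiency. All values are hypothetical.
rng = np.random.default_rng(42)
n = 500
ln_x = rng.normal(1.0, 0.3, n)           # log input (e.g. rooms, staff)
v = rng.normal(0.0, 0.05, n)             # two-sided noise
u = np.abs(rng.normal(0.0, 0.2, n))      # one-sided inefficiency
ln_y = 0.5 + 0.8 * ln_x + v - u          # observed log output

efficiency = np.exp(-u)                  # technical efficiency in (0, 1]
# Ranking units by estimated efficiency is what distinguishes SFA from a
# deterministic frontier, which would attribute all residual (v - u) to
# inefficiency alone.
```

In practice u is not observed; maximum-likelihood estimation of the composed-error model recovers its distribution, which is what allows the hotel-level ranking described above.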
Probability Models for Health Care Operations with Application to Emergency Medicine
This thesis consists of four contributing chapters: two are inspired by practical problems related to emergency department (ED) operations management, and the remaining two are motivated by theoretical problems related to the time-dependent priority queue. Unlike the classical priority queue, priorities in the time-dependent priority queue depend on the amount of time an arrival waits for service, in addition to the priority class to which it belongs. The mismatch between the demand for ED services and the available resources has direct and indirect negative consequences. Moreover, ED physician pay in some jurisdictions reflects pay-for-performance contracts based on operational benchmarks. To assist in capacity planning and meeting these benchmarks, in chapter 4 I built a forecasting model to produce short-term forecasts of ED arrivals. In chapter 5, I empirically investigated the effect of workload on the productivity of ED services. Specifically, in a discretionary work setting, different statistical models were fitted to identify the effect of workload and census on four measures of ED service processes: number discharged, length of stay, service time, and waiting time. The time-dependent priority model was first proposed by Kleinrock (1964); more recently, naming it the accumulating priority queue (APQ), Stanford et al. (2014) derived the waiting time distributions for the various priority classes when the queue has a single server. In chapter 6, I derived expressions for the waiting time distributions of a multi-server APQ with Poisson arrivals for each class and a common exponential service time distribution. In chapter 7, I worked with a KPI-based service system in which each class of customers has a specific time target by which service should commence and a compliance probability indicating the proportion of customers from that class meeting the target.
Recognizing that a customer who misses their KPI target is of greater, not lesser, importance, I seek to minimize a weighted sum of the expected excess waiting for each class. When minimizing the total expected excess, our numerical examples lead to an easily implemented rule of thumb for the optimal priority accumulation rates, which can have an immediate impact on health care delivery.
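The APQ service rule described above can be sketched in a few lines. This is an illustrative toy, not the thesis' model: each class c accumulates priority linearly at rate b_c times its time in queue, and the server takes the customer with the largest accumulated priority. The class names, rates, and arrival times are invented.

```python
def next_customer(waiting, now, rates):
    """Accumulating priority rule (Kleinrock 1964; Stanford et al. 2014):
    a class-c customer who arrived at time t has accumulated priority
    b_c * (now - t); serve the customer with the largest value.
    `waiting` holds (class, arrival_time) pairs; `rates` maps class -> b_c."""
    return max(waiting, key=lambda cust: rates[cust[0]] * (now - cust[1]))

rates = {"urgent": 3.0, "standard": 1.0}          # hypothetical b_c values
waiting = [("standard", 0.0), ("urgent", 5.0)]    # (class, arrival time)

# Early on, the standard patient's long wait dominates:
first = next_customer(waiting, 6.0, rates)    # standard: 6.0 vs urgent: 3.0
# Later, the urgent class's faster accumulation overtakes:
later = next_customer(waiting, 10.0, rates)   # standard: 10.0 vs urgent: 15.0
```

Tuning the rates b_c is exactly the lever the rule of thumb above addresses: higher rates let urgent classes overtake long-waiting lower classes sooner, trading excess waiting between classes.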