2,151 research outputs found

    Multiple Fault Isolation in Redundant Systems

    Get PDF
    Fault diagnosis in large-scale systems that are products of modern technology presents formidable challenges to manufacturers and users. This is due to the large number of failure sources in such systems and the need to quickly isolate and rectify failures with minimal downtime. In addition, for fault-tolerant systems and systems with infrequent maintenance opportunities (e.g., the Hubble telescope, the space station), the assumption of at most a single fault in the system is unrealistic. In this project, we have developed novel block and sequential diagnostic strategies to isolate multiple faults in the shortest possible time without making the unrealistic single-fault assumption.
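    The sequential strategy described above can be illustrated with a toy candidate-elimination loop. This is a generic sketch under my own assumptions, not the project's actual algorithm: `tests` maps each test name to the set of components it covers, and `run_test` is a hypothetical callback that executes a test and returns True on pass.

```python
from itertools import combinations

def diagnose(components, tests, run_test):
    # Every subset of components is a candidate fault set: no single-fault assumption.
    candidates = [frozenset(c) for r in range(len(components) + 1)
                  for c in combinations(components, r)]
    while len(candidates) > 1 and tests:
        # Greedily pick the test whose pass/fail outcome splits the
        # remaining candidates most evenly (roughly maximal information).
        def imbalance(t):
            fails = sum(1 for c in candidates if c & tests[t])
            return abs(len(candidates) - 2 * fails)
        t = min(tests, key=imbalance)
        coverage = tests.pop(t)
        if run_test(t):   # passed: every covered component is fault-free
            candidates = [c for c in candidates if not (c & coverage)]
        else:             # failed: at least one covered component is faulty
            candidates = [c for c in candidates if c & coverage]
    return candidates
```

    For example, with two components, one test covering each, and only the first component faulty, the loop isolates that single-component fault set after both tests.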

    Active Learning - An Explicit Treatment of Unreliable Parameters

    Get PDF
    Institute for Communicating and Collaborative Systems
    Active learning reduces annotation costs for supervised learning by concentrating labelling efforts on the most informative data. Most active learning methods assume that the model structure is fixed in advance and focus upon improving parameters within that structure. However, this is not appropriate for natural language processing, where the model structure and associated parameters are determined using labelled data. Applying traditional active learning methods to natural language processing can fail to produce the expected reductions in annotation cost. We show that one of the reasons for this problem is that active learning can only select examples which are already covered by the model. In this thesis, we better tailor active learning to the needs of natural language processing as follows. We formulate the Unreliable Parameter Principle: Active learning should explicitly and additionally address unreliably trained model parameters in order to optimally reduce classification error. In order to do so, we should target both missing events and infrequent events. We demonstrate the effectiveness of such an approach for a range of natural language processing tasks: prepositional phrase attachment, sequence labelling, and syntactic parsing. For prepositional phrase attachment, the explicit selection of unknown prepositions significantly improves coverage and classification performance for all examined active learning methods. For sequence labelling, we introduce a novel active learning method which explicitly targets unreliable parameters by selecting sentences with many unknown words and a large number of unobserved transition probabilities. For parsing, targeting unparseable sentences significantly improves coverage and f-measure in active learning.
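    The sequence-labelling idea, preferring sentences rich in unknown words, can be sketched in a few lines. This is my illustration of the principle, not the thesis code; `known_vocab` stands in for the word types the current model has parameters for.

```python
def select_batch(unlabelled, known_vocab, batch_size=2):
    # Score each sentence by its number of unknown word types, so examples
    # that exercise missing or unreliably trained parameters are labelled first.
    def score(sentence):
        return len(set(sentence.split()) - known_vocab)
    return sorted(unlabelled, key=score, reverse=True)[:batch_size]
```

    With a known vocabulary of {"the", "cat", "sat"}, a sentence of three unseen words outranks one with a single unseen word, so annotation effort flows toward the model's blind spots.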

    Dialogue Act Modeling for Automatic Tagging and Recognition of Conversational Speech

    Get PDF
    We describe a statistical approach for modeling dialogue acts in conversational speech, i.e., speech-act-like units such as Statement, Question, Backchannel, Agreement, Disagreement, and Apology. Our model detects and predicts dialogue acts based on lexical, collocational, and prosodic cues, as well as on the discourse coherence of the dialogue act sequence. The dialogue model is based on treating the discourse structure of a conversation as a hidden Markov model and the individual dialogue acts as observations emanating from the model states. Constraints on the likely sequence of dialogue acts are modeled via a dialogue act n-gram. The statistical dialogue grammar is combined with word n-grams, decision trees, and neural networks modeling the idiosyncratic lexical and prosodic manifestations of each dialogue act. We develop a probabilistic integration of speech recognition with dialogue modeling to improve both speech recognition and dialogue act classification accuracy. Models are trained and evaluated using a large hand-labeled database of 1,155 conversations from the Switchboard corpus of spontaneous human-to-human telephone speech. We achieved good dialogue act labeling accuracy (65% based on errorful, automatically recognized words and prosody, and 71% based on word transcripts, compared to a chance baseline accuracy of 35% and human accuracy of 84%) and a small reduction in word recognition error.
    Comment: 35 pages, 5 figures.
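    The HMM view of the dialogue model can be made concrete with a tiny Viterbi decoder. All acts, transition probabilities, and cue-word likelihoods below are invented for illustration; the paper's actual system combines word n-grams, decision trees, and neural networks rather than this unigram stand-in.

```python
import math

# Hidden states are dialogue acts; observations are utterances (word lists).
# The act bigram plays the role of the statistical dialogue grammar.
ACTS = ["Statement", "Question", "Backchannel"]
BIGRAM = {  # P(next act | previous act); "<s>" starts the dialogue
    "<s>": {"Statement": 0.6, "Question": 0.3, "Backchannel": 0.1},
    "Statement": {"Statement": 0.4, "Question": 0.3, "Backchannel": 0.3},
    "Question": {"Statement": 0.7, "Question": 0.1, "Backchannel": 0.2},
    "Backchannel": {"Statement": 0.5, "Question": 0.3, "Backchannel": 0.2},
}
LEXICAL = {  # P(cue word | act), a toy stand-in for the lexical models
    "Statement": {"i": 0.3, "think": 0.3, "yeah": 0.1, "right": 0.1, "you": 0.1, "do": 0.1},
    "Question": {"do": 0.3, "you": 0.3, "think": 0.2, "i": 0.1, "yeah": 0.05, "right": 0.05},
    "Backchannel": {"yeah": 0.5, "right": 0.3, "i": 0.05, "think": 0.05, "do": 0.05, "you": 0.05},
}

def emit_logprob(act, words):
    return sum(math.log(LEXICAL[act].get(w, 1e-4)) for w in words)

def tag_dialogue(utterances):
    # Viterbi decoding of the most likely dialogue act sequence.
    paths = {act: (math.log(BIGRAM["<s>"][act]) + emit_logprob(act, utterances[0]), [act])
             for act in ACTS}
    for words in utterances[1:]:
        new_paths = {}
        for act in ACTS:
            prev, (score, path) = max(
                ((p, paths[p]) for p in ACTS),
                key=lambda kv: kv[1][0] + math.log(BIGRAM[kv[0]][act]))
            new_paths[act] = (score + math.log(BIGRAM[prev][act])
                              + emit_logprob(act, words), path + [act])
        paths = new_paths
    return max(paths.values())[1]
```

    Note how the act bigram disambiguates: "yeah" after a statement decodes as a backchannel, even though the same word in dialogue-initial position could plausibly open a statement.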

    Exact and heuristic approaches to detect failures in failed k-out-of-n systems

    Get PDF
    This paper considers a k-out-of-n system that has just failed. There is an associated cost of testing each component. In addition, we have a priori information regarding the probabilities that a certain set of components is the reason for the failure. The goal is to identify the subset of components that have caused the failure with the minimum expected cost. In this work, we provide exact and approximate policies that detect components’ states in a failed k-out-of-n system. We propose two integer programming (IP) formulations, two novel Markov decision process (MDP) based approaches, and two heuristic algorithms. We show the limitations of the exact algorithms and the effectiveness of the proposed heuristic approaches on a set of randomly generated test instances. Despite longer CPU times, the IP formulations are flexible in incorporating further restrictions, such as test precedence relationships, if need be. Numerical results illustrate that dynamic programming for the proposed MDP model is the most effective exact method, solving instances with up to 12 components within one hour. The heuristic algorithms’ performance is presented against the exact approaches for small- to medium-sized instances and against a lower bound for larger instances.
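    A simple baseline for this kind of inspection problem, my illustration rather than one of the paper's proposed methods, is to test components in increasing order of the cost-to-failure-probability ratio, so cheap and likely-failed components are checked first. The sketch below assumes independent component failures.

```python
def greedy_inspection(costs, fail_probs):
    # Order components by c_i / p_i: low cost and high failure
    # probability both push a component toward the front.
    return sorted(range(len(costs)), key=lambda i: costs[i] / fail_probs[i])

def expected_cost_to_first_failure(order, costs, fail_probs):
    # Expected testing cost until the first failed component is found:
    # we pay c_i only if every component tested before i was fault-free.
    total, p_all_ok = 0.0, 1.0
    for i in order:
        total += p_all_ok * costs[i]
        p_all_ok *= 1.0 - fail_probs[i]
    return total
```

    With equal test costs, the ratio rule simply tests the most failure-prone component first, which lowers the expected cost relative to the reverse order.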

    Neutral genomic microevolution of a recently emerged pathogen, Salmonella enterica serovar Agona

    Get PDF
    Salmonella enterica serovar Agona has caused multiple food-borne outbreaks of gastroenteritis since it was first isolated in 1952. We analyzed the genomes of 73 isolates from global sources, comparing five distinct outbreaks with sporadic infections as well as food contamination and the environment. Agona consists of three lineages with minimal mutational diversity: only 846 single nucleotide polymorphisms (SNPs) have accumulated in the non-repetitive, core genome since Agona evolved in 1932 and subsequently underwent a major population expansion in the 1960s. Homologous recombination with other serovars of S. enterica imported 42 recombinational tracts (360 kb) in 5/143 nodes within the genealogy, which resulted in 3,164 additional SNPs. In contrast to this paucity of genetic diversity, Agona is highly diverse according to pulsed-field gel electrophoresis (PFGE), which is used to assign isolates to outbreaks. PFGE diversity reflects a highly dynamic accessory genome associated with the gain or loss (indels) of 51 bacteriophages, 10 plasmids, and 6 integrative conjugational elements (ICE/IMEs), but did not correlate uniquely with outbreaks. Unlike the core genome, indels occurred repeatedly in independent nodes (homoplasies), resulting in inaccurate PFGE genealogies. The accessory genome contained only a few cargo genes relevant to infection, other than antibiotic resistance. Thus, most of the genetic diversity within this recently emerged pathogen reflects changes in the accessory genome, or is due to recombination, but these changes seemed to reflect neutral processes rather than Darwinian selection. Each outbreak was caused by an independent clade, without universal, outbreak-associated genomic features, and none of the variable genes in the pan-genome seemed to be associated with an ability to cause outbreaks.

    Reinforcement Learning: A Survey

    Full text link
    This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.
    Comment: See http://www.jair.org/ for any accompanying file
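    Two of the central issues the survey names, the exploration/exploitation trade-off and learning from delayed reinforcement, show up even in a minimal tabular Q-learning loop on a toy chain MDP. The example below is mine, not taken from the survey.

```python
import random

def q_learning_chain(n_states=5, episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    # States 0..n-1 on a line; action 0 moves left, action 1 moves right.
    # Reward 1 arrives only on entering the rightmost state, so credit must
    # propagate backwards through the Q-table (delayed reinforcement).
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy: explore with probability eps, or on ties.
            if rng.random() < eps or Q[s][0] == Q[s][1]:
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

    After training, the greedy policy moves right from every non-terminal state, even though only the final transition is ever rewarded directly.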

    Optimal Dynamic Control of Queueing Networks: Emergency Departments, the W Service Network, and Supply Chains under Disruptions.

    Full text link
    Many systems in both the service and manufacturing sectors can be modeled and analyzed as queueing networks. In such systems, control and design are often important issues that may significantly affect performance. This dissertation focuses on the development of innovative techniques for the design and control of such systems. Special attention is given to real-world applications in (a) the design and control of patient flow in hospital emergency departments, (b) the design and control of service/call centers, and (c) the design and control of supply chains under disruption risks. With respect to application (a), using hospital data, analytical models, and simulation analyses, we show how (1) better patient prioritization, (2) enhanced triage systems, and (3) improved patient flow designs allow emergency departments to significantly improve their performance with respect to both operational efficiency and patient safety. Regarding application (b), we give specific attention to a two-server, three-demand-class network in the shape of a "W" with random server disruption and repair times. Studying this network, we show how effective control and design strategies that efficiently make use of (partial) flexibility of servers can be implemented to achieve high performance and resilience to server disruptions. In addition to establishing stability properties of different known control mechanisms, a new heuristic policy, termed Largest Expected Workload Cost (LEWC), is proposed and its performance is extensively benchmarked against other widely used policies. Regarding application (c), we demonstrate how supply chains can boost their performance using better control and design strategies that efficiently take into account supply disruption risks.
Motivated by several real-world examples of disruptions, production flexibility, and supply contracts within supply chains, we model the informational and operational flexibility approaches to designing a resilient supply chain. By analyzing optimal ordering policies, sourcing strategies, and the optimal levels of back-up capacity reservation contracts, various disruption risk mitigation strategies are considered and compared, and new insights into the design of resilient supply chains are provided.
    PhD thesis, Industrial and Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/94002/1/soroush_1.pd