
    Control of Hidden Mode Hybrid Systems: Algorithm termination

    We consider the problem of safety control in Hidden Mode Hybrid Systems (HMHS) that arises in the development of a semi-autonomous cooperative active safety system for collision avoidance at an intersection. We utilize the approach of constructing a new hybrid automaton whose discrete state is an estimate of the HMHS mode. A dynamic feedback map can then be designed that guarantees safety on the basis of the current mode estimate and the concept of the capture set. In this work, we relax the conditions for the termination of the algorithm that computes the capture set by constructing an abstraction of the new hybrid automaton. We present a relation to compute the capture set for the abstraction and show that this capture set is equal to the one for the new hybrid automaton.
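    As an illustration of the fixed-point flavour of such capture-set computations, the sketch below iterates an uncontrollable-predecessor operator over a small, hypothetical finite abstraction (the states, inputs, and transition relation are invented for the example); the iteration terminates exactly when the set stops growing, which is the kind of termination condition studied in the paper.

```python
def capture_set(states, inputs, transitions, bad):
    """Fixed-point sketch: grow the set of states from which every available input
    may still lead into the current set, starting from the unsafe (bad) states.
    transitions[(q, u)] is the set of possible successors (nondeterministic)."""
    C = set(bad)
    while True:
        new = {q for q in states - C
               if all(transitions.get((q, u), set()) & C for u in inputs)}
        if not new:          # fixed point reached: the iteration terminates
            return C
        C |= new

# tiny hypothetical abstraction: from state 2 every input may lead into the bad state 3
states = {0, 1, 2, 3}
inputs = {"brake", "go"}
transitions = {(1, "brake"): {0}, (1, "go"): {2}, (2, "brake"): {3}, (2, "go"): {3}}
print(capture_set(states, inputs, transitions, bad={3}))   # -> {2, 3}
```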

    Safety control of hidden mode hybrid systems

    In this paper, we consider the safety control problem for hidden mode hybrid systems (HMHSs), which are a special class of hybrid automata in which the mode is not available for control. For these systems, safety control is a problem with imperfect state information. We tackle this problem by introducing the notion of nondeterministic discrete information state and by translating the problem to one with perfect state information. The perfect state information control problem is obtained by constructing a new hybrid automaton, whose discrete state is an estimate of the HMHS mode and is, as such, available for control. This problem is solved by computing the capture set and the least restrictive control map for the new hybrid automaton. Sufficient conditions for the termination of the algorithm that computes the capture set are provided. Finally, we show that the solved perfect state information control problem is equivalent to the original problem with imperfect state information under suitable assumptions. We illustrate the application of the proposed technique to a collision avoidance problem between an autonomous vehicle and a human-driven vehicle at a traffic intersection. National Science Foundation (U.S.) (NSF CAREER Award Number CNS-0642719).
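    The following is a minimal, hypothetical sketch of the two ingredients described above: a nondeterministic mode estimate that keeps only the modes consistent with the observations, and a least restrictive feedback map that allows an input whenever its worst-case successors stay outside the capture set associated with the current estimate. All names and data structures are illustrative assumptions, not the paper's implementation.

```python
def update_mode_estimate(estimate, observation, consistent):
    """Discrete information state: keep only the modes that could have produced the observation."""
    return {m for m in estimate if consistent(m, observation)}

def least_restrictive_inputs(state, estimate, inputs, successors, capture_set_of):
    """Allow an input if, for every mode still in the estimate, all of its possible
    successors stay outside the capture set associated with that estimate."""
    C = set().union(*(capture_set_of[m] for m in estimate))
    return {u for u in inputs
            if all(s not in C for m in estimate for s in successors(state, u, m))}

# tiny illustrative use: two hypothetical driver modes with different capture sets
capture_set_of = {"aggressive": {2, 3}, "cautious": {3}}
successors = lambda q, u, m: {q + 1} if u == "go" else {q}
estimate = {"aggressive", "cautious"}
print(least_restrictive_inputs(1, estimate, {"go", "stay"}, successors, capture_set_of))
# -> {'stay'}: "go" could reach state 2, which lies in the aggressive-mode capture set
```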

    Robust learning of probabilistic hybrid models

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2008. Includes bibliographical references (p. 125-127). Autonomy has advanced immensely in the fields of control, estimation, and diagnosis, as seen in spacecraft that navigate toward pinpoint landings or speech recognition enabled in hand-held devices. Arguably the most important step to controlling and improving a system is to understand that system. For this reason, accurate models are essential for continued advancements in the field of autonomy. Hybrid stochastic models, such as JMLS and LPHA, allow for representational accuracy across a general scope of problems. The goal of this thesis is to develop a robust method for learning accurate hybrid models automatically from data. A robust method should learn a set of model parameters, but should also avoid convergence to locally optimal solutions that reduce accuracy, and should be less sensitive to sparse or poor-quality observation data. These three goals are the focus of this thesis. We present the HML-LPHA algorithm, which uses approximate EM for learning maximum likelihood model parameters of LPHA given a sequence of control inputs {u}_0^T and outputs {y}_1^{T+1}. We implement the algorithm in a scenario that simulates a mechanical wheel failure of the MER Spirit rover and demonstrate empirical convergence of the algorithm. Local convergence is a limitation of many optimization approaches for multimodal functions, including EM. For model learning, this can mean a severe compromise in accuracy. We present the kMeans-EM algorithm, which iteratively learns the locations and shapes of explored local maxima of our model likelihood function and focuses the search away from these areas of the solution space toward undiscovered maxima that are promising a priori. We find that the kMeans-EM algorithm demonstrates iteratively increasing improvement over a Random Restarts method with respect to learning sets of model parameters with higher likelihood values and reducing Euclidean distance to the true set of model parameters. Lastly, the AHML-LPHA algorithm is an active hybrid model learning approach that augments sparse and/or very noisy training data with limited queries of the discrete state. We use an active approach for adding data to our training set, where we query at points that obtain the greatest reduction in uncertainty of the distribution over the hybrid state trajectories. Empirical evidence indicates that querying only 6% of the time reduces continuous state squared error and MAP mode estimate error of the discrete state. We also find that when the passive learner, HML-LPHA, diverges due to poor initialization or training data, the AHML-LPHA algorithm is capable of convergence; at times, just one query allows for convergence, demonstrating a vast improvement in learning capacity with a very limited amount of data augmentation. by Stephanie Gil. S.M.
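    To make the restart idea concrete, here is a small hedged sketch (a two-component Gaussian mixture in plain NumPy, not LPHA, and not the thesis's kMeans-EM algorithm) in which plain EM is run from several starting points and each new start is proposed away from the means already found, loosely mimicking the strategy of steering the search toward unexplored maxima.

```python
import numpy as np

def em_gmm_1d(x, mu_init, n_iter=200):
    """Plain EM for a two-component 1-D Gaussian mixture; returns parameters and log-likelihood."""
    mu = np.array(mu_init, dtype=float)
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        r = pi * dens
        r /= r.sum(axis=1, keepdims=True)                          # E-step: responsibilities
        nk = r.sum(axis=0)
        pi, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk    # M-step
        sigma = np.maximum(np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk), 1e-3)
    dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return mu, sigma, pi, np.log((pi * dens).sum(axis=1)).sum()

def restart_em(x, n_restarts=10, min_sep=1.0, seed=0):
    """Run EM from several starts, proposing each new start away from means found so far."""
    rng, found, best = np.random.default_rng(seed), [], None
    for _ in range(n_restarts):
        for _ in range(100):                                       # rejection-sample a "novel" start
            start = rng.uniform(x.min(), x.max(), size=2)
            prev = np.array(found).ravel() if found else np.empty(0)
            if prev.size == 0 or np.abs(start[:, None] - prev[None, :]).min() > min_sep:
                break
        mu, sigma, pi, ll = em_gmm_1d(x, start)
        found.append(mu)
        if best is None or ll > best[3]:
            best = (mu, sigma, pi, ll)
    return best

# synthetic demo data: two well-separated components
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])
mu, sigma, pi, ll = restart_em(x)
print("means:", np.round(mu, 3), "log-likelihood:", round(ll, 1))
```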

    Situational Awareness Enhancement for Connected and Automated Vehicle Systems

    Recent developments in the area of Connected and Automated Vehicles (CAVs) have boosted interest in Intelligent Transportation Systems (ITSs). While ITSs are intended to resolve and mitigate serious traffic issues such as passenger and pedestrian fatalities, accidents, and traffic congestion, these goals are achievable only by vehicles that are fully aware of their situation and surroundings in real time. Therefore, connected and automated vehicle systems rely heavily on communication technologies to create a real-time map of their surrounding environment and extend their range of situational awareness. In this dissertation, we propose novel approaches to enhance situational awareness, its applications, and the effective sharing of information among vehicles. The communication technology for CAVs is known as vehicle-to-everything (V2X) communication, in which vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) have been targeted for the first round of deployment, based on dedicated short-range communication (DSRC) devices for vehicles and roadside transportation infrastructure. Wireless communication among these entities creates self-organizing networks known as Vehicular Ad-hoc Networks (VANETs). Due to the mobile, rapidly changing, and intrinsically error-prone nature of VANETs, traditional network architectures are generally unsatisfactory for addressing VANETs' fundamental performance requirements. Therefore, we first investigate imperfections of the vehicular communication channel and propose a new modeling scheme for the large-scale and small-scale components of the communication channel in dense vehicular networks. Subsequently, we introduce an innovative method for joint modeling of the situational awareness and networking components of CAVs in a single framework. Based on these two models, we propose a novel network-aware broadcast protocol for fast broadcasting of information over multiple hops to extend the range of situational awareness. Afterward, motivated by the most common and injury-prone pedestrian crash scenarios, we extend our work by proposing an end-to-end Vehicle-to-Pedestrian (V2P) framework to provide situational awareness and hazard detection for vulnerable road users. Finally, as humans are the most spontaneous and influential entities in transportation systems, we design a learning-based driver behavior model and integrate it into our situational awareness component. Consequently, higher accuracy of situational awareness and overall system performance are achieved through the exchange of more useful information.
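    As background on how multi-hop broadcast can be made "network aware" at all, the toy sketch below shows one standard ingredient of VANET dissemination (distance-based rebroadcast timing, where the farthest receiver forwards first); this is a generic textbook mechanism, not the protocol proposed in this dissertation, and all positions and parameter values are made up.

```python
import math

def rebroadcast_delay(sender_pos, receiver_pos, comm_range=300.0, max_wait=0.05):
    """Map sender-receiver distance to a wait time (seconds): receivers farther from the
    sender wait less, so the farthest one rebroadcasts first and the rest, overhearing
    the duplicate, can suppress their own transmission."""
    d = min(math.dist(sender_pos, receiver_pos), comm_range)
    return max_wait * (1.0 - d / comm_range)

# example: three receivers at increasing distance from a sender at the origin
for pos in [(50, 0), (150, 0), (280, 0)]:
    print(pos, "waits", round(rebroadcast_delay((0, 0), pos), 4), "s")
```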

    Computer-Aided Clinical Trials For Medical Devices

    Life-critical medical devices require robust safety and efficacy to treat patient populations with potentially large patient heterogeneity. Today, the de facto standard for evaluating medical devices is the randomized controlled trial. However, even after years of device development, many clinical trials fail. For example, in the Rhythm ID Goes Head to Head Trial (RIGHT), the risk of inappropriate therapy by implantable cardioverter defibrillators (ICDs) actually increased relative to control treatments. With recent advances in physiological modeling, and with devices incorporating more complex software components, population-level device outcomes can be obtained with scalable simulations. Consequently, there is a need for data-driven approaches that provide early insight prior to the trial, lower the cost of trials using patient and device models, and quantify the robustness of the outcome. This work presents a clinical trial modeling and statistical framework that utilizes simulation to improve the evaluation of medical device software, such as the algorithms in ICDs. First, a method for generating virtual cohorts using a physiological simulator is introduced. Next, we present our framework, which combines virtual cohorts with real data to evaluate efficacy and allows the uncertainty due to the use of simulation to be quantified. Results predicting the outcome of RIGHT and improving statistical power while reducing the sample size are shown. Finally, we improve device performance with an approach based on Bayesian optimization. Device performance can degrade when deployed to a general population despite success in clinical trials. Our approach improves the performance of the device, with outcomes aligned with the MADIT-RIT clinical trial. This work provides a rigorous approach toward improving the development and evaluation of medical treatments.
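    As a loose illustration of the general idea of combining virtual cohorts with real data while quantifying the extra uncertainty (a generic power-prior-style beta-binomial construction with invented counts, not the specific framework developed in this work), simulated outcomes can enter the posterior for an event rate through a discount factor:

```python
import numpy as np

def pooled_posterior(real_events, real_n, sim_events, sim_n, a0=0.3, prior_a=1.0, prior_b=1.0):
    """Beta posterior for an event rate (e.g. inappropriate ICD therapy), with the
    simulated (virtual-cohort) data down-weighted by the discount factor a0 in [0, 1]."""
    a = prior_a + real_events + a0 * sim_events
    b = prior_b + (real_n - real_events) + a0 * (sim_n - sim_events)
    return a, b

# invented counts: a small real trial arm pooled with a large simulated cohort
a, b = pooled_posterior(real_events=12, real_n=200, sim_events=310, sim_n=5000)
samples = np.random.default_rng(0).beta(a, b, size=100_000)
print("posterior mean rate:", round(samples.mean(), 4))
print("95% credible interval:", np.round(np.quantile(samples, [0.025, 0.975]), 4))
```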

    Soft computing applied to optimization, computer vision and medicine

    Artificial intelligence has permeated almost every area of life in modern society, and its significance continues to grow. As a result, in recent years, Soft Computing has emerged as a powerful set of methodologies that propose innovative and robust solutions to a variety of complex problems. Soft Computing methods, because of their broad range of application, have the potential to significantly improve human living conditions. The motivation for the present research emerged from this background and possibility. This research aims to accomplish two main objectives: on the one hand, it endeavors to bridge the gap between Soft Computing techniques and their application to intricate problems; on the other hand, it explores the potential benefits of Soft Computing methodologies as novel effective tools for such problems. This thesis synthesizes the results of extensive research on Soft Computing methods and their applications to optimization, Computer Vision, and medicine. This work is composed of several individual projects, which employ classical and new optimization algorithms. The manuscript presented here intends to provide an overview of the different aspects of Soft Computing methods in order to enable the reader to reach a global understanding of the field. Therefore, this document is assembled as a monograph that summarizes the outcomes of these projects across 12 chapters. The chapters are structured so that they can be read independently. The key focus of this work is the application and design of Soft Computing approaches for solving problems in the following areas: Block Matching, Pattern Detection, Thresholding, Corner Detection, Template Matching, Circle Detection, Color Segmentation, Leukocyte Detection, and Breast Thermogram Analysis. One of the outcomes presented in this thesis is the development of two evolutionary approaches for global optimization. These were tested on complex benchmark datasets and showed promising results, thus opening the debate for future applications. Moreover, the applications to Computer Vision and medicine presented in this work have highlighted the utility of different Soft Computing methodologies in solving problems in these areas. A milestone in this regard is the translation of Computer Vision and medical issues into optimization problems. Additionally, this work also strives to provide tools for combating public health issues by extending the concepts to automated detection and diagnosis aids for pathologies such as leukemia and breast cancer. The application of Soft Computing techniques in this field has attracted great interest worldwide due to the exponential growth of these diseases. Lastly, the use of Fuzzy Logic, Artificial Neural Networks, and Expert Systems in many everyday domestic appliances, such as washing machines, cookers, and refrigerators, is now a reality. Many other industrial and commercial applications of Soft Computing have also been integrated into everyday use, and this is expected to increase within the next decade. Therefore, the research conducted here contributes an important piece to the expansion of these developments. The applications presented in this work are intended to serve as technological tools that can then be used in the development of new devices.
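    For readers unfamiliar with the evolutionary style of global optimization discussed throughout the thesis, the snippet below runs an off-the-shelf differential evolution (SciPy's implementation, not the algorithms developed in this work) on the multimodal Rastrigin benchmark, the kind of problem on which such methods are typically compared.

```python
import numpy as np
from scipy.optimize import differential_evolution

def rastrigin(x):
    """Classical multimodal benchmark: many local minima, global minimum of 0 at the origin."""
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 5                 # five-dimensional search space
result = differential_evolution(rastrigin, bounds, seed=0, tol=1e-8)
print("best value:", round(result.fun, 6), "at", np.round(result.x, 4))
```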

    Parallel Factor Analysis Enables Quantification and Identification of Highly Convolved Data-Independent-Acquired Protein Spectra

    The latest high-throughput mass spectrometry-based technologies can record virtually all molecules from complex biological samples, providing a holistic picture of proteomes in cells and tissues and enabling an evaluation of the overall status of a person's health. However, current best practices are still only scratching the surface of the wealth of information available in these massive proteome datasets, and efficient novel data-driven strategies are needed. Powered by advances in GPU hardware and open-source machine-learning frameworks, we developed a data-driven approach, CANDIA, which disassembles highly complex proteomics data into the elementary molecular signatures of the proteins in biological samples. Our work provides a performant and adaptable solution that complements existing mass spectrometry techniques. As the central mathematical methods are generic, other scientific fields dealing with highly convolved datasets will also benefit from this work.
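    Parallel factor analysis itself is the CP decomposition of a three-way data array; the short NumPy sketch below implements a bare-bones alternating-least-squares version on a synthetic rank-3 array. It is only a minimal illustration of the decomposition, not CANDIA's GPU pipeline, and the array dimensions and "analyte" factors are invented.

```python
import numpy as np

def khatri_rao(X, Y):
    """Column-wise Kronecker product of X (m x r) and Y (n x r) -> (m*n x r)."""
    return np.einsum('ir,jr->ijr', X, Y).reshape(-1, X.shape[1])

def cp_als(T, rank, n_iter=200, seed=0):
    """CP/PARAFAC of a 3-way array by alternating least squares: T[i,j,k] ~ sum_r A[i,r]B[j,r]C[k,r]."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    T0 = T.reshape(T.shape[0], -1)                        # mode-0 unfolding
    T1 = np.moveaxis(T, 1, 0).reshape(T.shape[1], -1)     # mode-1 unfolding
    T2 = np.moveaxis(T, 2, 0).reshape(T.shape[2], -1)     # mode-2 unfolding
    for _ in range(n_iter):
        A = np.linalg.lstsq(khatri_rao(B, C), T0.T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C), T1.T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), T2.T, rcond=None)[0].T
    return A, B, C

# synthetic check: three "analyte" signatures mixed across samples and spectra
rng = np.random.default_rng(1)
A0, B0, C0 = (np.abs(rng.standard_normal((s, 3))) for s in (20, 15, 30))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=3)
err = np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(T)
print("relative reconstruction error:", err)
```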

    Stochastic spatial modelling of DNA methylation patterns and moment-based parameter estimation

    In the first part of this thesis, we introduce and analyze spatial stochastic models for DNA methylation, an epigenetic mark with an important role in development. The underlying mechanisms controlling methylation are only partly understood, and several mechanistic models of the enzyme activities responsible for methylation have been proposed. Here, we extend existing hidden Markov models (HMMs) for DNA methylation by describing the occurrence of spatial methylation patterns with stochastic automata networks. We perform a numerical analysis of the HMMs applied to (non-)hairpin bisulfite sequencing KO data and accurately predict the wild-type data from these results. We find evidence that the activities of Dnmt3a/b responsible for de novo methylation depend on the left but not on the right CpG neighbors. The second part focuses on parameter estimation in chemical reaction networks (CRNs). We propose a generalized method of moments (GMM) approach for inferring the parameters of CRNs, based on a sophisticated matching of the statistical moments of the stochastic model to the sample moments of population snapshot data. The proposed parameter estimation method exploits recently developed moment-based approximations and provides estimators with desirable statistical properties when many samples are available. The GMM provides accurate and fast estimates of the unknown parameters of CRNs; the accuracy increases and the variance decreases when higher-order moments are considered.
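    The moment-matching idea can be sketched in a few lines: below, a Gamma distribution stands in for the (approximated) moments of a CRN model, and its two parameters are recovered by minimizing a weighted distance between three model moments and the corresponding sample moments of synthetic snapshot data. The model, weighting matrix, and data are invented for illustration and are not those used in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

def model_moments(k, theta):
    """First three raw moments of a Gamma(k, theta) model (a toy stand-in for the
    moment equations of a chemical reaction network)."""
    return np.array([k * theta, k * (k + 1) * theta**2, k * (k + 1) * (k + 2) * theta**3])

def gmm_objective(params, sample_m, W):
    g = model_moments(*params) - sample_m          # moment conditions
    return g @ W @ g                               # weighted quadratic form

# synthetic population snapshot data with true parameters (k, theta) = (2, 3)
x = np.random.default_rng(0).gamma(shape=2.0, scale=3.0, size=5000)
sample_m = np.array([np.mean(x), np.mean(x**2), np.mean(x**3)])

# simple scale-normalizing weights; a two-step GMM would instead use the inverse
# covariance of the moment conditions to obtain the statistically efficient estimator
W = np.diag(1.0 / sample_m**2)
res = minimize(gmm_objective, x0=np.array([1.0, 1.0]), args=(sample_m, W), method='Nelder-Mead')
print("estimated (k, theta):", np.round(res.x, 3))
```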