
    A Bayesian Abduction Model For Sensemaking

    This research develops a Bayesian Abduction Model for Sensemaking Support (BAMSS) for information fusion in sensemaking tasks. Two methods are investigated. The first is classical Bayesian information fusion with belief updating (using a Bayesian clustering algorithm) and abductive inference. The second, BAMSS-GA, uses a Genetic Algorithm to search for the k-best most probable explanations (MPEs) in the network. Using various data from the recent Iraq and Afghanistan conflicts, experimental simulations were conducted to compare the methods using posterior probability values, which can provide insightful information for prospective sensemaking. The inference results demonstrate the utility of BAMSS as a computational model for sensemaking. The major results are: (1) the inference results from BAMSS-GA gave average posterior probabilities that were 10^3 better than those produced by BAMSS; (2) BAMSS-GA gave more consistent posterior probabilities, as measured by their variances; and (3) BAMSS was able to give a single MPE, while BAMSS-GA was able to identify the optimal values for the k MPEs. In the experiments, out of 20 MPEs generated by BAMSS, BAMSS-GA identified 7 plausible network solutions, reducing the amount of information needed for sensemaking and cutting the inference search space by 7/20 (35%). The results reveal that a GA can be used successfully in Bayesian information fusion as a search technique to identify the significant posterior probabilities useful for sensemaking. BAMSS-GA was also more robust in overcoming the bounded-search problem that constrains Bayesian clustering and the inference state space in BAMSS.
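
    The abstract includes no code; the following is a minimal, hypothetical sketch of the core idea behind BAMSS-GA: an evolutionary search over variable assignments of a Bayesian network, with the unnormalised joint probability as fitness, returning the k best distinct explanations found. The toy network, its CPTs, the evidence, and the mutation-only search settings are illustrative assumptions, not the authors' model.

        # Minimal sketch: evolutionary search for the k most probable explanations
        # (MPEs) in a tiny hand-coded Bayesian network A -> B, A -> C.
        # All CPT numbers and search settings below are made up for illustration.
        import random

        P_A = {0: 0.7, 1: 0.3}
        P_B_GIVEN_A = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.4, 1: 0.6}}
        P_C_GIVEN_A = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}
        EVIDENCE = {"C": 1}            # observed; the search is over A and B
        FREE_VARS = ("A", "B")

        def joint(assignment):
            """Unnormalised joint probability of a complete assignment (the fitness)."""
            a, b, c = assignment["A"], assignment["B"], assignment["C"]
            return P_A[a] * P_B_GIVEN_A[a][b] * P_C_GIVEN_A[a][c]

        def random_individual():
            ind = {v: random.randint(0, 1) for v in FREE_VARS}
            ind.update(EVIDENCE)
            return ind

        def mutate(ind, rate=0.2):
            child = dict(ind)
            for v in FREE_VARS:        # evidence variables are never mutated
                if random.random() < rate:
                    child[v] = 1 - child[v]
            return child

        def k_best_mpe(k=2, pop_size=20, generations=30):
            pop = [random_individual() for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=joint, reverse=True)          # rank by fitness
                survivors = pop[: pop_size // 2]           # truncation selection
                pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
            best, seen = [], set()
            for ind in sorted(pop, key=joint, reverse=True):
                key = tuple(ind[v] for v in FREE_VARS)
                if key not in seen:                        # keep distinct assignments only
                    seen.add(key)
                    best.append((ind, joint(ind)))
                if len(best) == k:
                    break
            return best

        if __name__ == "__main__":
            for assignment, prob in k_best_mpe():
                print(assignment, round(prob, 4))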

    Deep Learning in Lane Marking Detection: A Survey

    Lane marking detection is a fundamental and crucial step in intelligent driving systems. It can not only provide relevant road condition information to prevent lane departure but also assist vehicle positioning and the detection of the vehicle ahead. However, lane marking detection faces many challenges, including extreme lighting, missing lane markings, and obstacle obstructions. Recently, deep learning-based algorithms have drawn much attention in the intelligent driving community because of their excellent performance. In this paper, we review deep learning methods for lane marking detection, focusing on their network structures and optimization objectives, the two key determinants of their success. Besides, we summarize existing lane-related datasets, evaluation criteria, and common data processing techniques. We also compare the detection performance and running time of various methods, and conclude with some current challenges and future trends for deep learning-based lane marking detection algorithms.
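
    As a rough illustration of the network-structure / optimization-objective pairing the survey focuses on, the sketch below trains a tiny encoder-decoder to predict a per-pixel lane mask with a binary cross-entropy objective. The architecture, image sizes, and random stand-in data are placeholder assumptions, not any specific method reviewed in the survey.

        # Minimal sketch: segmentation-style lane marking detection with a
        # per-pixel binary cross-entropy objective (toy network, random data).
        import torch
        import torch.nn as nn

        class TinyLaneNet(nn.Module):
            """Small encoder-decoder that outputs one lane-probability logit per pixel."""
            def __init__(self):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                )
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
                )

            def forward(self, x):
                return self.decoder(self.encoder(x))

        model = TinyLaneNet()
        criterion = nn.BCEWithLogitsLoss()                 # per-pixel lane / not-lane objective
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

        images = torch.rand(4, 3, 64, 64)                  # stand-in for camera frames
        masks = (torch.rand(4, 1, 64, 64) > 0.9).float()   # stand-in for lane annotations

        for step in range(5):                              # a few illustrative training steps
            logits = model(images)
            loss = criterion(logits, masks)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            print(f"step {step}: loss = {loss.item():.4f}")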

    Logical Abstractions for Noisy Variational Quantum Algorithm Simulation

    Due to the unreliability and limited capacity of existing quantum computer prototypes, quantum circuit simulation continues to be a vital tool for validating next generation quantum computers and for studying variational quantum algorithms, which are among the leading candidates for useful quantum computation. Existing quantum circuit simulators do not address the common traits of variational algorithms, namely: 1) their ability to work with noisy qubits and operations, 2) their repeated execution of the same circuits but with different parameters, and 3) the fact that they sample from circuit final wavefunctions to drive a classical optimization routine. We present a quantum circuit simulation toolchain based on logical abstractions targeted for simulating variational algorithms. Our proposed toolchain encodes quantum amplitudes and noise probabilities in a probabilistic graphical model, and it compiles the circuits to logical formulas that support efficient repeated simulation of and sampling from quantum circuits for different parameters. Compared to state-of-the-art state vector and density matrix quantum circuit simulators, our simulation approach offers greater performance when sampling from noisy circuits with at least eight to 20 qubits and with around 12 operations on each qubit, making the approach ideal for simulating near-term variational quantum algorithms. And for simulating noise-free shallow quantum circuits with 32 qubits, our simulation approach offers a 66× reduction in sampling cost versus quantum circuit simulation techniques based on tensor network contraction. Comment: ASPLOS '21, April 19-23, 2021, Virtual, US
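
    The sketch below is not the paper's graphical-model/logical-formula toolchain; it is a minimal Monte Carlo illustration of the three variational-algorithm traits the abstract lists: noisy operations, repeated execution of the same circuit with different parameters, and sampling the output to drive a classical optimization loop. The single-qubit circuit, noise rate, shot count, and parameter sweep are all illustrative assumptions.

        # Minimal sketch: a single-qubit parameterised circuit with bit-flip noise,
        # sampled repeatedly for different parameter values to minimise <Z>.
        import numpy as np

        rng = np.random.default_rng(0)

        def run_circuit(theta, shots=200, p_flip=0.05):
            """Apply RY(theta) to |0>, sample Z-basis measurements, add bit-flip noise."""
            p1 = np.sin(theta / 2.0) ** 2            # ideal probability of measuring 1
            outcomes = rng.random(shots) < p1        # projective measurement samples
            flips = rng.random(shots) < p_flip       # flip each shot with prob p_flip
            return np.logical_xor(outcomes, flips).astype(int)

        def expectation_z(samples):
            """Estimate <Z> = P(0) - P(1) from measurement samples."""
            return 1.0 - 2.0 * samples.mean()

        # Crude parameter sweep standing in for the classical optimisation loop.
        thetas = np.linspace(0.0, np.pi, 21)
        estimates = [expectation_z(run_circuit(t)) for t in thetas]
        best = thetas[int(np.argmin(estimates))]
        print(f"theta minimising <Z>: {best:.3f} (expect about pi = {np.pi:.3f})")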

    A hybrid algorithm for Bayesian network structure learning with application to multi-label learning

    We present a novel hybrid algorithm for Bayesian network structure learning, called H2PC. It first reconstructs the skeleton of a Bayesian network and then performs a Bayesian-scoring greedy hill-climbing search to orient the edges. The algorithm is based on divide-and-conquer constraint-based subroutines to learn the local structure around a target variable. We conduct two series of experimental comparisons of H2PC against Max-Min Hill-Climbing (MMHC), which is currently the most powerful state-of-the-art algorithm for Bayesian network structure learning. First, we use eight well-known Bayesian network benchmarks with various data sizes to assess the quality of the learned structure returned by the algorithms. Our extensive experiments show that H2PC outperforms MMHC in terms of goodness of fit to new data and quality of the network structure with respect to the true dependence structure of the data. Second, we investigate H2PC's ability to solve the multi-label learning problem. We provide theoretical results to characterize and identify graphically the so-called minimal label powersets that appear as irreducible factors in the joint distribution under the faithfulness condition. The multi-label learning problem is then decomposed into a series of multi-class classification problems, where each multi-class variable encodes a label powerset. H2PC is shown to compare favorably to MMHC in terms of global classification accuracy over ten multi-label data sets covering different application domains. Overall, our experiments support the conclusion that local structural learning with H2PC in the form of local neighborhood induction is a theoretically well-motivated and empirically effective learning framework that is well suited to multi-label learning. The source code (in R) of H2PC as well as all data sets used for the empirical tests are publicly available. Comment: arXiv admin note: text overlap with arXiv:1101.5184 by other authors
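
    The following is not the H2PC implementation (which, per the abstract, is available in R); it is a minimal Python sketch of the score-based hill-climbing phase alone: greedily adding single directed edges while a BIC-style score improves and the graph stays acyclic. The binary data, the scoring details, and the restriction to edge additions are simplifying assumptions; H2PC additionally constrains this search with its constraint-based skeleton.

        # Minimal sketch: score-based hill-climbing over DAG edges with a BIC-style
        # score (binary variables, edge additions only). Skeleton learning omitted.
        import math
        import random
        from collections import Counter

        def bic_family(data, child, parents):
            """BIC contribution of one node given its parent set."""
            n = len(data)
            parents = sorted(parents)
            counts, parent_counts = Counter(), Counter()
            for row in data:
                pa = tuple(row[p] for p in parents)
                counts[(pa, row[child])] += 1
                parent_counts[pa] += 1
            loglik = sum(c * math.log(c / parent_counts[pa]) for (pa, _), c in counts.items())
            n_params = 2 ** len(parents)     # one free parameter per parent configuration
            return loglik - 0.5 * math.log(n) * n_params

        def reachable(src, dst, edges):
            """True if dst can be reached from src along directed edges."""
            stack, seen = [src], set()
            while stack:
                v = stack.pop()
                if v == dst:
                    return True
                if v not in seen:
                    seen.add(v)
                    stack.extend(b for a, b in edges if a == v)
            return False

        def hill_climb(data, variables):
            edges, parents = set(), {v: set() for v in variables}
            while True:
                best_gain, best_edge = 0.0, None
                for a in variables:
                    for b in variables:
                        if a == b or (a, b) in edges or reachable(b, a, edges):
                            continue         # skip self-loops, duplicates, and cycles
                        gain = (bic_family(data, b, parents[b] | {a})
                                - bic_family(data, b, parents[b]))
                        if gain > best_gain:
                            best_gain, best_edge = gain, (a, b)
                if best_edge is None:        # no single edge addition improves the score
                    return edges
                a, b = best_edge
                edges.add((a, b))
                parents[b].add(a)

        if __name__ == "__main__":
            random.seed(0)
            # Synthetic data in which X drives Y; the learned graph should link them.
            data = []
            for _ in range(500):
                x = random.randint(0, 1)
                y = x if random.random() < 0.9 else 1 - x
                data.append({"X": x, "Y": y, "Z": random.randint(0, 1)})
            print(hill_climb(data, ["X", "Y", "Z"]))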

    An overview of the main machine learning models - from theory to algorithms

    Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics. In the context of solving highly complex problems, Artificial Intelligence has shown exponential growth over the past years, allowing Machine Learning to augment and sometimes outperform human learning. From driverless cars to automatic recommendations on Netflix, we are surrounded by AI, even if we do not notice it. Furthermore, companies have recently adopted new frameworks in their routines, mainly composed of algorithms able to solve complex problems in a short period of time. The growth of AI technologies has been stunning, and it is only possible because a sub-field of AI called Machine Learning is growing even faster. On a small scale, Machine Learning may be seen as a simple system able to find patterns in data and learn from them. However, it is precisely that learning process that, on a large scale, will allow machines to mimic human behavior and perform tasks that would otherwise require human intelligence. To give an idea of the scale, according to Forbes the global Machine Learning market was valued at $1.7B in 2017 and is expected to reach almost $21B in 2024. Naturally, Machine Learning has become an attractive and profitable scientific area that demands continuous learning, since there is always something new being discovered. Over the last decades, a huge number of algorithms have been proposed by the research community, which can cause confusion about how and when to use each of them. That is exactly what this thesis addresses: over the next chapters we review the main Machine Learning models and their respective advantages and disadvantages.

    Advanced Fault Diagnosis and Health Monitoring Techniques for Complex Engineering Systems

    Over the last few decades, the field of fault diagnostics and structural health management has been experiencing rapid development. The reliability, availability, and safety of engineering systems can be significantly improved by implementing multifaceted strategies of in situ diagnostics and prognostics. With the development of intelligent algorithms, smart sensors, and advanced data collection and modeling techniques, this challenging research area has been receiving ever-increasing attention in both fundamental research and engineering applications. This has been strongly supported by extensive applications ranging from the aerospace, automotive, transport, manufacturing, and processing industries to the defense and infrastructure sectors.

    Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future

    Clark offers a powerful description of the brain as a prediction machine, one that makes progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level for testing and constraining the conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).
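
    To make the triad of predictions, prediction errors, and internal models concrete, the sketch below runs a deliberately tiny, hypothetical prediction-error update loop: a higher level sends a prediction down, the mismatch with the sensory input flows back up, and the estimate is nudged by that error. The quantities and learning rate are arbitrary illustrations, not a model from the target article.

        # Minimal sketch of a prediction-error update: top-down prediction,
        # bottom-up error, and a belief update driven by that error.
        sensory_input = 0.8          # what actually arrives from the world
        estimate = 0.2               # the higher level's current belief
        learning_rate = 0.3

        for step in range(10):
            prediction = estimate                    # top-down (feedback) prediction
            error = sensory_input - prediction       # bottom-up prediction error
            estimate += learning_rate * error        # belief nudged toward the input
            print(f"step {step}: prediction={prediction:.3f} error={error:.3f}")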

    Data Science: Measuring Uncertainties

    With the increase in data processing and storage capacity, a large amount of data is available. Data without analysis does not have much value. Thus, the demand for data analysis is increasing daily, and the consequence is the appearance of a large number of jobs and published articles. Data science has emerged as a multidisciplinary field to support data-driven activities, integrating and developing ideas, methods, and processes to extract information from data. This includes methods built from different knowledge areas: Statistics, Computer Science, Mathematics, Physics, Information Science, and Engineering. This mixture of areas has given rise to what we call Data Science. New solutions to new problems are appearing rapidly as large volumes of data are generated. Current and future challenges require greater care in creating new solutions that respect the rationale of each type of problem. Labels such as Big Data, Data Science, Machine Learning, Statistical Learning, and Artificial Intelligence demand more sophistication in their foundations and in how they are applied. This point highlights the importance of building the foundations of Data Science. This book is dedicated to solutions and discussions of measuring uncertainties in data analysis problems.
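
    As one concrete, hypothetical illustration of measuring uncertainty in a data analysis, the sketch below attaches a percentile bootstrap confidence interval to a sample mean. The synthetic data, the 95% level, and the choice of the mean as the statistic are arbitrary; the book itself covers far broader ground.

        # Minimal sketch: quantifying the uncertainty of a sample mean with a
        # percentile bootstrap confidence interval (illustrative data).
        import random
        import statistics

        random.seed(1)
        data = [random.gauss(10.0, 2.0) for _ in range(100)]   # stand-in measurements

        def bootstrap_ci(values, stat=statistics.mean, n_boot=2000, level=0.95):
            """Percentile bootstrap confidence interval for a statistic."""
            stats = sorted(
                stat(random.choices(values, k=len(values)))    # resample with replacement
                for _ in range(n_boot)
            )
            lo = stats[int((1 - level) / 2 * n_boot)]
            hi = stats[int((1 + level) / 2 * n_boot) - 1]
            return lo, hi

        low, high = bootstrap_ci(data)
        print(f"mean = {statistics.mean(data):.2f}, 95% CI = ({low:.2f}, {high:.2f})")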