
    Improving A*OMP: Theoretical and Empirical Analyses With a Novel Dynamic Cost Model

    Best-first search has recently been utilized for compressed sensing (CS) by the A* orthogonal matching pursuit (A*OMP) algorithm. In this work, we concentrate on theoretical and empirical analyses of A*OMP. We present a restricted isometry property (RIP) based general condition for exact recovery of sparse signals via A*OMP. In addition, we develop online guarantees which promise improved recovery performance with the residue-based termination instead of the sparsity-based one. We demonstrate the recovery capabilities of A*OMP with extensive recovery simulations using the adaptive-multiplicative (AMul) cost model, which effectively compensates for the path length differences in the search tree. The presented results, involving phase transitions for different nonzero element distributions as well as recovery rates and average error, reveal not only the superior recovery accuracy of A*OMP, but also the improvements with the residue-based termination and the AMul cost model. Comparison of the run times indicates the speed-up achieved by the AMul cost model. We also demonstrate a hybrid of OMP and A*OMP to accelerate the search further. Finally, we run A*OMP on a sparse image to illustrate its recovery performance for more realistic coefficient distributions.
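
    For orientation, the sketch below shows plain orthogonal matching pursuit with a residue-based stopping rule, which is the building block that A*OMP extends with a best-first search over multiple candidate supports under a cost model such as AMul. The function name and tolerance are illustrative; this is not the authors' A*OMP implementation.

```python
import numpy as np

def omp_residue(A, y, eps=1e-6, max_iter=None):
    """Orthogonal matching pursuit with a residue-based stopping rule.

    A: (m, n) measurement matrix, y: (m,) observations.
    Stops when the residual norm drops below eps (residue-based termination)
    rather than after a preset number of nonzeros (sparsity-based termination).
    """
    m, n = A.shape
    max_iter = max_iter or m
    support, residual, x = [], y.copy(), np.zeros(n)
    for _ in range(max_iter):
        # Pick the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares fit on the current support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < eps:
            break
    x[support] = coef
    return x
```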

    A sparse reconstruction framework for Fourier-based plane wave imaging

    Ultrafast imaging based on plane-wave (PW) insonification is an active area of research due to its capability of reaching high frame rates. Among PW imaging methods, Fourier-based approaches have proven competitive with traditional delay-and-sum methods. Motivated by the success of compressed sensing techniques in other Fourier imaging modalities, such as magnetic resonance imaging, we propose a new sparse regularization framework to reconstruct high-quality ultrasound (US) images. The framework takes advantage of both the ability to formulate the imaging inverse problem in the Fourier domain and the sparsity of US images in a sparsifying domain. We show, by means of simulations and in vitro and in vivo data, that the proposed framework significantly reduces image artifacts, i.e., measurement noise and sidelobes, compared with classical methods, leading to an increase in image quality.
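
    The abstract couples a Fourier-domain measurement model with a sparsity prior. A generic way to solve such an inverse problem (not the paper's specific algorithm) is iterative soft-thresholding on an l1-regularized least-squares objective; the sketch below assumes an undersampled 2-D FFT as the forward operator and sparsity directly in the image domain, both of which are illustrative simplifications.

```python
import numpy as np

def ista_fourier(y, mask, lam=0.01, step=1.0, n_iter=100):
    """ISTA for min_x 0.5*||M F x - y||^2 + lam*||x||_1.

    y:    observed Fourier samples (zero outside the sampling mask)
    mask: boolean sampling mask in the Fourier domain
    Sparsity is enforced directly on x here; a wavelet transform could be
    substituted as the sparsifying domain.
    """
    x = np.zeros(y.shape)
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    for _ in range(n_iter):
        # Gradient of the data-fidelity term (orthonormal FFT => adjoint = inverse FFT).
        grad = np.fft.ifft2(mask * (mask * np.fft.fft2(x, norm="ortho") - y), norm="ortho")
        x = soft((x - step * grad).real, step * lam)
    return x
```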

    Task-Driven Dictionary Learning

    Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations. Comment: final draft, post-refereeing.
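
    As a point of reference, the sketch below shows the unsupervised building block that task-driven dictionary learning starts from: alternating between l1-regularized sparse coding and a dictionary update for min_{D,X} 0.5*||Y - DX||_F^2 + lam*||X||_1. The supervised formulation in the paper replaces the reconstruction loss with a task loss and differentiates through the sparse-coding step; that part is not reproduced here, and all parameter values are placeholders.

```python
import numpy as np

def dict_learn(Y, n_atoms, lam=0.1, n_iter=50, code_iter=30, seed=0):
    """Minimal alternating minimization for unsupervised dictionary learning:
    min_{D, X} 0.5*||Y - D X||_F^2 + lam*||X||_1, columns of D unit-norm."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    X = np.zeros((n_atoms, Y.shape[1]))
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    for _ in range(n_iter):
        # Sparse coding step (ISTA) with the dictionary fixed.
        L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
        for _ in range(code_iter):
            X = soft(X - (D.T @ (D @ X - Y)) / L, lam / L)
        # Dictionary update with the codes fixed (least squares + renormalization).
        D = Y @ np.linalg.pinv(X)
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D, X
```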

    Multiobjective optimization of the production process for ground granulated blast furnace slags

    The production process of ground granulated blast furnace slag (GGBS) aims to produce products of the best grade and the highest yields. However, grade and yield are two competing objectives that cannot be optimized simultaneously by a single solution. Meanwhile, the production process is a multivariable, strongly coupled, complicated nonlinear system, and it is hard to establish an accurate mechanistic model of it. Considering the above problems, we formulate the GGBS production process as a multiobjective optimization problem, introduce a least-squares support vector machine method based on particle swarm optimization to build a data-based system model, and solve the corresponding multiobjective optimization problem with several multiobjective evolutionary algorithms. A simulation example is presented to illustrate the performance of the presented multiobjective optimization scheme for the GGBS production process.
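
    The abstract fits data-driven surrogates (an LS-SVM tuned by particle swarm optimization) for the two competing objectives and then applies multiobjective evolutionary algorithms. The sketch below illustrates only the generic final step, under simplifying assumptions: given two fitted predictors (placeholders here, not the paper's models), sample candidate operating points and keep the non-dominated, i.e. Pareto-optimal, set.

```python
import numpy as np

def pareto_front(points):
    """Return a boolean mask of non-dominated rows, assuming both objectives
    are to be maximized (e.g. product grade and yield)."""
    keep = np.ones(points.shape[0], dtype=bool)
    for i in range(points.shape[0]):
        # A point is dominated if some other point is >= in both objectives
        # and strictly > in at least one.
        dominated = np.all(points >= points[i], axis=1) & np.any(points > points[i], axis=1)
        keep[i] = not dominated.any()
    return keep

# Placeholder surrogates standing in for the fitted LS-SVM models.
grade = lambda x: -np.sum((x - 0.3) ** 2, axis=1)
yield_ = lambda x: -np.sum((x - 0.7) ** 2, axis=1)

candidates = np.random.default_rng(0).uniform(0, 1, size=(2000, 3))  # process settings
objectives = np.column_stack([grade(candidates), yield_(candidates)])
front = candidates[pareto_front(objectives)]   # Pareto-optimal operating points
```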

    Learning, Categorization, Rule Formation, and Prediction by Fuzzy Neural Networks

    National Science Foundation (IRI 94-01659); Office of Naval Research (N00014-91-J-4100, N00014-92-J-4015); Air Force Office of Scientific Research (90-0083, N00014-92-J-4015)

    Generative adversarial networks review in earthquake-related engineering fields

    Within seismology, geology, and civil and structural engineering, deep learning (DL), especially via generative adversarial networks (GANs), represents an innovative, engaging, and advantageous way to generate reliable synthetic data that reproduce the characteristics of actual samples, providing a handy data augmentation tool. Indeed, in many practical applications, obtaining a significant amount of high-quality data is demanding. Data augmentation is generally based on artificial intelligence (AI) and machine learning data-driven models. The DL GAN-based data augmentation approach for generating synthetic seismic signals has revolutionized the current data augmentation paradigm. This study delivers a critical state-of-the-art review of recent research into AI-based GAN synthesis of ground-motion signals and seismic events, together with a comprehensive look at seismic-related geophysical studies. The review is relevant for earth and planetary science, geology and seismology, and oil and gas exploration on the one hand, and for assessing the seismic response of buildings and infrastructure, seismic detection tasks, and general structural and civil engineering applications on the other. Furthermore, highlighting the strengths and limitations of current studies on adversarial learning applied to seismology may help guide research efforts in the near future toward the most promising directions.
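
    To make the adversarial-training mechanism concrete, the following is a minimal GAN training loop for 1-D signals, written with PyTorch purely for illustration; the architectures, signal length, dataset, and hyperparameters are placeholders and do not correspond to any study covered by the review.

```python
import torch
import torch.nn as nn

sig_len, latent_dim = 256, 64  # placeholder signal length and noise dimension

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, sig_len), nn.Tanh())
D = nn.Sequential(nn.Linear(sig_len, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_signals = torch.randn(1024, sig_len)  # stand-in for a real ground-motion dataset

for step in range(1000):
    real = real_signals[torch.randint(0, 1024, (32,))]
    fake = G(torch.randn(32, latent_dim))
    # Discriminator update: push real samples toward 1 and generated samples toward 0.
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator update: try to make the discriminator output 1 on generated samples.
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```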

    Distinguishing cause from effect using observational data: methods and benchmarks

    The discovery of causal relationships from purely observational data is a fundamental problem in science. The most elementary form of such a causal discovery problem is to decide whether X causes Y or, alternatively, Y causes X, given joint observations of two variables X, Y. An example is to decide whether altitude causes temperature, or vice versa, given only joint measurements of both variables. Even under the simplifying assumptions of no confounding, no feedback loops, and no selection bias, such bivariate causal discovery problems are challenging. Nevertheless, several approaches for addressing those problems have been proposed in recent years. We review two families of such methods: Additive Noise Methods (ANM) and Information Geometric Causal Inference (IGCI). We present the benchmark CauseEffectPairs, which consists of data for 100 different cause-effect pairs selected from 37 datasets from various domains (e.g., meteorology, biology, medicine, engineering, economics), and motivate our decisions regarding the "ground truth" causal directions of all pairs. We evaluate the performance of several bivariate causal discovery methods on these real-world benchmark data and, in addition, on artificially simulated data. Our empirical results on real-world data indicate that certain methods are indeed able to distinguish cause from effect using only purely observational data, although more benchmark data would be needed to obtain statistically significant conclusions. One of the best performing methods overall is the additive-noise method originally proposed by Hoyer et al. (2009), which obtains an accuracy of 63 ± 10% and an AUC of 0.74 ± 0.05 on the real-world benchmark. As the main theoretical contribution of this work we prove the consistency of that method. Comment: 101 pages, second revision submitted to the Journal of Machine Learning Research.
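
    The additive-noise method mentioned above fits a regression in both directions and prefers the direction in which the residual appears independent of the regressor. The sketch below is a simplified stand-in: polynomial regression replaces the nonparametric fit, and a crude correlation-based score replaces the HSIC independence test used in the literature.

```python
import numpy as np

def anm_direction(x, y, deg=5):
    """Additive-noise-style direction test: regress each variable on the other
    and score how dependent the residual still is on the regressor."""
    def dependence(a, resid):
        # Crude dependence proxy: correlation of a with the residual and its square.
        c1 = np.corrcoef(a, resid)[0, 1]
        c2 = np.corrcoef(a, resid ** 2)[0, 1]
        return c1 ** 2 + c2 ** 2

    res_xy = y - np.polyval(np.polyfit(x, y, deg), x)   # residual of y ~ f(x)
    res_yx = x - np.polyval(np.polyfit(y, x, deg), y)   # residual of x ~ g(y)
    return "X -> Y" if dependence(x, res_xy) < dependence(y, res_yx) else "Y -> X"

# Toy example where X causes Y through a nonlinear map with additive noise.
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 500)
y = np.tanh(2 * x) + 0.1 * rng.standard_normal(500)
print(anm_direction(x, y))   # should typically print "X -> Y"
```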