38 research outputs found

    Advanced Denoising for X-ray Ptychography

    The success of ptychographic imaging experiments strongly depends on achieving a high signal-to-noise ratio. This is particularly important in nanoscale imaging experiments where diffraction signals are very weak and the experiments are accompanied by significant parasitic scattering (background), outliers, or correlated noise sources. It is also critical when rare events take place, such as cosmic rays or bad frames caused by electronic glitches or shutter timing malfunctions. In this paper, we propose a novel iterative algorithm, with rigorous analysis, that exploits the direct forward model for parasitic noise and sample smoothness to achieve a thorough characterization and removal of structured and random noise. We present a formal description of the proposed algorithm and prove its convergence under mild conditions. Numerical experiments on simulations and real data (from both soft and hard X-ray beamlines) demonstrate that the proposed algorithm produces better results than state-of-the-art methods. Comment: 24 pages, 9 figures
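
    The abstract does not spell out the update rules, but the core idea (iteratively separating a smooth, structured background from the per-frame signal while rejecting rare outliers such as cosmic rays) is easy to illustrate. The following is a minimal, hypothetical sketch, not the paper's algorithm: the MAD-based outlier threshold, the Gaussian smoothing of the background estimate, and all function and parameter names are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise_stack(frames, smooth_sigma=2.0, n_iter=10, outlier_k=5.0):
    """Toy iterative separation of a stack of 2-D diffraction frames into a
    shared smooth background plus per-frame signal, rejecting rare outliers
    (e.g. cosmic rays) with a robust MAD threshold.  Illustrative only."""
    background = np.median(frames, axis=0)            # initial background guess
    for _ in range(n_iter):
        residual = frames - background                # candidate signal + noise
        mad = np.median(np.abs(residual - np.median(residual))) + 1e-12
        mask = np.abs(residual) < outlier_k * 1.4826 * mad   # inlier pixels
        cleaned = np.where(mask, frames, background)  # replace outlier pixels
        # re-estimate the background and enforce spatial smoothness
        background = gaussian_filter(np.median(cleaned, axis=0), smooth_sigma)
    signal = np.clip(frames - background, 0.0, None)  # nonnegative signal part
    return signal, background
```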

    Inference for Generalized Linear Models via Alternating Directions and Bethe Free Energy Minimization

    Generalized Linear Models (GLMs), where a random vector $\mathbf{x}$ is observed through a noisy, possibly nonlinear, function of a linear transform $\mathbf{z} = \mathbf{A}\mathbf{x}$, arise in a range of applications in nonlinear filtering and regression. Approximate Message Passing (AMP) methods, based on loopy belief propagation, are a promising class of approaches for approximate inference in these models. AMP methods are computationally simple, general, and admit precise analyses with testable conditions for optimality for large i.i.d. transforms $\mathbf{A}$. However, the algorithms can easily diverge for general $\mathbf{A}$. This paper presents a convergent approach to the generalized AMP (GAMP) algorithm based on direct minimization of a large-system-limit approximation of the Bethe Free Energy (LSL-BFE). The proposed method uses a double-loop procedure, where the outer loop successively linearizes the LSL-BFE and the inner loop minimizes the linearized LSL-BFE using the Alternating Direction Method of Multipliers (ADMM). The proposed method, called ADMM-GAMP, is similar in structure to the original GAMP method, but with an additional least-squares minimization. It is shown that for strictly convex, smooth penalties, ADMM-GAMP is guaranteed to converge to a local minimum of the LSL-BFE, thus providing a convergent alternative to GAMP that is stable under arbitrary transforms. Simulations are also presented that demonstrate the robustness of the method for non-convex penalties as well.
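
    For intuition, the inner loop follows the classic ADMM template: a quadratic solve against the likelihood, a proximal step for the penalty, and a dual update. The sketch below shows that template in its simplest instance, a Gaussian likelihood with an l1 penalty (ADMM for the LASSO); the paper's inner loop minimizes the linearized LSL-BFE rather than this surrogate, and lam, rho, and the variable names here are illustration choices.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, y, lam=0.1, rho=1.0, n_iter=200):
    """ADMM for min_x 0.5*||Ax - y||^2 + lam*||x||_1, i.e. the Gaussian-
    likelihood GLM with an l1 penalty.  Same template as the paper's inner
    loop (quadratic solve, proximal step, dual update), but applied to this
    surrogate objective rather than the linearized LSL-BFE."""
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)  # u is the scaled dual
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))     # factor the x-update once
    Aty = A.T @ y
    for _ in range(n_iter):
        x = Q @ (Aty + rho * (z - u))        # least-squares (likelihood) step
        z = soft_threshold(x + u, lam / rho) # penalty proximal step
        u = u + x - z                        # dual ascent on the constraint x = z
    return z
```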

    Dictionary optimization for representing sparse signals using Rank-One Atom Decomposition (ROAD)

    Dictionary learning has attracted growing research interest in recent years. As it is a bilinear inverse problem, one typical way to address it is to iteratively alternate between two stages: sparse coding and dictionary update. The general principle of the alternating approach is to fix one variable and optimize the other. Unfortunately, for the alternating method, an ill-conditioned dictionary in the training process may not only introduce numerical instability but also trap the overall training process at a singular point. Moreover, it makes the convergence difficult to analyze, and few dictionary learning algorithms have been proved to converge globally. For other bilinear inverse problems, such as short-and-sparse deconvolution (SaSD) and convolutional dictionary learning (CDL), the alternating method is still a popular choice. As these bilinear inverse problems are also ill-posed and complicated, they are tricky to handle: additional inner iterative methods are usually required for both updating stages, which aggravates the difficulty of analyzing the convergence of the whole learning process. It is also challenging to determine the number of iterations for each stage, as over-tuning any stage can trap the whole process in a local minimum far from the ground truth. To mitigate the issues arising from the alternating method, this thesis proposes a novel algorithm termed rank-one atom decomposition (ROAD), which recasts a bilinear inverse problem as an optimization problem with respect to a single variable, namely a set of rank-one matrices. The resulting algorithm therefore has a single stage: it minimizes the sparsity of the coefficients while maintaining the data-consistency constraint throughout the whole learning process. Inspired by recent advances in applying the alternating direction method of multipliers (ADMM) to nonconvex nonsmooth problems, an ADMM solver is adopted for the ROAD problem, and a lower bound on the penalty parameter is derived to guarantee convergence of the augmented Lagrangian despite the nonconvexity of the optimization formulation. Compared to two-stage dictionary learning methods, ROAD simplifies the learning process, eases the difficulty of analyzing convergence, and avoids the singular-point issue. From a practical point of view, ROAD reduces the number of tuning parameters required by other benchmark algorithms. Numerical tests reveal that ROAD outperforms other benchmark algorithms in both synthetic-data tests and single-image super-resolution applications. Beyond dictionary learning, the ROAD formulation can also be extended to the SaSD and CDL problems, again recasting them as one-variable optimization problems. Numerical tests illustrate that ROAD estimates convolutional kernels better than the latest SaSD and CDL algorithms.
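
    To make the single-variable recasting concrete: each dictionary atom and its coefficient row can be folded into one rank-one matrix, so learning becomes a search over a set of rank-one matrices whose sum reproduces the data. The sketch below is a heuristic block-coordinate illustration of that idea, not the ADMM solver developed in the thesis; the shrinkage threshold and the update order are arbitrary choices.

```python
import numpy as np

def rank_one_project(M):
    """Best rank-one approximation of M via its top singular pair."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return s[0] * np.outer(U[:, 0], Vt[0])

def road_style_sketch(Y, K, thresh=0.1, n_iter=100):
    """Heuristic block-coordinate illustration of the single-variable idea:
    learn K rank-one matrices X_k whose sum reproduces Y, sparsifying each
    update by soft-thresholding.  Not the thesis's ADMM solver."""
    m, n = Y.shape
    X = np.zeros((K, m, n))
    for _ in range(n_iter):
        for k in range(K):
            residual = Y - (X.sum(axis=0) - X[k])   # what X_k must explain
            shrunk = np.sign(residual) * np.maximum(np.abs(residual) - thresh, 0.0)
            X[k] = rank_one_project(shrunk)         # enforce the rank-one structure
    return X
```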

    Factor analysis of dynamic PET images

    Thanks to its ability to evaluate metabolic functions in tissues from the temporal evolution of a previously injected radiotracer, dynamic positron emission tomography (PET) has become a ubiquitous analysis tool for quantifying biological processes. Several quantification techniques from the PET imaging literature require a prior estimation of global time-activity curves (TACs), herein called factors, representing the concentration of tracer in a reference tissue or blood over time. To this end, factor analysis has often appeared as an unsupervised learning solution for the extraction of factors and their respective fractions in each voxel. Inspired by the hyperspectral unmixing literature, this manuscript addresses two main drawbacks of general factor analysis techniques applied to dynamic PET. The first is the assumption that the elementary response of each tissue to tracer distribution is spatially homogeneous. Even though this homogeneity assumption has proven effective in several factor analysis studies, it may not always provide a sufficient description of the underlying data, in particular when abnormalities are present. To tackle this limitation, the models proposed herein introduce an additional degree of freedom in the factors related to specific binding: a spatially-variant perturbation affects a nominal and common TAC representative of the high-uptake tissue. This variation is spatially indexed and constrained with a dictionary that is either learned beforehand or explicitly modelled with convolutional nonlinearities affecting non-specific binding tissues. The second drawback is related to the noise distribution in PET images. Even though the positron decay process can be described by a Poisson distribution, the actual noise in reconstructed PET images is not expected to be simply described by Poisson or Gaussian distributions. Therefore, we propose to consider a popular and quite general loss function, the $\beta$-divergence, that generalizes conventional loss functions such as the least-squares distance and the Kullback-Leibler and Itakura-Saito divergences, respectively corresponding to Gaussian, Poisson, and Gamma distributions. This loss function is applied to three factor analysis models in order to evaluate its impact on dynamic PET images with different reconstruction characteristics.
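
    The $\beta$-divergence has a standard closed form that recovers the three named losses as special cases ($\beta$ = 2, 1, 0 respectively). A reference implementation under the usual convention follows; the small eps added for numerical safety is an implementation choice, not part of the definition.

```python
import numpy as np

def beta_divergence(x, y, beta, eps=1e-12):
    """Elementwise beta-divergence d_beta(x | y), summed over the array.
    beta = 2 -> half the squared (least-squares) distance  (Gaussian noise)
    beta = 1 -> Kullback-Leibler divergence                (Poisson noise)
    beta = 0 -> Itakura-Saito divergence                   (Gamma noise)"""
    x = np.asarray(x, dtype=float) + eps   # eps avoids log(0) and division by 0
    y = np.asarray(y, dtype=float) + eps
    if beta == 1:
        return np.sum(x * np.log(x / y) - x + y)
    if beta == 0:
        return np.sum(x / y - np.log(x / y) - 1.0)
    return np.sum((x**beta + (beta - 1) * y**beta - beta * x * y**(beta - 1))
                  / (beta * (beta - 1)))
```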

    Side information in robust principal component analysis: algorithms and applications

    Dimensionality reduction and noise removal are fundamental machine learning tasks that are vital to artificial intelligence applications. Principal component analysis has long been utilised in computer vision to achieve these goals. Recently, it has been enhanced in terms of robustness to outliers in robust principal component analysis. Both convex and non-convex programs have been developed to solve this new formulation, some with exact convergence guarantees. Its effectiveness can be witnessed in image and video applications ranging from image denoising and alignment to background separation and face recognition. However, robust principal component analysis is by no means perfect. This dissertation identifies its limitations, explores various promising options for improvement, and validates the proposed algorithms on both synthetic and real-world datasets. Common algorithms approximate the NP-hard formulation of robust principal component analysis with convex envelopes. Though exact recovery can be guaranteed under certain assumptions, the relaxation margin is too large to be squandered. In this work, we propose to apply gradient descent on the Burer-Monteiro bilinear matrix factorisation to squeeze this margin given available subspaces. This non-convex approach improves upon conventional convex approaches in both accuracy and speed. On the other hand, there is often accompanying side information when an observation is made. The ability to assimilate such auxiliary sources of data can ameliorate the recovery process. In this work, we investigate in depth such possibilities for incorporating side information in restoring the true underlying low-rank component from gross sparse noise. Lastly, tensors, also known as multi-dimensional arrays, represent real-world data more naturally than matrices. It is thus advantageous to adapt robust principal component analysis to tensors. Since there is no exact equivalence between tensor rank and matrix rank, we employ the notions of Tucker rank and CP rank as our optimisation objectives. Overall, this dissertation carefully defines the problems faced in real-world computer vision challenges, extensively and impartially evaluates the state-of-the-art approaches, proposes novel solutions, and provides sufficient validation on both simulated data and popular real-world datasets for various mainstream computer vision tasks.
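
    The Burer-Monteiro step replaces the low-rank component L with an explicit factorisation U V^T, so plain gradient descent can be run on the factors while the sparse component is updated in closed form by soft-thresholding. Below is a minimal sketch of that scheme, assuming a squared-error data term and an l1 penalty; the rank, step size, and penalty weight are illustration choices, not the dissertation's tuned algorithm.

```python
import numpy as np

def rpca_burer_monteiro(M, rank, lam=0.1, lr=0.01, n_iter=500):
    """Sketch of gradient descent on the Burer-Monteiro factorisation
    L = U V^T for robust PCA:
        min_{U,V,S} 0.5 * ||M - U V^T - S||_F^2 + lam * ||S||_1,
    with the sparse part S updated in closed form by soft-thresholding."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank)) * 0.01
    V = rng.standard_normal((n, rank)) * 0.01
    for _ in range(n_iter):
        R = M - U @ V.T
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)  # prox of lam*||.||_1
        E = R - S                     # residual of the smooth (low-rank) fit
        U += lr * E @ V               # gradient step on U
        V += lr * E.T @ U             # gradient step on V (Gauss-Seidel style)
    return U @ V.T, S                 # low-rank and sparse estimates
```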

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. It contains, in particular, the scientific program both in survey form and in full detail, as well as information on the social program, the venue, special meetings, and more.

    Efficient Algorithms for Mumford-Shah and Potts Problems

    In this work, we consider Mumford-Shah and Potts models and their higher-order generalizations. Mumford-Shah and Potts models are among the most well-known variational approaches to edge-preserving smoothing and partitioning of images. Though their formulations are intuitive, their application is not straightforward, as it corresponds to solving challenging, particularly non-convex, minimization problems. The main focus of this thesis is the development of new algorithmic approaches to Mumford-Shah and Potts models, which is to this day an active field of research. We start by considering the situation for univariate data. We find that switching to higher-order models can overcome known shortcomings of the classical first-order models when applied to data with steep slopes. Though the existing approaches to the first-order models could be applied in principle, they are slow or become numerically unstable for higher orders. Therefore, we develop a new algorithm for univariate Mumford-Shah and Potts models of any order and show that it solves the models stably in O(n^2) time. Furthermore, we develop algorithms for the inverse Potts model, which can be seen as an approach to jointly reconstructing and partitioning images that are only available indirectly through measured data. Further, we give a convergence analysis for the proposed algorithms; in particular, we prove convergence to a local minimum of the underlying NP-hard minimization problem. We apply the proposed algorithms to numerical data to illustrate their benefits. Next, we apply the multi-channel Potts prior to the reconstruction problem in multi-spectral computed tomography (CT). To this end, we propose a new superiorization approach, which perturbs the iterates of the conjugate gradient method towards better results with respect to the Potts prior. In numerical experiments, we illustrate the benefits of the proposed approach by comparing it to the existing Potts model approach from the literature as well as to existing total-variation-type methods. Hereafter, we consider the second-order Mumford-Shah model for edge-preserving smoothing of images, which (similarly to the univariate case) improves upon the classical Mumford-Shah model for images with linear color gradients. Based on reformulations in terms of Taylor jets, i.e. specific fields of polynomials, we derive discrete second-order Mumford-Shah models for which we develop an efficient algorithm using an ADMM scheme. We illustrate the potential of the proposed method by comparing it with existing methods for the second-order Mumford-Shah model, and further illustrate its benefits in connection with edge detection. Finally, we consider the affine-linear Potts model for the image partitioning problem. As many images possess linear trends within homogeneous regions, the classical Potts model frequently leads to oversegmentation. The affine-linear Potts model accounts for this by allowing linear trends within segments. We lift the corresponding minimization problem to the jet space and again develop an ADMM approach. In numerical experiments, we show that the proposed algorithm achieves lower energy values as well as faster runtimes than the method of comparison, which is based on the iterative application of the graph cut algorithm (with α-expansion moves).
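
    For reference, the classical univariate first-order Potts model, the baseline that the thesis generalizes to higher orders, admits an exact O(n^2) dynamic program in which prefix sums give each segment's approximation error in O(1). This is the textbook scheme, not the thesis's higher-order algorithm:

```python
import numpy as np

def potts_1d(y, gamma):
    """Classical O(n^2) dynamic program for the univariate first-order
    Potts model: piecewise-constant fit minimising the squared error
    plus a penalty gamma per jump."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    s1 = np.concatenate([[0.0], np.cumsum(y)])       # prefix sums
    s2 = np.concatenate([[0.0], np.cumsum(y * y)])   # prefix sums of squares

    def dev(l, r):
        """SSE of y[l:r+1] around its mean, computed in O(1)."""
        k = r - l + 1
        return s2[r + 1] - s2[l] - (s1[r + 1] - s1[l]) ** 2 / k

    B = np.full(n + 1, np.inf)   # B[r]: optimal energy for the prefix y[:r]
    B[0] = -gamma                # first segment does not pay the jump penalty
    jump = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):
        for l in range(1, r + 1):        # try every start l of the last segment
            cand = B[l - 1] + gamma + dev(l - 1, r - 1)
            if cand < B[r]:
                B[r], jump[r] = cand, l - 1
    # backtrack the optimal partition and fill each segment with its mean
    x, r = np.empty(n), n
    while r > 0:
        l = jump[r]
        x[l:r] = y[l:r].mean()
        r = l
    return x
```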