
    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur, Belgium, from Wednesday, August 27th to Friday, August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both the hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference. Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1

    Discrete Tomography by Convex-Concave Regularization using Linear and Quadratic Optimization

    Discrete tomography concerns the reconstruction of objects that are made up of a few different materials, each comprising a homogeneous density distribution. Under the assumption that these densities are known a priori, new algorithms can be developed which typically require fewer projection data to produce appealing reconstruction results.

    Non-Convex and Geometric Methods for Tomography and Label Learning

    Data labeling is a fundamental problem of mathematical data analysis in which each data point is assigned exactly one label (prototype) from a finite, predefined set. In this thesis we study two challenging extensions, where either the input data cannot be observed directly or prototypes are not available beforehand. The main application of the first setting is discrete tomography. We propose several non-convex variational as well as smooth geometric approaches to joint image label assignment and reconstruction from indirect measurements with known prototypes. In particular, we consider spatial regularization of assignments based on the KL-divergence, which takes into account the smooth geometry of discrete probability distributions endowed with the Fisher-Rao (information) metric, i.e. the assignment manifold. Finally, the geometric point of view leads to a smooth flow evolving on a Riemannian submanifold that includes the tomographic projection constraints directly in the geometry of assignments. Furthermore, we investigate corresponding implicit numerical schemes which amount to solving a sequence of convex problems. Likewise, for the second setting, when the prototypes are absent, we introduce and study a smooth dynamical system for unsupervised data labeling which evolves by geometric integration on the assignment manifold. Rigorously abstracting from "data-label" to "data-data" decisions leads to interpretable low-rank data representations, which themselves are parameterized by label assignments. The resulting self-assignment flow simultaneously performs learning of latent prototypes within the very same framework in which they are used for inference. Moreover, a single parameter, the scale of regularization in terms of spatial context, drives the entire process. By smooth geodesic interpolation between different normalizations of self-assignment matrices on the positive definite matrix manifold, a one-parameter family of self-assignment flows is defined. Accordingly, the proposed approach can be characterized from different viewpoints, such as discrete optimal transport, normalized spectral cuts, and combinatorial optimization by completely positive factorizations, each with additional built-in spatial regularization.
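    As background (a standard formula for the geometry mentioned above, not a construction specific to this thesis), the Fisher-Rao metric on the relative interior of the probability simplex, the building block of the assignment manifold, reads

        g_p(u, v) = \sum_{i=1}^{c} \frac{u_i v_i}{p_i}, \qquad p \in \mathrm{ri}(\Delta_c), \quad \sum_i u_i = \sum_i v_i = 0,

    i.e. tangent directions are weighted inversely by the current assignment probabilities, so movement toward the boundary of the simplex becomes increasingly expensive in this metric.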

    A new difference of anisotropic and isotropic total variation regularization method for image restoration

    The total variation (TV) regularizer is widely used in image processing. In this paper, we propose a new nonconvex total variation regularization method based on the generalized Fischer-Burmeister function for image restoration. Since our model is nonconvex and nonsmooth, specific difference-of-convex algorithms (DCA) are presented, in which the subproblems can be minimized by the alternating direction method of multipliers (ADMM). The algorithms have low computational complexity in each iteration. Experimental results, including image denoising and magnetic resonance imaging, demonstrate that the proposed models produce better results than state-of-the-art methods.
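    To make the DCA-plus-ADMM idea above concrete, here is a minimal, hedged sketch in Python (NumPy only). It is not the paper's Fischer-Burmeister model: as a stand-in it uses the simpler nonconvex regularizer lam*(||Dx||_1 - alpha*||Dx||_2) on a 1D signal, linearizes the concave part in a DCA outer loop, and solves each convex subproblem with ADMM; all parameter values are illustrative assumptions.

    import numpy as np

    def dca_admm_denoise(b, lam=0.5, alpha=0.8, rho=1.0, outer=15, inner=50):
        n = len(b)
        D = np.diff(np.eye(n), axis=0)            # forward-difference matrix, (n-1) x n
        soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
        x = b.copy()
        for _ in range(outer):                    # DCA outer loop
            g = D @ x
            # gradient of the concave part -lam*alpha*||Dx||_2 at the current x
            w = lam * alpha * (D.T @ (g / (np.linalg.norm(g) + 1e-12)))
            # convex subproblem: 0.5||x-b||^2 + lam||Dx||_1 - <w, x>, solved by ADMM
            z = D @ x
            u = np.zeros(n - 1)
            A = np.eye(n) + rho * (D.T @ D)       # x-update system matrix
            for _ in range(inner):
                x = np.linalg.solve(A, b + w + rho * (D.T @ (z - u)))
                z = soft(D @ x + u, lam / rho)
                u = u + D @ x - z
        return x

    # usage: denoise a noisy piecewise-constant toy signal
    sig = np.repeat([0.0, 1.0, 0.3], 40)
    noisy = sig + 0.1 * np.random.randn(sig.size)
    rec = dca_admm_denoise(noisy)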

    Accelerating the DC algorithm for smooth functions

    We introduce two new algorithms to minimise smooth difference of convex (DC) functions that accelerate the convergence of the classical DC algorithm (DCA). We prove that the point computed by DCA can be used to define a descent direction for the objective function evaluated at this point. Our algorithms are based on a combination of DCA together with a line search step that uses this descent direction. Convergence of the algorithms is proved and the rate of convergence is analysed under the Łojasiewicz property of the objective function. We apply our algorithms to a class of smooth DC programs arising in the study of biochemical reaction networks, where the objective function is real analytic and thus satisfies the Łojasiewicz property. Numerical tests on various biochemical models clearly show that our algorithms outperform DCA, being on average more than four times faster in both computational time and the number of iterations. Numerical experiments show that the algorithms are globally convergent to a non-equilibrium steady state of various biochemical networks, with only chemically consistent restrictions on the network topology. Funding: F. J. Aragón Artacho was supported by MINECO of Spain and ERDF of the EU, as part of the Ramón y Cajal program (RYC-2013-13327) and Grant MTM2014-59179-C2-1-P. R. M. Fleming and P. T. Vuong were supported by the U.S. Department of Energy, Offices of Advanced Scientific Computing Research and Biological and Environmental Research, as part of the Scientific Discovery Through Advanced Computing program, Grant #DE-SC0010429.
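    The descent-direction-plus-line-search idea can be illustrated with a small hedged sketch of a boosted DCA iteration. The toy objective below (a strongly convex quadratic g minus a smooth log-sum-exp style h) and all step-size constants are illustrative assumptions, not the biochemical models used in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 3))
    c = rng.standard_normal(3)

    g   = lambda x: 0.5 * x @ x + c @ x                       # strongly convex part
    h   = lambda x: np.sum(np.logaddexp(0.0, A @ x))          # smooth convex part
    dh  = lambda x: A.T @ (1.0 / (1.0 + np.exp(-(A @ x))))    # gradient of h
    phi = lambda x: g(x) - h(x)                               # smooth DC objective

    def boosted_dca(x, iters=50, lam0=1.0, alpha=0.1, beta=0.5):
        for _ in range(iters):
            y = dh(x) - c              # plain DCA step: argmin_x g(x) - <grad h(x_k), x>
            d = y - x                  # descent direction for phi at y (as in the paper)
            lam = lam0                 # backtracking line search starting from y along d
            while phi(y + lam * d) > phi(y) - alpha * lam**2 * (d @ d):
                lam *= beta
                if lam < 1e-12:
                    lam = 0.0
                    break
            x = y + lam * d
        return x

    x_star = boosted_dca(np.zeros(3))

    The acceleration over plain DCA comes from the final step: instead of stopping at the DCA iterate y, the method moves further along d = y - x whenever the line search accepts a positive step size.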

    Nonlocal Graph-PDEs and Riemannian Gradient Flows for Image Labeling

    In this thesis, we focus on the image labeling problem, i.e. the task of performing unique pixel-wise label decisions to simplify an image while reducing its redundant information. We build upon a recently introduced geometric approach for data labeling by assignment flows [APSS17] that comprises a smooth dynamical system for data processing on weighted graphs. We pursue two lines of research that give new application-oriented and theoretical insights into the underlying segmentation task. Using the example of Optical Coherence Tomography (OCT), the most widely used non-invasive acquisition method for large volumetric scans of human retinal tissues, we demonstrate how incorporating constraints on the geometry of the statistical manifold results in a novel, purely data-driven geometric approach for order-constrained segmentation of volumetric data in any metric space. In particular, diagnostic analysis of human eye diseases requires decisive information in the form of exact measurements of retinal layer thicknesses, which have to be obtained for each patient separately, resulting in a demanding and time-consuming task. To ease clinical diagnosis, we introduce a fully automated segmentation algorithm that achieves high segmentation accuracy and a high level of built-in parallelism. As opposed to many established retinal layer segmentation methods, we use only local information as input, without incorporating additional global shape priors. Instead, we enforce the physiological order of retinal cell layers and membranes through a new formulation of ordered pairs of distributions in a smoothed energy term. This systematically avoids bias pertaining to global shape and is hence suited for the detection of anatomical changes of retinal tissue structure. To assess the performance of our approach, we compare two different choices of features on a data set of manually annotated 3D OCT volumes of healthy human retina and evaluate our method against the state of the art in automatic retinal layer segmentation as well as against manually annotated ground truth data, using different metrics. We generalize the recent work [SS21] on a variational perspective on assignment flows and introduce a novel nonlocal partial difference equation (G-PDE) for labeling metric data on graphs. The G-PDE is derived as a nonlocal reparametrization of the assignment flow approach that was introduced in J. Math. Imaging & Vision 58(2), 2017. Due to this parameterization, solving the G-PDE numerically is shown to be equivalent to computing the Riemannian gradient flow with respect to a nonconvex potential. We devise an entropy-regularized difference-of-convex-functions (DC) decomposition of this potential and show that the basic geometric Euler scheme for integrating the assignment flow is equivalent to solving the G-PDE by an established DC programming scheme. Moreover, the viewpoint of geometric integration reveals a basic way to exploit higher-order information of the vector field that drives the assignment flow, in order to devise a novel accelerated DC programming scheme. A detailed convergence analysis of both numerical schemes is provided and illustrated by numerical experiments.
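    For orientation, the following is a heavily simplified, hedged sketch of a geometric Euler step for an assignment-flow-style labeling on a toy 1D signal. It is not the exact formulation of [APSS17] or of this thesis (the similarity mapping, the clipping, and all parameters are simplifying assumptions), but it shows the softmax-like lifting update on row-stochastic assignment matrices.

    import numpy as np

    def lift(W, V):
        # Lifting map applied row-wise: W * exp(V), renormalized onto the simplex.
        Z = W * np.exp(V - V.max(axis=1, keepdims=True))
        return Z / Z.sum(axis=1, keepdims=True)

    def assignment_flow(dist, neighbors, rho=1.0, h=0.5, iters=50):
        # dist: (n, c) data-label distances; neighbors: list of index arrays per node.
        n, c = dist.shape
        W = np.full((n, c), 1.0 / c)                  # barycenter initialization
        for _ in range(iters):
            L = lift(W, -dist / rho)                  # likelihood matrix
            logL = np.log(L)
            # similarity matrix: geometric (log-domain) averaging over neighborhoods
            S = np.exp(np.array([logL[nb].mean(axis=0) for nb in neighbors]))
            S /= S.sum(axis=1, keepdims=True)
            W = lift(W, h * np.log(S))                # geometric Euler step
            W = np.clip(W, 1e-10, None)               # keep W in the simplex interior
            W /= W.sum(axis=1, keepdims=True)
        return W

    # toy usage: noisy 1D step signal, two label prototypes, small 1D neighborhoods
    x = np.concatenate([np.zeros(30), np.ones(30)]) + 0.2 * np.random.randn(60)
    dist = (x[:, None] - np.array([0.0, 1.0])[None, :]) ** 2
    nbrs = [np.arange(max(0, i - 2), min(60, i + 3)) for i in range(60)]
    labels = assignment_flow(dist, nbrs).argmax(axis=1)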

    Sparse MRI and CT Reconstruction

    Sparse signal reconstruction is of the utmost importance for efficient medical imaging, accurate screening for security and inspection, and non-destructive testing. The sparsity of the signal is dictated either by feasibility or by the cost and screening-time constraints of the system. In this work, two major sparse signal reconstruction systems are investigated: compressed sensing magnetic resonance imaging (MRI) and sparse-view computed tomography (CT). For medical CT, a limited number of views (sparse-view) is an option for reducing either the amount of ionizing radiation or the screening time and cost of the procedure. In applications such as non-destructive testing or inspection of large objects, like a cargo container, one angular view can take up to a few minutes for only one slice. On the other hand, some views can be unavailable due to the configuration of the system. The problem of data sufficiency, and of how to estimate a tomographic image when the projection data are not sufficient for precise reconstruction, is one of the two major objectives of this work. Three CT reconstruction methods are proposed: algebraic iterative reconstruction-reprojection (AIRR), sparse-view CT reconstruction based on curvelet and total variation regularization (CTV), and sparse-view CT reconstruction based on nonconvex L1-L2 regularization. The experimental results confirm high performance in terms of subjective and objective quality metrics. Additionally, sparse-view neutron-photon tomography is studied based on Monte-Carlo modelling to demonstrate shape reconstruction, material discrimination, and visualization based on the proposed 3D object reconstruction method and material discrimination signatures. One method for efficient acquisition of multidimensional signals is compressed sensing (CS). A significantly reduced number of measurements can be obtained in different ways; one is undersampling, that is, sampling below the Shannon-Nyquist limit. Magnetic resonance imaging (MRI) suffers inherently from slow data acquisition. Compressed sensing MRI (CSMRI) offers significant scan-time reduction, with advantages for patients and health-care economics. In this work, three frameworks are proposed and evaluated: CSMRI based on the curvelet transform and total generalized variation (CT-TGV), CSMRI using curvelet sparsity and nonlocal total variation (CS-NLTV), and CSMRI exploring shearlet sparsity and nonlocal total variation (SS-NLTV). The proposed methods are evaluated experimentally and compared to previously reported state-of-the-art methods. Results demonstrate a significant improvement in image reconstruction quality on different medical MRI datasets.
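    As a hedged illustration of the compressed-sensing reconstruction principle discussed above (not the curvelet/shearlet/NLTV frameworks of the thesis), the sketch below reconstructs a toy image from undersampled Fourier data with plain ISTA and an l1 penalty on the image itself; the phantom, mask density, and lam value are assumptions.

    import numpy as np

    def ista_csmri(y, mask, lam=0.01, iters=200):
        # y: undersampled k-space (zeros outside mask); mask: boolean k-space mask.
        A  = lambda x: mask * np.fft.fft2(x, norm="ortho")     # forward operator
        At = lambda k: np.fft.ifft2(mask * k, norm="ortho")    # adjoint operator
        soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
        x = At(y).real
        for _ in range(iters):              # step size 1 is valid: ||A||_2 <= 1 (ortho FFT)
            x = soft(x - At(A(x) - y).real, lam)
        return x

    # usage: recover a sparse 64x64 phantom from roughly 30% of its k-space samples
    img = np.zeros((64, 64)); img[20:28, 30:40] = 1.0; img[40:44, 10:18] = 0.5
    mask = np.random.rand(64, 64) < 0.3
    y = mask * np.fft.fft2(img, norm="ortho")
    rec = ista_csmri(y, mask)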

    Minimizing Quotient Regularization Model

    Quotient regularization models (QRMs) are a class of powerful regularization techniques that have gained considerable attention in recent years due to their ability to handle complex and highly nonlinear data sets. However, the nonconvex nature of QRMs poses a significant challenge in finding their optimal solutions. We are interested in scenarios where both the numerator and the denominator of the QRM are absolutely one-homogeneous functions, which is widely applicable in signal and image processing. In this paper, we utilize a gradient flow to minimize such a QRM in combination with a quadratic data fidelity term. Our scheme involves solving a convex problem iteratively. The convergence analysis is conducted on a modified scheme in a continuous formulation, showing convergence to a stationary point. Numerical experiments demonstrate the effectiveness of the proposed algorithm in terms of accuracy, outperforming state-of-the-art QRM solvers. Comment: 20 pages
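    A hedged sketch of one plausible realization of this idea (not necessarily the authors' exact update rule): for the classical L1/L2 quotient, freeze the quotient value and a subgradient of the denominator at the current iterate, solve the resulting convex problem by proximal-gradient steps, and repeat; the data, mu, and iteration counts below are illustrative assumptions.

    import numpy as np

    def quotient_l1_over_l2(A, b, mu=10.0, outer=30, inner=200):
        # minimize ||x||_1/||x||_2 + (mu/2)*||Ax - b||^2 via iterated convex subproblems
        soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
        step = 1.0 / (mu * np.linalg.norm(A, 2) ** 2)          # 1/L for the smooth part
        x = A.T @ b
        for _ in range(outer):
            q = np.sum(np.abs(x)) / np.linalg.norm(x)          # frozen quotient value
            p = x / np.linalg.norm(x)                          # subgradient of ||.||_2 at x
            for _ in range(inner):
                # convex subproblem: min_z ||z||_1 - q*<p, z> + (mu/2)||Az - b||^2
                grad = mu * A.T @ (A @ x - b) - q * p
                x = soft(x - step * grad, step)
            if np.linalg.norm(x) < 1e-12:
                break
        return x

    # usage: sparse recovery from a few random linear measurements (toy data)
    rng = np.random.default_rng(1)
    A = rng.standard_normal((40, 100))
    x0 = np.zeros(100); x0[[3, 17, 42]] = [1.0, -2.0, 0.5]
    b = A @ x0
    x_hat = quotient_l1_over_l2(A, b)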