
    A survey of exemplar-based texture synthesis

    Exemplar-based texture synthesis is the process of generating, from an input sample, new texture images of arbitrary size that are perceptually equivalent to the sample. The two main approaches are statistics-based methods and patch re-arrangement methods. In the first class, a texture is characterized by a statistical signature; a random sampling conditioned on this signature then produces genuinely different texture images. The second class boils down to a clever "copy-paste" procedure that stitches together large regions of the sample. Hybrid methods try to combine ideas from both approaches to avoid their pitfalls. The recent approaches using convolutional neural networks fit into this classification, some being statistical and others performing patch re-arrangement in feature space. They produce impressive syntheses on various kinds of textures. Nevertheless, we found that most real textures are organized at multiple scales, with global structures revealed at coarse scales and highly varying details at finer ones. Thus, when confronted with large natural images of textures, the results of state-of-the-art methods degrade rapidly, and the problem of modeling them remains wide open. (Comment: v2: added comments and typo fixes; new section describing FRAME; new method presented: CNNMR.)
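    As an illustration of the patch re-arrangement class, below is a minimal sketch in the spirit of image quilting (without the min-cut seam): each output tile is copied from the exemplar, chosen among random candidate patches to best match the already-synthesized overlap region. Function and parameter names here are illustrative assumptions, not code from the survey.

```python
import numpy as np

def quilt_synthesize(exemplar, out_size, patch=32, overlap=8,
                     n_candidates=200, seed=None):
    """Grow an out_size x out_size texture from a grayscale exemplar."""
    rng = np.random.default_rng(seed)
    H, W = exemplar.shape
    step = patch - overlap
    out = np.zeros((out_size, out_size), dtype=float)
    for y in range(0, out_size - patch + 1, step):
        for x in range(0, out_size - patch + 1, step):
            # draw random source patches from the exemplar
            ys = rng.integers(0, H - patch + 1, n_candidates)
            xs = rng.integers(0, W - patch + 1, n_candidates)
            best, best_err = None, np.inf
            for sy, sx in zip(ys, xs):
                cand = exemplar[sy:sy + patch, sx:sx + patch].astype(float)
                # squared error on the left/top overlap with what exists
                err = 0.0
                if x > 0:
                    err += ((cand[:, :overlap]
                             - out[y:y + patch, x:x + overlap]) ** 2).sum()
                if y > 0:
                    err += ((cand[:overlap, :]
                             - out[y:y + overlap, x:x + patch]) ** 2).sum()
                if err < best_err:
                    best, best_err = cand, err
            out[y:y + patch, x:x + patch] = best
    return out
```

    Statistics-based methods replace this overlap matching with sampling constrained to a global signature, such as filter-response histograms or CNN feature statistics.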

    Noise Estimation, Noise Reduction and Intensity Inhomogeneity Correction in MRI Images of the Brain

    Rician noise and intensity inhomogeneity are two common types of degradation that manifest in brain images acquired with magnetic resonance imaging (MRI) systems. Many noise reduction and intensity inhomogeneity correction algorithms are based on strong parametric assumptions. These assumptions are generic and do not account for salient features that are unique to specific classes and different levels of degradation in natural images. This thesis proposes the 4-neighborhood clique system in a layer-structured Markov random field (MRF) model for noise estimation and noise reduction. When the test image is the only physical system under consideration, it is regarded as a single-layer Markov random field (SLMRF) model, and as a double-layer MRF model when the test image and classical priors are considered. One scientific principle states that segmentation trivializes the task of bias field correction; another states that the bias field distorts the intensity but not the spatial attributes of an image. This thesis exploits these two widely acknowledged principles to propose a new model for correction of intensity inhomogeneity. The noise estimation algorithm is invariant to the presence or absence of background features in an image and is more accurate in estimating noise levels because it is potentially immune to the modeling errors inherent in some current state-of-the-art algorithms. The noise reduction algorithm derived from the SLMRF model does not incorporate a regularization parameter. Furthermore, it preserves edges, and its output is devoid of the blurring and ringing artifacts associated with Gaussian- and wavelet-based algorithms. The procedure for correction of intensity inhomogeneity does not require the computationally intensive estimation of the bias field map. Furthermore, there is no requirement for a digital brain atlas, which would entail additional image processing tasks such as image registration.
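    As a generic illustration of this model class (not the thesis's SLMRF algorithm, which notably avoids an explicit regularization parameter, whereas `lam` below is explicit), here is a sketch of noise reduction with a 4-neighborhood pairwise MRF prior, minimized by gradient descent:

```python
import numpy as np

def mrf_denoise(noisy, lam=0.5, step=0.1, n_iter=100):
    """Minimize 0.5*||u - noisy||^2 + (lam/2) * sum of squared
    differences over horizontal/vertical (4-neighborhood) cliques."""
    u = noisy.astype(float).copy()
    for _ in range(n_iter):
        # discrete Laplacian over the 4-neighborhood
        # (periodic boundary via np.roll, for brevity)
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        # gradient of the data term is (u - noisy); of the prior, -lam * lap
        u -= step * ((u - noisy) - lam * lap)
    return u
```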

    Information processing in biology

    To survive, organisms must respond appropriately to a variety of challenges posed by a dynamic and uncertain environment. The mechanisms underlying such responses can in general be framed as input-output devices that map environment states (inputs) to associated responses (outputs). In this light, it is appealing to model these systems using information theory, a well-developed mathematical framework for describing input-output systems. Under the information-theoretic perspective, an organism’s behavior is fully characterized by the repertoire of its outputs under different environmental conditions. Due to natural selection, it is reasonable to assume this input-output mapping has been fine-tuned so as to maximize the organism’s fitness. If that is the case, it should be possible to abstract away the mechanistic implementation details and obtain the general principles that lead to fitness in a given environment. These can then be used inferentially both to generate hypotheses about the underlying implementation and to predict novel responses under external perturbations. In this work I use information theory to address the question of how biological systems generate complex outputs from relatively simple mechanisms in a robust manner. In particular, I examine how communication and distributed processing can lead to emergent phenomena that allow collective systems to respond in a much richer way than a single organism could.
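    The basic quantity behind this perspective is the mutual information between environment states and responses. A minimal sketch, assuming a discrete channel given by its joint distribution:

```python
import numpy as np

def mutual_information(p_xy):
    """I(X;Y) in bits, for a joint distribution p_xy[i, j] over
    environment states X (rows) and responses Y (columns)."""
    p_xy = np.asarray(p_xy, dtype=float)
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal over inputs
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal over outputs
    prod = p_x @ p_y                        # independence baseline
    mask = p_xy > 0
    return float((p_xy[mask] * np.log2(p_xy[mask] / prod[mask])).sum())

# Example: a noiseless binary channel carries exactly 1 bit.
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))  # -> 1.0
```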

    Scaling Algorithms for Unbalanced Transport Problems

    This article introduces a new class of fast algorithms to approximate variational problems involving unbalanced optimal transport. While classical optimal transport considers only normalized probability distributions, many applications require computing some relaxed form of transportation between arbitrary positive measures. A generic class of such "unbalanced" optimal transport problems has recently been proposed by several authors. In this paper, we show how to extend the now-classical entropic regularization scheme to these unbalanced problems. This gives rise to fast, highly parallelizable algorithms that operate by performing only diagonal scalings (i.e., pointwise multiplications) of the transportation couplings. They are generalizations of the celebrated Sinkhorn algorithm. We show how these methods can be used to solve unbalanced transport problems and unbalanced gradient flows, and to compute unbalanced barycenters. We showcase applications to 2-D shape modification, color transfer, and growth models.
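    A minimal sketch of such a diagonal-scaling iteration for the KL-relaxed ("soft marginal") case: with entropic regularization eps and marginal penalty rho, each update is the balanced Sinkhorn update raised to the power rho/(rho + eps), and the classical algorithm is recovered as rho goes to infinity. Parameter names and defaults are assumptions, and a practical implementation would work in the log domain for numerical stability.

```python
import numpy as np

def unbalanced_sinkhorn(p, q, C, eps=0.1, rho=1.0, n_iter=500):
    """Entropic unbalanced OT between positive measures p and q
    with cost matrix C; returns the transport coupling."""
    K = np.exp(-C / eps)          # Gibbs kernel
    a = np.ones_like(p, dtype=float)
    b = np.ones_like(q, dtype=float)
    w = rho / (rho + eps)         # marginal-relaxation exponent
    for _ in range(n_iter):
        a = (p / (K @ b)) ** w    # pointwise/diagonal scaling only
        b = (q / (K.T @ a)) ** w
    return a[:, None] * K * b[None, :]
```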

    Variational methods for texture segmentation

    In recent decades, image production has grown significantly. From digital photographs to medical scans, satellite images, and video, ever more data need to be processed. Consequently, the number of applications based on digital images has increased, whether in medicine, land-use planning, or the entertainment business, such as animation and video games. All these areas, although very different from one another, rely on the same image processing techniques. Among these techniques, segmentation is probably one of the most studied because of its important role. Segmentation is the process of extracting meaningful objects from an image. This task, although easily achieved by the human visual system, is actually complex and remains a true challenge for the image processing community despite several decades of research. The thesis work presented in this manuscript proposes solutions to the image segmentation problem in a well-established mathematical framework, namely variational models. The image is defined in a continuous space and the segmentation problem is expressed through the optimization of a functional, or energy. Depending on the object to be segmented, defining this energy can be difficult, in particular for objects with ambiguous borders or with textures. For the latter, the difficulty lies already in the definition of the term texture. The human eye easily recognizes a texture, but it is quite difficult to find words to define it, and harder still to define it in mathematical terms. There is a deliberate vagueness in the definition of texture, which explains the difficulty of conceptualizing a model able to describe it. Often such textures can be described neither by homogeneous regions nor by sharp contours. This is why we are first interested in the extraction of texture features, that is to say, in finding a representation that can discriminate one textured region from another. The first contribution of this thesis is the construction of a texture descriptor from a representation of the image as a surface in a volume. This descriptor belongs to the unsupervised segmentation framework, since it requires no user interaction. The second contribution is a solution to the segmentation problem based on active contour models and tools from information theory. The third contribution is a semi-supervised segmentation model, i.e., one in which constraints provided by the user are integrated into the segmentation framework. This process is derived from a graph of image patches, which gives a connectivity measure between the different points of the image. The segmentation is then expressed through a graph partition and a variational model. In summary, this manuscript tackles the segmentation problem for textured images.
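    To illustrate the patch-graph idea behind the semi-supervised contribution, here is a rough sketch (names and parameters are mine, not the thesis's): pixels are described by small patches, a similarity graph weights patch distances, and user-provided seed labels are propagated by harmonic averaging on the graph.

```python
import numpy as np

def patch_features(img, r=2):
    """Describe each pixel of a grayscale image by its (2r+1)^2 patch."""
    shifts = [np.roll(np.roll(img, dy, 0), dx, 1)
              for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
    return np.stack(shifts, axis=-1).reshape(img.size, -1)

def propagate_labels(feats, seeds, sigma=0.1, n_iter=200):
    """seeds: -1 for unknown pixels, 0 or 1 for user-labeled ones.
    Dense O(N^2) graph, so only suitable for small images."""
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    f = np.where(seeds < 0, 0.5, seeds).astype(float)
    for _ in range(n_iter):
        f = (W @ f) / W.sum(axis=1)        # harmonic averaging step
        f[seeds >= 0] = seeds[seeds >= 0]  # clamp user constraints
    return f > 0.5
```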

    MDS-Based Multiresolution Nonlinear Dimensionality Reduction Model for Color Image Segmentation


    Segmentation and quantification of spinal cord gray matter–white matter structures in magnetic resonance images

    This thesis focuses on ways to differentiate gray matter (GM) and white matter (WM) in magnetic resonance (MR) images of the human spinal cord (SC). The aim of the project is to quantify tissue loss in these compartments in order to study its implications for the progression of multiple sclerosis (MS). To this end, we propose segmentation algorithms that we evaluated on MR images of healthy volunteers. Segmentation of GM and WM in MR images can be done manually by human experts, but manual segmentation is tedious and prone to intra- and inter-rater variability, so a deterministic automation of this task is desirable. On axial 2D images acquired with a recently proposed MR sequence, called AMIRA, we experiment with various automatic segmentation algorithms. We first use variational model-based segmentation approaches combined with appearance models, and later directly apply supervised deep learning to train segmentation networks. Evaluation of the proposed methods shows accurate and precise results that are on par with manual segmentations. We test the developed deep learning approach on images of conventional MR sequences in the context of a GM segmentation challenge, where it outperforms the other competing methods. To further assess the quality of the AMIRA sequence, we apply an already published GM segmentation algorithm to our data, obtaining higher accuracy than the same algorithm achieves on images of conventional MR sequences. On a related topic, we develop a high-order slice interpolation method to address the large slice distances of images acquired with the AMIRA protocol at different vertebral levels, enabling us to resample our data to intermediate slice positions. From the methodological point of view, this work provides an introduction to computer vision, a mathematically focused perspective on variational segmentation approaches and supervised deep learning, and a brief overview of the underlying project's anatomical and medical background.
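    As a simple illustration of resampling to intermediate slice positions (the thesis's high-order scheme is more involved; this is a cubic-spline stand-in with assumed names), consider interpolating an anisotropic volume along the slice axis:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def resample_slices(volume, z_acquired, z_target):
    """volume: (n_slices, H, W) stack; z_acquired: slice positions (mm),
    strictly increasing; z_target: positions to interpolate at."""
    spline = CubicSpline(z_acquired, volume, axis=0)
    return spline(z_target)

# Example: slices 4 mm apart, resampled to 1 mm spacing.
vol = np.random.rand(10, 64, 64)
z_in = np.arange(10) * 4.0
z_out = np.arange(0.0, 36.1, 1.0)
resampled = resample_slices(vol, z_in, z_out)
```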