2,011 research outputs found

    Image Reconstruction from Undersampled Confocal Microscopy Data using Multiresolution Based Maximum Entropy Regularization

    We consider the problem of reconstructing 2D images from randomly under-sampled confocal microscopy measurements. The well-known and widely celebrated total variation regularization, the L1 norm of image derivatives, turns out to be unsuitable for this problem: it cannot handle noise and under-sampling together. This failure is linked to the phase-transition phenomenon observed in compressive sensing research, which is essentially the breakdown of total variation methods when the sampling density falls below a certain threshold. The severity of this breakdown is determined by the so-called mutual incoherence between the derivative operators and the measurement operator. In our problem the mutual incoherence is low, and hence total variation regularization produces serious artifacts in the presence of noise even when the sampling density is not very low. There have been very few attempts to develop regularization methods that outperform total variation regularization for this problem. We develop a multi-resolution regularization method that is adaptive to image structure. In our approach, the desired reconstruction is formulated as a series of coarse-to-fine multi-resolution reconstructions; at each level, the regularization is constructed to be adaptive to the image structure, with the information for adaptation obtained from the reconstruction at the coarser resolution level. This adaptation is achieved via the maximum entropy principle: the required adaptive regularization is determined as the maximizer of entropy subject to constraints extracted from the coarse reconstruction. We demonstrate the superiority of the proposed regularization method over existing ones using several reconstruction examples.
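    The adaptive-weighting idea can be sketched in a few lines of numpy. This is only an illustration: the random sampling mask, the gradient-descent solver, and the edge-based weight heuristic below are assumptions standing in for the paper's maximum-entropy construction, and all parameter values are arbitrary.

```python
import numpy as np

def grad(u):
    """Forward-difference gradients (last row/column padded to zero)."""
    gx = np.diff(u, axis=0, append=u[-1:, :])
    gy = np.diff(u, axis=1, append=u[:, -1:])
    return gx, gy

def div(gx, gy):
    """Discrete divergence (backward differences), adjoint-like to grad."""
    dx = np.diff(gx, axis=0, prepend=gx[:1, :])
    dy = np.diff(gy, axis=1, prepend=gy[:, :1])
    return dx + dy

def weighted_tv_recon(y, mask, w, n_iter=200, step=0.2, lam=0.1):
    """Minimize ||mask*u - y||^2 + lam * sum(w * |grad u|) by
    (sub)gradient descent; w is a spatially varying weight map."""
    u = y.copy()
    eps = 1e-8
    for _ in range(n_iter):
        gx, gy = grad(u)
        mag = np.sqrt(gx**2 + gy**2 + eps)
        tv_grad = -div(w * gx / mag, w * gy / mag)
        u -= step * (mask * (mask * u - y) + lam * tv_grad)
    return u

# Coarse-to-fine: weights for the fine pass are derived from a first
# reconstruction (a simple edge-based heuristic here, standing in for
# the paper's maximum-entropy construction).
rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0        # toy image
mask = (rng.random(img.shape) < 0.4).astype(float)        # 40% sampling
y = mask * (img + 0.05 * rng.standard_normal(img.shape))

coarse = weighted_tv_recon(y, mask, w=np.ones_like(y))    # uniform TV
gx, gy = grad(coarse)
w = 1.0 / (1.0 + 25.0 * (gx**2 + gy**2))                  # relax near edges
fine = weighted_tv_recon(y, mask, w=w)                    # adaptive pass
```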

    High-ISO long-exposure image denoising based on quantitative blob characterization

    Blob detection and image denoising are fundamental, and sometimes related, tasks in computer vision. In this paper, we present a computational method to quantitatively measure blob characteristics using normalized unilateral second-order Gaussian kernels. This method suppresses non-blob structures while yielding quantitative measurements of the position, prominence, and scale of blobs, which facilitates blob reconstruction and blob reduction. Building on this, we propose a denoising scheme for high-ISO long-exposure noise, which often manifests spatially as blob-like structures, employing blob reduction as an inexpensive preprocessing step for conventional denoising methods. We apply the proposed denoising methods to real-world noisy images as well as standard images corrupted by real noise. The experimental results demonstrate the superiority of the proposed methods over state-of-the-art denoising methods.
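    A rough sense of such a pipeline can be given with scipy. Note the hedges: the standard scale-normalized Laplacian of Gaussian below stands in for the paper's normalized unilateral second-order Gaussian kernels, and the median-based blob reduction and the threshold are illustrative choices, not the authors' method.

```python
import numpy as np
from scipy import ndimage

def detect_blobs(img, sigmas=(1.0, 2.0, 4.0), thresh=0.05):
    """Multi-scale blob detection with a scale-normalized Laplacian of
    Gaussian (a stand-in for the paper's unilateral second-order
    Gaussian kernels). Returns a blob mask plus per-pixel prominence
    (response strength) and scale maps."""
    best_resp = np.zeros(img.shape, dtype=float)
    best_scale = np.zeros(img.shape, dtype=float)
    for s in sigmas:
        # sigma^2 factor gives scale normalization; minus sign makes
        # bright blobs respond positively
        resp = -(s ** 2) * ndimage.gaussian_laplace(img.astype(float), s)
        stronger = resp > best_resp
        best_resp[stronger] = resp[stronger]
        best_scale[stronger] = s
    blobs = best_resp > thresh
    return blobs, best_resp, best_scale

def reduce_blobs(noisy, blobs, size=3):
    """Cheap blob-reduction preprocessing: replace detected blob pixels
    by a local median, then hand off to any conventional denoiser."""
    med = ndimage.median_filter(noisy, size=size)
    out = noisy.copy()
    out[blobs] = med[blobs]
    return out
```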

    Weighted Mean Curvature

    In image processing tasks, spatial priors are essential for robust computations, regularization, algorithmic design, and Bayesian inference. In this paper, we introduce weighted mean curvature (WMC) as a novel image prior and present an efficient computation scheme for its discretization in practical image processing applications. We first demonstrate the favorable properties of WMC, such as sampling invariance, scale invariance, and contrast invariance under a Gaussian noise model, and we show the relation of WMC to area regularization. We further propose an efficient computation scheme for discretized WMC, demonstrated herein to process over 33.2 gigapixels per second on a GPU. This scheme also lends itself to a convolutional neural network representation. Finally, WMC is evaluated on synthetic and real images, showing quantitative superiority over total variation and mean curvature priors.
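    As a rough illustration, one common gradient-weighted mean curvature, the level-line curvature kappa scaled by |grad u|, can be computed with central finite differences. This is an assumption for illustration only; the paper's exact WMC definition, its invariance properties, and its fast half-window convolution scheme are not reproduced here.

```python
import numpy as np

def weighted_mean_curvature(u, eps=1e-8):
    """Level-line curvature weighted by gradient magnitude,
    kappa * |grad u| = (uxx*uy^2 - 2*ux*uy*uxy + uyy*ux^2) / |grad u|^2,
    via central finite differences with periodic boundaries."""
    ux = (np.roll(u, -1, 1) - np.roll(u, 1, 1)) / 2.0
    uy = (np.roll(u, -1, 0) - np.roll(u, 1, 0)) / 2.0
    uxx = np.roll(u, -1, 1) - 2 * u + np.roll(u, 1, 1)
    uyy = np.roll(u, -1, 0) - 2 * u + np.roll(u, 1, 0)
    uxy = (np.roll(np.roll(u, -1, 0), -1, 1) - np.roll(np.roll(u, -1, 0), 1, 1)
           - np.roll(np.roll(u, 1, 0), -1, 1) + np.roll(np.roll(u, 1, 0), 1, 1)) / 4.0
    num = uxx * uy**2 - 2 * ux * uy * uxy + uyy * ux**2
    return num / (ux**2 + uy**2 + eps)

def wmc_smooth(u, n_iter=50, dt=0.1):
    """Toy smoothing flow u_t = WMC(u), explicit Euler time stepping."""
    v = u.astype(float).copy()
    for _ in range(n_iter):
        v += dt * weighted_mean_curvature(v)
    return v
```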

    Detail Enhancing Denoising of Digitized 3D Models from a Mobile Scanning System

    The acquisition process of digitizing a large-scale environment produces an enormous amount of raw geometry data. This data is corrupted by system noise, which leads to 3D surfaces that are not smooth and details that are distorted. Any scanning system has noise associated with the scanning hardware, both digital quantization errors and measurement inaccuracies, but a mobile scanning system has additional system noise introduced by the pose estimation of the hardware during data acquisition. The combined system noise generates data that is not handled well by existing noise reduction and smoothing techniques. This research is focused on enhancing the 3D models acquired by mobile scanning systems used to digitize large-scale environments. These digitization systems combine a variety of sensors, including laser range scanners, video cameras, and pose estimation hardware, on a mobile platform for the quick acquisition of 3D models of real-world environments. The data acquired by such systems are extremely noisy, often with significant details on the same order of magnitude as the system noise. By utilizing a unique 3D signal analysis tool, a denoising algorithm was developed that identifies regions of detail and enhances their geometry while removing the effects of noise on the overall model. The developed algorithm can be useful for a variety of digitized 3D models, not just those produced by mobile scanning systems. The challenges faced in this study were the automatic-processing requirements of the enhancement algorithm and the need to fill a gap in the area of 3D model analysis in order to reduce the effect of system noise on the 3D models. In this context, our main contributions are the automation and integration of a data enhancement method not well known to the computer vision community, and the development of a novel 3D signal decomposition and analysis tool. The new technologies featured in this document are intuitive extensions of existing methods to new dimensionality and applications. The research has been applied to detail-enhancing denoising of scanned data from a mobile range scanning system, and results from both synthetic and real models are presented.

    Split operator method for fluorescence diffuse optical tomography using anisotropic diffusion regularisation with prior anatomical information

    Fluorescence diffuse optical tomography (fDOT) is an imaging modality that provides images of the fluorochrome distribution within the object of study. The image reconstruction problem is ill-posed and highly underdetermined and, therefore, regularisation techniques need to be used. In this paper we use a nonlinear anisotropic diffusion regularisation term that incorporates anatomical prior information. We introduce a split operator method that reduces the nonlinear inverse problem to two simpler problems, allowing fast and efficient solution of the fDOT problem. We tested our method using simulated, phantom, and ex-vivo mouse data, and found that it provides reconstructions with better spatial localisation and size of fluorochrome inclusions than the standard Tikhonov penalty term.
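    The splitting idea can be sketched as alternating half-steps: a gradient step on the data term, then a few explicit anisotropic diffusion steps whose diffusivities are derived from an anatomical prior image. Everything below is a placeholder sketch: the forward matrix A, the Perona-Malik style diffusivity, and all constants are illustrative assumptions, not the paper's fDOT operators.

```python
import numpy as np

def anisotropic_step(x, prior, dt=0.15, k=0.1):
    """One explicit diffusion step: diffuse weakly across edges of the
    anatomical prior image, freely inside homogeneous regions."""
    gpx = np.roll(prior, -1, 0) - prior           # prior edge strength
    gpy = np.roll(prior, -1, 1) - prior
    cx = np.exp(-(gpx / k) ** 2)                  # Perona-Malik style
    cy = np.exp(-(gpy / k) ** 2)                  # diffusivities
    fx = cx * (np.roll(x, -1, 0) - x)             # diffusion fluxes
    fy = cy * (np.roll(x, -1, 1) - x)
    return x + dt * (fx - np.roll(fx, 1, 0) + fy - np.roll(fy, 1, 1))

def split_recon(A, y, shape, prior, n_outer=50, step=1e-2, n_diff=3):
    """Split-operator loop: a gradient step on ||A x - y||^2, then a
    few anisotropic diffusion steps as the regularisation half-step."""
    x = np.zeros(shape)
    for _ in range(n_outer):
        r = A @ x.ravel() - y                     # data-fidelity half-step
        x -= step * (A.T @ r).reshape(shape)
        for _ in range(n_diff):                   # regularisation half-step
            x = anisotropic_step(x, prior)
    return x
```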

    Multi-scale Feature-Preserving Smoothing of Images and Volumes on GPU

    Two-dimensional images and three-dimensional volumes have become a staple ingredient of our artistic, cultural, and scientific appetite. Images capture and immortalize an instant, such as a natural scene, through a photographic camera. Moreover, they can capture details inside biological subjects through the use of CT (computed tomography) scans, X-rays, ultrasound, etc. Three-dimensional volumes of objects are also of high interest in medical imaging, engineering, and the analysis of cultural heritage. They are produced using tomographic reconstruction, a technique that combines a large series of 2D scans captured from multiple views. Typically, penetrative radiation is used to obtain each 2D scan: X-rays for CT scans, radio-frequency waves for MRI (magnetic resonance imaging), electron-positron annihilation for PET scans, etc. Unfortunately, their acquisition is influenced by noise caused by different factors. Noise in two-dimensional images can be caused by low-light illumination, electronic defects, a low dose of radiation, and mispositioning of the tool or object. Noise in three-dimensional volumes also comes from a variety of sources: the limited number of views, lack of sensor sensitivity, high contrasts, the reconstruction algorithms employed, etc. The constraint that data acquisition be noiseless is unrealistic, so it is desirable to reduce or eliminate noise at the earliest stage in the pipeline. However, removing noise while preserving the sharp features of an image or volume object remains a challenging task. We propose a multi-scale method to smooth 2D images and 3D tomographic data while preserving features at a specified scale. Our algorithm is controlled by a single user parameter: the minimum scale of features to be preserved. Any variation smaller than the specified scale is treated as noise and smoothed, while discontinuities such as corners, edges, and detail at a larger scale are preserved. We demonstrate that our smoothed data produces clean images and clean contour surfaces of volumes using standard surface-extraction algorithms. In addition, we compare our results with those of previous approaches. Our method is inspired by anisotropic diffusion. We compute our diffusion tensors from the continuous local histograms of gradients around each pixel in images and around each voxel in volumes. As our smoothing method runs entirely on the GPU, it is extremely fast.
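    A CPU sketch of the single-parameter idea follows. For brevity, a scalar edge-stopping diffusivity computed from the structure tensor at the chosen feature scale replaces the paper's full diffusion tensors built from local gradient histograms (and its GPU implementation); the constants are illustrative.

```python
import numpy as np
from scipy import ndimage

def feature_preserving_smooth(img, n_iter=40, dt=0.2, scale=2.0, k=0.02):
    """Multi-scale edge-aware smoothing sketch: variation below the
    feature scale is diffused away, stronger structures are kept."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        ux = ndimage.sobel(u, axis=1) / 8.0       # normalized derivatives
        uy = ndimage.sobel(u, axis=0) / 8.0
        # structure tensor entries, averaged at the feature scale
        jxx = ndimage.gaussian_filter(ux * ux, scale)
        jxy = ndimage.gaussian_filter(ux * uy, scale)
        jyy = ndimage.gaussian_filter(uy * uy, scale)
        # largest eigenvalue = squared edge strength at that scale
        tmp = np.sqrt(np.maximum((jxx - jyy) ** 2 / 4 + jxy ** 2, 0.0))
        mu1 = (jxx + jyy) / 2 + tmp
        c = np.exp(-mu1 / k)                      # small across features
        fx = c * (np.roll(u, -1, 1) - u)          # diffusion fluxes
        fy = c * (np.roll(u, -1, 0) - u)
        u += dt * (fx - np.roll(fx, 1, 1) + fy - np.roll(fy, 1, 0))
    return u
```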

    Decellularized grass as a sustainable scaffold for skeletal muscle tissue engineering

    Scaffold materials suitable for the scale-up and subsequent commercialization of tissue-engineered products should ideally be cost-effective and accessible. For the in vitro culture of certain adherent cells, synthetic fabrication techniques are often employed to produce micro- or nano-patterned substrates to influence cell attachment, morphology, and alignment via the mechanism of contact guidance. Here we present a natural scaffold, in the form of decellularized amenity grass, which retains its natural striated topography and supports the attachment, proliferation, alignment, and differentiation of murine C2C12 myoblasts, without the need for additional functionalization. This presents an inexpensive, sustainable scaffold material and structure for tissue engineering applications capable of influencing cell alignment, a desired property for the culture of skeletal muscle and other anisotropic tissues. The raw and processed quantitative data required to reproduce these findings are available to download from http://dx.doi.org/10.17632/5mgnz3zrmv.