    Image Denoising via L0 Gradient Minimization

    The L0 gradient minimization (LGM) method was recently proposed for image smoothing. As an improvement of the total variation (TV) model, which employs the L1 norm of the gradient, the LGM model yields much better results for piecewise constant images. However, like the TV model, the LGM model suffers, even more seriously, from the staircasing effect and from inefficiency in preserving image texture. To overcome these drawbacks, in this paper we propose to introduce an effective fidelity term into the LGM model. The fidelity term is an instance of the moving least squares method using a steering kernel. Under this framework, the two methods benefit from each other and produce better results. Experimental results show that the proposed scheme is promising compared with state-of-the-art methods.
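
    For orientation, the following is a minimal 1D sketch of the half-quadratic splitting scheme commonly used to solve L0 gradient minimization (in the spirit of Xu et al.'s original formulation); the parameters are illustrative, and the paper's moving-least-squares fidelity term is not reproduced here.

        import numpy as np

        def l0_smooth_1d(f, lam=0.02, beta0=0.04, beta_max=1e5, kappa=2.0):
            """Sketch: min_s ||s - f||^2 + lam * ||D s||_0 via half-quadratic
            splitting. An auxiliary gradient g is hard-thresholded (the L0
            step), then s is recovered from a small linear system."""
            n = len(f)
            # Forward-difference operator D (dense; fine for small signals).
            D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]
            s, beta = f.astype(float), beta0
            while beta < beta_max:
                g = D @ s
                g[g**2 < lam / beta] = 0.0      # keep only significant gradients
                A = np.eye(n) + beta * D.T @ D  # quadratic s-subproblem
                s = np.linalg.solve(A, f + beta * D.T @ g)
                beta *= kappa                   # continuation on beta
            return s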

    Ultrasound image processing in the evaluation of labor induction failure risk

    Labor induction is the artificial stimulation of uterine contractions for the purpose of vaginal birth; it is prescribed for both medical and elective reasons. A successful induction ends in vaginal delivery, whereas cesarean section is one of the main risks of induction, occurring in about 20% of inductions. A ripe cervix (soft and distensible) is needed for a successful labor. During ripening, cervical tissues undergo microstructural changes: collagen becomes disorganized and water content increases. These changes affect the interaction between cervical tissue and sound waves during transvaginal ultrasound scanning and are perceived as gray-level intensity variations in the echographic image. Texture analysis can be used to quantify these variations and provides a means to evaluate cervical ripening non-invasively.
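
    As a concrete illustration of the texture analysis mentioned above, the sketch below computes gray-level co-occurrence (GLCM) features of the kind commonly applied to ultrasound echotexture; the function name, region of interest, and parameter choices are assumptions for illustration, not the study's actual pipeline.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def cervical_texture_features(roi, levels=64):
            """GLCM statistics of a 2D uint8 ultrasound region of interest."""
            roi = (roi // (256 // levels)).astype(np.uint8)   # requantize
            glcm = graycomatrix(roi, distances=[1, 2], angles=[0, np.pi / 2],
                                levels=levels, symmetric=True, normed=True)
            # Ripening-related changes in collagen and water content plausibly
            # shift these statistics; which features discriminate best is an
            # empirical question.
            return {p: graycoprops(glcm, p).mean()
                    for p in ("contrast", "homogeneity", "energy", "correlation")}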

    Data driven regularization models of non-linear ill-posed inverse problems in imaging

    Imaging technologies are widely used in fields such as the natural sciences, engineering, medicine, and the life sciences. A broad class of imaging problems reduces to solving ill-posed inverse problems (IPs). Traditional strategies for these ill-posed IPs rely on variational regularization methods, which minimize suitable energies and exploit knowledge of the image formation model (the forward operator) together with prior knowledge of the solution, but they do not incorporate knowledge directly from data. Conversely, more recent learned approaches can capture the intricate statistics of images from large datasets, but lack a systematic way to incorporate prior knowledge of the image formation model. The main purpose of this thesis is to discuss data-driven image reconstruction methods that combine the benefits of these two strategies for the solution of highly nonlinear ill-posed inverse problems. Mathematical formulations and numerical approaches for imaging IPs, covering linear as well as strongly nonlinear problems, are described. More specifically, we address the Electrical Impedance Tomography (EIT) reconstruction problem by unrolling the regularized Gauss-Newton method and integrating regularization learned by a data-adaptive neural network. Furthermore, we investigate the solution of nonlinear ill-posed IPs by introducing a deep plug-and-play (PnP) framework that integrates a graph convolutional denoiser into the proximal Gauss-Newton method, with a practical application to EIT, a recently introduced and promising imaging technique. Efficient algorithms are then applied to the limited-electrode problem in EIT, combining compressive sensing techniques and deep learning strategies. Finally, a transformer-based neural network architecture is adapted to restore the noisy solution of the Computed Tomography problem recovered using the filtered back-projection method.
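
    To make the combination of model-based and learned components concrete, here is an illustrative plug-and-play Gauss-Newton loop in which the regularization step is a learned denoiser; this is a generic sketch under assumed callables (forward, jacobian, denoiser), not the thesis implementation.

        import numpy as np

        def pnp_gauss_newton(y, forward, jacobian, denoiser, x0, iters=20, step=1.0):
            """Plug-and-play Gauss-Newton: linearize the forward operator,
            take a damped least-squares step on the data residual, then apply
            a learned denoiser in place of a proximal regularization step."""
            x = x0.copy()
            for _ in range(iters):
                J = jacobian(x)                  # local linearization
                r = forward(x) - y               # data residual
                dx = np.linalg.lstsq(J, -r, rcond=None)[0]
                x = denoiser(x + step * dx)      # learned prior as proximal map
            return x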

    Adaptive Methods for Point Cloud and Mesh Processing

    Point clouds and 3D meshes are widely used in numerous applications ranging from games to virtual reality to autonomous vehicles. This dissertation proposes several approaches for noise removal and calibration of noisy point cloud data, along with 3D mesh sharpening methods. Order statistic filters have proven very successful in image processing and other domains, and several of their variations, originally proposed for image processing, are extended here to point cloud filtering; in addition, a new adaptive vector median filter is proposed for removing noise and outliers from point cloud data. The major contributions of this research lie in four aspects: (1) four order statistic algorithms are extended, and one adaptive filtering method is proposed, for noisy point clouds, with improved results such as the preservation of significant features; these methods are applied to standard models, synthetic models, and real scenes; (2) a hardware acceleration of the proposed method for filtering point clouds is implemented on multicore processors using the Microsoft Parallel Patterns Library; (3) a new method for aerial LIDAR data filtering is proposed, the objective being automatic extraction of ground points from aerial LIDAR data with minimal human intervention; and (4) a novel method for mesh color sharpening using the discrete Laplace-Beltrami operator is proposed. Median and order statistics-based filters are widely used in signal and image processing because they easily remove outlier noise while preserving important features. This dissertation demonstrates a wide range of results with the median, vector median, fuzzy vector median, adaptive mean, adaptive median, and adaptive vector median filters on point cloud data. The experiments show that large-scale noise is removed while important features of the point cloud are preserved, with reasonable computation time. Quantitative criteria (e.g., complexity, Hausdorff distance, and root mean squared error (RMSE)) as well as qualitative criteria (e.g., the perceived visual quality of the processed point cloud) are employed to assess the performance of the filters on data corrupted by different noise models. The adaptive vector median is further optimized for denoising and ground filtering of aerial LIDAR point clouds, and is accelerated on multi-core CPUs using the Microsoft Parallel Patterns Library. In addition, this dissertation presents a new method for mesh color sharpening using the discrete Laplace-Beltrami operator, an approximation of second-order derivatives on irregular 3D meshes. The one-ring neighborhood is used to compute the operator, and the color of each vertex is updated by adding the Laplace-Beltrami operator of the vertex color, weighted by a factor, to its original value. Different discretizations of the Laplace-Beltrami operator have been proposed for geometric processing of 3D meshes; this work applies several of them to sharpening 3D mesh colors and compares their performance. Experimental results demonstrate the effectiveness of the proposed algorithms.
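
    For reference, the basic vector median step extended from images to point clouds can be sketched as follows; the neighborhood size and implementation details are illustrative, not the dissertation's.

        import numpy as np
        from scipy.spatial import cKDTree

        def vector_median_filter(points, k=8):
            """Replace each point by the neighbor that minimizes the summed
            L2 distance to all other points in its k-neighborhood."""
            tree = cKDTree(points)
            _, idx = tree.query(points, k=k + 1)   # neighborhood incl. the point
            out = np.empty_like(points)
            for i, nbrs in enumerate(idx):
                P = points[nbrs]                   # (k+1, 3) local patch
                # Summed pairwise distances; the vector median is the minimizer.
                cost = np.linalg.norm(P[:, None] - P[None, :], axis=-1).sum(axis=1)
                out[i] = P[np.argmin(cost)]
            return out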

    Computer Vision Approaches to Liquid-Phase Transmission Electron Microscopy

    Electron microscopy (EM) exploits the interaction between electrons and matter to produce images at resolutions down to the atomic level. To avoid undesired scattering along the electron path, EM samples are conventionally imaged in the solid state under vacuum. Recently, this limitation has been overcome by liquid-phase electron microscopy (LP EM), a technique that enables the analysis of samples in their native liquid state. LP EM paired with a high-frame-rate direct detection camera allows tracking the motion of particles in liquids, as well as their temporal dynamics. In this work, LP EM is used to image particles undergoing Brownian motion, exploiting their natural rotation to access all particle views and thereby reconstruct their 3D structure via tomographic techniques. Specific computer vision tools were designed around the limitations of LP EM to process the imaging results; in particular, different deblurring and denoising approaches were adopted to improve image quality. The processed LP EM images were then used to reconstruct 3D models of the imaged samples. This task was performed with two different methods: Brownian tomography (BT) and Brownian particle analysis (BPA). The former tracks a single particle in time, capturing the evolution of its dynamics. The latter is a temporal extension of the single particle analysis (SPA) technique, which is conventionally paired with cryo-EM to reconstruct 3D density maps from thousands of EM images capturing hundreds of particles of the same species frozen on a grid. In contrast, BPA can process image sequences that do not contain thousands of particles, monitoring individual particle views across consecutive frames rather than within a single frame.
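
    The frame-to-frame particle linking that BT and BPA rely on can be illustrated with a simple assignment-based tracker; this is a hypothetical sketch (the names and the displacement gate are illustrative), not the tooling developed in the thesis.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def link_frames(prev, curr, max_disp=5.0):
            """Match detected particle centroids between consecutive frames.
            `prev` is (N, 2) and `curr` is (M, 2), in pixels."""
            cost = np.linalg.norm(prev[:, None] - curr[None, :], axis=-1)
            rows, cols = linear_sum_assignment(cost)   # globally optimal match
            keep = cost[rows, cols] <= max_disp        # gate implausible jumps
            return list(zip(rows[keep], cols[keep]))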

    Applications of Physically Accurate Deep Learning for Processing Digital Rock Images

    Digital rock analysis aims to improve our understanding of the fluid flow properties of reservoir rocks, which are important for enhanced oil recovery, hydrogen storage, carbon dioxide storage, and groundwater management. X-ray microcomputed tomography (micro-CT) is the primary approach to capturing the structure of porous rock samples for digital rock analysis. The acquired micro-CT images are first processed with image-based techniques such as registration, denoising, and segmentation, depending on requirements; numerical simulations are then conducted on the digital models for petrophysical prediction. Because the accuracy of these simulations depends strongly on the quality of the micro-CT images, image processing is a critical step in digital rock analysis. Recent advances in deep learning have surpassed conventional methods for image processing. Herein, the utility of convolutional neural networks (CNNs) and generative adversarial networks (GANs) is assessed for various applications in digital rock image processing, such as segmentation, super-resolution, and denoising. To obtain training data, different sandstone and carbonate samples were scanned using various micro-CT facilities. Validation images previously unseen by the trained networks are then utilised to evaluate the performance and robustness of the proposed deep learning techniques. Various threshold scenarios are applied to segment the reconstructed digital rock images for sensitivity analyses, and quantitative petrophysical analyses, such as porosity, absolute/relative permeability, and pore size distribution, are carried out to assess the physical accuracy of the digital rock data against the corresponding ground truth data. The results show that both CNN and GAN deep learning methods can provide physically accurate digital rock images with less user bias than traditional approaches. These results unlock new pathways for applications related to the characterisation of porous reservoir rocks.
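
    The threshold sensitivity analysis described above amounts to sweeping a segmentation cutoff and recording the resulting petrophysical quantity; a minimal sketch for porosity (assuming a grayscale volume in which darker voxels are pores) might look like this.

        import numpy as np

        def porosity_sensitivity(volume, thresholds):
            """Segment a 3D micro-CT volume at several cutoffs and report the
            pore fraction (porosity) for each."""
            return {float(t): float((volume < t).mean()) for t in thresholds}

        # e.g. porosity_sensitivity(vol, np.linspace(0.3, 0.6, 7))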

    Review: Deep learning in electron microscopy

    Deep learning is transforming most areas of science and technology, including electron microscopy. This review offers a practical perspective aimed at developers with limited familiarity with deep learning. For context, we review popular applications of deep learning in electron microscopy. We then discuss the hardware and software needed to get started with deep learning and to interface with electron microscopes, and review neural network components, popular architectures, and their optimization. Finally, we discuss future directions of deep learning in electron microscopy.

    The Manifold of Neural Responses Informs Physiological Circuits in the Visual System

    The rapid development of multi-electrode and imaging techniques is producing a data explosion in neuroscience, opening the possibility of truly understanding the organization and functionality of our visual systems. At the same time, the need for more natural visual stimuli greatly increases the complexity of the data; together, these create a challenge for machine learning. Our goal in this thesis is to develop one such technique. The central pillar of our contribution is the design of a manifold of neurons, together with an algorithmic approach to inferring it. This manifold is functional in the sense that nearby neurons on the manifold respond similarly (in time) to similar aspects of the stimulus ensemble. By organizing the neurons, our manifold differs from the standard manifolds used in visual neuroscience, which instead organize the stimuli. Our contributions to the machine learning component of the thesis are twofold. First, we develop a tensor representation of the data, adopting a multilinear view of potential circuitry; tensor factorization then provides an intermediate representation between the neural data and the manifold, and we found that the rank of the neural factor matrix can be used to select an appropriate number of tensor factors. Second, to apply manifold learning techniques, a similarity kernel on the data must be defined. Like many others, we employ a Gaussian kernel, but refine it with a proposed graph sparsification technique, which makes the resulting manifolds less sensitive to the choice of bandwidth parameter. We apply this method to neuroscience data recorded from the retina and primary visual cortex of the mouse. For the algorithm to work, however, the underlying circuitry must be exercised as fully as possible; to this end, we develop an ensemble of flow stimuli that simulate what the mouse would 'see' running through a field. Applying the algorithm to the retina reveals that neurons form clusters corresponding to known retinal ganglion cell types. In the cortex, a continuous manifold is found, indicating that, from a functional circuit point of view, there may be a continuum of cortical function types. Interestingly, both manifolds share similar global coordinates, which hint at what the key ingredients of vision might be. Lastly, we turn to perhaps the most widely used model of the cortex: deep convolutional networks. Their feedforward architecture leads to manifolds that are even more clustered than the retina's, and not at all like that of the cortex. This suggests, perhaps, that they may not suffice as general models for artificial intelligence.
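
    A loose sketch of the pipeline described above (tensor factorization followed by a kNN-sparsified Gaussian affinity) is given below; it assumes the tensorly library, and all names and parameters are illustrative rather than the thesis' own.

        import numpy as np
        import tensorly as tl
        from tensorly.decomposition import parafac

        def neural_manifold_affinity(data, rank=10, sigma=1.0, knn=10):
            """CP/PARAFAC factorization of a neurons x stimuli x time tensor,
            then a Gaussian affinity between neurons, sparsified by keeping
            each neuron's k strongest edges."""
            _, factors = parafac(tl.tensor(data), rank=rank)
            U = tl.to_numpy(factors[0])                  # neuron factor matrix
            d2 = ((U[:, None] - U[None, :]) ** 2).sum(-1)
            W = np.exp(-d2 / (2 * sigma**2))
            # Graph sparsification: keep the knn strongest affinities per row,
            # then symmetrize -- this reduces sensitivity to sigma.
            thresh = np.sort(W, axis=1)[:, -knn][:, None]
            W = np.where(W >= thresh, W, 0.0)
            return np.maximum(W, W.T)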

    Anisotropy Across Fields and Scales

    This open access book focuses on the processing, modeling, and visualization of anisotropy information, which are often addressed by employing sophisticated mathematical constructs such as tensors and other higher-order descriptors. It also discusses adaptations of such constructs to problems encountered in seemingly dissimilar areas of medical imaging, the physical sciences, and engineering. Featuring original research contributions as well as insightful reviews for scientists interested in handling anisotropy information, it covers topics such as pertinent geometric and algebraic properties of tensors and tensor fields, challenges faced in processing and visualizing different types of data, statistical techniques for data processing, and specific applications such as mapping white-matter fiber tracts in the brain. The book helps readers grasp the current challenges in the field, provides information on the techniques devised to address them, and facilitates the transfer of knowledge between disciplines in order to advance the research frontiers in these areas. This multidisciplinary book presents, in part, the outcomes of the seventh in a series of Dagstuhl seminars devoted to visualization and processing of tensor fields and higher-order descriptors, held in Dagstuhl, Germany, from October 28 to November 2, 2018.