456 research outputs found

    Denoising Particle Beam Micrographs with Plug-and-Play Methods

    In a particle beam microscope, a raster-scanned focused beam of particles interacts with a sample to generate a secondary electron (SE) signal pixel by pixel. Conventionally formed micrographs are noisy because of limitations on acquisition time and dose. Recent work has shown that estimation methods applicable to a time-resolved measurement paradigm can greatly reduce noise, but these methods apply pixel by pixel without exploiting image structure. Raw SE count data can be modeled with a compound Poisson (Neyman Type A) likelihood, which implies data variance that is signal-dependent and greater than the variation in the underlying particle-sample interaction. These statistical properties make methods that assume additive white Gaussian noise ineffective. This paper introduces methods for particle beam micrograph denoising that use the plug-and-play framework to exploit image structure while being applicable to the unusual data likelihoods of this modality. Approximations of the data likelihood that vary in accuracy and computational complexity are combined with denoising by total variation regularization, BM3D, and DnCNN. Methods are provided for both conventional and time-resolved measurements, assuming SE counts are available. In simulations representative of helium ion microscopy and scanning electron microscopy, significant improvements in root mean-squared error (RMSE), structural similarity index measure (SSIM), and qualitative appearance are obtained. Average RMSE is reduced by factors ranging from 2.24 to 4.11.
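    The compound Poisson (Neyman Type A) measurement model described above is easy to see in simulation. The sketch below is only an illustration of that model, not the paper's code; the function name and the dose and yield values are assumptions. A first Poisson stage draws the number of incident particles per pixel, and a second Poisson stage draws the SE count, which makes the data variance exceed that of a plain Poisson model and explains why AWGN-based denoisers fit this data poorly.

    # A minimal sketch of the compound Poisson (Neyman Type A) SE count model.
    import numpy as np

    def simulate_se_counts(eta, dose, rng=None):
        """Simulate conventional SE count measurements for a yield image `eta`.

        eta  : array of per-pixel SE yields (the quantity to be estimated)
        dose : mean number of incident particles per pixel
        """
        rng = np.random.default_rng() if rng is None else rng
        # Number of incident particles that actually strike each pixel.
        m = rng.poisson(dose, size=np.shape(eta))
        # Total SE count: a sum of m Poisson(eta) draws, i.e. Poisson(m * eta).
        return rng.poisson(m * np.asarray(eta))

    # Example (illustrative values): mean ~ dose*eta = 40, but
    # variance ~ dose*eta*(1+eta) = 120, i.e. larger than Poisson.
    eta = np.full((64, 64), 2.0)
    y = simulate_se_counts(eta, dose=20)
    print(y.mean(), y.var())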

    Image and Texture Independent Deep Learning Noise Estimation using Multiple Frames

    In this study, a novel multiple-frame-based, image- and texture-independent convolutional neural network (CNN) noise estimator is introduced. The estimator works

    High-Level Interpretation of Urban Road Maps Fusing Deep Learning-Based Pixelwise Scene Segmentation and Digital Navigation Maps

    This paper addresses the problem of high-level road modeling for urban environments. Current approaches are based on geometric models that fit the shape of narrow roads well. However, urban environments are more complex, and those models are not suitable for inner-city intersections or other urban situations. The approach presented in this paper generates a model based on the information provided by a digital navigation map and a vision-based sensing module. On the one hand, the digital map includes data about the road type (residential, highway, intersection, etc.), road shape, number of lanes, and other context information such as vegetation areas, parking slots, and railways. On the other hand, the sensing module provides a pixelwise segmentation of the road using a ResNet-101 CNN with random data augmentation, as well as other hand-crafted features such as curbs, road markings, and vegetation. The high-level interpretation module is designed to learn the best set of parameters of a function that maps all the available features to the actual parametric model of the urban road, using a weighted F-score as the cost function to be optimized. We show that the presented approach eases the maintenance of digital maps through crowd-sourcing, owing to the small amount of data that must be sent, and adds important context information to traditional road detection systems.
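    The weighted F-score used as the cost function can be written compactly. The sketch below is a generic pixelwise F-beta score; the mask-based formulation, function name, and default beta are illustrative assumptions rather than details from the paper.

    # A minimal F-beta score between a predicted road mask and ground truth.
    import numpy as np

    def f_beta_score(pred_mask, gt_mask, beta=1.0, eps=1e-9):
        """F-beta between two boolean masks; beta > 1 weights recall over precision."""
        pred = np.asarray(pred_mask, dtype=bool)
        gt = np.asarray(gt_mask, dtype=bool)
        tp = np.logical_and(pred, gt).sum()
        precision = tp / (pred.sum() + eps)
        recall = tp / (gt.sum() + eps)
        b2 = beta * beta
        return (1 + b2) * precision * recall / (b2 * precision + recall + eps)

    Fitting the parametric road model can then be framed as maximizing this score over the model parameters (e.g. lane count and widths) with a black-box optimizer.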

    Estimating Uncertainty in Neural Networks for Cardiac MRI Segmentation: A Benchmark Study


    Accurate phase retrieval of complex point spread functions with deep residual neural networks

    Phase retrieval, i.e. the reconstruction of phase information from intensity information, is a central problem in many optical systems. Here, we demonstrate that a deep residual neural network is able to quickly and accurately perform this task for arbitrary point spread functions (PSFs) formed by Zernike-type phase modulations. Five slices of the 3D PSF at different focal positions within a two-micron range around the focus are sufficient to retrieve the first six orders of Zernike coefficients. Comment: 8 pages, 4 figures.
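    For readers unfamiliar with Zernike-type PSFs, the forward model that the network has to invert can be sketched in a few lines of Fourier optics. This is a generic illustration, not the authors' simulation code; the grid size and defocus amplitude are arbitrary assumptions.

    # PSF = |FFT(aperture * exp(i * phase))|^2 for a pupil-plane phase modulation.
    import numpy as np

    def psf_from_phase(phase, aperture):
        """Intensity PSF for a given pupil phase (radians) and binary aperture mask."""
        pupil = aperture * np.exp(1j * phase)
        field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
        psf = np.abs(field) ** 2
        return psf / psf.sum()

    # Example: pure defocus (Zernike Z_2^0 ~ 2*rho^2 - 1) over a circular pupil.
    n = 256
    y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
    rho = np.hypot(x, y)
    aperture = (rho <= 1.0).astype(float)
    defocus = 1.5 * (2 * rho**2 - 1)      # 1.5 rad amplitude chosen arbitrarily
    psf = psf_from_phase(defocus * aperture, aperture)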

    Deep Learning for Single Image Super-Resolution: A Brief Review

    Single image super-resolution (SISR) is a notoriously challenging ill-posed problem, which aims to obtain a high-resolution (HR) output from one of its low-resolution (LR) versions. To solve the SISR problem, powerful deep learning algorithms have recently been employed and have achieved state-of-the-art performance. In this survey, we review representative deep learning-based SISR methods and group them into two categories according to their major contributions to two essential aspects of SISR: the exploration of efficient neural network architectures for SISR, and the development of effective optimization objectives for deep SISR learning. For each category, a baseline is first established, and several critical limitations of the baseline are summarized. Then, representative works on overcoming these limitations are presented based on their original content as well as our critical understanding and analyses, and relevant comparisons are conducted from a variety of perspectives. Finally, we conclude this review with some vital current challenges and future trends in SISR leveraging deep learning algorithms. Comment: Accepted by IEEE Transactions on Multimedia (TMM).
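    As a point of reference for the first category (architecture exploration), a minimal post-upsampling SISR network trained with a pixelwise L1 objective can be sketched as below. The layer sizes and the loss choice are illustrative assumptions in the spirit of early CNN baselines, not specifics from the review.

    # A minimal SISR sketch: bicubic upsampling plus a small residual-correction CNN.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinySR(nn.Module):
        """Bicubic upsampling followed by a small CNN that predicts a residual correction."""
        def __init__(self, scale=2, channels=64):
            super().__init__()
            self.scale = scale
            self.body = nn.Sequential(
                nn.Conv2d(3, channels, 9, padding=4), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels // 2, 5, padding=2), nn.ReLU(inplace=True),
                nn.Conv2d(channels // 2, 3, 5, padding=2),
            )

        def forward(self, lr):
            up = F.interpolate(lr, scale_factor=self.scale, mode="bicubic", align_corners=False)
            return up + self.body(up)   # residual learning eases optimization

    # One training step with a pixelwise L1 loss, a common optimization objective.
    model = TinySR()
    lr_batch = torch.rand(4, 3, 32, 32)
    hr_batch = torch.rand(4, 3, 64, 64)
    loss = F.l1_loss(model(lr_batch), hr_batch)
    loss.backward()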