Sparsity-Based Super Resolution for SEM Images
The scanning electron microscope (SEM) produces an image of a sample by
scanning it with a focused beam of electrons. The electrons interact with the
atoms in the sample, which emit secondary electrons that contain information
about the surface topography and composition. The sample is scanned by the
electron beam point by point until an image of the surface is formed. Since
its invention in 1942, the SEM has become paramount in the discovery and
understanding of the nanometer world, and today it is used extensively in both
research and industry. In principle, SEMs can achieve resolution better than
one nanometer. However, for many applications, working at sub-nanometer
resolution implies an exceedingly large number of scanning points. For exactly
this reason, the SEM diagnostics of microelectronic chips is performed either
at high resolution (HR) over a small area or at low resolution (LR) while
capturing a larger portion of the chip. Here, we employ sparse coding and
dictionary learning to algorithmically enhance LR SEM images of microelectronic
chips up to the level of the HR images acquired by slow SEM scans, while
considerably reducing the noise. Our methodology consists of two steps: an
offline stage of learning a joint dictionary from a sequence of LR and HR
images of the same region in the chip, followed by a fast online
super-resolution step where the resolution of a new LR image is enhanced. We
provide several examples with typical chips used in the microelectronics
industry, as well as a statistical study on arbitrary images with
characteristic structural features. Conceptually, our method works well when
the images have similar characteristics. This work demonstrates that employing
sparsity concepts can greatly improve the performance of SEM, thereby
considerably increasing the scanning throughput without compromising on
analysis quality and resolution.
Comment: Final publication available at ACS Nano Letters
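The two-stage methodology described above — an offline stage that learns one joint dictionary from paired LR/HR patches, followed by a fast online stage that sparse-codes a new LR patch and synthesises its HR counterpart — can be sketched in toy form. This is not the authors' implementation: the patch data are synthetic, the dictionary update is a simple MOD-style alternation, and `sparse_code` is plain matching pursuit.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_code(D, x, k=3):
    """Greedy matching pursuit: explain x with at most k atoms of D."""
    a, r = np.zeros(D.shape[1]), x.astype(float).copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ r)))   # most correlated atom
        c = D[:, j] @ r
        a[j] += c
        r -= c * D[:, j]
    return a

# Offline stage: learn one joint dictionary from paired LR/HR patches.
# Synthetic stand-in data: 200 patch pairs, LR dim 16 (4x4), HR dim 64 (8x8).
n, d_lr, d_hr, n_atoms = 200, 16, 64, 32
codes = rng.standard_normal((n_atoms, n))
P_lr = rng.standard_normal((d_lr, n_atoms)) @ codes   # LR patches (columns)
P_hr = rng.standard_normal((d_hr, n_atoms)) @ codes   # HR patches (columns)
X = np.vstack([P_lr, P_hr])                           # stacked joint patches

D = rng.standard_normal((d_lr + d_hr, n_atoms))
D /= np.linalg.norm(D, axis=0)
for _ in range(10):                                   # MOD-style alternation
    A = np.column_stack([sparse_code(D, x) for x in X.T])
    D = X @ np.linalg.pinv(A)                         # dictionary update
    D /= np.linalg.norm(D, axis=0) + 1e-12
D_lr, D_hr = D[:d_lr], D[d_lr:]                       # coupled sub-dictionaries

# Fast online stage: sparse-code a new LR patch against the LR part of the
# dictionary, then synthesise the HR patch from the HR part using the same code.
scale = np.linalg.norm(D_lr, axis=0) + 1e-12          # renormalise LR atoms
a = sparse_code(D_lr / scale, P_lr[:, 0])
x_hr_hat = (D_hr / scale) @ a                         # enhanced HR estimate
```

The key assumption, matching the abstract's "images with similar characteristics" caveat, is that LR and HR patches of the same region share one sparse code over the coupled sub-dictionaries.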
Deep Networks for Image Super-Resolution with Sparse Prior
Deep learning techniques have been successfully applied in many areas of
computer vision, including low-level image restoration problems. For image
super-resolution, several models based on deep neural networks have been
recently proposed and attained superior performance that overshadows all
previous handcrafted models. The question then arises whether large-capacity
and data-driven models have become the dominant solution to the ill-posed
super-resolution problem. In this paper, we argue that domain expertise
represented by the conventional sparse coding model is still valuable, and it
can be combined with the key ingredients of deep learning to achieve further
improved results. We show that a sparse coding model particularly designed for
super-resolution can be incarnated as a neural network, and trained in a
cascaded structure from end to end. The interpretation of the network based on
sparse coding leads to much more efficient and effective training, as well as a
reduced model size. Our model is evaluated on a wide range of images, and shows
clear advantage over existing state-of-the-art methods in terms of both
restoration accuracy and human subjective quality.
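The abstract's central idea — incarnating a sparse coding model as a feed-forward network — follows the LISTA construction, in which iterative shrinkage is unrolled into a fixed number of "layers". A minimal numpy sketch, not the paper's cascaded model; the matrices `W`, `S` and threshold `theta` below are the plain ISTA initialisation that LISTA would subsequently train end to end.

```python
import numpy as np

def soft_threshold(z, theta):
    """Shrinkage non-linearity used in (L)ISTA layers."""
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

def lista_forward(x, W, S, theta, n_layers):
    """Unrolled ISTA: each 'layer' is one soft-thresholded recurrence."""
    z = soft_threshold(W @ x, theta)
    for _ in range(n_layers - 1):
        z = soft_threshold(W @ x + S @ z, theta)
    return z

rng = np.random.default_rng(1)
d, m = 16, 32                        # signal dimension, number of atoms
D = rng.standard_normal((d, m))
D /= np.linalg.norm(D, axis=0)       # unit-norm atoms

L = np.linalg.norm(D, 2) ** 2        # step-size bound: squared spectral norm
W = D.T / L                          # in LISTA, W, S, theta are trained
S = np.eye(m) - D.T @ D / L
theta = 0.1 / L

x = 2.0 * D[:, 0]                    # signal built from a single atom
z = lista_forward(x, W, S, theta, n_layers=16)
# z should concentrate most of its weight on atom 0
```

Because each layer is just a matrix multiply and a pointwise shrinkage, the whole recursion is differentiable and can be trained like any other network, which is what makes the interpretation efficient to exploit.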
Learning a Mixture of Deep Networks for Single Image Super-Resolution
Single image super-resolution (SR) is an ill-posed problem which aims to
recover high-resolution (HR) images from their low-resolution (LR)
observations. The crux of this problem lies in learning the complex mapping
between low-resolution patches and the corresponding high-resolution patches.
Prior art has used either a mixture of simple regression models or a single
non-linear neural network for this purpose. This paper proposes the method of
learning a mixture of SR inference modules in a unified framework to tackle
this problem. Specifically, a number of SR inference modules specialized in
different image local patterns are first independently applied on the LR image
to obtain various HR estimates, and the resultant HR estimates are adaptively
aggregated to form the final HR image. By selecting neural networks as the SR
inference module, the whole procedure can be incorporated into a unified
network and be optimized jointly. Extensive experiments are conducted to
investigate the relation between restoration performance and different network
architectures. Compared with other current image SR approaches, our proposed
method consistently achieves state-of-the-art restoration results on a wide
range of images while allowing more flexible design choices. The source code is
available at http://www.ifp.illinois.edu/~dingliu2/accv2016
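The parallel-experts-plus-aggregation pipeline described in this abstract might be sketched as below. The "experts" here are random linear patch regressors standing in for the paper's trained SR inference networks, and the softmax gate is an assumed stand-in for its learned adaptive aggregation; all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
d_lr, d_hr, n_experts = 8, 16, 3

# Stand-ins for trained SR inference modules: each "expert" is a linear
# patch regressor; in the paper each module is a neural network
# specialised in a different class of local image patterns.
experts = [rng.standard_normal((d_hr, d_lr)) / np.sqrt(d_lr)
           for _ in range(n_experts)]
# Hypothetical gating parameters producing per-patch aggregation weights.
gate_W = rng.standard_normal((n_experts, d_lr)) / np.sqrt(d_lr)

def super_resolve(x_lr):
    """Apply every expert to the LR patch, then adaptively aggregate."""
    estimates = np.stack([E @ x_lr for E in experts])   # (n_experts, d_hr)
    logits = gate_W @ x_lr
    w = np.exp(logits - logits.max())
    w /= w.sum()                                        # softmax gate weights
    return w @ estimates                                # aggregated HR patch

x_hr_hat = super_resolve(rng.standard_normal(d_lr))
```

Because the gate and the experts are all differentiable, the mixture can be packaged as one network and optimised jointly, as the abstract describes.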
Multi-modal Image Processing based on Coupled Dictionary Learning
In real-world scenarios, many data processing problems often involve
heterogeneous images associated with different imaging modalities. Since these
multimodal images originate from the same phenomenon, it is realistic to assume
that they share common attributes or characteristics. In this paper, we propose
a multi-modal image processing framework based on coupled dictionary learning
to capture similarities and disparities between different image modalities. In
particular, our framework captures structural similarities across different
image modalities, such as edges, corners, and other elementary primitives, in a
learned sparse transform domain rather than the original pixel domain; these
shared structures can then be used to improve a number of image processing
tasks such as denoising, inpainting, and super-resolution. Practical experiments demonstrate
that incorporating multimodal information using our framework brings notable
benefits.
Comment: SPAWC 2018, 19th IEEE International Workshop on Signal Processing
Advances in Wireless Communication
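The coupled-dictionary idea — a shared sparse code tying patches from two modalities together so that one modality regularises processing of the other — can be illustrated with a toy denoising example. The dictionaries below are random stand-ins (the framework in the abstract learns them jointly), and `omp` is standard orthogonal matching pursuit, not the paper's solver.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: pick k atoms greedily, refit by least squares."""
    support, r = [], x.astype(float).copy()
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ r)))      # best remaining atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        r = x - D[:, support] @ coef             # orthogonalised residual
    a = np.zeros(D.shape[1])
    a[support] = coef
    return a

rng = np.random.default_rng(3)
d, n_atoms = 16, 24

# Stand-in coupled dictionaries for two imaging modalities; the framework
# learns D1 and D2 jointly so corresponding atoms encode the same elementary
# primitive (edge, corner, ...) as it appears in each modality.
D1 = rng.standard_normal((d, n_atoms))
D2 = rng.standard_normal((d, n_atoms))
D1 /= np.linalg.norm(D1, axis=0)
D2 /= np.linalg.norm(D2, axis=0)

a_true = np.zeros(n_atoms)
a_true[[2, 7]] = [1.5, -1.0]                      # shared sparse code
x1 = D1 @ a_true                                  # clean guide modality
x2 = D2 @ a_true + 0.05 * rng.standard_normal(d)  # noisy target modality

# Joint coding over the stacked observation: one code for both modalities,
# so the clean modality steers the reconstruction of the noisy one.
a_hat = omp(np.vstack([D1, D2]), np.concatenate([x1, x2]), k=2)
x2_denoised = D2 @ a_hat
```

Denoising is shown here for concreteness; the same stacked-coding step would drive the inpainting and super-resolution tasks the abstract mentions, by changing which entries of the stacked observation are observed.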