Sparsity-Based Super Resolution for SEM Images
The scanning electron microscope (SEM) produces an image of a sample by
scanning it with a focused beam of electrons. The electrons interact with the
atoms in the sample, which emit secondary electrons that contain information
about the surface topography and composition. The sample is scanned by the
electron beam point by point until an image of the surface is formed. Since
its invention in 1942, the SEM has become paramount in the discovery and
understanding of the nanometer world, and today it is used extensively in both
research and industry. In principle, SEMs can achieve resolution better than
one nanometer. However, for many applications, working at sub-nanometer
resolution implies an exceedingly large number of scanning points. For exactly
this reason, the SEM diagnostics of microelectronic chips is performed either
at high resolution (HR) over a small area or at low resolution (LR) while
capturing a larger portion of the chip. Here, we employ sparse coding and
dictionary learning to algorithmically enhance LR SEM images of microelectronic
chips up to the level of the HR images acquired by slow SEM scans, while
considerably reducing the noise. Our methodology consists of two steps: an
offline stage of learning a joint dictionary from a sequence of LR and HR
images of the same region in the chip, followed by a fast-online
super-resolution step where the resolution of a new LR image is enhanced. We
provide several examples with typical chips used in the microelectronics
industry, as well as a statistical study on arbitrary images with
characteristic structural features. Conceptually, our method works well when
the images have similar characteristics. This work demonstrates that employing
sparsity concepts can greatly improve the performance of SEM, thereby
considerably increasing the scanning throughput without compromising on
analysis quality and resolution.
Comment: Final publication available at ACS Nano Letters
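The two-stage pipeline described in this abstract (an offline stage that learns a joint dictionary from registered LR/HR image patches, followed by a fast online stage that sparse-codes a new LR patch and synthesizes its HR counterpart) can be sketched as follows. This is a minimal illustration using scikit-learn on random surrogate patches; the patch sizes, dictionary size, sparsity level, and the choice of OMP as the sparse coder are assumptions for illustration, not the authors' exact configuration.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

rng = np.random.default_rng(0)

# Toy paired training data: each row is an LR patch concatenated with
# its registered HR counterpart (random surrogates stand in for real
# SEM patches here).
n_patches, lr_dim, hr_dim = 200, 16, 64
lr_patches = rng.standard_normal((n_patches, lr_dim))
hr_patches = rng.standard_normal((n_patches, hr_dim))
joint = np.hstack([lr_patches, hr_patches])

# Offline stage: learn one joint dictionary over the concatenated
# patches, so the LR and HR halves of each atom share a sparse code.
dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0,
                                   random_state=0)
dico.fit(joint)
D_lr = dico.components_[:, :lr_dim]   # LR half of each atom
D_hr = dico.components_[:, lr_dim:]   # HR half of each atom

# Online stage: sparse-code a new LR patch against the LR
# sub-dictionary, then synthesize an HR estimate with the HR half.
new_lr = rng.standard_normal((1, lr_dim))
code = sparse_encode(new_lr, D_lr, algorithm='omp', n_nonzero_coefs=5)
hr_estimate = code @ D_hr
print(hr_estimate.shape)  # (1, 64)
```

In practice the online step runs patch-by-patch over the whole LR scan, with overlapping patches averaged in the reconstruction.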
Frequency-splitting Dynamic MRI Reconstruction using Multi-scale 3D Convolutional Sparse Coding and Automatic Parameter Selection
Department of Computer Science and Engineering
In this thesis, we propose a novel image reconstruction algorithm using multi-scale 3D convolutional sparse coding and a spectral decomposition technique for highly undersampled dynamic Magnetic Resonance Imaging (MRI) data. The proposed method recovers high-frequency information using a shared 3D convolution-based dictionary built progressively during the reconstruction process in an unsupervised manner, while low-frequency information is recovered using a total variation-based energy minimization method that leverages temporal coherence in dynamic MRI. Additionally, the proposed 3D dictionary is built across three different scales to adapt more efficiently to various feature sizes, and elastic net regularization is employed to promote a better approximation to the sparse input data. Furthermore, the computational complexity of each component in our iterative method is analyzed. We also propose an automatic parameter selection technique based on a genetic algorithm to find optimal parameters for our numerical solver, which is a variant of the alternating direction method of multipliers (ADMM). We demonstrate the performance of our method by comparing it with state-of-the-art methods on 15 single-coil cardiac, 7 single-coil DCE, and a multi-coil brain MRI dataset at different sampling rates (12.5%, 25%, and 50%). The results show that our method significantly outperforms the other state-of-the-art methods in reconstruction quality with a comparable running time and is resilient to noise.
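The frequency-splitting idea at the core of this method rests on an exact per-frame band decomposition: the low-frequency band goes to the TV-regularized temporal reconstruction and the high-frequency residual to the multi-scale convolutional sparse coding step. A minimal sketch of that split, using a Gaussian low-pass filter as an assumed stand-in for the spectral decomposition and leaving the TV and CSC stages as placeholders:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

# Toy dynamic MRI series: (frames, height, width) magnitude images.
frames = rng.standard_normal((8, 32, 32))

# Spectral decomposition: per frame, split into a smooth low-frequency
# band and a high-frequency residual band.
low = np.stack([gaussian_filter(f, sigma=2.0) for f in frames])
high = frames - low

# The split is lossless: the two bands sum back to the original frames,
# so the final image is a plain sum of the two reconstructed bands.
assert np.allclose(low + high, frames)

# Placeholder for the two reconstruction paths described in the thesis:
# `low`  -> TV-based energy minimization exploiting temporal coherence,
# `high` -> multi-scale 3D convolutional sparse coding.
recon = low + high
```

The choice of filter and bandwidth (here `sigma=2.0`) is one of the parameters the genetic-algorithm selection stage would tune automatically.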
e-Counterfeit: a mobile-server platform for document counterfeit detection
This paper presents a novel application to detect counterfeit identity
documents forged by a scan-printing operation. Texture analysis approaches are
proposed to extract validation features from security background that is
usually printed in documents as IDs or banknotes. The main contribution of this
work is the end-to-end mobile-server architecture, which provides a service for
non-expert users and therefore can be used in several scenarios. The system
also provides a crowdsourcing mode so labeled images can be gathered,
generating databases for incremental training of the algorithms.
Comment: 6 pages, 5 figures
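The abstract specifies only "texture analysis approaches" for extracting validation features from the printed security background. One common texture descriptor of this kind is a local binary pattern (LBP) histogram, sketched below as an illustrative stand-in rather than the authors' exact feature:

```python
import numpy as np

def lbp_histogram(img):
    """8-neighbour local binary pattern histogram: each interior pixel
    gets a byte whose bits record which neighbours are >= the centre;
    the normalized histogram of those bytes is the texture feature."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(2)
document_crop = rng.integers(0, 256, (64, 64)).astype(np.uint8)
feat = lbp_histogram(document_crop)
print(feat.shape, round(float(feat.sum()), 6))  # (256,) 1.0
```

In a pipeline like the one described, such features from a scanned crop would feed a classifier trained on genuine versus scan-printed backgrounds; the mobile client captures the image and the server runs the extraction and classification.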
Extending local features with contextual information in graph kernels
Graph kernels are usually defined in terms of simpler kernels over local
substructures of the original graphs. Different kernels consider different
types of substructures. However, in some cases they have similar predictive
performances, probably because the substructures can be interpreted as
approximations of the subgraphs they induce. In this paper, we propose to
associate to each feature a piece of information about the context in which the
feature appears in the graph. A substructure appearing in two different graphs
will match only if it appears with the same context in both graphs. We propose
a kernel based on this idea that considers trees as substructures, and where
the contexts are features too. The kernel is inspired by the framework in
[6], although it is not an instance of it. We give an efficient algorithm for computing
the kernel and show promising results on real-world graph classification
datasets.
Comment: To appear in ICONIP 201
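The core idea — a substructure occurrence in two graphs matches only when its surrounding context also matches — can be illustrated with a much simpler base kernel than the one proposed, where features are node labels and contexts are neighbour-label multisets (the paper itself uses trees as features, with contexts that are features too):

```python
from collections import Counter

def contextual_features(adj, labels):
    """Count (feature, context) pairs: the feature is a node's label,
    the context the sorted multiset of its neighbours' labels. This is
    a deliberately simplified stand-in for the paper's tree features."""
    feats = Counter()
    for v, nbrs in adj.items():
        context = tuple(sorted(labels[u] for u in nbrs))
        feats[(labels[v], context)] += 1
    return feats

def context_kernel(g1, g2):
    """Dot product of (feature, context) counts: occurrences match
    only when both the feature and its context coincide."""
    f1, f2 = contextual_features(*g1), contextual_features(*g2)
    return sum(f1[k] * f2[k] for k in f1.keys() & f2.keys())

# Two toy labelled graphs: a path A-B-A and a triangle A-B-A.
path = ({0: [1], 1: [0, 2], 2: [1]}, {0: 'A', 1: 'B', 2: 'A'})
tri = ({0: [1, 2], 1: [0, 2], 2: [0, 1]}, {0: 'A', 1: 'B', 2: 'A'})
print(context_kernel(path, path), context_kernel(path, tri))  # 5 1
```

Note how the kernel separates the two graphs: the 'A' nodes share a label in both, but their contexts differ (`('B',)` in the path versus `('A', 'B')` in the triangle), so only the 'B' node contributes to the cross-kernel value.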