Sparsity-Based Super Resolution for SEM Images
The scanning electron microscope (SEM) produces an image of a sample by
scanning it with a focused beam of electrons. The electrons interact with the
atoms in the sample, which emit secondary electrons that contain information
about the surface topography and composition. The sample is scanned by the
electron beam point by point, until an image of the surface is formed. Since
its invention in 1942, the SEM has become paramount in the discovery and
understanding of the nanometer world, and today it is used extensively in both
research and industry. In principle, SEMs can achieve resolution better than
one nanometer. However, for many applications, working at sub-nanometer
resolution implies an exceedingly large number of scanning points. For exactly
this reason, the SEM diagnostics of microelectronic chips is performed either
at high resolution (HR) over a small area or at low resolution (LR) while
capturing a larger portion of the chip. Here, we employ sparse coding and
dictionary learning to algorithmically enhance LR SEM images of microelectronic
chips up to the level of the HR images acquired by slow SEM scans, while
considerably reducing the noise. Our methodology consists of two steps: an
offline stage of learning a joint dictionary from a sequence of LR and HR
images of the same region in the chip, followed by a fast online
super-resolution step where the resolution of a new LR image is enhanced. We
provide several examples with typical chips used in the microelectronics
industry, as well as a statistical study on arbitrary images with
characteristic structural features. Conceptually, our method works well when
the images have similar characteristics. This work demonstrates that employing
sparsity concepts can greatly improve the performance of SEM, thereby
considerably increasing the scanning throughput without compromising on
analysis quality and resolution.
Comment: Final publication available at ACS Nano Letters
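As a rough illustration of the two-step scheme described in this abstract (an offline coupled-dictionary learning stage on registered LR/HR image pairs, then a fast online sparse-coding stage for new LR images), the Python sketch below follows the standard joint-dictionary formulation. The patch size, dictionary size, sparsity level, and the assumption that the LR scan is first interpolated onto the HR pixel grid are illustrative choices, not the authors' exact implementation.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

PATCH = 8        # patch side length (illustrative)
N_ATOMS = 256    # number of dictionary atoms (illustrative)
SPARSITY = 5     # non-zero coefficients per patch code (illustrative)

def to_patch_matrix(img, patch=PATCH):
    """Flatten all overlapping patches of an image into rows of a matrix."""
    p = extract_patches_2d(img, (patch, patch))
    return p.reshape(len(p), -1)

# Offline stage: learn a coupled LR/HR dictionary from a registered image pair.
# lr_up is the LR scan interpolated onto the HR pixel grid so patches correspond.
def learn_joint_dictionary(lr_up, hr_img):
    X = np.hstack([to_patch_matrix(lr_up), to_patch_matrix(hr_img)])
    dico = MiniBatchDictionaryLearning(n_components=N_ATOMS, random_state=0)
    D = dico.fit(X).components_                 # joint dictionary [D_lr | D_hr]
    return D[:, :PATCH * PATCH], D[:, PATCH * PATCH:]

# Online stage: sparse-code a new (interpolated) LR image over D_lr, then
# synthesize the HR estimate by applying the same codes to D_hr.
def super_resolve(lr_up, D_lr, D_hr):
    codes = sparse_encode(to_patch_matrix(lr_up), D_lr,
                          algorithm='omp', n_nonzero_coefs=SPARSITY)
    hr_patches = (codes @ D_hr).reshape(-1, PATCH, PATCH)
    return reconstruct_from_patches_2d(hr_patches, lr_up.shape)
```

Because the sparse codes are computed with a greedy pursuit over a small learned dictionary, the online stage is cheap relative to a slow HR scan, which is what makes the throughput gain claimed above plausible.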
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and the town center. iTWIST'14 gathered about
70 international participants and featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problem; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1
BBQ-Networks: Efficient Exploration in Deep Reinforcement Learning for Task-Oriented Dialogue Systems
We present a new algorithm that significantly improves the efficiency of
exploration for deep Q-learning agents in dialogue systems. Our agents explore
via Thompson sampling, drawing Monte Carlo samples from a Bayes-by-Backprop
neural network. Our algorithm learns much faster than common exploration
strategies such as ε-greedy, Boltzmann, bootstrapping, and
intrinsic-reward-based ones. Additionally, we show that spiking the replay
buffer with experiences from just a few successful episodes can make Q-learning
feasible when it might otherwise fail.
Comment: 13 pages, 9 figures
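A minimal sketch of the exploration mechanism described in this abstract: a Q-network whose output layer keeps a factorized Gaussian posterior over its weights (Bayes-by-Backprop style), with actions chosen by drawing one Monte Carlo weight sample and acting greedily on the sampled Q-values (Thompson sampling). The layer sizes, the fixed initial variance, and the omitted training loop (including the KL term to the weight prior) are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesLinear(nn.Module):
    """Linear layer with a factorized Gaussian posterior over its weights."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(n_out, n_in))
        self.w_rho = nn.Parameter(torch.full((n_out, n_in), -3.0))  # sigma = softplus(rho)
        self.b_mu = nn.Parameter(torch.zeros(n_out))
        self.b_rho = nn.Parameter(torch.full((n_out,), -3.0))

    def forward(self, x, sample=True):
        if sample:
            # Reparameterized draw from the weight posterior (one Monte Carlo sample).
            w = self.w_mu + F.softplus(self.w_rho) * torch.randn_like(self.w_mu)
            b = self.b_mu + F.softplus(self.b_rho) * torch.randn_like(self.b_mu)
        else:
            w, b = self.w_mu, self.b_mu
        return F.linear(x, w, b)

class BBQNetwork(nn.Module):
    """Q-network with a Bayesian output layer; sampling it yields Thompson exploration."""
    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.head = BayesLinear(hidden, n_actions)

    def forward(self, obs, sample=True):
        return self.head(self.body(obs), sample=sample)

def select_action(q_net, obs):
    # Thompson sampling: draw one Q-function from the posterior and act greedily
    # on it, rather than adding ε-greedy noise to a point-estimate Q-function.
    with torch.no_grad():
        q_values = q_net(obs.unsqueeze(0), sample=True)
    return int(q_values.argmax(dim=-1))
```

The paper's other ingredient, "spiking" the replay buffer, would correspond to pre-loading such an agent's replay memory with transitions from a handful of successful dialogues before training begins.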