Dilated Deep Residual Network for Image Denoising
Variations of deep neural networks such as convolutional neural network (CNN)
have been successfully applied to image denoising. The goal is to automatically
learn a mapping from a noisy image to a clean image given training data
consisting of pairs of noisy and clean images. Most existing CNN models for
image denoising have many layers. Such models involve a large number of
parameters and are computationally expensive to train. In this paper,
we develop a dilated residual CNN for Gaussian image denoising. Compared with
the recently proposed residual denoiser, our method can achieve comparable
performance at a lower computational cost. Specifically, we enlarge the
receptive field by adopting dilated convolutions in the residual network, with
the dilation factor set to a fixed value. We use appropriate zero padding to make
the dimension of the output the same as the input. It has been proven that the
expansion of receptive field can boost the CNN performance in image
classification, and we further demonstrate that it can also lead to competitive
performance for the denoising problem. Moreover, we present a formula to calculate
receptive field size when dilated convolution is incorporated. Thus, the change
of receptive field can be interpreted mathematically. To validate the efficacy
of our approach, we conduct extensive experiments for both gray and color image
denoising with specific or randomized noise levels. Both the quantitative
measurements and the visual denoising results are promising compared with
state-of-the-art baselines.
Comment: camera ready, 8 pages, accepted to IEEE ICTAI 201
Deep Neural Machine Translation with Linear Associative Unit
Deep Neural Networks (DNNs) have provably enhanced the state-of-the-art
Neural Machine Translation (NMT) with their capability in modeling complex
functions and capturing complex linguistic structures. However, NMT systems with
deep architectures in their encoder or decoder RNNs often suffer from severe
gradient diffusion due to the non-linear recurrent activations, which makes
optimization much more difficult. To address this problem, we propose
novel linear associative units (LAU) to reduce the gradient propagation length
inside the recurrent unit. Unlike conventional approaches (LSTM and GRU units),
LAUs utilize linear associative connections between the input and output
of the recurrent unit, allowing unimpeded information flow in both the
spatial and temporal directions. The model is quite simple, but it is surprisingly
effective. Our empirical study on Chinese-English translation shows that our
model with a proper configuration can improve by 11.7 BLEU upon Groundhog and
over the best reported results in the same setting. On the WMT14 English-German task and the
larger WMT14 English-French task, our model achieves comparable results with
the state-of-the-art.
Comment: 10 pages, ACL 201
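The gated linear path described in the abstract above can be sketched roughly as follows. This is an illustrative NumPy step inspired by the description, not the authors' exact equations; all weight names here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lau_step(x, h_prev, W, U, Wl, Wg, Ug):
    """One recurrent step with a gated linear connection from input to output:
    h = g * tanh(W x + U h_prev)   # non-linear recurrent candidate
      + (1 - g) * (Wl x)           # linear associative path (no saturation)
    where g = sigmoid(Wg x + Ug h_prev)."""
    g = sigmoid(Wg @ x + Ug @ h_prev)
    candidate = np.tanh(W @ x + U @ h_prev)
    return g * candidate + (1.0 - g) * (Wl @ x)

d = 4
W, U, Wl, Wg, Ug = (rng.standard_normal((d, d)) * 0.1 for _ in range(5))
h = np.zeros(d)
for x in rng.standard_normal((3, d)):  # run three time steps
    h = lau_step(x, h, W, U, Wl, Wg, Ug)
print(h.shape)  # (4,)
```

The key design point is that gradients can flow back through the `(1 - g) * (Wl @ x)` term without passing through a saturating non-linearity, which is the mechanism the abstract credits for reducing gradient diffusion.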
Memory-enhanced Decoder for Neural Machine Translation
We propose to enhance the RNN decoder in a neural machine translator (NMT)
with external memory, as a natural but powerful extension to the state in the
decoding RNN. This memory-enhanced RNN decoder is called \textsc{MemDec}. At
each time step during decoding, \textsc{MemDec} reads from and writes to this
memory once, both with content-based addressing. Unlike the unbounded
memory in previous work\cite{RNNsearch} to store the representation of source
sentence, the memory in \textsc{MemDec} is a matrix with pre-determined size
designed to better capture the information important for the decoding process
at each time step. Our empirical study on Chinese-English translation shows
that it can improve by BLEU upon Groundhog and by BLEU upon Moses,
yielding the best performance achieved with the same training set.
Comment: 11 page
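One content-addressed read and one write on a fixed-size memory matrix, as described in the abstract above, can be sketched as follows. This is a generic NTM-style sketch of content-based addressing, not the paper's exact gating; the function and variable names are hypothetical:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def read_write(M, key_r, key_w, add_vec, erase_vec):
    """One content-addressed read and write on a memory matrix M of
    pre-determined size (n slots x d dims)."""
    w_r = softmax(M @ key_r)                   # read weights over slots
    read = w_r @ M                             # weighted sum of memory rows
    w_w = softmax(M @ key_w)                   # write weights over slots
    M = M * (1.0 - np.outer(w_w, erase_vec))   # erase step
    M = M + np.outer(w_w, add_vec)             # add step
    return read, M

rng = np.random.default_rng(1)
n, d = 8, 4
M = rng.standard_normal((n, d))
read, M2 = read_write(M, rng.standard_normal(d), rng.standard_normal(d),
                      rng.standard_normal(d), rng.random(d))
print(read.shape, M2.shape)  # (4,) (8, 4)
```

Because the memory has a pre-determined number of slots, its size does not grow with the source sentence length, in contrast to the unbounded attention memory of the earlier work cited above.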
Deep Semantic Role Labeling with Self-Attention
Semantic Role Labeling (SRL) is believed to be a crucial step towards natural
language understanding and has been widely studied. In recent years, end-to-end
SRL with recurrent neural networks (RNN) has gained increasing attention.
However, it remains a major challenge for RNNs to handle structural information
and long range dependencies. In this paper, we present a simple and effective
architecture for SRL which aims to address these problems. Our model is based
on self-attention which can directly capture the relationships between two
tokens regardless of their distance. Our single model achieves F1 on
the CoNLL-2005 shared task dataset and F1 on the CoNLL-2012 shared task
dataset, outperforming the previous state-of-the-art results in F1 score.
Moreover, our model is computationally
efficient, and the parsing speed is 50K tokens per second on a single Titan X
GPU.
Comment: Accepted by AAAI-201
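The property the abstract above relies on, that self-attention connects any two tokens directly regardless of distance, can be seen in a minimal scaled dot-product self-attention sketch (illustrative NumPy, single head, no masking):

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape (T, d):
    every token attends to every other token in one step."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d_k))   # (T, T) attention weights
    return A @ V, A

rng = np.random.default_rng(0)
T, d = 6, 8
X = rng.standard_normal((T, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out, A = self_attention(X, Wq, Wk, Wv)
print(out.shape)                       # (6, 8)
print(np.allclose(A.sum(axis=1), 1))   # True
```

The `(T, T)` weight matrix is why the path length between any pair of tokens is constant, unlike an RNN, where information must pass through every intermediate state.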
Asymmetric superradiant scattering and abnormal mode amplification induced by atomic density distortion
The superradiant Rayleigh scattering using a pump laser incident along the
short axis of a Bose-Einstein condensate with a density distortion is studied,
where the distortion is formed by shocking the condensate utilizing the
residual magnetic force after the switching-off of the trapping potential. We
find that a very small variation of the atomic density distribution induces
remarkably asymmetric population of the scattering modes in matter-wave
superradiance with a long pump pulse. The optical field in the more dilute
region of the atomic cloud is amplified more strongly, which is not the
ordinary mode amplification of previous understanding. Our numerical
simulations with the density-envelope distortion are consistent with the
experimental results. This provides a useful way to probe the geometric
symmetries of the atomic density profile via superradiant scattering.
Comment: 7 pages, 4 figures, Optics Express 21 (2013) 1437
Spin excitations and the Fermi surface of superconducting FeS
High-temperature superconductivity occurs near antiferromagnetic
instabilities and nematic states. Debate remains on the origin of the nematic
order in FeSe and its relation to superconductivity. Here, we use transport,
neutron scattering, and Fermi surface measurements to demonstrate that
hydrothermally grown superconducting FeS, an isostructure of FeSe, is a
tetragonal paramagnet without nematic order and with a quasiparticle mass
significantly reduced from that of FeSe. Only stripe-type spin excitations are
observed up to 100 meV. No direct coupling between spin excitations and
superconductivity in FeS is found, suggesting that FeS is less correlated and
the nematic order in FeSe is due to competing checkerboard and stripe spin
fluctuations.
Comment: 11 pages, 4 page
KINEMATIC ANALYSIS OF SHOT PUT IN ELITE ATHLETES – A CASE STUDY
This paper presented the application of biomechanics to the shot put. Three elite shot-putters were video recorded. By planar analysis, the following kinematic data were discussed: (1) the loss of distance in performances, (2) the swinging span of the leg, (3) the height of the shot before the last effort, (4) the waving manner of the swinging arm, and (5) the influence of the difference between the angle of the shot's release velocity and its optimum angle. The effects of the measured values of these parameters on performance, and their mechanical causes, were analyzed. The results of this study provided information for improving athletes' performance.
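The dependence of range on release angle mentioned in item (5) above can be illustrated with elementary projectile mechanics. The release speed and height below are illustrative values, not data from the study, and air resistance is ignored:

```python
import math

def flight_range(v, theta_deg, h, g=9.81):
    """Horizontal distance of a projectile released at speed v (m/s),
    angle theta_deg (degrees), and height h (m), neglecting air drag."""
    th = math.radians(theta_deg)
    vx, vy = v * math.cos(th), v * math.sin(th)
    # time of flight solves h + vy*t - g*t^2/2 = 0; range = vx * t
    return (vx / g) * (vy + math.sqrt(vy * vy + 2.0 * g * h))

# Illustrative release: 13 m/s from 2.1 m above the ground.
v, h = 13.0, 2.1
best = max(range(20, 61), key=lambda a: flight_range(v, a, h))
print(best)  # optimum release angle in degrees; below 45 because h > 0
```

This is why a thrower's optimum release angle sits a few degrees below 45°: the non-zero release height shifts the maximum of the range curve downward, and any mismatch between the actual and optimum angle costs distance.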
