Effective equidistribution of primitive rational points on expanding horospheres
We prove an effective version of a result due to Einsiedler, Mozes, Shah and
Shapira, who established the equidistribution of primitive rational points on
expanding horospheres in the space of unimodular lattices in at least
dimensions. Their proof uses techniques from homogeneous dynamics and relies in
particular on measure-classification theorems --- an approach which does not
lend itself to effective bounds. We implement a strategy based on spectral
theory, Fourier analysis and Weil's bound for Kloosterman sums in order to
quantify the rate of equidistribution for a specific horospherical subgroup in
any dimension. We apply our result to provide a rate of convergence to the
limiting distribution for the appropriately rescaled diameters of random
circulant graphs.
Comment: 21 pages; incorporates the referee's comments and corrections
CT-SRCNN: Cascade Trained and Trimmed Deep Convolutional Neural Networks for Image Super Resolution
We propose methodologies to train highly accurate and efficient deep
convolutional neural networks (CNNs) for image super resolution (SR). A cascade
training approach to deep learning is proposed to improve the accuracy of the
neural networks while gradually increasing the number of network layers. Next,
we explore how to improve SR efficiency by making the network slimmer. Two
methodologies, one-shot trimming and cascade trimming, are proposed. With
cascade trimming, the network's size is gradually reduced layer by layer,
without significant loss of discriminative ability. Experiments on
benchmark image datasets show that our proposed SR network achieves
state-of-the-art super-resolution accuracy while being more than four times
faster than existing deep super-resolution networks.
Comment: Accepted to IEEE Winter Conf. on Applications of Computer Vision (WACV) 2018, Lake Tahoe, US
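The cascade-trimming idea described above can be sketched in code. The following is a minimal NumPy illustration of one plausible reading of the approach, not the paper's implementation: filters are ranked by L1 norm (an assumed pruning criterion) and removed one layer at a time, with each layer's pruned output channels propagated into the next layer's input channels. Function names are hypothetical.

```python
import numpy as np

def trim_layer(weights, keep_ratio):
    """Rank conv filters by L1 norm and keep the strongest fraction.
    weights: array of shape (out_channels, in_channels, k, k).
    Returns the trimmed weights and the kept output-channel indices."""
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(weights.shape[0] * keep_ratio)))
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])
    return weights[keep], keep

def cascade_trim(layers, keep_ratio):
    """Trim one layer at a time (the 'cascade'), propagating each layer's
    removed output channels into the next layer's input channels so that
    adjacent layers stay dimensionally consistent."""
    trimmed = []
    prev_keep = None
    for w in layers:
        if prev_keep is not None:
            w = w[:, prev_keep]          # drop inputs fed by pruned filters
        w, prev_keep = trim_layer(w, keep_ratio)
        trimmed.append(w)
    return trimmed
```

In a real SR network the final reconstruction layer would be left untrimmed so the output shape is preserved; this sketch trims every layer only to keep the example short.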
BridgeNets: Student-Teacher Transfer Learning Based on Recursive Neural Networks and its Application to Distant Speech Recognition
Despite the remarkable progress achieved in automatic speech recognition,
recognizing far-field speech mixed with various noise sources remains a
challenging task. In this paper, we introduce a novel student-teacher transfer
learning framework, BridgeNet, which provides a solution for improving distant speech
recognition. There are two key features in BridgeNet. First, BridgeNet extends
traditional student-teacher frameworks by providing multiple hints from a
teacher network. Hints are not limited to the soft labels from the teacher
network: the teacher's intermediate feature representations can better guide a
student network to learn how to denoise or dereverberate noisy input. Second,
the proposed recursive architecture in BridgeNet can iteratively improve
denoising and recognition performance. Experimental results show that BridgeNet
achieves significant improvements on the distant speech recognition problem,
with up to a 13.24% relative WER reduction on the AMI corpus compared to a
baseline neural network trained without the teacher's hints.
Comment: Accepted to 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2018)
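The multiple-hints idea above can be sketched as a loss function. This is a minimal NumPy sketch of a generic student-teacher objective combining soft labels with intermediate-feature hints; the temperature `T`, the weight `alpha`, and the MSE feature term are assumptions for illustration, not BridgeNet's exact formulation.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis (numerically stable)."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def hint_loss(student_logits, teacher_logits, student_feats, teacher_feats,
              T=2.0, alpha=0.5):
    """Combine two kinds of hints from the teacher:
    - KL(teacher || student) on temperature-softened posteriors (soft labels)
    - mean-squared error between intermediate feature representations
    alpha weights the two terms (an assumed hyper-parameter)."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.mean(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)),
                        axis=-1))
    mse = np.mean([np.mean((fs - ft) ** 2)
                   for fs, ft in zip(student_feats, teacher_feats)])
    return alpha * kl + (1.0 - alpha) * mse
```

When the student matches the teacher exactly, both terms vanish; in training this loss would be added to the usual cross-entropy against hard labels.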
On the Construction and Decoding of Concatenated Polar Codes
A scheme for concatenating the recently invented polar codes with interleaved
block codes is considered. By concatenating binary polar codes with interleaved
Reed-Solomon codes, we prove that the proposed concatenation scheme captures
the capacity-achieving property of polar codes, while having a significantly
better error-decay rate. We show that for any $0 < \epsilon < 1$ and total frame
length $N$, the parameters of the scheme can be set such that the frame error
probability is less than $2^{-N^{1-\epsilon}}$, while the scheme is still
capacity achieving. This improves upon $2^{-N^{0.5-\epsilon}}$, the frame error
probability of Arikan's polar codes. We also propose decoding algorithms for
concatenated polar codes, which significantly improve the error-rate
performance at finite block lengths while preserving the low decoding
complexity.
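The inner polar code in such a concatenation applies the standard polar transform. Below is a minimal pure-Python sketch of that transform over GF(2), assuming the kernel F = [[1, 0], [1, 1]] and omitting bit-reversal reordering; it is an illustration of the transform itself, not of the paper's concatenated construction.

```python
def polar_transform(u):
    """Apply the binary polar transform x = u * F^{tensor-n} over GF(2),
    where F = [[1, 0], [1, 1]] and n = log2(len(u)).
    len(u) must be a power of two; bit-reversal is omitted for simplicity."""
    n = len(u)
    if n == 1:
        return list(u)
    half = n // 2
    # One butterfly stage: XOR-combine the two halves, then recurse on each.
    top = polar_transform([a ^ b for a, b in zip(u[:half], u[half:])])
    bottom = polar_transform(u[half:])
    return top + bottom
```

Over GF(2) the transform is an involution (F squared is the identity mod 2), so applying it twice recovers the input. In a concatenated scheme of the kind described above, the information positions of this transform would carry symbols of the interleaved outer Reed-Solomon codewords.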