Hamming Compressed Sensing
Compressed sensing (CS) and 1-bit CS cannot directly recover quantized
signals and require time-consuming recovery. In this paper, we introduce
\textit{Hamming compressed sensing} (HCS), which directly recovers a k-bit
quantized signal from its 1-bit measurements by invoking a
Kullback-Leibler divergence based nearest neighbor search once per signal
entry. Compared with CS and 1-bit CS, HCS allows the signal to be dense,
takes considerably less (linear) recovery time and requires substantially
fewer measurements. Moreover, HCS recovery can accelerate the subsequent
1-bit CS dequantizer. We study a quantized recovery error bound of HCS for
general signals and an "HCS+dequantizer" recovery error bound for sparse
signals. Extensive numerical simulations verify the appealing accuracy,
robustness, efficiency and consistency of HCS.
Comment: 33 pages, 8 figures
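As a small illustration of the search primitive named above, the following is a minimal sketch of Kullback-Leibler divergence based nearest neighbor search over a set of candidate distributions. It is a generic building block under stated assumptions (the `eps` smoothing constant and the function names are ours), not the HCS recovery procedure itself.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions, smoothed by eps."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def kl_nearest(p, candidates):
    """Index of the candidate distribution closest to p under KL divergence."""
    divs = [kl_divergence(p, q) for q in candidates]
    return int(np.argmin(divs))

# Usage: pick the candidate that best explains an observed distribution
candidates = [[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]]
print(kl_nearest([0.8, 0.2], candidates))  # -> 0
```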
Bilateral Random Projections
Low-rank structures have been profoundly studied in data mining and machine
learning. In this paper, we show that a dense matrix's low-rank approximation
can be rapidly built from its left and right random projections, or bilateral
random projection (BRP). We then show that a power scheme can further improve
the precision. The deterministic, average and deviation bounds of the proposed
method and its power scheme modification are proved theoretically. The
effectiveness and the efficiency of BRP based low-rank approximation are
empirically verified on both artificial and real datasets.
Comment: 17 pages, 3 figures, technical report
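For intuition, here is a minimal NumPy sketch of a basic bilateral random projection, assuming the common rank-r form L = Y1 (A2^T Y1)^{-1} Y2^T with Gaussian projection matrices; the power scheme refinement mentioned above is omitted, and the interface is ours.

```python
import numpy as np

def brp_lowrank(X, r, seed=0):
    """Rank-r approximation of X from its left and right random projections."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    A1 = rng.standard_normal((n, r))  # right projection matrix
    A2 = rng.standard_normal((m, r))  # left projection matrix
    Y1 = X @ A1                       # left sketch,  m x r
    Y2 = X.T @ A2                     # right sketch, n x r
    # L = Y1 (A2^T Y1)^{-1} Y2^T, using solve() instead of an explicit inverse
    return Y1 @ np.linalg.solve(A2.T @ Y1, Y2.T)

# Usage: approximate a noisy rank-10 matrix
U, V = np.random.randn(500, 10), np.random.randn(10, 400)
X = U @ V + 0.01 * np.random.randn(500, 400)
L = brp_lowrank(X, r=10)
print(np.linalg.norm(X - L) / np.linalg.norm(X))  # small relative error
```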
Asymptotic Generalization Bound of Fisher's Linear Discriminant Analysis
Fisher's linear discriminant analysis (FLDA) is an important dimension
reduction method in statistical pattern recognition. It has been shown that
FLDA is asymptotically Bayes optimal under the homoscedastic Gaussian
assumption. However, this classical result has the following two major
limitations: 1) it holds only for a fixed dimensionality, and thus does not
apply when the dimensionality and the training sample size are proportionally
large; 2) it does not provide a quantitative description of how the
generalization ability of FLDA is affected by the dimensionality and the
sample size. In this paper, we present an asymptotic generalization analysis
of FLDA based on random matrix theory, in a setting where both the
dimensionality and the sample size increase at a proportional rate. The
obtained lower bound of the generalization discrimination power overcomes both
limitations of the classical result, i.e., it is applicable when the
dimensionality and the sample size are proportionally large, and it provides a
quantitative description of the generalization ability of FLDA in terms of the
dimensionality-to-sample-size ratio and the population discrimination power.
Besides, the discrimination power bound also leads to an upper bound on the
generalization error of binary classification with FLDA.
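For reference, a standard two-class FLDA sketch is given below; the small ridge term `reg` is our addition so that the within-class scatter remains invertible when the dimensionality is close to the sample size, which is precisely the regime the asymptotic analysis addresses.

```python
import numpy as np

def flda_fit(X0, X1, reg=1e-6):
    """Two-class Fisher's linear discriminant: w = S_w^{-1} (mu1 - mu0)."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw + reg * np.eye(Sw.shape[0]), mu1 - mu0)
    b = -0.5 * w @ (mu0 + mu1)  # threshold at the projected midpoint
    return w, b

def flda_predict(X, w, b):
    """Assign class 1 when the projection falls on mu1's side."""
    return (X @ w + b > 0).astype(int)
```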
Recent Progress in Image Deblurring
This paper comprehensively reviews the recent development of image
deblurring, including non-blind/blind and spatially invariant/variant
deblurring techniques. These techniques share the same objective of inferring
a latent sharp image from one or several corresponding blurry images, while
blind deblurring techniques must additionally estimate an accurate blur
kernel. Given the critical role of image restoration in modern imaging
systems, which must deliver high-quality images under complex conditions such
as motion, undesirable lighting, and imperfect system components, image
deblurring has attracted growing attention in recent years. From the viewpoint
of how to handle the ill-posedness, which is a crucial issue in deblurring
tasks, existing methods can be grouped into five categories: Bayesian
inference frameworks, variational methods, sparse representation-based
methods, homography-based modeling, and region-based methods. Despite this
progress, image deblurring, especially in the blind case, remains limited by
complex application conditions that make the blur kernel hard to obtain and
often spatially variant. We provide a holistic understanding of and deep
insight into image deblurring in this review. An analysis of the empirical
evidence for representative methods, practical issues, and a discussion of
promising future directions are also presented.
Comment: 53 pages, 17 figures
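As one minimal instance of the regularized inversion these categories share, here is a hedged sketch of non-blind, spatially invariant deblurring with a Wiener filter; the `snr` parameter is an assumed hyperparameter trading off deconvolution sharpness against noise amplification.

```python
import numpy as np

def wiener_deblur(blurred, kernel, snr=100.0):
    """Non-blind deblurring in the Fourier domain with a Wiener filter.

    Inverts B = K * I while regularizing the ill-posed division by |K|
    with a noise-to-signal term 1/snr.
    """
    K = np.fft.fft2(kernel, s=blurred.shape)
    B = np.fft.fft2(blurred)
    W = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)  # Wiener filter
    return np.real(np.fft.ifft2(W * B))
```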
Shakeout: A New Approach to Regularized Deep Neural Network Training
Recent years have witnessed the success of deep neural networks in dealing
with a variety of practical problems. Dropout has played an essential role in
many successful deep neural networks by inducing regularization during model
training. In this paper, we present a new regularized training approach:
Shakeout. Instead of randomly discarding units as Dropout does at the training
stage, Shakeout randomly chooses to enhance or reverse each unit's contribution
to the next layer. This minor modification of Dropout has a notable statistical
trait: the regularizer induced by Shakeout adaptively combines $L_0$, $L_1$
and $L_2$ regularization terms. Our classification experiments with
representative deep architectures on the image datasets MNIST, CIFAR-10 and
ImageNet show that Shakeout deals with over-fitting effectively and
outperforms Dropout. We empirically demonstrate that Shakeout leads to sparser
weights under both unsupervised and supervised settings. Shakeout also leads
to a grouping effect among the input units of a layer. Considering that the
weights reflect the importance of connections, Shakeout is superior to
Dropout, which is valuable for deep model compression. Moreover, we
demonstrate that Shakeout can effectively reduce the instability of the
training process of the deep architecture.
Comment: Appears at T-PAMI 2018
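To make the enhance-or-reverse idea concrete, below is a toy sketch of Shakeout-style noise on a fully connected layer. The hyperparameters `tau` and `c` and the exact weight transformation are illustrative assumptions, not the paper's formulation; see the T-PAMI article for the precise scheme and its induced regularizer.

```python
import numpy as np

def shakeout_layer(x, W, tau=0.5, c=0.1, rng=None, train=True):
    """Toy enhance-or-reverse noise on a linear layer (illustrative only)."""
    if not train:
        return x @ W  # no noise at test time
    rng = rng or np.random.default_rng()
    keep = rng.random(W.shape[0]) > tau  # one Bernoulli switch per input unit
    s = np.sign(W)
    # kept units: contribution enhanced; dropped units: sign-reversed residue
    W_eff = np.where(keep[:, None], W / (1.0 - tau) + c * s, -c * s)
    return x @ W_eff
```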