Hierarchical and High-Girth QC LDPC Codes
We present a general approach to designing capacity-approaching high-girth
low-density parity-check (LDPC) codes that are friendly to hardware
implementation. Our methodology starts by defining a new class of
"hierarchical" quasi-cyclic (HQC) LDPC codes that generalizes the structure of
quasi-cyclic (QC) LDPC codes. Whereas the parity check matrices of QC LDPC
codes are composed of circulant sub-matrices, those of HQC LDPC codes are
composed of a hierarchy of circulant sub-matrices that are in turn constructed
from circulant sub-matrices, and so on, through some number of levels. We show
how to map any class of codes defined using a protograph into a family of HQC
LDPC codes. Next, we present a girth-maximizing algorithm that optimizes the
degrees of freedom within the family of codes to yield a high-girth HQC LDPC
code. Finally, we discuss how certain characteristics of a code protograph will
lead to inevitable short cycles, and show that these short cycles can be
eliminated using a "squashing" procedure that results in a high-girth QC LDPC
code, although not a hierarchical one. We illustrate our approach with designed
examples of girth-10 QC LDPC codes obtained from protographs of one-sided
spatially-coupled codes.
Comment: Submitted to IEEE Transactions on Information Theory.
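The building block of the single-level (QC) case described above is a parity-check matrix assembled from circulant permutation sub-matrices. A minimal sketch of that assembly, with a purely illustrative shift table (not a designed code; the convention that a shift of -1 denotes an all-zero block is an assumption for this sketch):

```python
import numpy as np

def circulant(size, shift):
    """Circulant permutation matrix: identity cyclically shifted by `shift` columns."""
    return np.roll(np.eye(size, dtype=int), shift, axis=1)

def qc_parity_check(shifts, size):
    """Assemble a QC LDPC parity-check matrix from a grid of shift exponents.
    A shift of -1 marks an all-zero block (illustrative convention)."""
    blocks = [[circulant(size, s) if s >= 0 else np.zeros((size, size), dtype=int)
               for s in row] for row in shifts]
    return np.block(blocks)

# Illustrative shift table (2 block-rows x 3 block-columns of 5x5 circulants)
H = qc_parity_check([[0, 1, 2], [3, -1, 4]], size=5)
```

An HQC code would repeat this construction recursively: each "circulant" at one level is itself built from circulant sub-matrices at the level below.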
New gravitational solutions via a Riemann-Hilbert approach
We consider the Riemann-Hilbert factorization approach to solving the field
equations of dimensionally reduced gravity theories. First we prove that
functions belonging to a certain class possess a canonical factorization due to
properties of the underlying spectral curve. Then we use this result, together
with appropriate matricial decompositions, to study the canonical factorization
of non-meromorphic monodromy matrices that describe deformations of seed
monodromy matrices associated with known solutions. This results in new
solutions, with unusual features, to the field equations.
Comment: 29 pages, 2 figures; v2: reference added, matches published version.
RandomBoost: Simplified Multi-class Boosting through Randomization
We propose a novel boosting approach to multi-class classification problems,
in which the classes are, in essence, distinguished by a set of random
projection matrices. The approach uses random projections to alleviate the
proliferation of binary classifiers typically required to perform multi-class
classification. The result is a multi-class classifier with a single
vector-valued parameter, irrespective of the number of classes involved. Two
variants of this approach are proposed. The first method randomly projects the
original data into new spaces, while the second method randomly projects the
outputs of learned weak classifiers. These methods are not only conceptually
simple but also effective and easy to implement. A series of experiments on
synthetic, machine learning and visual recognition data sets demonstrate that
our proposed methods compare favorably to existing multi-class boosting
algorithms in terms of both the convergence rate and classification accuracy.
Comment: 15 pages.
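The core mechanism of the first variant, per-class random projections scored by a single shared weight vector, can be sketched on toy data. This is an illustrative stand-in fit by least squares, not the authors' boosting algorithm; the data, dimensions, and +1/-1 targets are all hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: three well-separated Gaussian blobs (for illustration only)
n, k, p = 150, 3, 9                       # samples, classes, projection dim
means = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
X = np.vstack([rng.normal(m, 0.5, size=(n // k, 2)) for m in means])
y = np.repeat(np.arange(k), n // k)
Xa = np.hstack([X, np.ones((n, 1))])      # append a bias feature

# One random projection matrix per class; a single shared weight vector w
# scores every class via  score_c(x) = w . (P[c] @ x)
P = rng.standard_normal((k, p, 3))

# Fit w by least squares: projected features of the true class target +1,
# all other classes target -1 (a simple stand-in for the boosting objective)
F = np.einsum('cpd,nd->ncp', P, Xa).reshape(n * k, p)
t = np.where(np.arange(k)[None, :] == y[:, None], 1.0, -1.0).ravel()
w = np.linalg.lstsq(F, t, rcond=None)[0]

scores = np.einsum('cpd,nd,p->nc', P, Xa, w)
accuracy = (scores.argmax(axis=1) == y).mean()
```

Note the single vector-valued parameter: however many classes are added, only `w` is learned; the per-class structure lives entirely in the fixed random matrices `P`.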
Smoothing Dynamic Systems with State-Dependent Covariance Matrices
Kalman filtering and smoothing algorithms are used in many areas, including
tracking and navigation, medical applications, and financial trend filtering.
One of the basic assumptions required to apply the Kalman smoothing framework
is that error covariance matrices are known and given. In this paper, we study
a general class of inference problems where covariance matrices can depend
functionally on unknown parameters. In the Kalman framework, this allows
modeling situations where covariance matrices may depend functionally on the
state sequence being estimated. We present an extended formulation and
generalized Gauss-Newton (GGN) algorithm for inference in this context. When
applied to dynamic systems inference, we show the algorithm can be implemented
to preserve the computational efficiency of the classic Kalman smoother. The
new approach is illustrated with a synthetic numerical example.
Comment: 8 pages, 1 figure.
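The flavor of the problem can be sketched in one dimension: the observation variance depends on the state, so a standard smoother is re-solved with covariances frozen at the current estimate. This fixed-point loop is an illustrative stand-in for the paper's GGN iteration, and the model and constants are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
T, Q = 50, 0.05                          # horizon and process-noise variance
R = lambda x: 0.1 + 0.5 * x**2           # hypothetical state-dependent obs. variance

# Simulate a random walk observed through state-dependent noise
x_true = np.cumsum(rng.normal(0.0, np.sqrt(Q), T))
y = x_true + rng.normal(0.0, np.sqrt(R(x_true)))

# Precision matrix of the random-walk prior (tridiagonal smoothness term)
D = np.zeros((T, T))
for t in range(1, T):
    D[t, t] += 1 / Q; D[t - 1, t - 1] += 1 / Q
    D[t, t - 1] -= 1 / Q; D[t - 1, t] -= 1 / Q

# Fixed-point iteration: freeze R at the current estimate, solve the
# resulting Gaussian MAP problem (a linear system), and repeat
x = y.copy()
for _ in range(20):
    r = R(x)
    x = np.linalg.solve(np.diag(1.0 / r) + D, y / r)

mse_raw = np.mean((y - x_true) ** 2)
mse_smooth = np.mean((x - x_true) ** 2)
```

The dense solve here is for brevity; the system is tridiagonal, which is what lets a careful implementation retain the linear-in-T cost of the classic Kalman smoother, as the abstract notes.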
A nonparametric empirical Bayes approach to covariance matrix estimation
We propose an empirical Bayes method to estimate high-dimensional covariance
matrices. Our procedure centers on vectorizing the covariance matrix and
treating matrix estimation as a vector estimation problem. Drawing from the
compound decision theory literature, we introduce a new class of decision rules
that generalizes several existing procedures. We then use a nonparametric
empirical Bayes g-modeling approach to estimate the oracle optimal rule in that
class. This allows us to let the data itself determine how best to shrink the
estimator, rather than shrinking in a pre-determined direction such as toward a
diagonal matrix. Simulation results and a gene expression network analysis
show that our approach can outperform a number of state-of-the-art proposals
in a wide range of settings, sometimes substantially.
Comment: 20 pages, 4 figures.
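The vectorize-then-shrink viewpoint can be illustrated with a deliberately simple rule: shrink the vectorized off-diagonal entries of the sample covariance toward their grand mean with a data-chosen intensity. This is a James-Stein-style toy on synthetic data, not the paper's nonparametric g-modeling estimator:

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 8, 40
Sigma = 0.3 * np.ones((p, p)) + 0.7 * np.eye(p)   # true covariance (toy)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

S = np.cov(X, rowvar=False)                       # sample covariance
s = S[np.triu_indices(p, k=1)]                    # vectorize off-diagonal entries

# Shrink each vectorized entry toward the grand mean of all entries, with an
# intensity chosen from the data (rough plug-in noise estimate; a stand-in
# for letting the data determine the shrinkage, as the abstract describes)
var_hat = s.var()
noise = S.diagonal().mean() ** 2 / n              # crude sampling variance of an entry
lam = min(1.0, noise / max(var_hat, 1e-12))
s_shrunk = s.mean() + (1 - lam) * (s - s.mean())

Sigma_hat = S.copy()
Sigma_hat[np.triu_indices(p, 1)] = s_shrunk
Sigma_hat.T[np.triu_indices(p, 1)] = s_shrunk     # restore symmetry
```

The point of the vectorized view is that the shrinkage target and intensity become a vector-estimation design choice; the paper replaces this fixed rule with an oracle-optimal rule estimated nonparametrically.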