A new class of permutation trinomials constructed from Niho exponents
Permutation polynomials over finite fields are an interesting subject due to
their important applications in the areas of mathematics and engineering. In
this paper we investigate the trinomial
over the finite field , where is an odd prime and
with being a positive integer. It is shown that when or ,
is a permutation trinomial of if and only if is even.
This property is also true for a more general class of polynomials
, where is a
nonnegative integer and . Moreover, we also show that for
the permutation trinomials proposed here are new in the sense that
they are not multiplicatively equivalent to previously known ones of similar
form.
Comment: 17 pages, three tables
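Whether a given polynomial permutes a small finite field can be checked directly by exhaustion. The sketch below is a minimal illustration of that check; the monomials tested are hypothetical stand-ins, since the exact exponents of the trinomial studied in the paper are not reproduced here.

```python
# Brute-force test of whether a polynomial permutes the prime field F_p.
# A polynomial is a permutation polynomial iff its value set has size p.

def is_permutation_poly(coeff_exp_pairs, p):
    """Return True if sum(c * x^e) hits every element of F_p exactly once."""
    values = {sum(c * pow(x, e, p) for c, e in coeff_exp_pairs) % p
              for x in range(p)}
    return len(values) == p

# x^5 permutes F_7 because gcd(5, 7 - 1) = 1; x^2 does not for any odd p.
print(is_permutation_poly([(1, 5)], 7))
print(is_permutation_poly([(1, 2)], 7))
```

For the small fields relevant to hand-checking, this exhaustive test is the standard sanity check before attempting a structural proof.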
Deterministic Construction of Binary Measurement Matrices with Various Sizes
We introduce a general framework to deterministically construct binary
measurement matrices for compressed sensing. The proposed matrices are composed
of (circulant) permutation submatrix blocks and zero submatrix blocks, thus
making their hardware realization convenient and easy. Firstly, using the
famous Johnson bound for binary constant weight codes, we derive a new lower
bound for the coherence of binary matrices with uniform column weights.
Afterwards, a large class of binary base matrices with coherence asymptotically
achieving this new bound are presented. Finally, by choosing proper rows and
columns from these base matrices, we construct the desired measurement matrices
with various sizes, and they show empirically comparable performance to that of
the corresponding Gaussian matrices.
Comment: 5 pages, 3 figures
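The coherence that the abstract's lower bound concerns is the largest absolute inner product between distinct l2-normalized columns; for a binary matrix with uniform column weight w it reduces to (maximum column overlap)/w. A minimal sketch on a toy matrix (not one of the paper's constructions):

```python
import math

def coherence(A):
    """Max absolute inner product between distinct normalized columns of A."""
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]
    norms = [math.sqrt(sum(v * v for v in c)) for c in cols]
    best = 0.0
    for j in range(n):
        for k in range(j + 1, n):
            ip = sum(a * b for a, b in zip(cols[j], cols[k]))
            best = max(best, abs(ip) / (norms[j] * norms[k]))
    return best

A = [[1, 1, 0],
     [1, 0, 1],
     [0, 1, 1],
     [0, 0, 0]]
print(coherence(A))  # uniform column weight 2, pairwise overlap 1 -> 0.5
```

Low coherence is the design target because it controls the sparsity level at which recovery guarantees hold.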
Alternating direction algorithms for regularization in compressed sensing
In this paper we propose three iterative greedy algorithms for compressed
sensing, called \emph{iterative alternating direction} (IAD), \emph{normalized
iterative alternating direction} (NIAD) and \emph{alternating direction
pursuit} (ADP), which stem from the iteration steps of alternating direction
method of multipliers (ADMM) for -regularized least squares
(-LS) and can be considered as the alternating direction versions of
the well-known iterative hard thresholding (IHT), normalized iterative hard
thresholding (NIHT) and hard thresholding pursuit (HTP) respectively. Firstly,
relative to the general iteration steps of ADMM, the proposed algorithms have
no splitting or dual variables in iterations and thus the dependence of the
current approximation on past iterations is direct. Secondly, provable
theoretical guarantees are provided in terms of restricted isometry property,
which is the first theoretical guarantee of ADMM for -LS to the best of
our knowledge. Finally, they outperform the corresponding IHT, NIHT and HTP
greatly when reconstructing both constant amplitude signals with random signs
(CARS signals) and Gaussian signals.
Comment: 16 pages, 1 figure
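The common projection step shared by IHT, NIHT and HTP (and hence by the alternating direction variants above) is the hard thresholding operator H_k, which keeps the k largest-magnitude entries of a vector and zeroes the rest. A minimal sketch:

```python
# Hard thresholding: project x onto the set of k-sparse vectors.

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x (ties broken by index)."""
    idx = sorted(range(len(x)), key=lambda i: abs(x[i]), reverse=True)[:k]
    keep = set(idx)
    return [v if i in keep else 0.0 for i, v in enumerate(x)]

print(hard_threshold([0.3, -2.0, 0.1, 1.5], 2))  # [0.0, -2.0, 0.0, 1.5]
```

Each outer iteration of these algorithms applies a gradient (or ADMM-style) update followed by this projection, which is what keeps every iterate exactly k-sparse.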
Nonextensive information theoretical machine
In this paper, we propose a new discriminative model named \emph{nonextensive
information theoretical machine (NITM)} based on nonextensive generalization of
Shannon information theory. In NITM, weight parameters are treated as random
variables. Tsallis divergence is used to regularize the distribution of weight
parameters and maximum unnormalized Tsallis entropy distribution is used to
evaluate the fitting effect. On the one hand, it is shown that some well-known
margin-based loss functions such as loss, hinge loss, squared
hinge loss and exponential loss can be unified by unnormalized Tsallis entropy.
On the other hand, Gaussian prior regularization is generalized to Student-t
prior regularization with similar computational complexity. The model can be
solved efficiently by gradient-based convex optimization and its performance is
illustrated on standard datasets
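The nonextensive generalization underlying NITM is the Tsallis entropy S_q(p) = (1 - sum_i p_i^q)/(q - 1), which recovers the Shannon entropy (in nats) in the limit q -> 1. A quick numerical sketch:

```python
import math

def tsallis_entropy(p, q):
    """Tsallis entropy of a discrete distribution p at index q."""
    if abs(q - 1.0) < 1e-12:  # Shannon limit as q -> 1
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

p = [0.5, 0.5]
print(tsallis_entropy(p, 2.0))  # 1 - (0.25 + 0.25) = 0.5
print(tsallis_entropy(p, 1.0))  # ln 2, about 0.693
```

Varying q away from 1 is what lets a single entropy family interpolate between different loss behaviors, which is the mechanism the abstract invokes to unify several margin-based losses.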
Bayesian linear regression with Student-t assumptions
As an automatic method of determining model complexity using the training
data alone, Bayesian linear regression provides a principled way to select
hyperparameters. However, approximate inference is often needed when the
distribution assumption goes beyond the Gaussian. In this paper, we propose a
Bayesian linear regression model with Student-t assumptions (BLRS), which can
be inferred exactly. In this framework, both conjugate prior and expectation
maximization (EM) algorithm are generalized. Meanwhile, we prove that the
maximum likelihood solution is equivalent to the standard Bayesian linear
regression with Gaussian assumptions (BLRG). The -EM algorithm for BLRS is
nearly identical to the EM algorithm for BLRG. It is shown that -EM for
BLRS can converge faster than EM for BLRG for the task of predicting online
news popularity
Johnson Type Bounds on Constant Dimension Codes
Very recently, an operator channel was defined by Koetter and Kschischang
when they studied random network coding. They also introduced constant
dimension codes and demonstrated that these codes can be employed to correct
errors and/or erasures over the operator channel. Constant dimension codes are
equivalent to the so-called linear authentication codes introduced by Wang,
Xing and Safavi-Naini when constructing distributed authentication systems in
2003. In this paper, we study constant dimension codes. It is shown that
Steiner structures are optimal constant dimension codes achieving the
Wang-Xing-Safavi-Naini bound. Furthermore, we show that constant dimension
codes achieve the Wang-Xing-Safavi-Naini bound if and only if they are certain
Steiner structures. Then, we derive two Johnson type upper bounds, say I and
II, on constant dimension codes. The Johnson type bound II slightly improves on
the Wang-Xing-Safavi-Naini bound. Finally, we point out that a family of known
Steiner structures is actually a family of optimal constant dimension codes
achieving both the Johnson type bounds I and II.
Comment: 12 pages, submitted to Designs, Codes and Cryptography
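Counting arguments for constant dimension codes run through the Gaussian (q-)binomial coefficient, which counts the k-dimensional subspaces of an n-dimensional space over F_q and plays the role that the ordinary binomial coefficient plays in the classical Johnson bound. A minimal sketch of that count:

```python
# Gaussian binomial coefficient [n choose k]_q:
#   product over i < k of (q^(n-i) - 1) / (q^(k-i) - 1),
# the number of k-dimensional subspaces of F_q^n.

def gaussian_binomial(n, k, q):
    num = den = 1
    for i in range(k):
        num *= q ** (n - i) - 1
        den *= q ** (k - i) - 1
    return num // den

print(gaussian_binomial(4, 2, 2))  # 35 two-dimensional subspaces of F_2^4
```

The division is always exact, so integer arithmetic suffices even for large parameters.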
Minimum Pseudo-Weight and Minimum Pseudo-Codewords of LDPC Codes
In this correspondence, we study the minimum pseudo-weight and minimum
pseudo-codewords of low-density parity-check (LDPC) codes under linear
programming (LP) decoding. First, we show that the lower bound of Kelly,
Sridhara, Xu and Rosenthal on the pseudo-weight of a pseudo-codeword of an LDPC
code with girth greater than 4 is tight if and only if this pseudo-codeword is
a real multiple of a codeword. Then, we show that the lower bound of Kashyap
and Vardy on the stopping distance of an LDPC code is also a lower bound on the
pseudo-weight of a pseudo-codeword of this LDPC code with girth 4, and this
lower bound is tight if and only if this pseudo-codeword is a real multiple of
a codeword. Using these results we further show that for some LDPC codes, there
are no other minimum pseudo-codewords except the real multiples of minimum
codewords. This means that the LP decoding for these LDPC codes is
asymptotically optimal in the sense that the ratio of the probabilities of
decoding errors of LP decoding and maximum-likelihood decoding approaches 1
as the signal-to-noise ratio tends to infinity. Finally, some LDPC codes are
listed to illustrate these results.
Comment: 17 pages, 1 figure
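For the AWGN channel, the pseudo-weight of a nonnegative pseudo-codeword x is w_p(x) = (sum_i x_i)^2 / sum_i x_i^2. On a 0/1 codeword this reduces to the Hamming weight, and it is invariant under positive scaling, consistent with the abstract's focus on real multiples of codewords. A minimal sketch:

```python
# AWGN pseudo-weight of a nonnegative pseudo-codeword.

def pseudo_weight(x):
    """w_p(x) = (sum x_i)^2 / (sum x_i^2)."""
    s1 = sum(x)
    s2 = sum(v * v for v in x)
    return (s1 * s1) / s2

codeword = [1, 0, 1, 1, 0, 1]             # Hamming weight 4
print(pseudo_weight(codeword))             # 16 / 4 = 4.0
print(pseudo_weight([2, 0, 2, 2, 0, 2]))   # scaling leaves it unchanged: 4.0
```

The minimum pseudo-weight over all nonzero pseudo-codewords is what governs LP decoding performance, in analogy with the minimum distance under ML decoding.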
Three-flavor Nambu--Jona-Lasinio model at finite isospin chemical potential
QCD at finite isospin chemical potential possesses a
positive definite fermion determinant, so lattice simulations can be
performed successfully. While the two-flavor effective models may be sufficient
to describe the phenomenon of pion condensation, it is interesting to study the
roles of the strangeness degree of freedom and the U(1)_A anomaly. In
this paper, we present a systematic study of the three-flavor
Nambu--Jona-Lasinio model with a Kobayashi-Maskawa-'t Hooft (KMT) term that
mimics the U(1)_A anomaly at finite isospin chemical potential. In the
mean-field approximation, the model predicts a phase transition from the vacuum
to the pion superfluid phase, which takes place at an isospin chemical potential
equal to the pion mass. Due to the U(1)_A anomaly, the strangeness degree of
freedom couples to the light quark degrees of freedom and the strange quark
effective mass depends on the pion condensate. However, the strange quark
condensate and the strange quark effective mass change slightly in the pion
superfluid phase, which verifies the validity of the two-flavor models. The
effective four-fermion interaction of the Kobayashi-Maskawa-'t Hooft term in
the presence of the pion condensation is constructed. Due to the U(1)_A
anomaly, the pion condensation generally induces a scalar-pseudoscalar
interaction. The Bethe-Salpeter equation for the mesonic excitations is
established and the meson mass spectra are obtained at finite isospin chemical
potential and temperature. Finally, the general expression for the topological
susceptibility at finite isospin chemical potential is
derived. In contrast to the finite-temperature effect, which suppresses the
topological susceptibility, the isospin density effect leads to an enhancement of it.
Comment: Version published in PR
Topological Susceptibility in Three-Flavor Quark Meson Model at Finite Temperature
We study the U(1)_A symmetry and its relation to chiral symmetry at finite
temperature through the application of the functional renormalization group to the
quark meson model. Very different from the mass gap and the mixing angle
between the η and η′ mesons, which are defined at the mean-field level and
behave like the chiral condensates, the topological susceptibility includes a
fluctuation-induced part which becomes dominant at high temperature. As a
result, the U(1)_A symmetry is still considerably broken in the chiral
symmetry restoration phase.
Comment: 9 pages, 5 figures
Sparse signal recovery by minimization under restricted isometry property
In the context of compressed sensing, the nonconvex minimization
with has been studied in recent years. In this paper, by generalizing
the sharp bound for minimization of Cai and Zhang, we show that the
condition in terms of
\emph{restricted isometry constant (RIC)} can guarantee the exact recovery of
-sparse signals in the noiseless case and the stable recovery of approximately
-sparse signals in the noisy case by minimization. This result is more
general than the sharp bound for minimization when the order of the RIC is
greater than , and it illustrates the fact that a better approximation to
minimization is provided by minimization than that provided
by minimization.
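The intuition behind the final claim is visible numerically: the quantity sum_i |x_i|^p for 0 < p <= 1 approaches the number of nonzeros (the l0 count) as p decreases toward 0, whereas at p = 1 it is the l1 norm. A minimal sketch:

```python
# lp "norm" to the p-th power: sum_i |x_i|^p, for 0 < p <= 1.

def lp_power(x, p):
    return sum(abs(v) ** p for v in x)

x = [3.0, 0.0, -0.5, 0.0, 1.0]       # 3 nonzero entries
print(lp_power(x, 1.0))   # l1 norm: 4.5
print(lp_power(x, 0.5))   # about 3.439
print(lp_power(x, 0.01))  # close to the l0 count of 3
```

This is why smaller p yields a tighter surrogate for sparsity, at the price of losing convexity.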