Uniform Quadratic Optimization and Extensions
The uniform quadratic optimization problem (UQ) is a nonconvex quadratically
constrained quadratic program (QCQP) in which all quadratic functions share the
same Hessian matrix. Based on the second-order cone programming (SOCP)
relaxation, we establish a new sufficient condition to guarantee strong duality
for (UQ) and then extend it to (QCQP), which not only covers several well-known
results in the literature but also partially answers a few open questions. For
convex-constrained nonconvex (UQ), we propose an improved approximation
algorithm based on (SOCP). Our approximation bound is dimension-independent. As
an application, we establish the first approximation bound for the problem of
finding the Chebyshev center of the intersection of several balls.
Comment: 28 pages
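The Chebyshev-center application above has a compact formulation: the largest ball inside the intersection of balls $B(c_i, r_i)$ is found by maximizing $r$ subject to $\|x - c_i\| + r \le r_i$. A minimal sketch (not the paper's SOCP-based algorithm) solving a made-up two-ball instance with a general-purpose solver:

```python
import numpy as np
from scipy.optimize import minimize

# Two unit balls centered at (0,0) and (1,0); data made up for illustration.
centers = np.array([[0.0, 0.0], [1.0, 0.0]])
radii = np.array([1.0, 1.0])

# Variables z = (x1, x2, r); maximize r subject to ||x - c_i|| + r <= r_i.
cons = [{"type": "ineq",
         "fun": lambda z, c=c, R=R: R - np.linalg.norm(z[:2] - c) - z[2]}
        for c, R in zip(centers, radii)]
res = minimize(lambda z: -z[2], x0=[0.3, 0.1, 0.1], constraints=cons)
x_cheb, r_cheb = res.x[:2], res.x[2]
```

By symmetry the optimum here is the midpoint (0.5, 0) with inscribed radius 0.5; a dedicated SOCP solver would exploit the second-order cone structure directly instead of treating the constraints as black-box nonlinear functions.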
Alternating direction algorithms for regularization in compressed sensing
In this paper we propose three iterative greedy algorithms for compressed
sensing, called \emph{iterative alternating direction} (IAD), \emph{normalized
iterative alternating direction} (NIAD) and \emph{alternating direction
pursuit} (ADP), which stem from the iteration steps of the alternating
direction method of multipliers (ADMM) for $\ell_0$-regularized least squares
($\ell_0$-LS) and can be considered as the alternating direction versions of
the well-known iterative hard thresholding (IHT), normalized iterative hard
thresholding (NIHT) and hard thresholding pursuit (HTP), respectively. Firstly,
relative to the general iteration steps of ADMM, the proposed algorithms have
no splitting or dual variables in their iterations, and thus the dependence of
the current approximation on past iterations is direct. Secondly, provable
theoretical guarantees are provided in terms of the restricted isometry
property, which is, to the best of our knowledge, the first theoretical
guarantee of ADMM for $\ell_0$-LS. Finally, they greatly outperform the
corresponding IHT, NIHT and HTP when reconstructing both constant-amplitude
signals with random signs (CARS signals) and Gaussian signals.
Comment: 16 pages, 1 figure
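For reference, the baseline IHT iteration that IAD modifies is just a gradient step followed by hard thresholding. A minimal sketch (not the authors' IAD/NIAD/ADP; the sensing matrix and signal are made up for illustration):

```python
import numpy as np

def iht(A, y, k, n_iter=200):
    """Iterative hard thresholding: x <- H_k(x + A^T (y - A x)),
    where H_k keeps the k largest-magnitude entries and zeros the rest."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + A.T @ (y - A @ x)              # gradient step for ||y - Ax||^2 / 2
        x[np.argsort(np.abs(x))[:-k]] = 0.0    # hard thresholding H_k
    return x

# Demo: m = 20 measurements of a 2-sparse signal in dimension 40.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((40, 20)))
A = Q.T                                        # rows orthonormal, so ||A||_2 = 1
x_true = np.zeros(40)
x_true[[3, 17]] = [1.0, -2.0]
x_hat = iht(A, A @ x_true, k=2)
```

Making the rows of A orthonormal keeps the spectral norm at 1, which is the standard stepsize condition under which the plain IHT iteration is stable.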
Nonextensive information theoretical machine
In this paper, we propose a new discriminative model named \emph{nonextensive
information theoretical machine (NITM)} based on a nonextensive generalization
of Shannon information theory. In NITM, weight parameters are treated as random
variables. Tsallis divergence is used to regularize the distribution of the
weight parameters, and the maximum unnormalized Tsallis entropy distribution is
used to evaluate the goodness of fit. On the one hand, it is shown that some
well-known margin-based loss functions such as the 0/1 loss, hinge loss,
squared hinge loss and exponential loss can be unified by the unnormalized
Tsallis entropy. On the other hand, Gaussian prior regularization is
generalized to Student-t prior regularization with similar computational
complexity. The model can be solved efficiently by gradient-based convex
optimization, and its performance is illustrated on standard datasets.
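The nonextensive quantity at the heart of this model can be stated concretely. A minimal sketch of the (normalized-distribution) Tsallis entropy, showing that it recovers Shannon entropy as q → 1; the distribution is made up for illustration and this is not the paper's NITM training objective:

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q(p) = (1 - sum_i p_i^q) / (q - 1).
    As q -> 1 this converges to the Shannon entropy -sum_i p_i ln p_i."""
    p = np.asarray(p, dtype=float)
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

p = np.array([0.5, 0.5])
shannon = -np.sum(p * np.log(p))        # ln 2 for a fair coin
approx = tsallis_entropy(p, 1.0001)     # nearly equal to Shannon entropy
```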
Bayesian linear regression with Student-t assumptions
As an automatic method of determining model complexity using the training data
alone, Bayesian linear regression provides a principled way to select
hyperparameters. However, approximate inference is often needed when the
distributional assumption goes beyond the Gaussian. In this paper, we propose a
Bayesian linear regression model with Student-t assumptions (BLRS), which can
be inferred exactly. In this framework, both the conjugate prior and the
expectation maximization (EM) algorithm are generalized. Meanwhile, we prove
that the maximum likelihood solution is equivalent to that of standard Bayesian
linear regression with Gaussian assumptions (BLRG). The $t$-EM algorithm for
BLRS is nearly identical to the EM algorithm for BLRG. It is shown that $t$-EM
for BLRS can converge faster than EM for BLRG on the task of predicting online
news popularity.
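For contrast with BLRS, the Gaussian baseline (BLRG) compared against above has a fully closed-form posterior. A minimal sketch under a zero-mean isotropic prior w ~ N(0, α⁻¹I) and known noise precision β; the data and hyperparameter values are made up for illustration:

```python
import numpy as np

def blr_posterior(Phi, y, alpha=1.0, beta=25.0):
    """Posterior over weights for y = Phi w + noise (standard BLRG):
    prior w ~ N(0, alpha^{-1} I), Gaussian noise with precision beta."""
    d = Phi.shape[1]
    S_inv = alpha * np.eye(d) + beta * Phi.T @ Phi   # posterior precision
    S = np.linalg.inv(S_inv)                         # posterior covariance
    m = beta * S @ Phi.T @ y                         # posterior mean
    return m, S

rng = np.random.default_rng(1)
Phi = rng.standard_normal((200, 3))
w_true = np.array([0.5, -1.0, 2.0])
y = Phi @ w_true + 0.2 * rng.standard_normal(200)    # noise std 0.2 -> beta = 25
m, S = blr_posterior(Phi, y)
```

With enough data the posterior mean concentrates near the generating weights; BLRS replaces the Gaussian assumptions with Student-t ones while, per the abstract, keeping exact inference.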
Johnson Type Bounds on Constant Dimension Codes
Very recently, an operator channel was defined by Koetter and Kschischang
when they studied random network coding. They also introduced constant
dimension codes and demonstrated that these codes can be employed to correct
errors and/or erasures over the operator channel. Constant dimension codes are
equivalent to the so-called linear authentication codes introduced by Wang,
Xing and Safavi-Naini when constructing distributed authentication systems in
2003. In this paper, we study constant dimension codes. It is shown that
Steiner structures are optimal constant dimension codes achieving the
Wang-Xing-Safavi-Naini bound. Furthermore, we show that constant dimension
codes achieve the Wang-Xing-Safavi-Naini bound if and only if they are certain
Steiner structures. Then, we derive two Johnson type upper bounds, say I and
II, on constant dimension codes. The Johnson type bound II slightly improves on
the Wang-Xing-Safavi-Naini bound. Finally, we point out that a family of known
Steiner structures is actually a family of optimal constant dimension codes
achieving both the Johnson type bounds I and II.
Comment: 12 pages, submitted to Designs, Codes and Cryptography
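The metric underlying constant dimension codes is the subspace distance d(U, V) = dim U + dim V − 2 dim(U ∩ V); since dim(U ∩ V) = dim U + dim V − dim(U + V), it can be computed from three matrix ranks over the field. A sketch over GF(2), with made-up generator matrices for illustration:

```python
import numpy as np

def rank_gf2(M):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    M = np.array(M, dtype=np.uint8) % 2
    r = 0
    for c in range(M.shape[1]):
        pivot = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]        # move pivot row into place
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]                 # eliminate column c elsewhere
        r += 1
    return r

def subspace_distance(U, V):
    """d(U, V) = dim U + dim V - 2 dim(U ∩ V) = 2 dim(U + V) - dim U - dim V."""
    dU, dV = rank_gf2(U), rank_gf2(V)
    dSum = rank_gf2(np.vstack([U, V]))       # generators of U + V stacked
    return 2 * dSum - dU - dV

U = [[1, 0, 0, 0], [0, 1, 0, 0]]             # span{e1, e2} in GF(2)^4
V = [[1, 0, 0, 0], [0, 0, 1, 0]]             # span{e1, e3}
```

Here U and V are 2-dimensional subspaces meeting in the line spanned by e1, so their subspace distance is 2 + 2 − 2·1 = 2.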
The generalized connectivity of some regular graphs
The generalized $k$-connectivity $\kappa_k(G)$ of a graph $G$ is a parameter
that can measure the reliability of a network $G$ to connect any $k$ vertices
in $G$, and computing it is proved to be NP-complete for a general graph $G$.
Let $S\subseteq V(G)$ and let $\kappa_G(S)$ denote the maximum number $r$ of
internally edge-disjoint trees $T_1, T_2, \ldots, T_r$ in $G$ such that
$V(T_i)\cap V(T_j)=S$ and $E(T_i)\cap E(T_j)=\emptyset$ for any $i, j \in
\{1, 2, \ldots, r\}$ with $i\neq j$. For an integer $k$ with $2\le k\le
|V(G)|$, the {\em generalized $k$-connectivity} of a graph $G$ is defined as
$\kappa_k(G)=\min\{\kappa_G(S) : S\subseteq V(G), |S|=k\}$.
In this paper, we study the generalized $3$-connectivity of some general
$m$-regular and $m$-connected graphs $G_n$ constructed recursively and obtain
that $\kappa_3(G_n)=m-1$, which attains the upper bound $\kappa_3(G)\le
\delta(G)-1$ [Discrete Mathematics 310 (2010) 2147-2163] given by Li {\em et
al.} for $\delta(G)=m$. As applications of the main result, the generalized
$3$-connectivity of many famous networks such as the alternating group graph
$AG_n$, the $k$-ary $n$-cube $Q_n^k$, the split-star network $S_n^2$ and the
bubble-sort-star graph $BS_n$ etc. can be obtained directly.
Comment: 19 pages, 6 figures
Minimum Pseudo-Weight and Minimum Pseudo-Codewords of LDPC Codes
In this correspondence, we study the minimum pseudo-weight and minimum
pseudo-codewords of low-density parity-check (LDPC) codes under linear
programming (LP) decoding. First, we show that the lower bound of Kelly,
Sridhara, Xu and Rosenthal on the pseudo-weight of a pseudo-codeword of an LDPC
code with girth greater than 4 is tight if and only if this pseudo-codeword is
a real multiple of a codeword. Then, we show that the lower bound of Kashyap
and Vardy on the stopping distance of an LDPC code is also a lower bound on the
pseudo-weight of a pseudo-codeword of this LDPC code with girth 4, and this
lower bound is tight if and only if this pseudo-codeword is a real multiple of
a codeword. Using these results we further show that for some LDPC codes, there
are no other minimum pseudo-codewords except the real multiples of minimum
codewords. This means that the LP decoding for these LDPC codes is
asymptotically optimal in the sense that the ratio of the probabilities of
decoding errors of LP decoding and maximum-likelihood decoding approaches 1
as the signal-to-noise ratio tends to infinity. Finally, some LDPC codes are
listed to illustrate these results.
Comment: 17 pages, 1 figure
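For the AWGN channel, the pseudo-weight referred to above has a simple closed form, $w_p(x) = (\sum_i x_i)^2 / \sum_i x_i^2$, which reduces to the Hamming weight on 0/1 codewords and is invariant under positive scaling, consistent with the "real multiple of a codeword" statements. A sketch with made-up vectors:

```python
import numpy as np

def pseudo_weight(x):
    """AWGN-channel pseudo-weight of a nonnegative pseudo-codeword x:
    w_p(x) = (sum_i x_i)^2 / (sum_i x_i^2)."""
    x = np.asarray(x, dtype=float)
    return np.sum(x) ** 2 / np.sum(x ** 2)

cw = np.array([1, 1, 0, 1, 0, 0, 1])          # 0/1 codeword, Hamming weight 4
pc = np.array([0.5, 0.5, 0.5, 0, 0, 0, 0])    # fractional pseudo-codeword
```

On the codeword the pseudo-weight equals its Hamming weight 4, while the fractional vector achieves pseudo-weight 3, illustrating how fractional LP vertices can fall below the minimum distance.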
The g-good neighbour diagnosability of hierarchical cubic networks
Let $G$ be a connected graph. A subset $F\subseteq V(G)$ is called an
$R^g$-vertex-cut of $G$ if $G-F$ is disconnected and any vertex in $G-F$ has
at least $g$ neighbours in $G-F$. The $R^g$-vertex-connectivity is the size
of a minimum $R^g$-vertex-cut and is denoted by $\kappa^g(G)$. Many
large-scale multiprocessor or multi-computer systems take interconnection
networks as underlying topologies. Fault diagnosis is especially important to
identify the fault tolerability of such systems. The $g$-good-neighbor
diagnosability, which requires that every fault-free node has at least $g$
fault-free neighbors, is a novel measure of diagnosability. In this paper, we
determine the $g$-good-neighbor diagnosability of the hierarchical cubic
networks under the PMC model and the MM* model, respectively.
Sparse signal recovery by $\ell_p$ minimization under restricted isometry property
In the context of compressed sensing, nonconvex $\ell_p$ minimization with
$0<p<1$ has been studied in recent years. In this paper, by generalizing the
sharp bound for $\ell_1$ minimization of Cai and Zhang, we show that a
condition in terms of the \emph{restricted isometry constant (RIC)} can
guarantee the exact recovery of $k$-sparse signals in the noiseless case and
the stable recovery of approximately $k$-sparse signals in the noisy case by
$\ell_p$ minimization. This result is more general than the sharp bound for
$\ell_1$ minimization when the order of the RIC is greater than a certain
threshold, and illustrates the fact that a better approximation to $\ell_0$
minimization is provided by $\ell_p$ minimization than that provided by
$\ell_1$ minimization.
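$\ell_p$ minimization with $p<1$ is nonconvex, and a common practical surrogate (not the recovery analysis of this paper) is iteratively reweighted least squares in the style of Chartrand and Yin: each iterate solves a weighted $\ell_2$ problem with weights $(x_i^2+\varepsilon)^{p/2-1}$ while $\varepsilon$ is driven toward zero. A sketch with made-up data:

```python
import numpy as np

def irls_lp(A, y, p=0.5, n_iter=60, eps=1.0):
    """IRLS sketch for min ||x||_p^p subject to Ax = y (0 < p < 1).
    Each step solves a weighted least-squares problem in closed form:
    x = W^{-1} A^T (A W^{-1} A^T)^{-1} y with W = diag((x_i^2+eps)^(p/2-1))."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]       # minimum-norm start
    for _ in range(n_iter):
        w_inv = (x ** 2 + eps) ** (1 - p / 2)      # inverse weights
        AW = A * w_inv                             # A @ diag(w_inv), by broadcasting
        x = w_inv * (A.T @ np.linalg.solve(AW @ A.T, y))
        eps = max(eps * 0.5, 1e-12)                # anneal the smoothing
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 40))
x_true = np.zeros(40)
x_true[[5, 21, 33]] = [1.0, -1.5, 0.7]
x_hat = irls_lp(A, A @ x_true)
```

Small entries get large weights and are pushed to zero, so the iterates typically concentrate on the true support when the RIC-type conditions discussed above hold.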
Approximation of the weighted maximin dispersion problem over Lp-ball: SDP relaxation is misleading
Consider the problem of finding a point in the unit $n$-dimensional
$\ell_p$-ball such that the minimum weighted Euclidean distance from $m$ given
points is maximized. We show in this paper that the recent
SDP-relaxation-based approximation algorithm [SIAM J. Optim. 23(4), 2264-2294,
2013] will not only provide the first theoretical approximation bound, but
also perform much better in practice, if the SDP relaxation is removed and the
optimal solution of the SDP relaxation is replaced by a simple scalar matrix.
Comment: 8 pages, 2 figures