
    Uniform Quadratic Optimization and Extensions

    Full text link
    The uniform quadratic optimization problem (UQ) is a nonconvex quadratically constrained quadratic program (QCQP) in which all quadratic functions share the same Hessian matrix. Based on the second-order cone programming (SOCP) relaxation, we establish a new sufficient condition guaranteeing strong duality for (UQ) and then extend it to (QCQP); this condition not only covers several well-known results in the literature but also partially answers a few open questions. For nonconvex (UQ) with convex constraints, we propose an improved approximation algorithm based on (SOCP). Our approximation bound is dimension-independent. As an application, we establish the first approximation bound for the problem of finding the Chebyshev center of the intersection of several balls. Comment: 28 pages.
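    A minimal sketch of the relaxation idea, assuming a synthetic instance built with cvxpy: because every quadratic shares the Hessian $A$, the common term $x^{T}Ax$ can be replaced by a scalar variable $t$, leaving linear constraints plus one epigraph constraint $t\ge x^{T}Ax$. Here $A = BB^{T}$ is taken positive semidefinite so the epigraph constraint is second-order-cone representable; the paper's analysis covers the nonconvex case, which this sketch does not reproduce.

        import cvxpy as cp
        import numpy as np

        # Toy (UQ) instance: every quadratic shares the Hessian A = B B^T.
        rng = np.random.default_rng(0)
        n, m = 5, 3
        B = rng.standard_normal((n, n))
        b = rng.standard_normal((m + 1, n))  # b[0] belongs to the objective
        c = -rng.random(m) - 0.1             # c <= 0 keeps x = 0, t = 0 feasible

        x, t = cp.Variable(n), cp.Variable()
        # Relax the common term x^T A x = ||B^T x||^2 to t >= ||B^T x||^2 (an SOC constraint).
        cons = [cp.sum_squares(B.T @ x) <= t]
        cons += [t + b[i + 1] @ x + c[i] <= 0 for i in range(m)]
        prob = cp.Problem(cp.Minimize(t + b[0] @ x), cons)
        prob.solve()
        print(prob.value, x.value)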

    Alternating direction algorithms for $\ell_0$ regularization in compressed sensing

    Full text link
    In this paper we propose three iterative greedy algorithms for compressed sensing, called \emph{iterative alternating direction} (IAD), \emph{normalized iterative alternating direction} (NIAD) and \emph{alternating direction pursuit} (ADP). They stem from the iteration steps of the alternating direction method of multipliers (ADMM) for $\ell_0$-regularized least squares ($\ell_0$-LS) and can be viewed as alternating direction versions of the well-known iterative hard thresholding (IHT), normalized iterative hard thresholding (NIHT) and hard thresholding pursuit (HTP), respectively. First, unlike the general iteration steps of ADMM, the proposed algorithms keep no splitting or dual variables in their iterations, so the current approximation depends directly on past iterates. Second, provable theoretical guarantees are given in terms of the restricted isometry property; to the best of our knowledge, this is the first theoretical guarantee for ADMM applied to $\ell_0$-LS. Finally, the proposed algorithms substantially outperform IHT, NIHT and HTP when reconstructing both constant-amplitude signals with random signs (CARS signals) and Gaussian signals. Comment: 16 pages, 1 figure.
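    For reference, the baseline the three algorithms are measured against is plain IHT, whose update is $x \leftarrow H_k\bigl(x + \mu A^{T}(y - Ax)\bigr)$ with $H_k$ the hard-thresholding operator keeping the $k$ largest entries. A minimal numpy sketch of that baseline follows; the IAD/NIAD/ADP updates derived from ADMM are the paper's contribution and are not reproduced here.

        import numpy as np

        def iht(A, y, k, iters=200):
            """Plain IHT: gradient step on ||y - Ax||^2, then keep the k largest entries."""
            x = np.zeros(A.shape[1])
            mu = 1.0 / np.linalg.norm(A, 2) ** 2   # step size below 1/||A||^2
            for _ in range(iters):
                x = x + mu * A.T @ (y - A @ x)
                small = np.argsort(np.abs(x))[:-k]  # indices of all but the k largest
                x[small] = 0.0                      # hard thresholding H_k
            return x

        # Usage: recover a k-sparse signal from Gaussian measurements.
        rng = np.random.default_rng(1)
        n, p, k = 80, 200, 8
        A = rng.standard_normal((n, p)) / np.sqrt(n)
        x0 = np.zeros(p)
        x0[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
        print(np.linalg.norm(iht(A, A @ x0, k) - x0))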

    Nonextensive information theoretical machine

    Full text link
    In this paper, we propose a new discriminative model named the \emph{nonextensive information theoretical machine (NITM)}, based on a nonextensive generalization of Shannon information theory. In NITM, weight parameters are treated as random variables. Tsallis divergence is used to regularize the distribution of the weight parameters, and the maximum unnormalized Tsallis entropy distribution is used to evaluate the fitting effect. On the one hand, we show that some well-known margin-based loss functions, such as the $\ell_{0/1}$ loss, hinge loss, squared hinge loss and exponential loss, can be unified by unnormalized Tsallis entropy. On the other hand, Gaussian prior regularization is generalized to Student-t prior regularization with similar computational complexity. The model can be solved efficiently by gradient-based convex optimization, and its performance is illustrated on standard datasets.
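    The ingredients named in the abstract are easy to write down: the Tsallis entropy $S_q(p) = (1 - \sum_i p_i^q)/(q-1)$, which recovers Shannon entropy as $q \to 1$, and the four margin losses. The unification result itself is the paper's contribution and is not derived here; the snippet below only illustrates the definitions.

        import numpy as np

        def tsallis_entropy(p, q):
            """Tsallis entropy S_q(p) = (1 - sum_i p_i^q) / (q - 1); Shannon as q -> 1."""
            p = np.asarray(p, dtype=float)
            return (1.0 - np.sum(p ** q)) / (q - 1.0)

        # Margin-based losses from the abstract, as functions of the margin m = y * f(x).
        zero_one = lambda m: np.asarray(m <= 0, dtype=float)  # l_{0/1} loss
        hinge = lambda m: np.maximum(0.0, 1.0 - m)            # hinge loss
        sq_hinge = lambda m: np.maximum(0.0, 1.0 - m) ** 2    # squared hinge loss
        exp_loss = lambda m: np.exp(-m)                       # exponential loss

        print(tsallis_entropy([0.5, 0.5], q=2.0))  # 0.5 (would be ln 2 in the Shannon limit)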

    Bayesian linear regression with Student-t assumptions

    Full text link
    As an automatic method of determining model complexity from the training data alone, Bayesian linear regression provides a principled way to select hyperparameters. However, approximate inference is often needed when the distributional assumptions go beyond the Gaussian. In this paper, we propose a Bayesian linear regression model with Student-t assumptions (BLRS) that can be inferred exactly. In this framework, both the conjugate prior and the expectation-maximization (EM) algorithm are generalized. Meanwhile, we prove that the maximum likelihood solution is equivalent to that of standard Bayesian linear regression with Gaussian assumptions (BLRG). The $q$-EM algorithm for BLRS is nearly identical to the EM algorithm for BLRG. We show that $q$-EM for BLRS can converge faster than EM for BLRG on the task of predicting online news popularity.
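    For orientation, the exact posterior in the Gaussian baseline (BLRG) that the paper generalizes is the textbook one below; the Student-t model (BLRS) and its $q$-EM updates are the paper's contribution and are not reproduced in this sketch.

        import numpy as np

        def blrg_posterior(Phi, t, alpha, beta):
            """Posterior N(m_N, S_N) for Gaussian Bayesian linear regression with
            prior w ~ N(0, alpha^{-1} I) and likelihood t ~ N(Phi w, beta^{-1} I)."""
            d = Phi.shape[1]
            S_N = np.linalg.inv(alpha * np.eye(d) + beta * Phi.T @ Phi)
            m_N = beta * S_N @ Phi.T @ t
            return m_N, S_N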

    Johnson Type Bounds on Constant Dimension Codes

    Full text link
    Very recently, an operator channel was defined by Koetter and Kschischang in their study of random network coding. They also introduced constant dimension codes and demonstrated that these codes can be employed to correct errors and/or erasures over the operator channel. Constant dimension codes are equivalent to the so-called linear authentication codes introduced by Wang, Xing and Safavi-Naini when constructing distributed authentication systems in 2003. In this paper, we study constant dimension codes. It is shown that Steiner structures are optimal constant dimension codes achieving the Wang-Xing-Safavi-Naini bound. Furthermore, we show that constant dimension codes achieve the Wang-Xing-Safavi-Naini bound if and only if they are certain Steiner structures. We then derive two Johnson type upper bounds, denoted I and II, on constant dimension codes. The Johnson type bound II slightly improves on the Wang-Xing-Safavi-Naini bound. Finally, we point out that a family of known Steiner structures is in fact a family of optimal constant dimension codes achieving both Johnson type bounds I and II. Comment: 12 pages, submitted to Designs, Codes and Cryptography.
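    Constant dimension codes live in the space of subspaces of $\mathbb{F}_q^n$ under the subspace distance $d(U,V) = \dim U + \dim V - 2\dim(U\cap V)$. A small sketch for $q = 2$, computing this distance from basis matrices via ranks over GF(2), using $\dim(U+V) = \operatorname{rank}$ of the stacked bases:

        import numpy as np

        def rank_gf2(M):
            """Rank of a binary matrix over GF(2), by Gaussian elimination."""
            M = np.array(M, dtype=np.uint8) % 2
            r = 0
            for c in range(M.shape[1]):
                pivot = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
                if pivot is None:
                    continue
                M[[r, pivot]] = M[[pivot, r]]
                for i in range(M.shape[0]):
                    if i != r and M[i, c]:
                        M[i] ^= M[r]   # eliminate column c from the other rows
                r += 1
            return r

        def subspace_distance(U, V):
            """d(U, V) = 2 dim(U + V) - dim U - dim V, with dim(U + V) = rank([U; V])."""
            return 2 * rank_gf2(np.vstack([U, V])) - rank_gf2(U) - rank_gf2(V)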

    The generalized connectivity of some regular graphs

    Full text link
    The generalized $k$-connectivity $\kappa_{k}(G)$ of a graph $G$ is a parameter measuring the reliability of a network $G$ to connect any $k$ vertices in $G$; computing it is NP-complete for a general graph $G$. Let $S\subseteq V(G)$ and let $\kappa_{G}(S)$ denote the maximum number $r$ of edge-disjoint trees $T_{1}, T_{2}, \cdots, T_{r}$ in $G$ such that $V(T_{i})\cap V(T_{j})=S$ for any $i, j \in \{1, 2, \cdots, r\}$ with $i\neq j$. For an integer $k$ with $2\leq k\leq n$, the {\em generalized $k$-connectivity} of a graph $G$ is defined as $\kappa_{k}(G)=\min\{\kappa_{G}(S) \mid S\subseteq V(G), |S|=k\}$. In this paper, we study the generalized $3$-connectivity of some general $m$-regular and $m$-connected graphs $G_{n}$ constructed recursively, and obtain that $\kappa_{3}(G_{n})=m-1$, which attains the upper bound on $\kappa_{3}(G)$ [Discrete Mathematics 310 (2010) 2147-2163] given by Li {\em et al.} for $G=G_{n}$. As applications of the main result, the generalized $3$-connectivity of many famous networks, such as the alternating group graph $AG_{n}$, the $k$-ary $n$-cube $Q_{n}^{k}$, the split-star network $S_{n}^{2}$ and the bubble-sort-star graph $BS_{n}$, can be obtained directly. Comment: 19 pages, 6 figures.
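    For the smallest case $|S| = 2$, the trees in the definition reduce to internally disjoint paths, so $\kappa_{2}(G)$ is just the classical vertex connectivity $\kappa(G)$ by Menger's theorem. A quick illustration with networkx; the Petersen graph here is merely a convenient 3-regular, 3-connected example, not one of the $G_{n}$ in the paper.

        import networkx as nx

        G = nx.petersen_graph()         # 3-regular and 3-connected
        print(nx.node_connectivity(G))  # 3, i.e. kappa_2(G) = kappa(G) = 3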

    Minimum Pseudo-Weight and Minimum Pseudo-Codewords of LDPC Codes

    Full text link
    In this correspondence, we study the minimum pseudo-weight and minimum pseudo-codewords of low-density parity-check (LDPC) codes under linear programming (LP) decoding. First, we show that the lower bound of Kelly, Sridhara, Xu and Rosenthal on the pseudo-weight of a pseudo-codeword of an LDPC code with girth greater than 4 is tight if and only if this pseudo-codeword is a real multiple of a codeword. Then, we show that the lower bound of Kashyap and Vardy on the stopping distance of an LDPC code is also a lower bound on the pseudo-weight of a pseudo-codeword of this LDPC code with girth 4, and this lower bound is tight if and only if this pseudo-codeword is a real multiple of a codeword. Using these results, we further show that for some LDPC codes there are no minimum pseudo-codewords other than real multiples of minimum codewords. This means that LP decoding for these LDPC codes is asymptotically optimal, in the sense that the ratio of the error probabilities of LP decoding and maximum-likelihood decoding approaches 1 as the signal-to-noise ratio tends to infinity. Finally, some LDPC codes are listed to illustrate these results. Comment: 17 pages, 1 figure.
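    Under the AWGN channel, the quantity being bounded is the pseudo-weight $w_p(x) = \|x\|_1^2 / \|x\|_2^2$ of a nonzero pseudo-codeword $x \ge 0$; for a 0/1 codeword it reduces to the Hamming weight, consistent with the "real multiple of a codeword" characterization above. A one-line check:

        import numpy as np

        def awgn_pseudo_weight(x):
            """AWGN pseudo-weight ||x||_1^2 / ||x||_2^2 of a nonzero vector x >= 0."""
            x = np.asarray(x, dtype=float)
            return np.sum(np.abs(x)) ** 2 / np.sum(x * x)

        print(awgn_pseudo_weight([1, 1, 1, 0]))        # 3.0, the Hamming weight
        print(awgn_pseudo_weight([1.0, 0.5, 0.5, 0]))  # ~2.67 for a fractional vector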

    The $g$-good neighbour diagnosability of hierarchical cubic networks

    Full text link
    Let $G=(V, E)$ be a connected graph. A subset $F\subseteq V(G)$ is called an $R^{g}$-vertex-cut of $G$ if $G-F$ is disconnected and every vertex in $G-F$ has at least $g$ neighbours in $G-F$. The $R^{g}$-vertex-connectivity, denoted by $\kappa^{g}(G)$, is the size of a minimum $R^{g}$-vertex-cut. Many large-scale multiprocessor or multi-computer systems take interconnection networks as their underlying topologies, and fault diagnosis is especially important for identifying the fault tolerability of such systems. The $g$-good-neighbour diagnosability, which requires every fault-free node to have at least $g$ fault-free neighbours, is a novel measure of diagnosability. In this paper, we show that the $g$-good-neighbour diagnosability of the hierarchical cubic networks $HCN_{n}$ under both the PMC model and the $MM^{*}$ model is $2^{g}(n+2-g)-1$ for $1\leq g\leq n-1$.
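    The closed form is easy to tabulate; a tiny helper, with the values for $n = 5$ shown as an example:

        def hcn_diagnosability(n, g):
            """g-good-neighbour diagnosability of HCN_n (PMC and MM* models), per the paper."""
            assert 1 <= g <= n - 1
            return 2 ** g * (n + 2 - g) - 1

        print([hcn_diagnosability(5, g) for g in range(1, 5)])  # [11, 19, 31, 47]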

    Sparse signal recovery by $\ell_q$ minimization under restricted isometry property

    Full text link
    In the context of compressed sensing, nonconvex $\ell_q$ minimization with $0<q<1$ has been studied in recent years. In this paper, by generalizing the sharp bound for $\ell_1$ minimization of Cai and Zhang, we show that the condition $\delta_{(s^q+1)k}<\dfrac{1}{\sqrt{s^{q-2}+1}}$ on the \emph{restricted isometry constant (RIC)} guarantees the exact recovery of $k$-sparse signals in the noiseless case and the stable recovery of approximately $k$-sparse signals in the noisy case by $\ell_q$ minimization. This result is more general than the sharp bound for $\ell_1$ minimization when the order of the RIC is greater than $2k$, and it illustrates that $\ell_q$ minimization provides a better approximation to $\ell_0$ minimization than $\ell_1$ minimization does.
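    The paper analyzes when the $\ell_q$ minimizer recovers the signal; it does not prescribe a solver. One common heuristic for actually computing an approximate $\ell_q$ minimizer is iteratively reweighted least squares, sketched below in the style of Chartrand and Yin; this is an assumption for illustration, not the paper's method.

        import numpy as np

        def irls_lq(A, y, q=0.5, iters=50, eps=1e-2):
            """IRLS heuristic for min ||x||_q^q subject to Ax = y, with 0 < q < 1."""
            x = np.linalg.pinv(A) @ y                 # least-squares initialization
            for _ in range(iters):
                winv = (x ** 2 + eps) ** (1 - q / 2)  # entries of W^{-1}, w_i = (x_i^2+eps)^(q/2-1)
                # Weighted minimum-norm solution x = W^{-1} A^T (A W^{-1} A^T)^{-1} y.
                x = winv * (A.T @ np.linalg.solve((A * winv) @ A.T, y))
                eps = max(eps / 10.0, 1e-10)          # gradually tighten the smoothing
            return x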

    Approximation of the weighted maximin dispersion problem over Lp-ball: SDP relaxation is misleading

    Full text link
    Consider the problem of finding a point in the unit $n$-dimensional $\ell_p$-ball ($p\ge 2$) such that the minimum weighted Euclidean distance to $m$ given points is maximized. We show in this paper that the recent SDP-relaxation-based approximation algorithm [SIAM J. Optim. 23(4), 2264-2294, 2013] not only attains the first theoretical approximation bound of $\frac{1-O\left(\sqrt{\ln(m)/n}\right)}{2}$, but also performs much better in practice, if the SDP relaxation is removed and the optimal solution of the SDP relaxation is replaced by a simple scalar matrix. Comment: 8 pages, 2 figures.
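    A sketch of the objective together with a naive random-sampling baseline; this baseline is for illustration only and is not the paper's algorithm, which instead builds its solution from a simple scalar matrix in place of the SDP solution.

        import numpy as np

        def dispersion(x, P, w):
            """Objective min_i w_i ||x - p_i||^2, to be maximized over the unit lp-ball."""
            return np.min(w * np.sum((P - x) ** 2, axis=1))

        rng = np.random.default_rng(2)
        n, m, p = 10, 6, 3.0
        P = rng.standard_normal((m, n))
        w = rng.random(m) + 0.5
        # Naive baseline: sample directions, project to the unit lp-sphere, keep the best.
        samples = rng.standard_normal((2000, n))
        samples /= np.linalg.norm(samples, ord=p, axis=1, keepdims=True)
        best = max(samples, key=lambda z: dispersion(z, P, w))
        print(dispersion(best, P, w))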