
    On Sombor Index of Graphs

    Recently, Gutman defined a new vertex-degree-based graph invariant, named the Sombor index and given for a graph $G$ by $SO(G)=\sum_{uv\in E(G)}\sqrt{d_G(u)^2+d_G(v)^2}$, where $d_G(v)$ denotes the degree of the vertex $v$ in $G$. In this paper, we obtain sharp lower and upper bounds on $SO(G)$ for a connected graph $G$, and characterize the graphs for which these bounds are attained.
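    To make the definition concrete, here is a minimal Python sketch (my own illustration, not code from the paper) that computes the Sombor index of a graph given as an adjacency list; the function name sombor_index and the example graph are assumptions made for the example.

```python
import math

def sombor_index(adj):
    """Compute SO(G) = sum over edges uv of sqrt(deg(u)^2 + deg(v)^2).

    adj: dict mapping each vertex to the set of its neighbours
    (an undirected simple graph).
    """
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    total = 0.0
    for u, nbrs in adj.items():
        for v in nbrs:
            if u < v:  # count each undirected edge once
                total += math.sqrt(deg[u] ** 2 + deg[v] ** 2)
    return total

# Example: the path 1-2-3 has SO = 2 * sqrt(1^2 + 2^2) = 2 * sqrt(5).
path3 = {1: {2}, 2: {1, 3}, 3: {2}}
print(sombor_index(path3))  # ~4.4721
```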

    The vanishing order of certain Hecke L-functions of imaginary quadratic fields

    Let $-D<-4$ denote a fundamental discriminant which is either odd or divisible by 8, so that the canonical Hecke character of $\mathbb{Q}(\sqrt{-D})$ exists. Let $d$ be a fundamental discriminant prime to $D$. Let $2k-1$ be an odd natural number prime to the class number of $\mathbb{Q}(\sqrt{-D})$. Let $\chi$ be the twist of the $(2k-1)$th power of a canonical Hecke character of $\mathbb{Q}(\sqrt{-D})$ by the Kronecker symbol $n\mapsto\left(\tfrac{d}{n}\right)$. It is proved that the vanishing order of the Hecke $L$-function $L(s,\chi)$ at its central point $s=k$ is determined by its root number when $|d|\ll D^{1/12-\epsilon}$, where the constant implied in the symbol $\ll$ depends only on $k$ and $\epsilon$, and is effective for $L$-functions with root number $-1$.
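    For orientation, the link between the root number and the central vanishing order comes from the standard parity argument sketched below (a schematic reminder under the assumption that $\chi$ is self-dual here, as for canonical Hecke characters; it is not taken from the paper). $\Lambda$ denotes the completed $L$-function.

```latex
% Schematic parity argument (standard background, not the paper's proof).
% For self-dual \chi the completed L-function satisfies a functional equation
% with root number W(\chi) \in \{\pm 1\}:
\[
  \Lambda(s,\chi) \;=\; W(\chi)\,\Lambda(2k-s,\chi).
\]
% Expanding about the central point s = k, the Taylor coefficients c_j of
% \Lambda(k+t,\chi) satisfy c_j = W(\chi)(-1)^j c_j, hence
\[
  W(\chi) = -1 \;\Longrightarrow\; \operatorname{ord}_{s=k} L(s,\chi)\ \text{is odd.}
\]
% The theorem above sharpens this parity constraint to an exact order
% (1 when W(\chi) = -1, and 0 when W(\chi) = +1) in the range |d| \ll D^{1/12-\epsilon}.
```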

    Quantized Compressive Sensing with RIP Matrices: The Benefit of Dithering

    Quantized compressive sensing (QCS) deals with the problem of coding compressive measurements of low-complexity signals with quantized, finite-precision representations, i.e., a mandatory process involved in any practical sensing model. While the resolution of this quantization clearly impacts the quality of signal reconstruction, there actually exist incompatible combinations of quantization functions and sensing matrices that proscribe arbitrarily low reconstruction error when the number of measurements increases. This work shows that a large class of random matrix constructions known to respect the restricted isometry property (RIP) is "compatible" with a simple scalar and uniform quantization if a uniform random vector, or a random dither, is added to the compressive signal measurements before quantization. In the context of estimating low-complexity signals (e.g., sparse or compressible signals, low-rank matrices) from their quantized observations, this compatibility is demonstrated by the existence of (at least) one signal reconstruction method, the projected back projection (PBP), whose reconstruction error decays when the number of measurements increases. Interestingly, given one RIP matrix and a single realization of the dither, a small reconstruction error can be proved to hold uniformly for all signals in the considered low-complexity set. We confirm these observations numerically in several scenarios involving sparse signals, low-rank matrices, and compressible signals, with various RIP matrix constructions such as sub-Gaussian random matrices and random partial discrete cosine transform (DCT) matrices. Comment: 42 pages, 9 figures. Differences between v3 and v2: better paper structure, new concepts (e.g., RIP matrix distribution, connections with Bussgang's theorem), as well as many clarifications and corrections.
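    As an illustration of the dithering idea, here is a minimal numpy sketch (my own, not code from the paper) of projected back projection for a sparse signal: the measurements are quantized by a uniform scalar quantizer after adding a uniform dither, and the back projection is projected onto the set of k-sparse vectors by hard thresholding. The quantizer convention, dither range, matrix scaling, and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 200, 1000, 10      # ambient dimension, measurements, sparsity
delta = 0.5                  # quantizer resolution (illustrative value)

# k-sparse ground-truth signal
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

# Gaussian sensing matrix, scaled so that A.T @ A is close to the identity
A = rng.standard_normal((m, n)) / np.sqrt(m)

# Uniform dither added before scalar uniform (mid-rise) quantization;
# this pairing makes the dithered quantizer unbiased on average.
u = rng.uniform(-delta / 2, delta / 2, size=m)
q = delta * np.floor((A @ x + u) / delta) + delta / 2

# Projected back projection: back-project, then project onto k-sparse vectors
# by keeping the k largest-magnitude entries (hard thresholding).
bp = A.T @ q
x_hat = np.zeros(n)
keep = np.argsort(np.abs(bp))[-k:]
x_hat[keep] = bp[keep]

print("relative reconstruction error:",
      np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```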

    1-Bit Compressive Sensing: Reformulation and RRSP-Based Sign Recovery Theory

    Recently, 1-bit compressive sensing (1-bit CS) has been studied in the field of sparse signal recovery. Since the amplitude information of sparse signals in 1-bit CS is not available, it is often the support or the sign of a signal that can be exactly recovered with a decoding method. In this paper, we first show that a necessary assumption (which has been overlooked in the literature) should be made for some existing theories and discussions of 1-bit CS. Without such an assumption, the solution found by some existing decoding algorithms might be inconsistent with the 1-bit measurements. This motivates us to pursue a new direction and develop uniform and nonuniform recovery theories for 1-bit CS with a new decoding method that always generates a solution consistent with the 1-bit measurements. We focus on an extreme case of 1-bit CS, in which the measurements capture only the sign of the product of a sensing matrix and a signal. We show that the 1-bit CS model can be reformulated equivalently as an $\ell_0$-minimization problem with linear constraints. This reformulation naturally leads to a new linear-program-based decoding method, referred to as the 1-bit basis pursuit, which is remarkably different from existing formulations. It turns out that the uniqueness condition for the solution of the 1-bit basis pursuit yields the so-called restricted range space property (RRSP) of the transposed sensing matrix. This concept provides a basis for developing sign recovery conditions for sparse signals from 1-bit measurements. We prove that if the sign of a sparse signal can be exactly recovered from 1-bit measurements with 1-bit basis pursuit, then the sensing matrix must admit a certain RRSP, and that if the sensing matrix admits a slightly enhanced RRSP, then the sign of a $k$-sparse signal can be exactly recovered with 1-bit basis pursuit.
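    The reformulation above suggests a linear-programming decoder that enforces consistency with the observed signs. Below is a simplified sign-consistent $\ell_1$ decoder in that spirit, written with scipy.optimize.linprog; it is my own sketch, not the paper's exact 1-bit basis pursuit formulation (in particular, the paper's handling of zero measurements and its constraint normalization differ), and the margin constraint $y_i a_i^\top z \ge 1$ is only a convenience to rule out the trivial zero solution.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

n, m, k = 50, 200, 4

# k-sparse signal and Gaussian sensing matrix; only sign(Ax) is observed
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
y = np.sign(A @ x)                  # 1-bit measurements (exact zeros have probability 0)

# Sign-consistent l1 decoding: minimize ||z||_1 s.t. y_i * (a_i^T z) >= 1.
# Split z = u - v with u, v >= 0, so ||z||_1 = sum(u) + sum(v).
c = np.ones(2 * n)
YA = y[:, None] * A                 # rows scaled by the observed signs
A_ub = np.hstack([-YA, YA])         # -y_i a_i^T (u - v) <= -1
b_ub = -np.ones(m)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")  # default bounds: u, v >= 0
z = res.x[:n] - res.x[n:]

# Amplitude is lost in 1-bit CS; compare supports / sign patterns instead.
est_support = np.argsort(np.abs(z))[-k:]
print("support recovered:", set(est_support) == set(support))
```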

    Sparsity optimization and RRSP-based theory for 1-bit compressive sensing

    Due to the fact that only a few significant components can capture the key information of a signal, acquiring a sparse representation of the signal can be interpreted as finding a sparsest solution to an underdetermined system of linear equations. Theoretical results obtained from studying the sparsest solution to a system of linear equations provide the foundation for many practical problems in signal and image processing, sampling theory, statistical and machine learning, and error correction. The first contribution of this thesis is the development of sufficient conditions for the uniqueness of solutions of partial $\ell_0$-minimization, where only a part of the solution is sparse; in particular, $\ell_0$-minimization is a special case of partial $\ell_0$-minimization. To study and develop uniqueness conditions for the partial sparsest solution, some concepts, such as the $\ell_p$-induced quasi-norm, the maximal scaled spark, and the maximal scaled mutual coherence, are introduced. The main contribution of this thesis is the development of a framework for 1-bit compressive sensing and of support recovery theories based on the restricted range space property (RRSP). 1-bit compressive sensing is an extreme case of compressive sensing. We show that such a 1-bit framework can be reformulated equivalently as an $\ell_0$-minimization problem with linear equality and inequality constraints. We establish a decoding method, the so-called 1-bit basis pursuit, to attack this 1-bit $\ell_0$-minimization problem. The support recovery theories via 1-bit basis pursuit are developed through the restricted range space property of transposed sensing matrices. In the last part of this thesis, we study the numerical performance of 1-bit basis pursuit. We present simulation results demonstrating that 1-bit basis pursuit achieves support recovery, approximate sparse recovery, and cardinality recovery with Gaussian matrices and Bernoulli matrices. It is not necessary to require that the sensing matrix be underdetermined, owing to the single-bit-per-measurement assumption. Furthermore, we introduce the truncated 1-bit measurements method and the reweighted 1-bit $\ell_1$-minimization method to further enhance the numerical performance of 1-bit basis pursuit.
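    Since the experiments above are judged in terms of support, cardinality, and sign recovery, here is a small helper (my own sketch; the thesis may define these criteria with different tolerances) that checks such criteria for a recovered vector against the ground truth.

```python
import numpy as np

def recovery_report(x_true, x_est, tol=1e-6):
    """Illustrative recovery criteria for 1-bit experiments.

    support recovery:     the index sets of (numerically) nonzero entries coincide
    cardinality recovery: the numbers of nonzero entries match
    sign recovery:        the sign patterns coincide on the union of supports
    """
    supp_true = set(np.flatnonzero(np.abs(x_true) > tol))
    supp_est = set(np.flatnonzero(np.abs(x_est) > tol))
    return {
        "support": supp_true == supp_est,
        "cardinality": len(supp_true) == len(supp_est),
        "sign": all(np.sign(x_true[i]) == np.sign(x_est[i])
                    for i in supp_true | supp_est),
    }

# Example: amplitudes differ (as expected in 1-bit CS), but support and signs match.
x_true = np.array([0.0, 1.5, 0.0, -0.7, 0.0])
x_est = np.array([0.0, 0.9, 0.0, -0.2, 0.0])
print(recovery_report(x_true, x_est))
```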