
    Envelope Word and Gap Sequence in Doubling Sequence

    Let $\omega$ be a factor of the Doubling sequence $D_\infty=x_1x_2\cdots$; then it occurs in the sequence infinitely many times. Let $\omega_p$ be the $p$-th occurrence of $\omega$ and $G_p(\omega)$ the gap between $\omega_p$ and $\omega_{p+1}$. In this paper, we discuss the structure of the gap sequence $\{G_p(\omega)\}_{p\geq 1}$. We prove that all factors can be divided into two types: one type has exactly two distinct gaps, $G_1(\omega)$ and $G_2(\omega)$; the other has exactly three distinct gaps, $G_1(\omega)$, $G_2(\omega)$ and $G_4(\omega)$. We determine the expressions of the gaps completely and also give the substitution generating each gap sequence. The main tool in this paper is the "envelope word", a new notion denoted by $E_{m,i}$. As an application, we determine the positions of all $\omega_p$, discuss some combinatorial properties of factors, and count the distinct squares beginning in $D_\infty[1,N]$ for $N\geq 1$.
    Comment: 14 pages, 7 figures. arXiv admin note: text overlap with arXiv:1408.372
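
    As a concrete illustration of gap sequences (not the paper's construction), the sketch below generates a prefix of the sequence, assuming the substitution a -> ab, b -> aa for the Doubling sequence, and lists the words separating consecutive occurrences of a factor; the "<overlap>" marker stands in for the paper's convention for overlapping occurrences.

        def doubling_word(n_iter=10):
            # Prefix of the fixed point of the assumed substitution a -> ab, b -> aa.
            w = "a"
            for _ in range(n_iter):
                w = "".join("ab" if c == "a" else "aa" for c in w)
            return w

        def occurrences(word, factor):
            # Start positions of all (possibly overlapping) occurrences of `factor`.
            return [i for i in range(len(word) - len(factor) + 1)
                    if word.startswith(factor, i)]

        def gaps(word, factor):
            # Words strictly between consecutive occurrences; "<overlap>" stands in
            # for the paper's convention when occurrences overlap.
            pos = occurrences(word, factor)
            return [word[p + len(factor):q] if q >= p + len(factor) else "<overlap>"
                    for p, q in zip(pos, pos[1:])]

        print(sorted(set(gaps(doubling_word(), "ab"))))  # expect only a handful of distinct gaps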

    Rainbow Hamiltonicity and the spectral radius

    Let $\mathcal{G}=\{G_1,\ldots,G_n\}$ be a family of graphs of order $n$ with the same vertex set. A rainbow Hamiltonian cycle in $\mathcal{G}$ is a cycle that visits each vertex precisely once such that any two of its edges belong to different graphs of $\mathcal{G}$. We obtain a rainbow version of Ore's size condition for Hamiltonicity and pose a related problem. Towards a solution of that problem, we give a sufficient condition for the existence of a rainbow Hamiltonian cycle in terms of the spectral radii of the graphs in $\mathcal{G}$ and completely characterize the corresponding extremal graphs.
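
    For small families, the definition can be checked directly. The brute-force sketch below (illustrative only; the function names are ours and the search is exponential) looks for a rainbow Hamiltonian cycle in a family of n graphs on the vertex set {0, ..., n-1}, each given as a set of frozenset edges.

        from itertools import permutations

        def _assign(edges, graphs, used, idx):
            # Backtracking: give cycle edge `idx` to some unused graph containing it.
            if idx == len(edges):
                return True
            for g, edge_set in enumerate(graphs):
                if g not in used and edges[idx] in edge_set:
                    used.add(g)
                    if _assign(edges, graphs, used, idx + 1):
                        return True
                    used.discard(g)
            return False

        def has_rainbow_hamiltonian_cycle(graphs):
            # graphs: list of n edge sets over vertices 0..n-1.
            n = len(graphs)
            for perm in permutations(range(1, n)):
                cycle = (0, *perm, 0)
                edges = [frozenset((cycle[i], cycle[i + 1])) for i in range(n)]
                if _assign(edges, graphs, set(), 0):
                    return True
            return False

        # n copies of the complete graph K_n always contain one (n >= 3):
        K5 = {frozenset((u, v)) for u in range(5) for v in range(u)}
        print(has_rainbow_hamiltonian_cycle([K5] * 5))  # True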

    Matching extension and distance spectral radius

    A graph is called $k$-extendable if each $k$-matching can be extended to a perfect matching. We give spectral conditions for the $k$-extendability of graphs and bipartite graphs using Tutte-type and Hall-type structural characterizations. Concretely, we give a sufficient condition in terms of the spectral radius of the distance matrix for the $k$-extendability of a graph and completely characterize the corresponding extremal graphs. A similar result is obtained for bipartite graphs.
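
    The distance spectral radius in such conditions is the largest eigenvalue of the matrix of pairwise graph distances. A minimal sketch, assuming networkx/numpy, a connected graph, and our own function name:

        import numpy as np
        import networkx as nx

        def distance_spectral_radius(G):
            # Assemble the distance matrix from all-pairs shortest path lengths.
            nodes = list(G.nodes)
            lengths = dict(nx.all_pairs_shortest_path_length(G))
            D = np.array([[lengths[u][v] for v in nodes] for u in nodes], float)
            # D is symmetric, so its spectral radius is the largest eigenvalue.
            return float(np.linalg.eigvalsh(D)[-1])

        print(distance_spectral_radius(nx.petersen_graph()))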

    Mitigate Replication and Copying in Diffusion Models with Generalized Caption and Dual Fusion Enhancement

    While diffusion models demonstrate a remarkable capability for generating high-quality images, their tendency to `replicate' training data raises privacy concerns. Although recent research suggests that this replication may stem from insufficiently general training captions and from duplicated training images, effective mitigation strategies remain elusive. To address this gap, our paper first introduces a generality score that measures caption generality and employs a large language model (LLM) to generalize training captions. Subsequently, we leverage the generalized captions and propose a novel dual fusion enhancement approach to mitigate replication in diffusion models. Our empirical results demonstrate that the proposed methods can significantly reduce replication, by 43.5% compared to the original diffusion model, while maintaining the diversity and quality of generations.
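
    Replication of this kind is typically quantified by nearest-neighbour similarity between generated and training images in some embedding space; the sketch below is one generic such score (our own simplification, not the paper's metric or its 43.5% protocol).

        import numpy as np

        def replication_rate(gen_embs, train_embs, threshold=0.9):
            # gen_embs, train_embs: (n, d) image embeddings from any feature
            # extractor; a generation counts as a "replica" if its nearest
            # training image exceeds the cosine-similarity threshold.
            g = gen_embs / np.linalg.norm(gen_embs, axis=1, keepdims=True)
            t = train_embs / np.linalg.norm(train_embs, axis=1, keepdims=True)
            nearest = (g @ t.T).max(axis=1)   # best training match per generation
            return float((nearest > threshold).mean())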

    Making Models Shallow Again: Jointly Learning to Reduce Non-Linearity and Depth for Latency-Efficient Private Inference

    The large number of ReLU and MAC operations in deep neural networks makes them ill-suited for latency- and compute-efficient private inference. In this paper, we present a model optimization method that allows a model to learn to be shallow. In particular, we leverage the ReLU sensitivity of a convolutional block to remove a ReLU layer and merge its preceding and succeeding convolution layers into a shallow block. Unlike existing ReLU reduction methods, our joint reduction method can yield models with improved reduction of both ReLUs and linear operations, by up to 1.73x and 1.47x respectively, evaluated with ResNet18 on CIFAR-100 without any significant accuracy drop.
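
    The merging step relies on a standard piece of linear algebra: two stacked convolutions with no non-linearity between them compose into a single convolution whose kernel is the full convolution of the two kernels. Below is a minimal PyTorch sketch of that algebra alone (assuming stride 1, no padding, no groups, and both biases present); how the paper learns which ReLU to drop is a separate matter.

        import torch
        import torch.nn.functional as F

        def merge_convs(conv1, conv2):
            # Fuse conv2(conv1(x)) into one Conv2d; kernel sizes k1, k2 -> k1+k2-1.
            w1, b1 = conv1.weight, conv1.bias   # (C_mid, C_in, k1, k1)
            w2, b2 = conv2.weight, conv2.bias   # (C_out, C_mid, k2, k2)
            k2 = w2.shape[-1]
            # Full convolution of w2 with w1: flip w2 and use full zero padding,
            # treating C_in as the batch dimension of w1.
            w = F.conv2d(w1.transpose(0, 1), w2.flip([2, 3]), padding=k2 - 1)
            w = w.transpose(0, 1)               # (C_out, C_in, k1+k2-1, k1+k2-1)
            b = w2.sum(dim=(2, 3)) @ b1 + b2    # b1 passes through conv2's kernel
            merged = torch.nn.Conv2d(w.shape[1], w.shape[0], w.shape[-1])
            with torch.no_grad():
                merged.weight.copy_(w)
                merged.bias.copy_(b)
            return merged

        # Sanity check: identical outputs up to float error.
        c1, c2 = torch.nn.Conv2d(3, 8, 3), torch.nn.Conv2d(8, 16, 3)
        x = torch.randn(1, 3, 32, 32)
        print(torch.allclose(merge_convs(c1, c2)(x), c2(c1(x)), atol=1e-4))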