Envelope Word and Gap Sequence in Doubling Sequence
Let ω be a factor of the Doubling sequence; then it occurs in the sequence
infinitely many times. Let ω_p be the p-th occurrence of ω and G_p(ω) be the gap
between ω_p and ω_{p+1}. In this paper, we discuss the structure of the gap
sequence {G_p(ω)}_{p≥1}. We prove that all factors can be divided into two
types: one type has exactly two distinct gaps G_1(ω) and G_2(ω), and the other
type has exactly three distinct gaps G_1(ω), G_2(ω) and G_3(ω). We determine the
expressions of the gaps completely, and also give the substitution of each gap
sequence. The main tool in this paper is the "envelope word", a new notion
introduced here. As an application, we determine the positions of all
occurrences ω_p, discuss some combinatorial properties of factors, and count the
distinct squares beginning in the prefix of length n, for all n ≥ 1.
Comment: 14 pages, 7 figures. arXiv admin note: text overlap with
arXiv:1408.372
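As an illustration of the objects being studied (not taken from the paper), the
sketch below computes the gaps between consecutive occurrences of a factor in a
finite prefix. It assumes the Doubling sequence is the fixed point of the
substitution a -> ab, b -> aa; the factor names and the prefix length are
arbitrary choices for the demo.

    # Sketch: gap sequences of factors in a prefix of the doubling sequence.
    # Assumption: the doubling sequence is the fixed point of a -> ab, b -> aa.

    def doubling_prefix(iterations=12):
        """Apply the substitution a -> ab, b -> aa to 'a' repeatedly."""
        s = "a"
        for _ in range(iterations):
            s = "".join("ab" if c == "a" else "aa" for c in s)
        return s

    def gap_sequence(word, factor):
        """Gaps between consecutive occurrences of `factor` in `word`.

        The p-th gap is the word strictly between the end of the p-th
        occurrence and the start of the (p+1)-th one; when occurrences
        overlap, the overlap length is reported instead.
        """
        starts = [i for i in range(len(word) - len(factor) + 1)
                  if word[i:i + len(factor)] == factor]
        gaps = []
        for p in range(len(starts) - 1):
            sep = starts[p + 1] - (starts[p] + len(factor))
            gaps.append(word[starts[p] + len(factor):starts[p + 1]]
                        if sep >= 0 else f"<overlap {-sep}>")
        return gaps

    if __name__ == "__main__":
        D = doubling_prefix()
        for f in ["a", "ab", "abaa"]:
            print(f, sorted(set(gap_sequence(D, f))))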
Rainbow Hamiltonicity and the spectral radius
Let 𝒢 = {G_1, …, G_n} be a family of graphs of order n with the same vertex
set. A rainbow Hamiltonian cycle in 𝒢 is a cycle that visits each vertex
precisely once such that any two of its edges belong to different graphs of 𝒢.
We obtain a rainbow version of Ore's size condition for Hamiltonicity, and pose
a related problem. Towards a solution of that problem, we give a sufficient
condition for the existence of a rainbow Hamiltonian cycle in terms of the
spectral radii of the graphs in 𝒢, and completely characterize the
corresponding extremal graphs.
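To make the definition concrete, here is a small brute-force sketch, purely
illustrative and not the paper's method: it tests a family of n graphs on a
common n-vertex set for a rainbow Hamiltonian cycle and reports the adjacency
spectral radius of each member. The toy family and the use of networkx/numpy
are assumptions of the sketch.

    # Brute-force rainbow Hamiltonian cycle check (illustration only).
    from itertools import permutations
    import numpy as np
    import networkx as nx

    def spectral_radius(G, nodes):
        A = nx.to_numpy_array(G, nodelist=nodes)
        return max(abs(np.linalg.eigvals(A)))

    def has_rainbow_hamiltonian_cycle(graphs, nodes):
        n = len(nodes)
        assert len(graphs) == n, "one graph per edge of the cycle"
        first = nodes[0]
        for perm in permutations(nodes[1:]):
            cycle = [first] + list(perm)
            edges = [(cycle[i], cycle[(i + 1) % n]) for i in range(n)]
            # try to assign each cycle edge to a distinct graph containing it
            for assignment in permutations(range(n)):
                if all(graphs[assignment[i]].has_edge(*edges[i]) for i in range(n)):
                    return True
        return False

    if __name__ == "__main__":
        nodes = list(range(5))
        # toy family: five copies of the 5-cycle, each with one extra chord
        graphs = []
        for k in range(5):
            G = nx.cycle_graph(5)
            G.add_edge(k, (k + 2) % 5)
            graphs.append(G)
        print([round(spectral_radius(G, nodes), 3) for G in graphs])
        print(has_rainbow_hamiltonian_cycle(graphs, nodes))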
Matching extension and distance spectral radius
A graph is called k-extendable if each k-matching can be extended to a perfect matching. We give spectral conditions for the k-extendability of graphs and bipartite graphs using Tutte-type and Hall-type structural characterizations. Concretely, we give a sufficient condition in terms of the spectral radius of the distance matrix for the k-extendability of a graph and completely characterize the corresponding extremal graphs. A similar result is obtained for bipartite graphs.
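For readers who want to experiment, a rough sketch follows; it is not the
paper's proof technique. It computes the distance spectral radius of a
connected graph and tests k-extendability directly from the definition by
brute force. networkx/numpy and the K_{4,4} example are assumptions of the
sketch.

    # Distance spectral radius and a brute-force k-extendability test.
    from itertools import combinations
    import numpy as np
    import networkx as nx

    def distance_spectral_radius(G):
        """Largest eigenvalue of the distance matrix (G must be connected)."""
        D = np.asarray(nx.floyd_warshall_numpy(G), dtype=float)
        return max(abs(np.linalg.eigvalsh(D)))

    def is_k_extendable(G, k):
        """Every k-matching extends to a perfect matching (checked exhaustively)."""
        if G.number_of_nodes() % 2:
            return False
        for chosen in combinations(G.edges(), k):
            covered = [v for e in chosen for v in e]
            if len(set(covered)) < 2 * k:
                continue  # edges share a vertex, not a matching
            H = G.copy()
            H.remove_nodes_from(covered)
            M = nx.max_weight_matching(H, maxcardinality=True)
            if 2 * len(M) != H.number_of_nodes():
                return False
        return True

    if __name__ == "__main__":
        G = nx.complete_bipartite_graph(4, 4)
        print(round(distance_spectral_radius(G), 3))
        print(is_k_extendable(G, 2))  # any 2-matching in K_{4,4} leaves a K_{2,2}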
Mitigate Replication and Copying in Diffusion Models with Generalized Caption and Dual Fusion Enhancement
While diffusion models demonstrate a remarkable capability for generating
high-quality images, their tendency to 'replicate' training data raises privacy
concerns. Although recent research suggests that this replication may stem from
the insufficient generalization of training data captions and duplication of
training images, effective mitigation strategies remain elusive. To address
this gap, our paper first introduces a generality score that measures the
generality of a caption and employs a large language model (LLM) to generalize
training captions. Subsequently, we leverage the generalized captions and propose a novel
dual fusion enhancement approach to mitigate the replication of diffusion
models. Our empirical results demonstrate that our proposed methods can
significantly reduce replication by 43.5% compared to the original diffusion
model while maintaining the diversity and quality of generations.
Making Models Shallow Again: Jointly Learning to Reduce Non-Linearity and Depth for Latency-Efficient Private Inference
The large number of ReLU and MAC operations in deep neural networks makes them
ill-suited for latency- and compute-efficient private inference. In this paper,
we present a model optimization method that allows a model to learn to be
shallow. In particular, we leverage the ReLU sensitivity of a convolutional
block to remove a ReLU layer and merge its succeeding and preceding convolution
layers into a shallow block. Unlike existing ReLU reduction methods, our joint
reduction method can yield models with improved reduction of both ReLUs and
linear operations by up to 1.73x and 1.47x, respectively, evaluated with
ResNet18 on CIFAR-100, without any significant accuracy drop.
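The merging step can be illustrated with a minimal sketch; it is a simplified
assumption, not the paper's implementation. Once the intermediate ReLU is
removed, two stacked convolutions compose into a single linear layer. For
simplicity the sketch uses 1x1 convolutions in PyTorch, where the merge is a
product of the channel-mixing matrices; general kernels would compose into a
single larger kernel.

    # Merge conv2(conv1(x)) into one 1x1 convolution once the ReLU between
    # them has been removed (both layers are then linear and compose).
    import torch
    import torch.nn as nn

    def merge_1x1_convs(conv1: nn.Conv2d, conv2: nn.Conv2d) -> nn.Conv2d:
        assert conv1.kernel_size == (1, 1) and conv2.kernel_size == (1, 1)
        W1 = conv1.weight.squeeze(-1).squeeze(-1)   # (C1, C0)
        W2 = conv2.weight.squeeze(-1).squeeze(-1)   # (C2, C1)
        merged = nn.Conv2d(conv1.in_channels, conv2.out_channels, kernel_size=1)
        with torch.no_grad():
            merged.weight.copy_((W2 @ W1).unsqueeze(-1).unsqueeze(-1))
            b1 = conv1.bias if conv1.bias is not None else torch.zeros(conv1.out_channels)
            b2 = conv2.bias if conv2.bias is not None else torch.zeros(conv2.out_channels)
            merged.bias.copy_(W2 @ b1 + b2)
        return merged

    if __name__ == "__main__":
        torch.manual_seed(0)
        c1, c2 = nn.Conv2d(8, 16, 1), nn.Conv2d(16, 4, 1)
        x = torch.randn(2, 8, 5, 5)
        fused = merge_1x1_convs(c1, c2)
        print(torch.allclose(fused(x), c2(c1(x)), atol=1e-5))  # expect True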