
    Distribution of zeros of matching polynomials of hypergraphs

    Let $\mathcal{H}$ be a connected $k$-graph with maximum degree $\Delta \geq 2$ and let $\mu(\mathcal{H}, x)$ be the matching polynomial of $\mathcal{H}$. In this paper, we study the distribution of zeros of the matching polynomials of $k$-graphs. We prove that the zeros (with multiplicities) of $\mu(\mathcal{H}, x)$ are invariant under a rotation by an angle $2\pi/\ell$ in the complex plane for some positive integer $\ell$, and that $k$ is the maximum integer with this property. Let $\lambda(\mathcal{H})$ denote the maximum modulus of all zeros of $\mu(\mathcal{H}, x)$. We show that $\lambda(\mathcal{H})$ is a simple root of $\mu(\mathcal{H}, x)$ and that $\Delta^{1/k} \leq \lambda(\mathcal{H}) < \frac{k}{k-1}\big((k-1)(\Delta-1)\big)^{1/k}$. To achieve these results, we introduce the path tree $\mathcal{T}(\mathcal{H},u)$ of $\mathcal{H}$ with respect to a vertex $u$ of $\mathcal{H}$, which is a $k$-tree, and prove that $\frac{\mu(\mathcal{H}-u,x)}{\mu(\mathcal{H}, x)} = \frac{\mu(\mathcal{T}(\mathcal{H},u)-u,x)}{\mu(\mathcal{T}(\mathcal{H},u),x)}$, which generalizes Godsil's celebrated identity on the matching polynomial of graphs.
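
    As a small worked instance of the results above in the classical case $k = 2$ (ordinary graphs), where the matching polynomial is $\mu(G,x) = \sum_{r \ge 0} (-1)^r m(G,r)\, x^{n-2r}$ and $m(G,r)$ counts the $r$-matchings of $G$, consider the path $P_3$. This example is a sketch added for illustration and is not taken from the paper:

    ```latex
    % Worked example for k = 2: the path P_3 (three vertices, two edges).
    % P_3 has one 0-matching and two 1-matchings, so
    \mu(P_3, x) = x^3 - 2x = x\,(x - \sqrt{2})(x + \sqrt{2}).
    % The zero set \{0, \pm\sqrt{2}\} is invariant under rotation by
    % 2\pi/2 = \pi (and under no finer rotation), matching k = 2.
    % With \Delta = 2, the bounds stated above read:
    \Delta^{1/2} = \sqrt{2} \;\le\; \lambda(P_3) = \sqrt{2}
      \;<\; \tfrac{2}{1}\big((2-1)(\Delta-1)\big)^{1/2} = 2.
    ```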

    Adaptive Encoding Strategies for Erasing-Based Lossless Floating-Point Compression

    Lossless floating-point time series compression is crucial for a wide range of critical scenarios. Nevertheless, compressing time series losslessly is a big challenge due to the complex underlying bit layouts of floating-point values. The state-of-the-art erasing-based compression algorithm Elf demonstrates rather impressive performance. We give an in-depth exploration of the encoding strategies of Elf and find that there is still much room for improvement. In this paper, we propose Elf*, which employs a set of optimizations for leading zeros, center bits, and the sharing condition. Specifically, we develop a dynamic programming algorithm with a set of pruning strategies to compute the adaptive approximation rules efficiently. We theoretically prove that the adaptive approximation rules are globally optimal. We further extend Elf* to Streaming Elf*, i.e., SElf*, which achieves almost the same compression ratio as Elf* while enjoying even higher efficiency in streaming scenarios. We compare Elf* and SElf* with 8 competitors on 22 datasets. The results demonstrate that SElf* achieves a 9.2% relative compression ratio improvement over the best streaming competitor while maintaining similar efficiency, and that Elf* ranks among the most competitive batch compressors. All source code is publicly released.
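
    For orientation, erasing-based compressors in the Elf family build on Gorilla-style XOR delta encoding of IEEE-754 bit patterns: successive values in a time series tend to share sign, exponent, and high mantissa bits, so the XOR residual has long zero runs that a bit-level encoder can elide, and Elf additionally erases low-order mantissa bits before the XOR to lengthen those runs. The following is a minimal Python sketch of the XOR stage only, under those assumptions; it does not reproduce Elf*'s adaptive approximation rules, leading-zero/center-bit optimizations, or its actual bit encoder:

    ```python
    import struct

    def double_to_bits(x: float) -> int:
        """Reinterpret an IEEE-754 double as a 64-bit unsigned integer."""
        return struct.unpack(">Q", struct.pack(">d", x))[0]

    def xor_residuals(values):
        """Gorilla-style stage: XOR each value's bit pattern with its predecessor."""
        prev = 0
        for v in values:
            bits = double_to_bits(v)
            yield bits ^ prev
            prev = bits

    def leading_zeros(r: int) -> int:
        return 64 - r.bit_length() if r else 64

    def trailing_zeros(r: int) -> int:
        return (r & -r).bit_length() - 1 if r else 64

    # Usage: inspect how many "center" bits of each residual actually
    # need storing once leading and trailing zeros are elided.
    series = [3.14, 3.15, 3.16, 3.17]
    for r in xor_residuals(series):
        lead, trail = leading_zeros(r), trailing_zeros(r)
        print(f"lead={lead:2d} trail={trail:2d} center={64 - lead - trail:2d}")
    ```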

    Self-supervised Point Cloud Representation Learning via Separating Mixed Shapes

    Manually annotating large-scale point clouds is time-consuming, and labels are often unavailable in harsh real-world scenarios. Inspired by the great success of the pre-training and fine-tuning paradigm in both vision and language tasks, we argue that pre-training is also a potential solution for obtaining a scalable model for 3D point cloud downstream tasks. In this paper, we therefore explore a new self-supervised learning method, called Mixing and Disentangling (MD), for 3D point cloud representation learning. As the name implies, we mix two input shapes and require the model to learn to separate the inputs from the mixed shape. We leverage this reconstruction task as the pretext optimization objective for self-supervised learning. There are two primary advantages: 1) Compared to prevailing image datasets, e.g., ImageNet, point cloud datasets are de facto small; the mixing process provides a much larger online training sample pool. 2) The disentangling process motivates the model to mine geometric prior knowledge, e.g., key points. To verify the effectiveness of the proposed pretext task, we build a baseline network composed of one encoder and one decoder. During pre-training, we mix two original shapes and obtain a geometry-aware embedding from the encoder; an instance-adaptive decoder is then applied to recover the original shapes from the embedding. Albeit simple, the pre-trained encoder captures the key points of an unseen point cloud and surpasses the encoder trained from scratch on downstream tasks. The proposed method improves empirical performance on both the ModelNet-40 and ShapeNet-Part datasets for point cloud classification and segmentation tasks. We further conduct ablation studies to explore the effect of each component and verify the generalization of our proposed strategy by harnessing different backbones.
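
    Below is a minimal NumPy sketch of one plausible realization of the mixing step described above: two point clouds are mixed by randomly subsampling half the points from each, and the pretext task would then train an encoder-decoder to disentangle the mixture back into the originals. The function name mix_point_clouds and the sampling scheme are illustrative assumptions, not the authors' code; the instance-adaptive decoder and the training loop are omitted:

    ```python
    import numpy as np

    def mix_point_clouds(a: np.ndarray, b: np.ndarray, rng=None) -> np.ndarray:
        """Mix two (N, 3) point clouds into a single cloud of N points.

        Half of the points are drawn without replacement from each input,
        which is one simple way to realize a 'mixing' pretext: the model
        must learn to separate the mixture back into shapes a and b.
        """
        rng = np.random.default_rng() if rng is None else rng
        n = a.shape[0]
        idx_a = rng.choice(a.shape[0], n // 2, replace=False)
        idx_b = rng.choice(b.shape[0], n - n // 2, replace=False)
        return np.concatenate([a[idx_a], b[idx_b]], axis=0)

    # Usage: mix two random toy shapes of 1024 points each.
    rng = np.random.default_rng(0)
    cloud_a = rng.normal(size=(1024, 3))   # stand-ins for two real shapes
    cloud_b = rng.normal(size=(1024, 3))
    mixed = mix_point_clouds(cloud_a, cloud_b, rng)
    print(mixed.shape)  # (1024, 3)
    ```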