168 research outputs found

    Balancing Summarization and Change Detection in Graph Streams

    This study addresses the problem of balancing graph summarization against graph change detection. Graph summarization compresses a large-scale graph into a smaller one, but the question remains: to what extent should the original graph be compressed? We approach this problem from the perspective of graph change detection, aiming to detect statistically significant changes from a stream of summary graphs. If the compression rate is too high, important changes may be missed, whereas if it is too low, false alarms may increase along with memory consumption. There is thus a trade-off between the compression rate of graph summarization and the accuracy of change detection. We propose a novel quantitative methodology for balancing this trade-off so as to realize reliable graph summarization and change detection simultaneously. We introduce a hierarchical latent variable structure into the graph, thereby designing a parameterized summary graph on the basis of the minimum description length (MDL) principle. The parameter specifying the summary graph is then optimized so that the Type I error probability (the probability of raising a false alarm) of change detection is guaranteed to stay below a given confidence level. We first provide a theoretical framework connecting graph summarization with change detection, and then empirically demonstrate its effectiveness on synthetic and real datasets.

    Comment: 6 pages. Accepted to the 23rd IEEE International Conference on Data Mining (ICDM 2023).
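The calibration idea in the abstract, choosing a detection threshold so that the Type I error stays below a given confidence level, can be sketched in plaintext terms. This is a minimal sketch with hypothetical names; the paper's actual change score is MDL-based, whereas here we simply take the empirical quantile of scores observed under the no-change hypothesis:

```python
import random

def calibrate_threshold(null_scores, delta):
    """Empirical (1 - delta) quantile of change scores observed under the
    no-change hypothesis, so the false-alarm (Type I error) rate is kept
    at roughly delta or below."""
    s = sorted(null_scores)
    idx = min(len(s) - 1, int((1.0 - delta) * len(s)))
    return s[idx]

def detect_changes(score_stream, threshold):
    """Flag the time steps whose change score exceeds the threshold."""
    return [t for t, sc in enumerate(score_stream) if sc > threshold]

random.seed(0)
null_scores = [random.random() for _ in range(1000)]  # stand-in null scores
th = calibrate_threshold(null_scores, delta=0.05)
alarms = detect_changes([0.1, 2.0, 0.3], th)          # only the spike fires
```

The compression-rate trade-off enters through the null scores: a coarser summary graph changes the score distribution under "no change", which moves the calibrated threshold.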

    How do socio-ecological factors shape culture? Understanding the process of micro-macro interactions

    Socio-ecological environments produce certain psychological functions that are adaptive for survival in each environment. Past evidence suggests that interdependence-related psychological features are prevalent in East Asian cultures, partly due to the history of 'rice-crop farming' (versus herding) in those areas. However, it is unclear how and why the functional behaviors required by a socio-ecological environment are sublimated into cultural values, which are then transmitted and shared among people. In this paper, we conceptually review work examining various macro-level sharing processes for cultural values and focus on the use of multilevel analysis in elucidating the effects of both macro- and individual-level factors. The study by Uchida et al. (2019) suggests that collective activities at the macro (community) level, which are required by a certain socio-ecological environment, promote interdependence not only among farmers but also among non-farmers. We discuss the multilevel processes by which psychological characteristics are construed by macro-level factors.

    Time-Memory Trade-off Algorithms for Homomorphically Evaluating Look-up Table in TFHE

    We propose time-memory trade-off algorithms for evaluating a look-up table (LUT) in both the leveled homomorphic encryption (LHE) and fully homomorphic encryption (FHE) modes of TFHE. For an arbitrary n-bit Boolean function, we reduce the evaluation time by a factor of O(n) at the expense of additional memory of only O(2^n): the total asymptotic memory remains O(2^n), the same as in prior works. Our empirical results demonstrate a 7.8× speedup in runtime at a 3.8× increase in memory usage for 16-bit Boolean functions in the LHE mode. In the FHE mode, we achieve reductions in both runtime and memory usage, by factors of 17.9× and 2.5× respectively, for 8-bit Boolean functions. The core idea is to decompose the function f into sufficiently small subfunctions and reuse precomputed results for these subfunctions, achieving significant performance improvements at the cost of additional memory.
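The decompose-and-precompute idea can be illustrated in plaintext (a cost-model sketch only; the actual algorithms operate on TFHE ciphertexts via CMux trees): folding an n-bit LUT bit by bit costs 2^n - 1 selections, while precomputing sub-tables for the low k bits leaves only 2^(n-k) - 1 selections per query, with total memory still O(2^n):

```python
def lut_fold_eval(table, x_bits):
    """Evaluate a LUT by successive selections on the input bits
    (LSB first), mimicking a CMux tree; returns (value, n_selections)."""
    t, selects = list(table), 0
    for b in x_bits:
        t = [t[2 * i + b] for i in range(len(t) // 2)]  # keep entries with this bit
        selects += len(t)
    return t[0], selects

def precompute_low_bits(table, k):
    """For every setting of the k low input bits, store the sub-table over
    the remaining bits; extra memory stays O(2^n) overall."""
    return {low: [table[(i << k) | low] for i in range(len(table) >> k)]
            for low in range(1 << k)}

table = [(3 * i + 1) % 16 for i in range(16)]    # a toy 4-bit LUT
x, k = 11, 2
bits = [(x >> j) & 1 for j in range(4)]          # LSB-first bits of x
full_val, full_cost = lut_fold_eval(table, bits)            # 2^4 - 1 = 15 selections
sub = precompute_low_bits(table, k)
fast_val, fast_cost = lut_fold_eval(sub[x & 3], bits[k:])   # 2^2 - 1 = 3 selections
```

Both paths return the same LUT entry; only the query-time selection count differs.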

    GPU Acceleration of High-Precision Homomorphic Computation Utilizing Redundant Representation

    Fully homomorphic encryption (FHE) enables computation on encrypted data, allowing sensitive data to be analyzed without compromising its security. The main issue with FHE is its low performance, especially for high-precision computations, compared with computation on plaintext data. Making FHE viable for practical use requires both algorithmic improvements and hardware acceleration. Recently, Klemsa and Önen (CODASPY '22) presented fast homomorphic algorithms for high-precision integers, including addition, multiplication, and some fundamental functions, by utilizing a technique called redundant representation. Their algorithms were built on TFHE, proposed by Chillotti et al. (Asiacrypt '16). In this paper, we further accelerate this method by extending their algorithms to multithreaded environments. Experimental results show that our approach performs 128-bit addition in 0.41 seconds, 32-bit multiplication in 4.3 seconds, and 128-bit Max and ReLU functions in 1.4 seconds on a Tesla V100S server.
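The redundant-representation trick behind this line of work can be sketched in plaintext (parameters hypothetical; the real scheme operates on TFHE ciphertexts): digits are allowed to exceed base - 1, so addition is purely digit-wise with no carry chain, and independent digits can be handled by separate threads; carries are propagated only when the number is normalized:

```python
def redundant_add(a, b):
    """Digit-wise addition with no carry propagation: each output digit
    may temporarily exceed base - 1.  The per-digit operations are
    independent, which is what makes multithreading straightforward."""
    return [x + y for x, y in zip(a, b)]

def normalize(digits, base=16):
    """Propagate carries to recover canonical base-`base` digits (LSB first)."""
    out, carry = [], 0
    for d in digits:
        total = d + carry
        out.append(total % base)
        carry = total // base
    while carry:
        out.append(carry % base)
        carry //= base
    return out

def to_int(digits, base=16):
    return sum(d * base ** i for i, d in enumerate(digits))

a, b = [5, 15, 9], [14, 15, 2]     # two 3-digit base-16 numbers, LSB first
lazy = redundant_add(a, b)         # digits may exceed 15 here
```

Normalization is the only carry-sequential step, so batching many lazy additions between normalizations is where the speedup comes from.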

    Efficient Homomorphic Evaluation of Arbitrary Uni/Bivariate Integer Functions and Their Applications

    We propose methods for homomorphically evaluating arbitrary univariate and bivariate integer functions, such as division. A prior work by Okada et al. (WISTP '18) uses polynomial evaluation so that the scheme remains compatible with the SIMD operations in BFV and BGV, and is implemented with input domain Z_257. However, their scheme requires a number of plaintext-ciphertext multiplications and ciphertext-ciphertext additions that is quadratic in the input domain size; although these operations are more lightweight than ciphertext-ciphertext multiplication, the quadratic complexity makes handling larger inputs quite inefficient. In this work, we first improve the prior scheme, and we also propose a new approach that exploits the packing method to handle a larger input domain instead of enabling SIMD operation, making it possible to work with domains such as Z_{2^15} reasonably efficiently. In addition, we show how to slightly extend the input domain to Z_{2^16} with a relatively moderate overhead. We further show another approach to handling larger input domains: using two ciphertexts to encrypt one integer plaintext and applying our techniques for uni/bivariate function evaluation. We implement the prior scheme of Okada et al., our improvement of it, and our new scheme in PALISADE with input domain Z_{2^15}, and confirm that the estimated run-times of the prior scheme and our improved version are still about 117 days and 59 days respectively, while our new scheme runs in 307 seconds.
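The polynomial-evaluation approach of the prior work can be illustrated in plaintext over Z_257 (a sketch only; homomorphically, the powers of (x - a) become ciphertext operations): by Fermat's little theorem, any function on Z_p agrees with a polynomial of degree below p, and evaluating it touches all p table entries, which is why the cost grows quickly with the domain size:

```python
P = 257  # the input domain Z_257 used by the prior work

def poly_eval_lut(f, x, p=P):
    """Evaluate f(x) through the interpolation identity
        f(x) = sum_{a in Z_p} f(a) * (1 - (x - a)^(p-1))  (mod p),
    which holds because (x - a)^(p-1) is 1 mod p when x != a and 0 when
    x == a.  Every one of the p terms contributes a scalar multiplication
    and an addition, so the work scales with the domain size."""
    return sum(f(a) * (1 - pow(x - a, p - 1, p)) for a in range(p)) % p

q = poly_eval_lut(lambda a: a // 3, 100)   # integer division, as in the abstract
```

Division here is taken on the integer representatives 0..256; any other table-defined function works the same way.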

    MegaCRN: Meta-Graph Convolutional Recurrent Network for Spatio-Temporal Modeling

    Spatio-temporal modeling, as a canonical task of multivariate time series forecasting, has been a significant research topic in the AI community. To address the heterogeneity and non-stationarity implied in graph streams, we propose Spatio-Temporal Meta-Graph Learning as a novel graph structure learning mechanism for spatio-temporal data. Specifically, we implement this idea in the Meta-Graph Convolutional Recurrent Network (MegaCRN) by plugging a Meta-Graph Learner, powered by a Meta-Node Bank, into a GCRN encoder-decoder. We conduct a comprehensive evaluation on two benchmark datasets (METR-LA and PEMS-BAY) and a large-scale spatio-temporal dataset that contains a variety of non-stationary phenomena. Our model outperforms the state of the art by a large margin on all three datasets (improvements of over 27% in MAE and 34% in RMSE). Moreover, through a series of qualitative evaluations, we demonstrate that our model can explicitly disentangle locations and time slots with different patterns and adapt robustly to different anomalous situations. Code and datasets are available at https://github.com/deepkashiwa20/MegaCRN.

    Comment: Preprint submitted to Artificial Intelligence. arXiv admin note: substantial text overlap with arXiv:2211.1470
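A minimal sketch of what a meta-node-bank query might look like (all names, shapes, and the similarity construction are assumptions for illustration, not the paper's implementation): node features attend over a bank of learnable meta-node prototypes, and the retrieved embeddings induce a data-adaptive adjacency matrix:

```python
import math

def softmax(row):
    m = max(row)
    e = [math.exp(v - m) for v in row]
    s = sum(e)
    return [v / s for v in e]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(r, c)) for c in zip(*B)] for r in A]

def meta_graph(node_feats, bank):
    """Hypothetical meta-node-bank query: each node attends over the bank
    of meta-node prototypes, the retrieved embeddings replace the raw
    features, and their pairwise similarities define a data-adaptive
    adjacency matrix squashed into (0, 1)."""
    bank_t = [list(c) for c in zip(*bank)]
    logits = matmul(node_feats, bank_t)              # (N, M) attention logits
    weights = [softmax(r) for r in logits]           # softmax over bank slots
    emb = matmul(weights, bank)                      # (N, d) retrieved embeddings
    emb_t = [list(c) for c in zip(*emb)]
    sims = matmul(emb, emb_t)                        # (N, N) similarities
    return [[1.0 / (1.0 + math.exp(-v)) for v in r] for r in sims]

nodes = [[0.1 * (i + j) for j in range(4)] for i in range(3)]  # 3 nodes, dim 4
bank = [[0.5, -0.2, 0.1, 0.3], [0.0, 0.4, -0.1, 0.2]]          # 2 meta-nodes
adj = meta_graph(nodes, bank)
```

Because the adjacency is recomputed from the current features at each step, the learned graph can track the non-stationary patterns the abstract describes.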

    Frequency-dependent bifurcation point between field-cooled and zero-field-cooled dielectric constant of LiTaO3 nanoparticles embedded in amorphous SiO2

    Splitting between the field-cooled (FC) and zero-field-cooled (ZFC) dielectric constants was observed for a diluted system of LiTaO3 nanoparticles (diameter 30 Å) embedded in amorphous SiO2. At an applied field frequency of 100 kHz, the real part of the FC dielectric constant diverged from that of the ZFC one at 380 °C. The bifurcation point of the history-dependent dielectric constant rose from 310 to 540 °C as the field frequency increased from 10 to 1000 kHz. Bulk LiTaO3 powders showed no splitting of the history-dependent dielectric constant and exhibited a maximum in the real part of the dielectric constant at 645 °C regardless of frequency. Both the splitting of the history-dependent dielectric constant and the frequency dependence of the bifurcation point suggest that the single-domain LiTaO3 nanoparticles were in a superparaelectric state as a consequence of insignificant cooperative interactions among the nanoparticles in the diluted system. The energy barrier of 0.9 eV separating the two (+p and –p) polarization states corroborates the potential of LiTaO3 nanoparticles for ultrahigh-density recording media applications.
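The reported frequency dependence is roughly what a simple Néel-Arrhenius picture of superparaelectric blocking would predict (a sketch under that assumption; the abstract does not state the model it uses): the FC/ZFC bifurcation occurs where the relaxation rate f0·exp(-E/(kB·T)) matches the probe frequency, so anchoring the attempt frequency at the 10 kHz point with the 0.9 eV barrier places the 1 MHz bifurcation near the observed 540 °C:

```python
import math

K_B = 8.617e-5     # Boltzmann constant in eV/K
E_BARRIER = 0.9    # polarization-reversal barrier from the abstract, eV

def attempt_frequency(f_probe, t_bifurcation_c, e=E_BARRIER):
    """Solve f = f0 * exp(-E / (k_B T)) for the attempt frequency f0,
    anchored at one observed bifurcation point."""
    t = t_bifurcation_c + 273.15
    return f_probe * math.exp(e / (K_B * t))

def blocking_temperature_c(f_probe, f0, e=E_BARRIER):
    """Temperature (deg C) at which the relaxation rate matches the probe
    frequency, i.e. where the FC and ZFC responses should bifurcate."""
    return e / (K_B * math.log(f0 / f_probe)) - 273.15

f0 = attempt_frequency(1e4, 310.0)        # anchor at the 10 kHz / 310 C point
t_pred = blocking_temperature_c(1e6, f0)  # predicted 1 MHz bifurcation
```

The fitted attempt frequency comes out near typical phonon frequencies, and the predicted 1 MHz bifurcation lands within a few tens of degrees of the measured 540 °C.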

    Solving McEliece-1409 in One Day --- Cryptanalysis with the Improved BJMM Algorithm

    The syndrome decoding problem (SDP) is the security assumption underlying code-based cryptography; three of the four NIST PQC round-4 candidates are code-based. Information set decoding (ISD) is the fastest known class of algorithms for solving SDP instances with relatively high code rate. The security of code-based cryptography is usually estimated from the asymptotic complexity of ISD algorithms, whereas their concrete complexity has hardly been studied. Recently, Esser, May and Zweydinger (Eurocrypt '22) provided the first implementation of representation-based ISD, such as the May–Meurer–Thomae (MMT) and Becker–Joux–May–Meurer (BJMM) algorithms, and solved the McEliece-1284 instance of the decoding challenge, revealing the practical efficiency of these ISDs. In this work, we propose a practically fast depth-2 BJMM algorithm and provide the first publicly available GPU implementation. We solve the McEliece-1409 instance for the first time and present a concrete analysis of the record. We also conduct cryptanalysis of the NIST PQC round-4 code-based candidates against the improved BJMM algorithm. In addition, we revise the asymptotic space complexity of the time-memory trade-off MMT algorithm presented by Esser and Zweydinger (Eurocrypt '23) from 2^{0.375n} to 2^{0.376n}.
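For contrast with the advanced BJMM machinery, the basic ISD idea can be sketched in plaintext (a toy Prange-style loop over GF(2), far simpler than the paper's algorithm): repeatedly guess r columns, row-reduce the restricted matrix to the identity, and accept when the induced error vector is light enough:

```python
import random

def prange_isd(H, s, t, trials=500, seed=1):
    """Toy Prange-style ISD over GF(2).  H is an r x n parity-check matrix
    (lists of 0/1 rows), s the syndrome, t the target error weight.  Each
    trial guesses r columns, row-reduces them to the identity, and reads
    off a candidate error supported only on those columns."""
    rng = random.Random(seed)
    r, n = len(H), len(H[0])
    for _ in range(trials):
        cols = rng.sample(range(n), r)
        A = [row[:] + [bit] for row, bit in zip(H, s)]  # augment with syndrome
        ok = True
        for i, c in enumerate(cols):
            piv = next((j for j in range(i, r) if A[j][c]), None)
            if piv is None:                             # singular guess; retry
                ok = False
                break
            A[i], A[piv] = A[piv], A[i]
            for j in range(r):
                if j != i and A[j][c]:
                    A[j] = [x ^ y for x, y in zip(A[j], A[i])]
        if not ok:
            continue
        e = [0] * n
        for i, c in enumerate(cols):
            e[c] = A[i][n]                              # transformed syndrome bits
        if sum(e) <= t:
            return e
    return None

# [7,4] Hamming code: a weight-1 error is recovered from its syndrome.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
e = prange_isd(H, [1, 1, 0], t=1)
```

Representation-based algorithms like MMT and BJMM improve on this loop by splitting the error across overlapping halves and meeting in the middle, which is where their time-memory trade-offs arise.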