221 research outputs found

    Quasi-SLCA based Keyword Query Processing over Probabilistic XML Data

    The probabilistic threshold query is one of the most common queries in uncertain databases: a result satisfies the query only if its probability meets the threshold requirement. In this paper, we investigate probabilistic threshold keyword queries (PrTKQ) over XML data, which have not been studied before. We first introduce the notion of quasi-SLCA and use it to represent the results of a PrTKQ under possible-world semantics. We then design a probabilistic inverted (PI) index that quickly returns qualified answers and filters out unqualified ones based on our proposed lower/upper bounds. After that, we propose two efficient, comparable algorithms: a Baseline Algorithm and a PI index-based Algorithm. To further accelerate both algorithms, we also exploit probability density functions. An empirical study using real and synthetic data sets verifies the effectiveness and efficiency of our approaches.
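    The pruning pattern that such lower/upper bounds enable can be sketched in a few lines. The sketch below is an illustration only, not the paper's PI-index implementation: the names (Candidate, prob_lower, prob_upper) and the exact-probability placeholder are assumptions made here; the point is the accept/prune/verify decision against the threshold.

```python
# Minimal sketch of threshold filtering with lower/upper probability bounds.
# The index layout and bound computation are hypothetical stand-ins, not the
# authors' PI index.
from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    node_id: str
    prob_lower: float  # lower bound on P(node is a quasi-SLCA result)
    prob_upper: float  # upper bound on the same probability

def exact_probability(c: Candidate) -> float:
    """Placeholder for the expensive possible-world computation;
    here we just take the midpoint of the bounds for illustration."""
    return (c.prob_lower + c.prob_upper) / 2.0

def threshold_filter(cands: List[Candidate], tau: float) -> List[str]:
    """Accept candidates that provably meet the threshold tau, prune those
    that provably cannot, and compute exactly only for the undecided ones."""
    results = []
    for c in cands:
        if c.prob_lower >= tau:            # certainly qualifies: accept
            results.append(c.node_id)
        elif c.prob_upper < tau:           # certainly fails: prune
            continue
        elif exact_probability(c) >= tau:  # undecided: verify exactly
            results.append(c.node_id)
    return results

if __name__ == "__main__":
    cands = [Candidate("a", 0.8, 0.95), Candidate("b", 0.1, 0.3),
             Candidate("c", 0.4, 0.7)]
    print(threshold_filter(cands, tau=0.5))  # ['a', 'c']
```

    Only candidate "c" triggers the exact computation; "a" and "b" are decided by the bounds alone, which is where the index saves work.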

    CHIEF: Clustering with Higher-Order Motifs in Big Networks

    Clustering network vertices enables applications such as social computing and the Internet of Things, but it becomes challenging as networks grow in scale. This paper proposes CHIEF (Clustering with HIgher-ordEr motiFs), a solution comprising two motif-clustering techniques: standard acceleration (CHIEF-ST) and approximate acceleration (CHIEF-AP). Both algorithms first reduce the network scale by restricting the target network to its maximal k-edge-connected subgraphs, and then apply heterogeneous four-node motif clustering to the resulting higher-order dense networks. For CHIEF-ST, we show that all target motifs are preserved by this reduction whenever the minimum node degree of the target motif is greater than or equal to k. For CHIEF-AP, we prove that the eigenvalues of the adjacency matrix and the Laplacian matrix remain relatively stable after this step. CHIEF improves the efficiency of motif clustering on big networks and verifies the significance of higher-order motifs. Experiments on real and synthetic networks demonstrate that the proposed solutions outperform baseline approaches in large-network analysis, and that higher-order motifs outperform traditional triangle motifs in clustering.
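    As an illustration of the pipeline this abstract describes, the sketch below reduces a graph to its maximal k-edge-connected subgraphs and then spectrally clusters a four-node motif adjacency. This is not the authors' CHIEF code: the motif choice (4-cliques), the value of k, and the cluster count are assumptions made here for demonstration.

```python
# Hedged sketch of a CHIEF-like higher-order clustering pipeline:
# (1) shrink the graph to maximal k-edge-connected subgraphs,
# (2) weight edges by four-node motif participation (4-cliques here),
# (3) spectrally cluster the motif adjacency.
import itertools
import networkx as nx
import numpy as np
from sklearn.cluster import SpectralClustering

def motif_adjacency_4clique(G: nx.Graph):
    """W[i, j] = number of 4-cliques containing edge (i, j)."""
    nodes = list(G)
    idx = {v: i for i, v in enumerate(nodes)}
    W = np.zeros((len(nodes), len(nodes)))
    for u, v in G.edges():
        common = list(nx.common_neighbors(G, u, v))
        # every adjacent pair of common neighbors closes a 4-clique
        count = sum(1 for w, x in itertools.combinations(common, 2)
                    if G.has_edge(w, x))
        W[idx[u], idx[v]] = W[idx[v], idx[u]] = count
    return W, nodes

def chief_like_clustering(G: nx.Graph, k: int = 3, n_clusters: int = 2):
    labels = {}
    # Step 1: restrict attention to maximal k-edge-connected subgraphs.
    for comp in nx.k_edge_subgraphs(G, k=k):
        sub = G.subgraph(comp)
        if sub.number_of_nodes() < n_clusters:
            continue  # too small to split
        # Step 2: cluster the motif adjacency of each dense subgraph.
        W, nodes = motif_adjacency_4clique(sub)
        if W.sum() == 0:  # no four-node motif structure to exploit
            continue
        model = SpectralClustering(n_clusters=n_clusters,
                                   affinity="precomputed",
                                   assign_labels="discretize")
        for v, lab in zip(nodes, model.fit_predict(W)):
            labels[v] = int(lab)
    return labels

if __name__ == "__main__":
    G = nx.barbell_graph(6, 1)  # two dense cliques joined by a path
    print(chief_like_clustering(G, k=3, n_clusters=2))
```

    The preservation guarantee quoted above is visible here: a 4-clique has minimum degree 3, so with k = 3 the reduction to 3-edge-connected subgraphs cannot destroy any 4-clique motif.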

    Unveiling the Implicit Toxicity in Large Language Models

    The open-endedness of large language models (LLMs), combined with their impressive capabilities, may lead to new safety issues when they are exploited for malicious use. While recent studies primarily focus on probing toxic outputs that are easily detected with existing toxicity classifiers, we show that LLMs can generate diverse implicitly toxic outputs that are exceptionally difficult to detect via simple zero-shot prompting. Moreover, we propose a reinforcement learning (RL) based attacking method to further induce implicit toxicity in LLMs. Specifically, we optimize the language model with a reward that prefers implicitly toxic outputs over explicitly toxic and non-toxic ones. Experiments on five widely adopted toxicity classifiers demonstrate that the attack success rate can be significantly improved through RL fine-tuning: for instance, the RL-finetuned LLaMA-13B model achieves an attack success rate of 90.04% on BAD and 62.85% on Davinci003. Our findings suggest that LLMs pose a significant risk of generating undetectable implicitly toxic outputs. We further show that fine-tuning toxicity classifiers on annotated examples from our attacking method can effectively enhance their ability to detect LLM-generated implicit toxic language. The code is publicly available at https://github.com/thu-coai/Implicit-Toxicity. (EMNLP 2023 Main Conference)
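    The shape of the reward described in the abstract can be caricatured as follows. Both scorers below are stubs invented for this sketch (the paper RL-finetunes an LLM against real classifiers); the only point is that the reward pays off exactly when a harmful output evades the detector, which is what steers the policy toward *implicit* toxicity.

```python
# Hedged sketch of an implicit-toxicity reward: high only when the text is
# harmful AND scores low on an existing toxicity classifier. Scorers are
# placeholders, not the paper's models.

def detector_score(text: str) -> float:
    """Stub for an off-the-shelf toxicity classifier's P(toxic);
    a real setup would call a trained detector."""
    explicit_markers = ("hate", "stupid", "idiot")
    return 0.9 if any(m in text.lower() for m in explicit_markers) else 0.1

def toxic_intent_score(text: str) -> float:
    """Stub for an oracle judging whether the text is actually harmful;
    fixed here so the reward shape is easy to read."""
    return 0.8

def implicit_toxicity_reward(text: str) -> float:
    """Reward = harmfulness * probability of evading the detector."""
    return toxic_intent_score(text) * (1.0 - detector_score(text))

if __name__ == "__main__":
    print(implicit_toxicity_reward("You people are stupid."))             # 0.08: detected
    print(implicit_toxicity_reward("Some groups just belong elsewhere."))  # 0.72: evades
```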