
    On Efficiently Detecting Overlapping Communities over Distributed Dynamic Graphs

    Modern networks are both huge in size and highly dynamic, which challenges the efficiency of community detection algorithms. In this paper, we study the problem of overlapping community detection on distributed and dynamic graphs. Given a distributed, undirected and unweighted graph, the goal is to detect overlapping communities incrementally as the graph changes dynamically. We propose an efficient algorithm, called \textit{randomized Speaker-Listener Label Propagation Algorithm} (rSLPA), based on the \textit{Speaker-Listener Label Propagation Algorithm} (SLPA) by relaxing the probability distribution of label propagation. Besides detecting high-quality communities, rSLPA can incrementally update the detected communities after a batch of edge insertion and deletion operations. To the best of our knowledge, rSLPA is the first algorithm that can incrementally capture the same communities as those obtained by applying the detection algorithm from scratch on the updated graph. Extensive experiments are conducted on both synthetic and real-world datasets, and the results show that our algorithm achieves high accuracy and efficiency at the same time. Comment: A short version of this paper will be published as an ICDE'2018 poster.
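    For readers unfamiliar with the base algorithm, the following is a minimal sketch of plain SLPA-style speaker-listener label propagation on an undirected graph. It does not include the randomized relaxation or the incremental batch-update machinery that distinguish rSLPA; the toy graph, parameter values, and function name are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of base SLPA (speaker-listener label propagation), for illustration only.
import random
from collections import Counter

def slpa(adj, iterations=20, threshold=0.1, seed=0):
    """adj: dict mapping node -> set of neighbour nodes (undirected graph)."""
    rng = random.Random(seed)
    # Each node starts with its own id in its label memory.
    memory = {v: [v] for v in adj}
    nodes = list(adj)
    for _ in range(iterations):
        rng.shuffle(nodes)                      # process listeners in random order
        for listener in nodes:
            if not adj[listener]:
                continue
            heard = []
            for speaker in adj[listener]:
                # Each speaker sends a label chosen with probability proportional
                # to its frequency in the speaker's memory (uniform pick over the list).
                heard.append(rng.choice(memory[speaker]))
            # The listener keeps the most frequently heard label.
            memory[listener].append(Counter(heard).most_common(1)[0][0])
    # Post-processing: labels seen often enough define (possibly overlapping) memberships.
    communities = {}
    for v, mem in memory.items():
        total = len(mem)
        for label, count in Counter(mem).items():
            if count / total >= threshold:
                communities.setdefault(label, set()).add(v)
    return list(communities.values())

if __name__ == "__main__":
    # Two loosely connected triangles; node 2 bridges them and may belong to both.
    adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
    print(slpa(adj))
```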

    Potentiation of the Nasopharyngeal Carcinoma Cell Lines by Maritoclax to ABT-263 in 2-Dimensional and 3-Dimensional Cell Culture Methods

    Malaysia has one of the highest incidences of nasopharyngeal carcinoma (NPC), commonly known as cancer at the base of the nose, in the world. The cancer is remarkably curable at early stages, but treatment options become limited when patients develop a recurrence or are diagnosed late, leaving them with very little hope to combat the cancer.

    Learning-Based Data Storage [Vision] (Technical Report)

    Deep neural networks (DNNs) and their variants have been extensively used for a wide spectrum of real applications such as image classification, face/speech recognition, fraud detection, and so on. In addition to many important machine learning tasks, as artificial neural networks emulating the way brain cells function, DNNs also show the capability of storing non-linear relationships between input and output data, which exhibits the potential of storing data via DNNs. We envision a new paradigm of data storage, "DNN-as-a-Database", where data are encoded in well-trained machine learning models. Compared with conventional data storage that directly records data in raw formats, learning-based structures (e.g., DNNs) can implicitly encode pairs of input and output data and compute/materialize the actual output data at different resolutions only when input data are provided. This new paradigm can greatly enhance data security by allowing flexible data privacy settings at different levels, achieve low space consumption and fast computation with the acceleration of new hardware (e.g., Diffractive Neural Networks and AI chips), and can be generalized to distributed DNN-based storage/computing. In this paper, we propose this novel concept of learning-based data storage, which utilizes a learning structure called the learning-based memory unit (LMU) to store, organize, and retrieve data. As a case study, we use DNNs as the engine in the LMU and study the data capacity and accuracy of DNN-based data storage. Our preliminary experimental results show the feasibility of learning-based data storage by achieving 100% accuracy for the DNN storage. We explore and design effective solutions that utilize the DNN-based data storage to manage and query relational tables, and we discuss how to generalize our solutions to other data types (e.g., graphs) and environments such as distributed DNN storage/computing. Comment: 14 pages, 16 figures.
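    As a rough illustration of the "store data in a trained network, retrieve it by querying" idea, the sketch below trains a small MLP to memorize a key-to-value table and then reads the values back from the network. The architecture, sizes, and training settings are assumptions chosen for the toy example and do not reflect the paper's LMU design or experimental setup.

```python
# Toy sketch: memorize a key -> value table in a small neural network (PyTorch),
# then "read" values by querying the trained model instead of a stored array.
import torch
import torch.nn as nn

torch.manual_seed(0)

num_keys = 16
values = torch.randint(0, 100, (num_keys,)).float()   # the "table" to be stored
keys = torch.eye(num_keys)                             # one-hot encoding of keys
targets = values / 100.0                               # scale targets to ease optimization

model = nn.Sequential(nn.Linear(num_keys, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# "Write" phase: fit the network to the key/value pairs.
for _ in range(3000):
    optimizer.zero_grad()
    loss = loss_fn(model(keys).squeeze(1), targets)
    loss.backward()
    optimizer.step()

# "Read" phase: query the network to recover the stored values.
with torch.no_grad():
    recovered = (model(keys).squeeze(1) * 100.0).round()
print("exact recalls:", int((recovered == values).sum().item()), "of", num_keys)
```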

    Evaluation of the Impacts of Data Model and Query Language on Query Performance

    It is important to understand how users can utilize database systems more effectively to enhance performance. A major research interest is to evaluate and compare user performance across different data models and query languages. So far, experiments have tested combinations of a data model plus a query language. An interesting theoretical and practical question is: how much of the performance difference is caused by the data model itself, and how much by the additional query language syntax? A cognitive model of query processing suggests measurement at two stages: the data model has an impact at the first stage, and the data model together with the query language syntax has an impact at the second stage. An experiment that compares the object-oriented and relational data models and query languages at these two stages provides fresh results.