98 research outputs found

    STAR: Steiner tree approximation in relationship-graphs

    No full text
    Large-scale graphs and networks are abundant in modern information systems: entity-relationship graphs over relational data or Web-extracted entities, biological networks, social online communities, knowledge bases, and many more. Often such data comes with expressive node and edge labels that allow an interpretation as a semantic graph, and edge weights that reflect the strengths of semantic relations between entities. Finding close relationships between a given set of two, three, or more entities is an important building block for many search, ranking, and analysis tasks. From an algorithmic point of view, this translates into computing the best Steiner trees between the given nodes, a classical NP-hard problem. In this paper, we present a new approximation algorithm, coined STAR, for relationship queries over large graphs that do not fit into memory. We prove that for n query entities, STAR yields an O(log(n))-approximation of the optimal Steiner tree, and show that in practical cases the results returned by STAR are qualitatively better than the results returned by a classical 2-approximation algorithm. We then describe an extension to our algorithm to return the top-k Steiner trees. Finally, we evaluate our algorithm over both main-memory and completely disk-resident graphs containing millions of nodes. Our experiments show that STAR outperforms the best state-of-the-art approaches and returns qualitatively better results.
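    For orientation, below is a minimal sketch of the classical 2-approximation baseline mentioned in the abstract (the distance-network heuristic: metric closure over the terminals, MST, then expansion back into shortest paths). It is not STAR itself; the toy graph, weights, and the use of networkx are illustrative assumptions.

```python
# Classical 2-approximation for Steiner trees (distance-network heuristic),
# shown only as the baseline the abstract compares against -- not STAR itself.
import itertools
import networkx as nx

def steiner_2_approx(G, terminals):
    # 1. Metric closure restricted to the terminals: complete graph whose
    #    edge weights are shortest-path distances in G.
    closure = nx.Graph()
    for u, v in itertools.combinations(terminals, 2):
        closure.add_edge(u, v, weight=nx.dijkstra_path_length(G, u, v, weight="weight"))
    # 2. Take an MST of the closure and expand every MST edge back into its
    #    shortest path in the original graph.
    tree = nx.Graph()
    for u, v in nx.minimum_spanning_tree(closure, weight="weight").edges():
        path = nx.dijkstra_path(G, u, v, weight="weight")
        for a, b in zip(path, path[1:]):
            tree.add_edge(a, b, weight=G[a][b]["weight"])
    return tree

# Toy entity-relationship graph; edge weights are relation costs.
G = nx.Graph()
G.add_weighted_edges_from([("a", "x", 1), ("x", "b", 1), ("x", "c", 2), ("a", "c", 4)])
tree = steiner_2_approx(G, ["a", "b", "c"])
print(sorted(tree.edges(data="weight")))
```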

    A neighborhood-based approach for clustering of linked document collections

    No full text
    This technical report addresses the problem of automatically structuring linked document collections by using clustering. In contrast to traditional clustering, we study the clustering problem in the light of available link structure information for the data set (e.g., hyperlinks among web documents or co-authorship among bibliographic data entries). Our approach is based on iterative relaxation of cluster assignments and can be built on top of any clustering algorithm (e.g., k-means or DBSCAN). These techniques result in higher cluster purity and better overall accuracy, and make self-organization more robust. Our comprehensive experiments on three different real-world corpora demonstrate the benefits of our approach.
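    A loose sketch of the general idea follows: start from a content-based k-means clustering and iteratively relax each document's assignment toward the labels of its linked neighbors. The voting rule, the inertia weight, and the toy data are assumptions for illustration, not the report's exact update scheme.

```python
# Illustrative sketch of link-based relaxation of cluster assignments on top of
# k-means; the voting rule and inertia weight are assumptions for the example.
from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

def relax_assignments(features, links, k, own_weight=1, iterations=5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
    for _ in range(iterations):
        new_labels = labels.copy()
        for doc, neighbors in links.items():
            if not neighbors:
                continue
            votes = Counter(labels[n] for n in neighbors)
            votes[labels[doc]] += own_weight  # inertia toward the content-based label
            new_labels[doc] = votes.most_common(1)[0][0]
        labels = new_labels
    return labels

# Toy data: 6 documents with 2-d content features and hyperlink adjacency lists.
X = np.array([[0.0, 0.1], [0.1, 0.0], [0.2, 0.1], [5.0, 5.1], [5.1, 4.9], [4.9, 5.0]])
links = {0: [1, 2], 1: [0], 2: [0, 3], 3: [4, 5], 4: [3], 5: [3]}
print(relax_assignments(X, links, k=2))
```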

    Sequencing by enumerative methods

    Get PDF

    DEUM: a framework for an estimation of distribution algorithm based on Markov random fields.

    Get PDF
    Estimation of Distribution Algorithms (EDAs) belong to the class of population-based optimisation algorithms. They are motivated by the idea of discovering and exploiting the interaction between variables in the solution. They estimate a probability distribution from a population of solutions, and sample it to generate the next population. Many EDAs use probabilistic graphical modelling techniques for this purpose. In particular, directed graphical models (Bayesian networks) have been widely used in EDAs. This thesis proposes an undirected graphical model (Markov Random Field (MRF)) approach to estimate and sample the distribution in EDAs. The interaction between variables in the solution is modelled as an undirected graph and the joint probability of a solution is factorised as a Gibbs distribution. The thesis describes a model of the fitness function that approximates the energy in the Gibbs distribution, and shows how this model can be fitted to a population of solutions to estimate the parameters of the MRF. The estimated MRF is then sampled to generate the next population. This approach is applied to estimation of distribution in a general framework of an EDA, called Distribution Estimation using Markov Random Fields (DEUM). The thesis then proposes several variants of DEUM using different sampling techniques and tests their performance on a range of optimisation problems. The results show that, for most of the tested problems, the DEUM algorithms significantly outperform other EDAs, both in terms of the number of fitness evaluations and the quality of the solutions found. There are two main explanations for the success of DEUM algorithms. Firstly, DEUM builds a model of the fitness function to approximate the MRF. This contrasts with other EDAs, which build a model of selected solutions. This allows DEUM to use fitness in the variation part of the evolution. Secondly, DEUM exploits the temperature coefficient in the Gibbs distribution to regulate the behaviour of the algorithm. In particular, with higher temperature the distribution is closer to being uniform, and with lower temperature it concentrates near some global optima. This gives DEUM an explicit control over the convergence of the algorithm, resulting in better optimisation.
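    The loop below is a much-simplified, univariate sketch of this idea on OneMax: fit a linear model of fitness to the population by least squares, turn each coefficient into a bit-sampling probability via a logistic with a temperature parameter, and resample the population. The exact model, equations, and update scheme in the thesis differ; everything here is an illustrative assumption.

```python
# Loose univariate sketch of the DEUM idea on a bit-string problem (OneMax).
import numpy as np

rng = np.random.default_rng(0)
n_bits, pop_size, temperature = 20, 60, 2.0

def fitness(pop):          # OneMax: number of ones in each solution
    return pop.sum(axis=1)

pop = rng.integers(0, 2, size=(pop_size, n_bits))
for generation in range(30):
    f = fitness(pop).astype(float)
    # Encode bits as -1/+1 and fit energy ~ -fitness by least squares.
    X = np.hstack([np.ones((pop_size, 1)), 2 * pop - 1])
    alpha, *_ = np.linalg.lstsq(X, -f, rcond=None)
    # A bit whose value 1 lowers the energy (raises fitness) gets a
    # sampling probability above 0.5; temperature flattens or sharpens this.
    p_one = 1.0 / (1.0 + np.exp(2.0 * alpha[1:] / temperature))
    pop = (rng.random((pop_size, n_bits)) < p_one).astype(int)

print("best fitness after sampling:", fitness(pop).max(), "of", n_bits)
```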

    Guided Probabilistic Topic Models for Agenda-setting and Framing

    Get PDF
    Probabilistic topic models are powerful methods to uncover hidden thematic structures in text by projecting each document into a low-dimensional space spanned by a set of topics. Given observed text data, topic models infer these hidden structures and use them for data summarization, exploratory analysis, and predictions, and they have been applied across a broad range of disciplines. Politics and political conflicts are often captured in text. Traditional approaches to analyzing text in political science and other related fields often require close reading and manual labeling, which is labor-intensive and hinders the use of large-scale collections of text. Recent work, both in computer science and political science, has used automated content analysis methods, especially topic models, to substantially reduce the cost of analyzing text at large scale. In this thesis, we follow this approach and develop a series of new probabilistic topic models, guided by additional information associated with the text, to discover and analyze agenda-setting (i.e., what topics people talk about) and framing (i.e., how people talk about those topics), a central research problem in political science, communication, public policy, and other related fields. We first focus on studying agendas and agenda-control behavior in political debates and other conversations. The model we introduce, Speaker Identity for Topic Segmentation (SITS), is able to discover what topics are talked about during the debates, when these topics change, and a speaker-specific measure of agenda control. To make the analysis process more effective, we build Argviz, an interactive visualization which leverages SITS's outputs to allow users to quickly grasp the conversational topic dynamics, discover when the topic changes and by whom, and interactively visualize the conversation's details on demand. We then analyze policy agendas in a more general setting of political text. We present the Label to Hierarchy (L2H) model to learn a hierarchy of topics from multi-labeled data, in which each document is tagged with multiple labels. The model captures the dependencies among labels using an interpretable tree-structured hierarchy, which helps provide insights into the policy issues that policymakers focus on, and how these issues relate to each other. We then go beyond agenda-setting and expand our focus to framing--the study of how agenda issues are talked about, which can be viewed as second-level agenda-setting. To capture this hierarchical view of agendas and frames, we introduce the Supervised Hierarchical Latent Dirichlet Allocation (SHLDA) model, which jointly models a collection of documents, each associated with a continuous response variable such as the ideological position of the document's author on a liberal-conservative spectrum. In the topic hierarchy discovered by SHLDA, higher-level nodes map to more general agenda issues while lower-level nodes map to issue-specific frames. Although qualitative analysis shows that the topic hierarchies learned by SHLDA indeed capture the hierarchical view of agenda-setting and framing motivating the work, interpreting the discovered hierarchy still incurs moderately high cost due to the complex and abstract nature of framing. 
Motivated by the goal of improving this hierarchy, we introduce the Hierarchical Ideal Point Topic Model (HIPTM), which jointly models a collection of votes (e.g., congressional roll call votes) and both the text associated with the voters (e.g., members of Congress) and the items (e.g., congressional bills). Customized specifically for capturing the two-level view of agendas and frames, HIPTM learns a two-level hierarchy of topics, in which first-level nodes map to interpretable policy issues and second-level nodes map to issue-specific frames. In addition, instead of using a pre-computed response variable, HIPTM also jointly estimates the ideological positions of voters on multiple interpretable dimensions.
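    As a point of reference for the guided models above (SITS, L2H, SHLDA, HIPTM), the snippet below shows what the unguided starting point looks like: plain LDA projecting each document into a small topic space. The toy corpus and parameter values are illustrative assumptions, not material from the thesis.

```python
# Baseline (unguided) topic model: project documents into a low-dimensional
# topic space with plain LDA. Toy corpus and parameters are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the committee debated the healthcare bill and insurance coverage",
    "senators argued over the defense budget and military spending",
    "the health bill expands coverage for hospitals and patients",
    "defense appropriations fund new military equipment",
]
counts = CountVectorizer(stop_words="english").fit(docs)
X = counts.transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

vocab = counts.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_words = [vocab[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}: {top_words}")
print(lda.transform(X))   # per-document topic proportions (the low-dim projection)
```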

    Multi-paradigm frameworks for scalable intrusion detection

    Get PDF
    Research in network security and intrusion detection systems (IDSs) has typically focused on small or artificial data sets. Tools are developed that work well on these data sets but have trouble meeting the demands of real-world, large-scale network environments. In addressing this problem, improvements must be made to the foundations of intrusion detection systems, including data management, IDS accuracy, and alert volume. We address data management of network security and intrusion detection information by presenting a database mediator system that provides single-query access via a domain-specific query language. Results are returned in the form of XML using web services, allowing analysts to access information from remote networks in a uniform manner. The system also provides scalable data capture of log data for multi-terabyte datasets. Next, we address IDS alert accuracy by building an agent-based framework that utilizes web services to make the system easy to deploy and capable of spanning network boundaries. Agents in the framework process IDS alerts managed by a central alert broker. The broker can define processing hierarchies by assigning dependencies on agents to achieve scalability. The framework can also be used for the task of event correlation, or gathering information relevant to an IDS alert. Lastly, we address alert volume by presenting an approach to alert correlation that is IDS independent. Using correlated events gathered in our agent framework, we build a feature vector for each IDS alert representing the network traffic profile of the internal host at the time of the alert. This feature vector is used as a statistical fingerprint in a clustering algorithm that groups related alerts. We analyze our results with a combination of domain expert evaluation and feature selection.
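    The fragment below sketches only the last step (per-alert traffic-profile vectors clustered so related alerts group together). The feature names, toy values, and the choice of DBSCAN are assumptions for illustration, not the dissertation's exact design.

```python
# Illustrative sketch of the alert-correlation step: one feature vector per IDS
# alert summarizing the internal host's traffic profile at alert time, then
# clustered so related alerts group together (-1 marks outlier alerts).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

# Columns: bytes_out, bytes_in, distinct_peers, distinct_ports (toy values).
alert_features = np.array([
    [1.2e6, 3.0e4, 45, 3],    # heavy-outbound profile
    [1.1e6, 2.8e4, 50, 2],
    [2.0e3, 5.0e3, 2, 1],     # quiet host, likely a false positive
    [1.8e3, 4.5e3, 3, 1],
    [9.0e5, 1.0e4, 200, 90],  # scanning-style profile
])
X = StandardScaler().fit_transform(alert_features)
labels = DBSCAN(eps=0.9, min_samples=2).fit_predict(X)
print("alert cluster labels:", labels)
```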

    LIPIcs, Volume 258, SoCG 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 258, SoCG 2023, Complete Volume

    Doctor of Philosophy

    Get PDF
    Kernel smoothing provides a simple way of finding structures in data sets without the imposition of a parametric model, for example, nonparametric regression and density estimates. However, in many data-intensive applications, the data set can be large, so evaluating a kernel density estimate or kernel regression directly over the data set can be prohibitively expensive. This dissertation addresses how to efficiently find a smaller data set that approximates the original data set with a theoretical guarantee in the kernel smoothing setting, and how to extend this to more general smooth range spaces. For kernel density estimates, we propose randomized and deterministic algorithms with quality guarantees that are orders of magnitude more efficient than previous algorithms, which do not require knowledge of the kernel or its bandwidth parameter and are easily parallelizable. Our algorithms are applicable to any large-scale data processing framework. We then further investigate how to measure the error between two kernel density estimates, which is usually measured in either L1 or L2 error. In this dissertation, we investigate the challenges in using a stronger error, the L∞ (or worst-case) error. We present efficient solutions for how to estimate the L∞ error and how to choose the bandwidth parameter for a kernel density estimate built on a subsample of a large data set. We next extend smoothed versions of geometric range spaces from kernel range spaces to more general types of ranges, so that an element of the ground set can be contained in a range with a non-binary value in [0,1]. We investigate the approximation of these range spaces through ϵ-nets and ϵ-samples. Finally, we study coreset algorithms for kernel regression. The sizes of the coresets are independent of the size of the data set; they depend only on the error guarantee and, in some cases, on the size of the domain and the amount of smoothing. We evaluate our methods on very large time series and spatial data, demonstrate that they can be constructed extremely efficiently, and show that they allow for great computational gains.
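    The snippet below illustrates the quantity being studied: the worst-case (L∞) gap between a KDE built on the full data and a KDE built on a small subsample, here using uniform random sampling as the simple baseline rather than the dissertation's improved constructions. The data, bandwidth, and grid are illustrative assumptions.

```python
# Minimal sketch: L-infinity error between a full-data KDE and a KDE built on a
# small random coreset. Random sampling is only the naive baseline; the
# dissertation's algorithms build far better (and smaller) coresets.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 0.5, 20_000), rng.normal(3, 1.0, 20_000)])
coreset = rng.choice(data, size=500, replace=False)

full_kde = gaussian_kde(data, bw_method=0.1)
core_kde = gaussian_kde(coreset, bw_method=0.1)

# Evaluate both estimates on a dense grid and take the maximum pointwise gap.
grid = np.linspace(data.min(), data.max(), 2_000)
linf_error = np.max(np.abs(full_kde(grid) - core_kde(grid)))
print(f"L_inf error of the 500-point random coreset: {linf_error:.4f}")
```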