
    Partitioning A Graph In Alliances And Its Application To Data Clustering

    Any reasonably large group of individuals, families, states, or parties exhibits the formation of subgroups whose members have a strong connection or bonding with each other. The reasons for the formation of these subgroups, which we call alliances, differ from situation to situation: kinship and friendship (in the case of individuals), common economic interests (for both individuals and states), common political interests, and geographical proximity. This structure of alliances is not only prevalent in social networks, but is also an important characteristic of similarity networks of natural and unnatural objects. (A similarity network links two objects based on their similarity.) Discovery of such structure in a data set is called clustering or unsupervised learning, and the ability to do it automatically is desirable for many applications in pattern recognition, computer vision, artificial intelligence, the behavioral and social sciences, the life sciences, the earth sciences, medicine, and information theory. In this dissertation, we study a graph-theoretical model of alliances, where an alliance is a set of vertices in a graph such that every vertex in the set is adjacent to at least as many vertices inside the set as outside it. We study the problem of partitioning a graph into alliances and identify classes of graphs that have such a partition. We present results on the relationship between the existence of such a partition and other well-known graph parameters, such as connectivity, subgraph structure, and vertex degrees. We also present results on the computational complexity of finding such a partition. An alliance cover set is a set of vertices that contains at least one vertex from every alliance of the graph.
The complement of an alliance cover set is an alliance-free set, that is, a set that does not contain any alliance as a subset. We study the properties of these sets and present tight bounds on their cardinalities. In addition, we characterize the graphs that can be partitioned into alliance-free and alliance cover sets. Finally, we present an approximation algorithm to discover alliances in a given graph. At each step, the algorithm finds a partition of the vertices into two alliances that are strongest among all such partitions. The strength of an alliance is a real number p such that every vertex in the alliance has at least a fraction p of its neighbors inside the alliance. We evaluate the performance of the proposed algorithm on standard data sets.
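The alliance condition and the strength measure described in this abstract can be sketched in a few lines of Python. This is an illustrative sketch, not the dissertation's code; `adj` maps each vertex to its set of neighbours, and the function names are ours:

```python
def is_defensive_alliance(adj, S):
    """True if every vertex in S has at least as many neighbours inside S as outside."""
    S = set(S)
    for v in S:
        inside = sum(1 for u in adj[v] if u in S)
        if inside < len(adj[v]) - inside:  # fewer defenders than attackers
            return False
    return True

def alliance_strength(adj, S):
    """Strength p: the minimum over v in S of (neighbours of v inside S) / deg(v)."""
    S = set(S)
    return min(sum(1 for u in adj[v] if u in S) / len(adj[v]) for v in S)
```

For a triangle {0, 1, 2} with a pendant vertex 3 attached to 0, the triangle is an alliance of strength 2/3, while {0, 3} is not an alliance (vertex 0 has one neighbour inside and two outside).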

    Global defensive k-alliances in directed graphs: combinatorial and computational issues

    In this paper we define the global defensive k-alliance (number) in a digraph D and give several bounds on this parameter, with characterizations of all digraphs attaining the bounds. In particular, for the case k = -1, we give a lower (an upper) bound on this parameter for directed trees (rooted trees). Moreover, we characterize all directed trees (rooted trees) for which equality holds. Finally, we show that the problem of finding the global defensive k-alliance number of a digraph is NP-hard for any suitable non-negative value of k; in contrast, we also show that finding a minimum global defensive (-1)-alliance in any rooted tree is polynomial-time solvable.
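A minimal sketch of the two conditions a global defensive k-alliance must satisfy may help; note that conventions for digraph alliances vary across papers, and the use of in-neighbours for both domination and defence below is our assumption, not necessarily the paper's exact definition:

```python
def is_global_defensive_k_alliance(n, arcs, S, k):
    """Sketch: S is a global defensive k-alliance in the digraph on vertices
    0..n-1 with the given arcs (u, v meaning u -> v) if
      1. every vertex outside S has an in-neighbour in S (domination), and
      2. every v in S has at least k more in-neighbours inside S than outside.
    """
    S = set(S)
    in_nb = {v: set() for v in range(n)}
    for u, v in arcs:
        in_nb[v].add(u)
    for v in range(n):                      # domination condition
        if v not in S and not (in_nb[v] & S):
            return False
    for v in S:                             # defence condition
        inside = len(in_nb[v] & S)
        if inside < (len(in_nb[v]) - inside) + k:
            return False
    return True
```

On the digraph 0 <-> 1, 0 -> 2, the set {0, 1} is a global defensive (-1)-alliance but not a 2-alliance.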

    Proceedings of the 8th Cologne-Twente Workshop on Graphs and Combinatorial Optimization

    The Cologne-Twente Workshop (CTW) on Graphs and Combinatorial Optimization started off as a series of workshops organized biannually by either Köln University or Twente University. As its importance grew over time, it re-centered its geographical focus by including northern Italy (CTW04 in Menaggio, on Lake Como, and CTW08 in Gargnano, on Lake Garda). This year, CTW (in its eighth edition) will be staged in France for the first time: more precisely in the heart of Paris, at the Conservatoire National d'Arts et Métiers (CNAM), between 2 and 4 June 2009, by a mixed organizing committee with members from LIX, École Polytechnique, and CEDRIC, CNAM.

    Using The Software Adapter To Connect Legacy Simulation Models To The Rti

    The establishment of a network of persistent shared simulations depends on the presence of a robust standard for communicating state information between those simulations. The High Level Architecture (HLA) can serve as the basis for such a standard. While the HLA is an architecture, not software, Run Time Infrastructure (RTI) software is required to support the operation of a federation execution. Integrating the RTI with existing simulation models is complex and requires considerable expertise. This thesis implements a simpler yet effective interaction between a legacy simulation model and the RTI using a middleware tool known as the Distributed Manufacturing Simulation (DMS) adapter. The Shuttle Model, an Arena-based discrete-event simulation model of shuttle operations, is connected to the RTI using the DMS adapter. The adapter provides a set of functions that are incorporated within the Shuttle Model, in a procedural manner, in order to connect to the RTI. This thesis presents the procedure by which the Shuttle Model connects to the RTI and communicates with the Scrub Model for approval of its shuttle's launch.

    Analyzing The Community Structure Of Web-like Networks: Models And Algorithms

    This dissertation investigates the community structure of web-like networks (i.e., large, random, real-life networks such as the World Wide Web and the Internet). Recently, it has been shown that many such networks have a locally dense and globally sparse structure, with certain small, dense subgraphs occurring much more frequently than they do in classical Erdős–Rényi random graphs. This peculiarity--commonly referred to as community structure--has been observed in seemingly unrelated networks such as the Web, email networks, citation networks, and biological networks. The pervasiveness of this phenomenon has led many researchers to believe that such cohesive groups of nodes might represent meaningful entities. For example, in the Web such tightly-knit groups of nodes might represent pages with a common topic or geographical location, while in neural networks they might represent evolved computational units. The notion of community has emerged in an effort to formalize the empirical observation of the locally dense, globally sparse structure of web-like networks. In the broadest sense, a community in a web-like network is a group of nodes that induces a dense subgraph which is sparsely linked with the rest of the network. Due to a wide array of envisioned applications, ranging from crawlers and search engines to network security and network compression, there has recently been widespread interest in efficient community-mining algorithms. In this dissertation, the community structure of web-like networks is investigated by a combination of analytical and computational techniques. First, we consider the problem of modeling web-like networks. In recent years, many new random graph models have been proposed to account for recently discovered properties of web-like networks that distinguish them from classical random graphs.
The vast majority of these random graph models take into account only the addition of new nodes and edges. Yet several empirical observations indicate that deletion of nodes and edges occurs frequently in web-like networks. Inspired by such observations, we propose and analyze two dynamic random graph models that combine node and edge addition with a uniform and a preferential deletion of nodes, respectively. In both cases, we find that the random graphs generated by these models follow power-law degree distributions (in agreement with the degree distributions of many web-like networks). Second, we analyze the expected density of certain small subgraphs--such as defensive alliances on three and four nodes--in various random graph models. Our findings show that while in the binomial random graph the expected density of such subgraphs is very close to zero, in some dynamic random graph models it is much larger. These findings converge with our results obtained by counting the communities in some Web crawls. Next, we investigate the computational complexity of the community-mining problem under various definitions of community. Taking the definition of community as a global defensive alliance or a global offensive alliance, we prove--using transformations from the dominating set problem--that finding optimal communities is NP-complete. These and other similar complexity results, coupled with the fact that many web-like networks are huge, indicate that fast, exact sequential algorithms for mining communities are unlikely to be found. To handle this difficulty we adopt an algorithmic definition of community and a simpler version of the community-mining problem, namely: find the largest community to which a given set of seed nodes belongs.
We propose several greedy algorithms for this problem. The first starts out with a set of seed nodes--the initial community--and then repeatedly selects nodes from the community's neighborhood and pulls them into the community. In each step, the algorithm uses the clustering coefficient--a parameter that measures the fraction of a node's neighbors that are neighbors themselves--to decide which nodes from the neighborhood should be pulled in. This algorithm has time complexity of order O(nΔ), where n denotes the number of nodes visited by the algorithm and Δ is the maximum degree encountered. Thus, assuming a power-law degree distribution, this algorithm is expected to run in near-linear time. The proposed algorithm achieved good accuracy when tested on real and computer-generated networks: the fraction of community nodes classified correctly is generally above 80% and often above 90%. A second algorithm, based on a generalized clustering coefficient that takes into account not only the first neighborhood but also the second, the third, etc., is also proposed. This algorithm achieves better accuracy than the first but also runs slower. Finally, a randomized version of the second algorithm, which improves the time complexity without significantly affecting the accuracy, is proposed. The main target application of the proposed algorithms is focused crawling--the selective search for web pages that are relevant to a pre-defined topic.
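The greedy seed-expansion idea described above can be sketched as follows. This is one plausible reading of the abstract, not the dissertation's actual algorithm: `threshold` and `max_size` are illustrative stopping rules we introduce, and `adj` maps each node to its set of neighbours:

```python
def clustering_coefficient(adj, v):
    """Fraction of pairs of v's neighbours that are themselves adjacent."""
    nbrs = list(adj[v])
    d = len(nbrs)
    if d < 2:
        return 0.0
    links = sum(1 for i in range(d) for j in range(i + 1, d)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (d * (d - 1))

def grow_community(adj, seeds, threshold=0.5, max_size=50):
    """Repeatedly pull the neighbourhood node with the highest clustering
    coefficient into the community, stopping when no candidate is cohesive
    enough or the community reaches max_size."""
    community = set(seeds)
    while len(community) < max_size:
        frontier = {u for v in community for u in adj[v]} - community
        if not frontier:
            break
        best = max(frontier, key=lambda u: clustering_coefficient(adj, u))
        if clustering_coefficient(adj, best) < threshold:
            break
        community.add(best)
    return community
```

On two triangles joined by a single bridge edge, growing from one triangle stops at the bridge: the bridge endpoint has a low clustering coefficient, so it is never pulled in.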

    Self-Stabilizing Distributed Cooperative Reset

    Self-stabilization is a versatile fault-tolerance approach that characterizes the ability of a system to eventually resume correct behavior after any finite number of transient faults. In this paper, we propose a self-stabilizing reset algorithm working in anonymous networks. This algorithm resets the network in a distributed, non-centralized manner, i.e., it is multi-initiator, as each process detecting an inconsistency may initiate a reset. It is also cooperative in the sense that it coordinates concurrent reset executions in order to gain efficiency. Our approach is general, since our reset algorithm allows self-stabilizing solutions to be built for various problems and settings. Indeed, we show that it applies to both static and dynamic specifications, as we propose efficient self-stabilizing reset-based algorithms for the (1-minimal) (f, g)-alliance (a generalization of the dominating set problem) in identified networks and for the unison problem in anonymous networks. Both instantiations enhance the state of the art: in the former case, our solution is more general than previous ones, while in the latter case, the complexity of our unison algorithm is better than that of previous solutions in the literature.

    Economics of Conflict and Terrorism

    This book contributes to the literature on conflict and terrorism through a selection of articles that deal with theoretical, methodological, and empirical issues related to the topic. The papers study important problems, are original in their approach, and are innovative in the techniques used. The book will be useful for researchers in the fields of game theory, economics, and political science.

    Real Time Crime Prediction Using Social Media

    There is no doubt that crime is on the increase and has a detrimental influence on a nation's economy, despite numerous studies on crime prediction aimed at minimising crime rates. Historically, data mining techniques for crime prediction have relied on historical information and are mostly country-specific. In fact, only a few of the earlier studies on crime prediction follow a standard data mining procedure. Hence, considering the current worldwide crime trend, in which criminals routinely publish their criminal intent on social media and invite others to watch and/or engage in different crimes, an alternative and more dynamic strategy is needed. The goal of this research is to improve the performance of crime prediction models. Thus, this thesis explores the potential of using information from social media (Twitter) for crime prediction in combination with historical crime data. It also identifies, using data mining techniques, the most relevant feature engineering needed for the United Kingdom dataset to improve crime prediction model performance. Additionally, this study presents a function that could be used by every region in the United Kingdom for data cleansing, pre-processing, and feature engineering. A Shiny app was also used to display tweet sentiment trends to help prevent crime in near-real time. Exploratory analysis is essential for revealing the data pre-processing and feature engineering needed before the data are fed into a machine learning model. Based on earlier documented studies, this is the first research to perform a full exploratory analysis of historical British crime statistics using the stop-and-search historical dataset. Based on the findings from the exploratory study, an algorithm was created to clean the data and prepare it for further analysis and model creation.
This is an enormous success because it provides a clean dataset for future research, particularly for non-experts to use in constructing models to forecast crime or in conducting investigations across around 32 police districts of the United Kingdom. Moreover, this study is the first to present a complete collection of geo-spatial parameters for training a crime prediction model by combining demographic data from the same source in the United Kingdom with hourly sentiment polarity that was not restricted to a Twitter keyword search. Six base models frequently mentioned in the previous literature were selected, trained on the stop-and-search historical crime dataset, evaluated on test data, and finally validated on the London and Kent crime datasets. Two different datasets were created from Twitter and historical data (historical crime data with Twitter sentiment scores, and historical data without them). Six of the most prevalent machine learning classifiers (Random Forest, Decision Tree, k-nearest neighbours, support vector machine, neural network, and naïve Bayes) were trained and tested on these datasets. Additionally, the hyperparameters of each of the six models were tuned using random grid search. Voting classifiers and a logistic-regression stacked ensemble of different models were also trained and tested on the same datasets to enhance the individual models' performance. In addition, two combinations of stacked ensembles of multiple models were constructed to enhance and choose the most suitable models for crime prediction; based on their performance, the appropriate prediction model for the UK dataset was selected.
In terms of interpretation, this research differs from most earlier studies that employed Twitter data in that several methodologies were used to show how each attribute contributed to the construction of the model, and the findings were discussed and interpreted in the context of the study. Further, a Shiny app visualisation tool was designed to display each tweet's sentiment score, text, user screen name, and vicinity, which allows the investigation of any criminal actions in near-real time. The evaluation of the models revealed that Random Forest, Decision Tree, and k-nearest neighbour outperformed the other models. However, Decision Tree and Random Forest performed better consistently when evaluated on test data.
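The hard-voting step of the voting classifiers mentioned above can be illustrated in isolation. This is a minimal sketch of majority voting over per-model predictions, not the thesis's pipeline; in practice the base models (Random Forest, Decision Tree, k-nearest neighbours, etc.) would come from a library such as scikit-learn:

```python
from collections import Counter

def hard_vote(predictions):
    """Majority vote across models.

    predictions[m][i] is model m's predicted label for sample i;
    returns one combined label per sample (ties broken by first occurrence).
    """
    n_samples = len(predictions[0])
    return [
        Counter(model[i] for model in predictions).most_common(1)[0][0]
        for i in range(n_samples)
    ]
```

For example, if two of three models predict "theft" for a sample and one predicts "assault", the ensemble outputs "theft" for that sample.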