
    Explaining Snapshots of Network Diffusions: Structural and Hardness Results

    Much research has been done on the diffusion of ideas or technologies on social networks, including the Influence Maximization problem and many of its variations. Here, we investigate a type of inverse problem. Given a snapshot of the diffusion process, we seek to understand whether the snapshot is feasible for a given dynamic, i.e., whether there is a limited number of nodes whose initial adoption can result in the snapshot in finite time. While similar questions have been considered for epidemic dynamics, here we consider this problem for variations of the deterministic Linear Threshold Model, which is more appropriate for modeling strategic agents. Specifically, we consider both sequential and simultaneous dynamics, with and without deactivations allowed. Even though we show hardness results for all variations we consider, we show that the case of sequential dynamics with deactivations allowed is significantly harder than all others. In contrast, sequential dynamics make the problem trivial on cliques, even though its complexity for simultaneous dynamics is unknown. We complement our hardness results with structural insights that can help better understand diffusions on social networks under various dynamics.
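
    As a hedged illustration of the forward direction only (not the paper's hardness results), the sketch below simulates the deterministic Linear Threshold Model with simultaneous updates, no deactivations, and unit edge weights, and checks whether one given candidate seed set reproduces a target snapshot. All names, the threshold convention, and the toy graph are my own assumptions.

        # Minimal sketch, assuming unit edge weights and simultaneous updates
        # without deactivations; this only tests a single candidate seed set.
        def simulate_ltm(neighbors, threshold, seeds):
            """neighbors: dict node -> set of neighbors; threshold: dict node ->
            activation threshold; seeds: initially active nodes."""
            active = set(seeds)
            changed = True
            while changed:  # monotone dynamics stabilize after at most |V| rounds
                changed = False
                newly = {v for v in neighbors
                         if v not in active
                         and len(neighbors[v] & active) >= threshold[v]}
                if newly:
                    active |= newly
                    changed = True
            return active

        def seed_set_explains_snapshot(neighbors, threshold, seeds, snapshot):
            """Does this particular seed set generate exactly the snapshot?"""
            return simulate_ltm(neighbors, threshold, seeds) == set(snapshot)

        # Toy example: a path a-b-c with threshold 1 everywhere.
        nbrs = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
        thr = {"a": 1, "b": 1, "c": 1}
        print(seed_set_explains_snapshot(nbrs, thr, {"a"}, {"a", "b", "c"}))  # True

    The hardness results concern deciding whether any sufficiently small seed set exists at all; the check above would have to be wrapped in a search over candidate seed sets.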

    Applying a Cut-Based Data Reduction Rule for Weighted Cluster Editing in Polynomial Time

    Given an undirected graph, the task in Cluster Editing is to insert and delete a minimum number of edges to obtain a cluster graph, that is, a disjoint union of cliques. In the weighted variant each vertex pair comes with a weight and the edge modifications have to be of minimum overall weight. In this work, we provide the first polynomial-time algorithm to apply the following data reduction rule of Böcker et al. [Algorithmica, 2011] for Weighted Cluster Editing: For a graph G = (V, E), merge a vertex set S ⊆ V into a single vertex if the minimum cut of G[S] is at least the combined cost of inserting all missing edges within G[S] plus the cost of cutting all edges from S to the rest of the graph. Complementing our theoretical findings, we experimentally demonstrate the effectiveness of the data reduction rule, shrinking real-world test instances from the PACE Challenge 2021 by around 24%, while previous heuristic implementations of the data reduction rule only achieve 8%.
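
    As a hedged sketch (the helper names and weight conventions are my own, not the paper's implementation or its polynomial-time machinery), the following evaluates the merge rule for one already chosen candidate set S. Here edge_weight maps existing edges to their deletion/cut cost and insert_cost maps missing pairs to their insertion cost.

        # Sketch under assumed weight conventions; checks the rule for one given S.
        import itertools
        import networkx as nx

        def merge_rule_applies(G, S, edge_weight, insert_cost):
            S = set(S)
            if len(S) < 2:
                return False
            H = G.subgraph(S).copy()                 # induced subgraph G[S]
            if nx.is_connected(H):
                for u, v in H.edges():
                    H[u][v]["weight"] = edge_weight[frozenset((u, v))]
                cut_value, _ = nx.stoer_wagner(H, weight="weight")  # global minimum cut
            else:
                cut_value = 0                        # a disconnected G[S] has cut value 0
            missing = sum(insert_cost[frozenset((u, v))]
                          for u, v in itertools.combinations(S, 2)
                          if not G.has_edge(u, v))
            boundary = sum(edge_weight[frozenset((u, v))]
                           for u, v in G.edges(S)
                           if (u in S) != (v in S))
            return cut_value >= missing + boundary

    The paper's contribution is applying this rule exhaustively in polynomial time; the snippet above only evaluates the condition for a single given S.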

    The power of linear-time data reduction for matching.

    Finding maximum-cardinality matchings in undirected graphs is arguably one of the most central graph primitives. For m-edge and n-vertex graphs, it is well known to be solvable in O(m√n) time; however, for several applications this running time is still too slow. We investigate how linear-time (and almost linear-time) data reduction (used as preprocessing) can alleviate the situation. More specifically, we focus on linear-time kernelization. We start a deeper and more systematic study both for general graphs and for bipartite graphs. Our data reduction algorithms easily comply (in the form of preprocessing) with every solution strategy (exact, approximate, heuristic), thus making them attractive in various settings.
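
    To illustrate the flavor of such preprocessing (one classic rule only; the paper's kernelization is more involved), the sketch below exhaustively applies the degree-one rule for matching: a vertex of degree one can always be matched to its unique neighbor, so both endpoints are removed and one matched edge is recorded; with a queue of low-degree vertices this runs in linear time.

        # Illustrative degree-one reduction; adj maps each vertex to its neighbor set
        # and is modified in place. Returns edges fixed into some maximum matching.
        from collections import deque

        def degree_one_reduction(adj):
            fixed = []
            queue = deque(v for v, nbrs in adj.items() if len(nbrs) == 1)
            while queue:
                v = queue.popleft()
                if v not in adj or len(adj[v]) != 1:
                    continue                  # vertex already removed or degree changed
                (u,) = adj[v]
                fixed.append((u, v))          # safe: some maximum matching uses {u, v}
                for w in adj.pop(u):          # remove u (and v) from the graph
                    if w != v:
                        adj[w].discard(u)
                        if len(adj[w]) == 1:
                            queue.append(w)
                adj.pop(v, None)
            return fixed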

    The Power of Linear-Time Data Reduction for Maximum Matching

    Finding maximum-cardinality matchings in undirected graphs is arguably one of the most central graph primitives. For m-edge and n-vertex graphs, it is well known to be solvable in O(m√n) time; however, for several applications this running time is still too slow. We investigate how linear-time (and almost linear-time) data reduction (used as preprocessing) can alleviate the situation. More specifically, we focus on linear-time kernelization. We start a deeper and more systematic study both for general graphs and for bipartite graphs. Our data reduction algorithms easily comply (in the form of preprocessing) with every solution strategy (exact, approximate, heuristic), thus making them attractive in various settings.
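
    Because the reduction is pure preprocessing, it composes with any downstream solver. A hedged usage sketch, reusing the degree_one_reduction helper sketched above together with an off-the-shelf matching routine (names and setup are illustrative):

        # Reduce first, solve the remaining instance with any matching algorithm,
        # then combine the sizes.
        import networkx as nx

        def matching_size_with_preprocessing(adj):
            fixed = degree_one_reduction(adj)    # linear-time data reduction (see above)
            G = nx.Graph((u, v) for u, nbrs in adj.items() for v in nbrs)
            G.add_nodes_from(adj)                # keep isolated leftover vertices
            rest = nx.max_weight_matching(G, maxcardinality=True)
            return len(fixed) + len(rest)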

    A linear-time algorithm for maximum-cardinality matching on cocomparability graphs

    Finding maximum-cardinality matchings in undirected graphs is arguably one of the most central graph problems. For general m-edge and n-vertex graphs, it is well known to be solvable in O(m√n) time. We present a linear-time algorithm to find maximum-cardinality matchings on cocomparability graphs, a prominent subclass of perfect graphs that strictly contains interval graphs as well as permutation graphs. Our greedy algorithm is based on the recently discovered Lexicographic Depth First Search (LDFS).
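
    Only to convey the overall shape of the approach (heavily hedged: the ordering is assumed to be an LDFS cocomparability ordering supplied from elsewhere, and the paper's actual matching rule and correctness argument are not reproduced here), a greedy pass over a vertex ordering looks like this:

        # Greedy matching along a given vertex ordering; illustrative only, not the
        # paper's algorithm. adj maps vertices to neighbor sets, order lists all vertices.
        def greedy_matching_along_order(adj, order):
            pos = {v: i for i, v in enumerate(order)}
            matched = set()
            matching = []
            for v in order:
                if v in matched:
                    continue
                candidates = [u for u in adj[v] if u not in matched]
                if candidates:
                    u = min(candidates, key=pos.__getitem__)  # earliest unmatched neighbor
                    matching.append((v, u))
                    matched.update((v, u))
            return matching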

    Prospects for macro-level analysis of agricultural innovation systems to enhance the eco-efficiency of farming in developing countries

    Agricultural innovation is an essential component in the transition to more sustainable and resilient farming systems across the world. Innovations generally emerge from collective intelligence and action, but innovation systems are often poorly understood. This study explores the properties of innovation systems and their contribution to increased eco-efficiency in agriculture. Using aggregate data and econometric methods, the eco-efficiency of 79 countries was computed and a range of factors relating to research, extension, business and policy was examined. Despite data limitations, the analysis produced significant results. Extension plays an important role in improving the eco-efficiency of agriculture, while agricultural research, under current conditions, has a positive effect on eco-efficiency only in the case of less developed economies. These and other results suggest the importance of context-specific interventions rather than a one-size-fits-all approach. Overall, the analysis illustrated the potential of a macro-level diagnostic approach for assessing the role of innovation systems for sustainability in agriculture. Acknowledgement: The authors would like to thank the UN Food and Agriculture Organisation for funding this research.

    The complexity of finding a large subgraph under anonymity constraints

    We define and analyze an anonymization problem in undirected graphs, which is motivated by certain privacy issues in social networks. The goal is to remove a small number of vertices from the graph such that in the resulting subgraph every occurring vertex degree occurs many times. We prove that the problem is NP-hard for trees, and also for a number of other highly structured graph classes. Furthermore, we provide polynomial-time algorithms for other graph classes (like threshold graphs), and thereby establish a sharp borderline between hard and easy cases of the problem. Finally, we perform a parameterized analysis, and we concisely characterize combinations of natural parameters that allow FPT algorithms.
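
    As a small hedged helper (the function names and the parameter k are my notation, not the paper's), the target property can be verified directly: after deleting a candidate vertex set, every degree value occurring in the remaining graph must occur at least k times.

        # Verify a candidate solution; the hard problem is finding a smallest such set.
        from collections import Counter

        def k_degree_anonymous(adj, k):
            """adj: dict vertex -> set of neighbors."""
            degree_counts = Counter(len(nbrs) for nbrs in adj.values())
            return all(count >= k for count in degree_counts.values())

        def remains_anonymous_after_deletion(adj, deleted, k):
            deleted = set(deleted)
            remaining = {v: nbrs - deleted for v, nbrs in adj.items() if v not in deleted}
            return k_degree_anonymous(remaining, k)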