
    A $2k$-Vertex Kernel for Maximum Internal Spanning Tree

    We consider the parameterized version of the maximum internal spanning tree problem, which, given an $n$-vertex graph and a parameter $k$, asks for a spanning tree with at least $k$ internal vertices. Fomin et al. [J. Comput. System Sci., 79:1-6] crafted a very ingenious reduction rule and showed that a simple application of this rule is sufficient to yield a $3k$-vertex kernel. Here we propose a novel way to use the same reduction rule, resulting in an improved $2k$-vertex kernel. Our algorithm first applies a greedy procedure consisting of a sequence of local exchange operations, which ends with a locally optimal spanning tree, and then uses this special tree to find a reducible structure. As a corollary of our kernel, we obtain a deterministic algorithm for the problem running in time $4^k \cdot n^{O(1)}$.
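
    To make the local-exchange step concrete, below is a minimal Python sketch of the general flavour of such a procedure: repeatedly try to swap a non-tree edge for a tree edge on the cycle it closes, keeping a swap only if it increases the number of internal (degree >= 2) vertices. All names are hypothetical and the exchange rule is deliberately generic; the paper's actual operations and reduction rule are more refined.

```python
def internal_count(tree_adj):
    """Number of internal (degree >= 2) vertices of a tree."""
    return sum(1 for nb in tree_adj.values() if len(nb) >= 2)

def tree_path(tree_adj, src, dst):
    """Vertices on the unique tree path from dst back to src (DFS)."""
    parent, stack = {src: None}, [src]
    while stack:
        u = stack.pop()
        for w in tree_adj[u]:
            if w not in parent:
                parent[w] = u
                stack.append(w)
    path, u = [], dst
    while u is not None:
        path.append(u)
        u = parent[u]
    return path

def improve_once(graph_adj, tree_adj):
    """Try one exchange: insert a non-tree edge, delete an edge on the
    cycle it closes, keep the swap only if the internal count grows."""
    base = internal_count(tree_adj)
    for u in graph_adj:
        for v in graph_adj[u]:
            if v in tree_adj[u]:
                continue                      # (u, v) is already a tree edge
            cycle = tree_path(tree_adj, u, v)
            for a, b in zip(cycle, cycle[1:]):
                tree_adj[a].discard(b); tree_adj[b].discard(a)
                tree_adj[u].add(v); tree_adj[v].add(u)
                if internal_count(tree_adj) > base:
                    return True               # keep the improving swap
                tree_adj[u].discard(v); tree_adj[v].discard(u)  # undo
                tree_adj[a].add(b); tree_adj[b].add(a)
    return False

def locally_optimal_tree(graph_adj, tree_adj):
    """Iterate exchanges until no single swap helps (local optimum)."""
    while improve_once(graph_adj, tree_adj):
        pass
    return tree_adj
```

    Both adjacency structures are dicts mapping each vertex to a set of neighbours; every removed cycle edge keeps the tree spanning, so only the internal-vertex count changes.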

    The Minimum Wiener Connector

    The Wiener index of a graph is the sum of all pairwise shortest-path distances between its vertices. In this paper we study the novel problem of finding a minimum Wiener connector: given a connected graph $G=(V,E)$ and a set $Q\subseteq V$ of query vertices, find a subgraph of $G$ that connects all query vertices and has minimum Wiener index. We show that the Minimum Wiener Connector admits a polynomial-time (albeit impractical) exact algorithm for the special case where the number of query vertices is bounded. We show that in general the problem is NP-hard, and has no PTAS unless $\mathbf{P} = \mathbf{NP}$. Our main contribution is a constant-factor approximation algorithm running in time $\widetilde{O}(|Q||E|)$. A thorough experimentation on a large variety of real-world graphs confirms that our method returns smaller and denser solutions than other methods, and does so by adding to the query set $Q$ a small number of important vertices (i.e., vertices with high centrality). Comment: Published in Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data.
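
    As a concrete reference point for the objective being minimized, here is a minimal Python sketch (our illustration, not from the paper) that computes the Wiener index of an unweighted graph by running a BFS from every vertex; for the connector problem one would apply it to a candidate subgraph.

```python
from collections import deque

def wiener_index(adj):
    """Wiener index of a connected unweighted graph, given as an
    adjacency dict {vertex: iterable of neighbours}: the sum of
    shortest-path distances over all unordered vertex pairs."""
    total = 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:                      # BFS from s
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        total += sum(dist.values())
    return total // 2                     # each pair was counted twice

# A path on three vertices has distances 1 + 1 + 2 = 4:
print(wiener_index({0: [1], 1: [0, 2], 2: [1]}))  # 4
```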

    Dynamic Exploration of Networks: from general principles to the traceroute process

    Dynamical processes taking place on real networks define on them evolving subnetworks whose topology is not necessarily the same as that of the underlying one. We investigate the problem of determining the emerging degree distribution, focusing on a class of tree-like processes such as those used to explore the Internet's topology. A general theory based on mean-field arguments is proposed, both for the single-source and the multiple-source cases, and applied to the specific example of the traceroute exploration of networks. Our results provide a qualitative improvement in the understanding of dynamical sampling and of the interplay between dynamics and topology in large networks like the Internet. Comment: 13 pages, 6 figures.
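
    For intuition about the sampling bias being modelled, the following Python sketch (our illustration, not the paper's mean-field theory) simulates an idealised single-source traceroute-style exploration: the union of one shortest path from a source to each probed target, from which the observed degree distribution of the sampled subnetwork can be read off.

```python
from collections import Counter, deque

def traceroute_sample(adj, source, targets):
    """Union of one shortest path from `source` to each target, taken
    from a single BFS tree: an idealised traceroute-like exploration."""
    parent = {source: None}
    queue = deque([source])
    while queue:                          # BFS shortest-path tree
        u = queue.popleft()
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                queue.append(w)
    edges = set()
    for t in targets:
        while parent.get(t) is not None:  # walk back towards the source
            edges.add(frozenset((t, parent[t])))
            t = parent[t]
    return edges

def observed_degree_distribution(edges):
    """Map degree -> number of vertices with that degree in the sample."""
    deg = Counter(v for e in edges for v in e)
    return Counter(deg.values())
```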

    Parameterized algorithms of fundamental NP-hard problems: a survey

    Parameterized computation theory has developed rapidly over the last two decades. In theoretical computer science, it has attracted considerable attention for its theoretical value and its significant guidance in many practical applications. We give an overview of parameterized algorithms for some fundamental NP-hard problems, including MaxSAT, Maximum Internal Spanning Tree, Maximum Internal Out-Branching, Planar (Connected) Dominating Set, Feedback Vertex Set, Hyperplane Cover, Vertex Cover, and Packing and Matching problems. All of these problems have been widely applied in various areas, such as the Internet of Things, Wireless Sensor Networks, Artificial Intelligence, Bioinformatics, Big Data, and so on. In this paper we focus on the algorithms' main ideas and techniques, and omit their details.
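
    As one self-contained example of the bounded-search-tree technique that recurs throughout such surveys, here is the classic O(2^k m) branching algorithm for Vertex Cover in Python (a textbook sketch, not taken from the surveyed papers): any cover must contain an endpoint of every edge, so branch on the two endpoints of an uncovered edge.

```python
def vertex_cover(edges, k):
    """Return a vertex cover of size <= k for the given edge list,
    or None if none exists.  Classic bounded search tree: one endpoint
    of any edge must be in the cover, so try both endpoints."""
    if not edges:
        return set()                      # nothing left to cover
    if k == 0:
        return None                       # edges remain, budget exhausted
    u, v = edges[0]
    for pick in (u, v):
        rest = [(a, b) for (a, b) in edges if a != pick and b != pick]
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {pick}
    return None

# A triangle needs two vertices:
print(vertex_cover([(1, 2), (2, 3), (1, 3)], 1))  # None
print(vertex_cover([(1, 2), (2, 3), (1, 3)], 2))  # {1, 2}
```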

    Universal Compressed Text Indexing

    The rise of repetitive datasets has lately generated a lot of interest in compressed self-indexes based on dictionary compression, a rich and heterogeneous family that exploits text repetitions in different ways. For each such compression scheme, several different indexing solutions have been proposed in the last two decades. To date, the fastest indexes for repetitive texts are based on the run-length compressed Burrows-Wheeler transform and on the Compact Directed Acyclic Word Graph. The most space-efficient indexes, on the other hand, are based on the Lempel-Ziv parsing and on grammar compression. Indexes for more universal schemes such as collage systems and macro schemes have not yet been proposed. Very recently, Kempa and Prezza [STOC 2018] showed that all dictionary compressors can be interpreted as approximation algorithms for the smallest string attractor, that is, a set of text positions capturing all distinct substrings. Starting from this observation, in this paper we develop the first universal compressed self-index, that is, the first indexing data structure based on string attractors, which can therefore be built on top of any dictionary-compressed text representation. Let $\gamma$ be the size of a string attractor for a text of length $n$. Our index takes $O(\gamma\log(n/\gamma))$ words of space and supports locating the $occ$ occurrences of any pattern of length $m$ in $O(m\log n + occ\log^{\epsilon}n)$ time, for any constant $\epsilon>0$. This is, in particular, the first index for general macro schemes and collage systems. Our result shows that the relation between indexing and compression is much deeper than what was previously thought: the simple property standing at the core of all dictionary compressors is sufficient to support fast indexed queries. Comment: Fixed with reviewer's comment
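
    To make the central definition tangible, here is a naive Python checker (our illustration, far removed from the paper's index) that tests whether a set of positions is a string attractor, i.e. whether every distinct substring has at least one occurrence crossing an attractor position.

```python
def is_string_attractor(text, positions):
    """Naively check that `positions` (0-based) form a string attractor
    of `text`: every distinct substring must have some occurrence that
    contains an attractor position.  Cubic time; for intuition only."""
    gamma = set(positions)
    n = len(text)
    seen = set()
    for i in range(n):
        for j in range(i + 1, n + 1):
            s = text[i:j]
            if s in seen:
                continue
            seen.add(s)
            start, covered = text.find(s), False
            while start != -1:            # scan all occurrences of s
                if any(start <= p < start + len(s) for p in gamma):
                    covered = True
                    break
                start = text.find(s, start + 1)
            if not covered:
                return False
    return True

# Every substring of "aaaa" has an occurrence starting at position 0:
print(is_string_attractor("aaaa", {0}))   # True
print(is_string_attractor("abab", {0}))   # False: "b" never crosses 0
```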

    Galaxy Formation Spanning Cosmic History

    Over the past several decades, galaxy formation theory has met with significant successes. In order to test current theories thoroughly we require predictions for as yet unprobed regimes. To this end, we describe a new implementation of the Galform semi-analytic model of galaxy formation. Our motivation is the success of the model described by Bower et al. in explaining many aspects of galaxy formation. Despite this success, the Bower et al. model fails to match some observational constraints and certain aspects of its physical implementation are not as realistic as we would like. The model described in this work includes substantially updated physics, taking into account developments in our understanding over the past decade, and removes certain limiting assumptions made by this (and most other) semi-analytic models. This allows it to be exploited reliably in high-redshift and low mass regimes. Furthermore, we have performed an exhaustive search of model parameter space to find a particular set of model parameters which produce results in good agreement with a wide range of observational data (luminosity functions, galaxy sizes and dynamics, clustering, colours, metal content) over a wide range of redshifts. This model represents a solid basis on which to perform calculations of galaxy formation in as yet unprobed regimes. Comment: MNRAS accepted. Extended version (with additional figures and details of implementation) is available at http://www.galform.or

    Single-tree detection in high-density LiDAR data from UAV-based survey

    UAV-based LiDAR surveys provide very-high-density point clouds, which carry very rich information about detailed forest structure and allow for the detection of individual trees, while also demanding a high computational load. Single-tree detection is of great interest for forest management and ecology purposes, and the task is relatively well solved for forests made of a single or largely dominant species whose trees have a very evident pointed shape in the upper part of the canopy (in particular conifers). Most authors have proposed methods based totally or partially on the search for local maxima in the canopy, which performs poorly for species with a flat or irregular upper canopy and for mixed forests, especially where taller trees hide smaller ones. Such considerations apply in particular to Mediterranean hardwood forests. In such a context it is imperative to use the whole volume of the point cloud, while keeping the computational load tractable. The authors propose a methodology based on modelling the 3D shape of the tree, which improves performance with respect to maxima-based models. A case study, performed on a hazel grove, is provided to document the performance improvement on a relatively simple, but significant, case.
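
    For reference, the local-maxima baseline that the proposed 3D-shape method improves upon can be sketched in a few lines of Python (a generic illustration with hypothetical parameter values, not the authors' pipeline), using a rasterised canopy height model:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def tree_tops(chm, window=5, min_height=2.0):
    """Baseline single-tree detection on a canopy height model (CHM):
    a pixel is a candidate tree top if it is the maximum of its
    window x window neighbourhood and tall enough to exclude shrubs."""
    local_max = chm == maximum_filter(chm, size=window)
    return np.argwhere(local_max & (chm >= min_height))
```

    This works well for pointed conifer crowns, but flat or irregular canopies yield spurious or missing maxima, which is exactly the failure mode motivating the 3D-shape approach.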

    The bi-objective travelling salesman problem with profits and its connection to computer networks.

    This is an interdisciplinary work in Computer Science and Operational Research. As is well known, these two very important research fields are strictly connected. Among other aspects, one of the main areas where this interplay is strongly evident is networking. As recent decades have seen constant growth in computer network connections of every kind, the need for advanced algorithms that help optimize network performance has become extremely relevant. Classical optimization-based approaches have been deeply studied and applied for a long time. However, the evolution of technology calls for more flexible and advanced algorithmic approaches to model increasingly complex network configurations. In this thesis we study an extension of the well-known Traveling Salesman Problem (TSP): the Traveling Salesman Problem with Profits (TSPP). In this generalization, a profit is associated with each vertex and it is not necessary to visit all vertices. The goal is to determine a route through a subset of the nodes that simultaneously minimizes the travel cost and maximizes the collected profit. The TSPP models the problem of sending a piece of information through a network where, in addition to the sending costs, it is also important to consider what "profit" this information can gain during its routing. Because of its formulation, the right way to tackle the TSPP is through multiobjective optimization algorithms. Within this context, the aim of this work is to study new ways to solve the problem in both the exact and the approximate settings, to provide practical tools that can help solve it, and to offer experimental insights into feasible networking instances.
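
    Since the TSPP is inherently bi-objective, a basic building block shared by exact and approximate approaches alike is Pareto dominance between candidate routes; a minimal Python sketch (our illustration, not the thesis code) follows.

```python
def pareto_front(solutions):
    """Keep the non-dominated (cost, profit) pairs: for the bi-objective
    TSPP a route dominates another if it costs no more and collects at
    least as much profit, strictly better in at least one objective."""
    front = []
    for c, p in solutions:
        dominated = any(c2 <= c and p2 >= p and (c2, p2) != (c, p)
                        for c2, p2 in solutions)
        if not dominated:
            front.append((c, p))
    return front

# (8, 5) dominates (10, 5) and (9, 3); (12, 9) trades cost for profit:
print(pareto_front([(10, 5), (8, 5), (12, 9), (9, 3)]))  # [(8, 5), (12, 9)]
```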