
    Orienting Graphs to Optimize Reachability

    The paper focuses on two problems: (i) how to orient the edges of an undirected graph in order to maximize the number of ordered vertex pairs (x,y) such that there is a directed path from x to y, and (ii) how to orient the edges so as to minimize the number of such pairs. The paper describes a quadratic-time algorithm for the first problem, and a proof that the second problem is NP-hard to approximate within some constant 1+epsilon > 1. The latter proof also shows that the second problem is equivalent to "comparability graph completion"; neither problem was previously known to be NP-hard.
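
    The objective in both problems can be checked operationally on small instances (a minimal illustrative sketch, not taken from the paper; the function names and the toy path graph are made up): for a fixed orientation, count the ordered pairs joined by a directed path, and on a tiny graph brute-force all orientations to see the maximization and minimization variants side by side.

        # Illustrative sketch (not from the paper): reachability count + brute force on toy graphs.
        from itertools import product

        def reachable_pairs(n, arcs):
            """Count ordered pairs (x, y), x != y, with a directed path from x to y."""
            adj = {v: [] for v in range(n)}
            for u, v in arcs:
                adj[u].append(v)
            total = 0
            for s in range(n):
                seen, stack = {s}, [s]
                while stack:
                    u = stack.pop()
                    for w in adj[u]:
                        if w not in seen:
                            seen.add(w)
                            stack.append(w)
                total += len(seen) - 1   # every vertex reached from s, excluding s itself
            return total

        def best_and_worst_orientation(n, edges):
            """Brute force over all 2^|E| orientations -- only feasible for toy graphs."""
            counts = [
                reachable_pairs(n, [(v, u) if f else (u, v) for (u, v), f in zip(edges, flips)])
                for flips in product([False, True], repeat=len(edges))
            ]
            return max(counts), min(counts)

        # Path on 4 vertices: orienting all edges one way yields 6 reachable pairs,
        # while an alternating orientation yields only 3.
        print(best_and_worst_orientation(4, [(0, 1), (1, 2), (2, 3)]))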

    An algorithm for orienting graphs based on cause-effect pairs and its applications to orienting protein networks.

    Acknowledgments: I would like to thank my thesis advisor, Prof. Roded Sharan, for the initial idea and the excellent guidance throughout the research. I would like to thank Prof. Uri Zwick and Prof. Vineet Bafna for their substantial contribution to this work and for co-authoring the paper upon which this thesis is based. I also thank Andreas Beyer and Silpa Suthram for providing the kinase-substrate data, Oved Ourfali for his help with the Integer Programming implementation, and Rani Hod for his help with some theoretical issues.

    Abstract: In recent years we have seen a vast increase in the amount of protein-protein interaction data. Study of the resulting biological networks can provide us with a better understanding of the processes taking place within a cell. In this work we consider a graph orientation problem arising in the study of biological networks. Given an undirected graph and a list of ordered source-target pairs, the goal is to orient the graph so that a maximum number of pairs will admit a directed path from the source to the target. We show that the problem is NP-hard and hard to approximate to within a constant ratio. We then study restrictions of the problem to various graph classes, and provide an O(log n) approximation algorithm for the general case. We show that this algorithm achieves very tight approximation ratios in practice and is able to infer edge directions with high accuracy on both simulated and real network data.
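
    For concreteness (an illustrative sketch under invented names, not code from the thesis), the orientation problem with source-target pairs can be solved exactly by exhaustive search on very small instances, which also makes the hardness plausible: each undirected edge is a binary choice, and different pairs may demand opposite choices.

        # Illustrative sketch (not from the thesis): exact answer on toy instances by exhaustive search.
        from itertools import product

        def satisfied_pairs(n, arcs, pairs):
            """Count source-target pairs (s, t) admitting a directed path s -> t."""
            adj = {v: [] for v in range(n)}
            for u, v in arcs:
                adj[u].append(v)
            def reaches(s, t):
                seen, stack = {s}, [s]
                while stack:
                    u = stack.pop()
                    for w in adj[u]:
                        if w not in seen:
                            seen.add(w)
                            stack.append(w)
                return t in seen
            return sum(reaches(s, t) for s, t in pairs)

        def max_orientation(n, edges, pairs):
            """Exact optimum by trying all 2^|E| orientations (toy sizes only)."""
            return max(
                satisfied_pairs(n, [(v, u) if f else (u, v) for (u, v), f in zip(edges, flips)], pairs)
                for flips in product([False, True], repeat=len(edges))
            )

        # Star with centre 0: the pairs (1, 2) and (2, 1) need edge (0, 1) oriented
        # in opposite directions, so at most one of them can be satisfied.
        print(max_orientation(3, [(0, 1), (0, 2)], [(1, 2), (2, 1)]))  # -> 1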

    Exploiting bounded signal flow for graph orientation based on cause-effect pairs

    Background: We consider the following problem: Given an undirected network and a set of sender–receiver pairs, direct all edges such that the maximum number of “signal flows” defined by the pairs can be routed respecting edge directions. This problem has applications in understanding protein-interaction-based cell regulation mechanisms. Since this problem is NP-hard, research so far concentrated on polynomial-time approximation algorithms and tractable special cases. Results: We take the viewpoint of parameterized algorithmics and examine several parameters related to the maximum signal flow over vertices or edges. We provide several fixed-parameter tractability results, and in one case a sharp complexity dichotomy between a linear-time solvable case and a slightly more general NP-hard case. We examine the value of these parameters for several real-world network instances. Conclusions: Several biologically relevant special cases of the NP-hard problem can be solved to optimality. In this way, parameterized analysis yields both deeper insight into the computational complexity and practical solving strategies.

    Parameterized Inapproximability for Steiner Orientation by Gap Amplification

    In the k-Steiner Orientation problem, we are given a mixed graph, that is, one with both directed and undirected edges, and a set of k terminal pairs. The goal is to find an orientation of the undirected edges that maximizes the number of terminal pairs for which there is a path from the source to the sink. The problem is known to be W[1]-hard when parameterized by k and hard to approximate up to some constant for FPT algorithms assuming Gap-ETH. On the other hand, no approximation factor better than O(k) is known. We show that k-Steiner Orientation is unlikely to admit an approximation algorithm with any constant factor, even within FPT running time. To obtain this result, we construct a self-reduction via a hashing-based gap amplification technique, which turns out to be useful even outside of the FPT paradigm. Precisely, we rule out any approximation factor of the form (log k)^o(1) for FPT algorithms (assuming FPT ≠ W[1]) and (log n)^o(1) for purely polynomial-time algorithms (assuming that the class W[1] does not admit randomized FPT algorithms). This constitutes a novel inapproximability result for polynomial-time algorithms obtained via tools from FPT theory. Moreover, we prove k-Steiner Orientation to belong to W[1], which entails W[1]-completeness of (log k)^o(1)-approximation for k-Steiner Orientation. This provides an example of a natural approximation task that is complete in a parameterized complexity class. Finally, we apply our technique to the maximization version of Directed Multicut - Max (k,p)-Directed Multicut - where we are given a directed graph, k terminal pairs, and a budget p. The goal is to maximize the number of terminal pairs separated by removing p edges. We present a simple proof that the problem admits no FPT approximation with factor O(k^(1/2 - ε)) (assuming FPT ≠ W[1]) and no polynomial-time approximation with ratio O(|E(G)|^(1/2 - ε)) (assuming NP ⊈ co-RP).
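
    The Max (k,p)-Directed Multicut objective mentioned at the end can be stated as a few lines of brute-force code on toy instances (an illustrative sketch with hypothetical names, not from the paper): choose p arcs to delete and count how many terminal pairs lose all their directed paths.

        # Illustrative sketch (not from the paper): Max (k,p)-Directed Multicut by enumeration.
        from itertools import combinations

        def separated_pairs(n, arcs, removed, pairs):
            """Count terminal pairs (s, t) with no directed s -> t path once `removed` arcs are gone."""
            adj = {v: [] for v in range(n)}
            for a in arcs:
                if a not in removed:
                    adj[a[0]].append(a[1])
            def reaches(s, t):
                seen, stack = {s}, [s]
                while stack:
                    u = stack.pop()
                    for w in adj[u]:
                        if w not in seen:
                            seen.add(w)
                            stack.append(w)
                return t in seen
            return sum(not reaches(s, t) for s, t in pairs)

        def max_kp_directed_multicut(n, arcs, pairs, p):
            """Exact optimum by enumerating every set of p removed arcs (toy sizes only)."""
            return max(separated_pairs(n, arcs, set(cut), pairs)
                       for cut in combinations(arcs, p))

        # Removing the single arc (2, 3) disconnects two of the three terminal pairs.
        arcs = [(0, 1), (1, 2), (0, 2), (2, 3)]
        print(max_kp_directed_multicut(4, arcs, [(0, 2), (0, 3), (1, 3)], p=1))  # -> 2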

    Abstractions, Analysis Techniques, and Synthesis of Scalable Control Strategies for Robot Swarms

    Tasks that require parallelism, redundancy, and adaptation to dynamic, possibly hazardous environments can potentially be performed very efficiently and robustly by a swarm robotic system. Such a system would consist of hundreds or thousands of anonymous, resource-constrained robots that operate autonomously, with little to no direct human supervision. The massive parallelism of a swarm would allow it to perform effectively in the event of robot failures, and the simplicity of individual robots facilitates a low unit cost. Key challenges in the development of swarm robotic systems include the accurate prediction of swarm behavior and the design of robot controllers that can be proven to produce a desired macroscopic outcome. The controllers should be scalable, meaning that they ensure system operation regardless of the swarm size. This thesis presents a comprehensive approach to modeling a swarm robotic system, analyzing its performance, and synthesizing scalable control policies that cause the populations of different swarm elements to evolve in a specified way that obeys time and efficiency constraints. The control policies are decentralized, computed a priori, implementable on robots with limited sensing and communication capabilities, and have theoretical guarantees on performance. To facilitate this framework of abstraction and top-down controller synthesis, the swarm is designed to emulate a system of chemically reacting molecules. The majority of this work considers well-mixed systems in which there are interaction-dependent task transitions, with some modeling and analysis extensions to spatially inhomogeneous systems. The methodology is applied to the design of a swarm task allocation approach that does not rely on inter-robot communication, a reconfigurable manufacturing system, and a cooperative transport strategy for groups of robots. The third application incorporates observations from a novel experimental study of the mechanics of cooperative retrieval in Aphaenogaster cockerelli ants. The correctness of the abstractions and the correspondence of the evolution of the controlled system to the target behavior are validated with computer simulations. The investigated applications form the building blocks for a versatile swarm system with integrated capabilities that have performance guarantees.
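
    As a generic illustration of the chemical-reaction-network style of macroscopic abstraction described above (not code from the thesis; the two-task model, rates, and function name are invented for the example), a well-mixed swarm switching between two tasks can be modelled by rate equations whose equilibrium fractions are set by the transition rates alone, independently of the swarm size.

        # Illustrative sketch (not from the thesis): mean-field model of task switching, A <-> B.
        def simulate_task_fractions(k_ab, k_ba, x_a0=1.0, x_b0=0.0, dt=0.01, steps=2000):
            """Population fractions of robots on tasks A and B, treated like a
            reversible chemical reaction A <-> B:
                dx_A/dt = -k_ab * x_A + k_ba * x_B,  with x_A + x_B = 1.
            Integrated with forward Euler; returns the final fractions."""
            x_a, x_b = x_a0, x_b0
            for _ in range(steps):
                flow = (k_ab * x_a - k_ba * x_b) * dt
                x_a -= flow
                x_b += flow
            return x_a, x_b

        # With k_ab / k_ba = 3 the swarm settles near 25% on task A and 75% on task B,
        # regardless of how many robots those fractions represent.
        print(simulate_task_fractions(k_ab=3.0, k_ba=1.0))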

    Engineering Benchmarks for Planning: the Domains Used in the Deterministic Part of IPC-4

    In a field of research about general reasoning mechanisms, it is essential to have appropriate benchmarks. Ideally, the benchmarks should reflect possible applications of the developed technology. In AI Planning, researchers more and more tend to draw their testing examples from the benchmark collections used in the International Planning Competition (IPC). In the organization of (the deterministic part of) the fourth IPC, IPC-4, the authors therefore invested significant effort to create a useful set of benchmarks. They come from five different (potential) real-world applications of planning: airport ground traffic control, oil derivative transportation in pipeline networks, model-checking safety properties, power supply restoration, and UMTS call setup. Adapting and preparing such an application for use as a benchmark in the IPC involves, at the same time, inevitable (often drastic) simplifications, as well as careful choice between, and engineering of, domain encodings. For the first time in the IPC, we used compilations to formulate complex domain features in simple languages such as STRIPS, rather than just dropping the more interesting problem constraints in the simpler language subsets. The article explains and discusses the five application domains and their adaptation to form the PDDL test suites used in IPC-4. We summarize known theoretical results on structural properties of the domains, regarding their computational complexity and provable properties of their topology under the h+ function (an idealized version of the relaxed plan heuristic). We present new (empirical) results illuminating properties such as the quality of the most widespread heuristic functions (planning graph, serial planning graph, and relaxed plan), the growth of propositional representations over instance size, and the number of actions available to achieve each fact; we discuss these data in conjunction with the best results achieved by the different kinds of planners participating in IPC-4.
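
    As background for the h+ discussion above (a minimal sketch of the standard delete relaxation; the toy logistics facts and actions are hypothetical and not drawn from the IPC-4 suites), relaxed reachability ignores delete effects and keeps applying actions to a growing fact set until a fixpoint is reached; h+ and relaxed-plan heuristics are defined over exactly this relaxation.

        # Illustrative sketch (not from the article): delete-relaxation reachability fixpoint.
        def relaxed_reachable(init, goals, actions):
            """Ignore delete lists and apply actions until no new facts appear.
            Returns True iff every goal fact becomes reachable under the relaxation."""
            facts = set(init)
            changed = True
            while changed:
                changed = False
                for pre, add in actions:          # each action: (preconditions, add effects)
                    if pre <= facts and not add <= facts:
                        facts |= add
                        changed = True
            return goals <= facts

        # Hypothetical STRIPS-like toy: drive a truck from A to B, load a package, deliver it.
        actions = [
            (frozenset({"truck_at_A"}), frozenset({"truck_at_B"})),
            (frozenset({"truck_at_A", "pkg_at_A"}), frozenset({"pkg_in_truck"})),
            (frozenset({"truck_at_B", "pkg_in_truck"}), frozenset({"pkg_at_B"})),
        ]
        print(relaxed_reachable({"truck_at_A", "pkg_at_A"}, {"pkg_at_B"}, actions))  # True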