
    Dynamic Matching Algorithms Under Vertex Updates

    Dynamic graph matching algorithms have been extensively studied, but mostly under edge updates. This paper concerns dynamic matching algorithms under vertex updates, where in each update step a single vertex is either inserted or deleted along with its incident edges. A basic setting, arising in online algorithms and studied by Bosek et al. [FOCS'14] and Bernstein et al. [SODA'18], is that of dynamic approximate maximum cardinality matching (MCM) in bipartite graphs in which one side is fixed and vertices on the other side either arrive or depart via vertex updates. In the BASIC-incremental setting, vertices only arrive, while in the BASIC-decremental setting vertices only depart. When vertices can both arrive and depart, we have the BASIC-dynamic setting. In this paper we also consider the setting in which both sides of the bipartite graph are dynamic; we call this the MEDIUM-dynamic setting, and MEDIUM-decremental is the restriction in which vertices can only depart. The GENERAL-dynamic setting is when the graph is not necessarily bipartite and vertices can both arrive and depart. Denote by K the total number of edges inserted into and deleted from the graph throughout the entire update sequence. A well-studied measure, the recourse of a dynamic matching algorithm, is the number of changes made to the matching per step. We largely focus on Maximal Matching (MM), which is a 2-approximation to the MCM. Our main results are as follows.
    - In the BASIC-dynamic setting, there is a straightforward algorithm for maintaining an MM, with a total runtime of O(K) and constant worst-case recourse. In fact, this algorithm never removes an edge from the matching; we refer to such an algorithm as irrevocable (a sketch of the underlying greedy rule appears after this abstract).
    - For the MEDIUM-dynamic setting we give a strong conditional lower bound that holds even in the MEDIUM-decremental setting: if, for any fixed ε > 0, there is an irrevocable decremental MM algorithm with a total runtime of O(K · n^{1-ε}), this would refute the OMv conjecture; a similar (but weaker) hardness result can be obtained via a reduction from the Triangle Detection conjecture.
    - Next, we consider the GENERAL-dynamic setting and design an MM algorithm with a total runtime of O(K) and constant worst-case recourse. We achieve this result via a 1-revocable algorithm, which may remove at most one edge from the matching per update step. As argued above, an irrevocable algorithm with such a runtime is unlikely to exist.
    - Finally, back in the BASIC-dynamic setting, we present an algorithm with a total runtime of O(K) that provides an (e/(e-1))-approximation to the MCM. To this end, we build on the classic "Ranking" online algorithm of Karp et al. [STOC'90].
    Beyond these results, our work draws connections between the areas of dynamic graph algorithms and online algorithms, and proposes several open questions that seem to have been overlooked thus far
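
    The following is a minimal sketch, in Python, of the greedy rule behind such an irrevocable algorithm in the BASIC setting; it illustrates the idea only and is not the paper's algorithm, and the class and method names are ours. An arriving vertex is matched to an arbitrary unmatched neighbor, and the algorithm itself never removes a matched edge. Restoring maximality after departures, which the paper's BASIC-dynamic algorithm also handles, is omitted here.

    # Sketch (not the paper's algorithm): irrevocable greedy maximal matching
    # under one-sided vertex arrivals in a bipartite graph.
    class OneSidedMatching:
        def __init__(self):
            self.match_of = {}  # vertex -> matched partner, stored in both directions

        def insert_vertex(self, v, neighbors):
            """v arrives together with its incident edges to the fixed side."""
            for u in neighbors:
                if u not in self.match_of:      # u is still unmatched
                    self.match_of[u] = v
                    self.match_of[v] = u
                    return (v, u)               # constant recourse: one edge added
            return None                         # every neighbor is already matched

        def delete_vertex(self, v):
            """v departs; its matched edge (if any) disappears with it.
            Re-matching the freed partner to keep the matching maximal,
            as the paper's algorithm does, is omitted in this sketch."""
            u = self.match_of.pop(v, None)
            if u is not None:
                del self.match_of[u]

    For example, insert_vertex('a', ['x', 'y']) matches 'a' to 'x' if 'x' is free, and leaves the matching untouched if both 'x' and 'y' are already matched.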

    A Multiobjective Branch-and-Bound Method for Planning Wastewater and Residual Management Systems

    A multiobjective branch-and-bound algorithm is proposed for analysing multiobjective fixed-charge network-flow problems, which are commonly found in water resources planning. Also proposed is a multiobjective imputed-value analysis that makes use of the branch-and-bound tree structure and allows comparison of the importance of facilities in the network, as represented by individual arcs or sets of arcs. The mathematical formulation and the analysis procedure of the method are described, and the potential usefulness of the method is demonstrated on two hypothetical example problems dealing with regional wastewater treatment and residual management systems. A FORTRAN program implementing the algorithm is available from the first author
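
    For readers unfamiliar with the underlying model, a generic single-objective fixed-charge network-flow formulation (standard textbook form, with symbols of our choosing rather than the paper's notation) looks as follows; the multiobjective variant simply carries one such cost expression per objective:

    \[
    \min \sum_{(i,j)\in A} \big(c_{ij}\, x_{ij} + f_{ij}\, y_{ij}\big)
    \quad \text{s.t.} \quad
    \sum_{j:(i,j)\in A} x_{ij} - \sum_{j:(j,i)\in A} x_{ji} = b_i \;\; \forall i \in N,
    \qquad 0 \le x_{ij} \le u_{ij}\, y_{ij}, \quad y_{ij} \in \{0,1\},
    \]

    where $x_{ij}$ is the flow on arc $(i,j)$, $y_{ij}$ indicates whether the facility on that arc is built, $c_{ij}$ and $f_{ij}$ are the variable and fixed charges, $u_{ij}$ is the arc capacity, and $b_i$ is the supply or demand at node $i$. The binary $y_{ij}$ variables are what the branch-and-bound procedure branches on.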

    Analysis of a network design problem


    The Power Of Locality In Network Algorithms

    Over the last decade we have witnessed the rapid proliferation of large-scale complex networks, spanning many social, information and technological domains. While many of the tasks that users of such networks face are essentially global and involve the network as a whole, the size of these networks is huge and the information available to users is only local. In this dissertation we show that even when faced with stringent locality constraints, one can still effectively solve prominent algorithmic problems on such networks. In the first part of the dissertation we present a natural algorithmic framework designed to model the behaviour of an external agent trying to solve a network optimization problem with limited access to the network data. Our study focuses on local information algorithms: sequential algorithms where the network topology is initially unknown and is revealed only within a local neighborhood of vertices that have been irrevocably added to the output set. We address both network coverage problems and network search problems. Our results include local information algorithms for coverage problems whose performance closely matches the best possible even when information about network structure is unrestricted. We also demonstrate a sharp threshold on the level of visibility required: at a certain visibility level it is possible to design algorithms that nearly match the best approximation possible even with full access to the network structure, but with any less information it is impossible to achieve a reasonable approximation. For preferential attachment networks, we obtain polylogarithmic approximations to the problem of finding the smallest subgraph that connects a subset of nodes and the problem of finding the highest-degree nodes. This is achieved by addressing a decade-old open question of Bollobás and Riordan on locally finding the root in a preferential attachment process.
    In the second part of the dissertation we focus on designing highly time-efficient local algorithms for central mining problems on complex networks that have been a focus of the research community for over a decade: finding a small set of influential nodes in the network, and fast ranking of nodes. Among our results is an essentially runtime-optimal local algorithm for the influence maximization problem in the standard independent cascades model of information diffusion and an essentially runtime-optimal local algorithm for the problem of returning all nodes with PageRank greater than a given threshold. Our work demonstrates that locality is powerful enough to allow efficient solutions to many central algorithmic problems on complex networks
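
    As a concrete illustration of the access model described above (our own sketch, not an algorithm from the dissertation), a local information algorithm may only query the adjacency lists of vertices it has already committed to its output; the selection rule used here is a simple hypothetical stand-in for the policies the dissertation actually studies.

    # Sketch of the local-information access model: `reveal(v)` returns v's
    # neighbor list and may only be called on vertices already added,
    # irrevocably, to the output set.
    def local_greedy(seed, reveal, budget):
        output = [seed]
        known = {seed: set(reveal(seed))}     # edges revealed so far
        frontier = set(known[seed])           # visible but uncommitted vertices
        while len(output) < budget and frontier:
            # Hypothetical rule: commit to the frontier vertex that is seen
            # from the most already-committed vertices (locally computable).
            v = max(frontier, key=lambda u: sum(u in nbrs for nbrs in known.values()))
            output.append(v)
            known[v] = set(reveal(v))         # committing to v reveals its neighborhood
            frontier |= known[v]
            frontier -= set(output)
        return output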

    Synthesizing stream control

    To manage reactive systems, controllers must coordinate time, data streams, and data transformations, all joined by the high-level perspective of their control flow. This control flow has to drive the system correctly and continuously, which turns its development into a challenge: the process is error-prone, time-consuming, unintuitive, and costly. An attractive alternative is to synthesize the system instead, where the developer only specifies the desired behavior and the synthesis engine automatically takes care of all the technical details. However, while current algorithms for the synthesis of reactive systems handle control well, they fail on complex data transformations due to the size of the resulting data space. Thus, to overcome the challenge of handling data explicitly, we must separate data and control. We introduce Temporal Stream Logic (TSL), a logic that reasons exclusively about the control flow of the controller while treating data and functional transformations as interchangeable black boxes. In TSL it is possible to specify control-flow properties independently of the complexity of the handled data. Furthermore, with TSL at hand, a synthesis engine can check for realizability even without a concrete implementation of the data transformations. We present a modular development framework that first uses synthesis to identify the high-level control flow of a program. If successful, the resulting control flow is then extended with concrete data transformations and compiled into a final executable. Our results also show that current synthesis approaches cannot immediately replace existing manual development workflows. During the development of a reactive system, the developer may still start from incomplete or faulty specifications that need to be refined after subsequent inspection. In the worst case, constraints are contradictory or important assumptions are missing, which leads to unrealizable specifications. In both scenarios, the developer needs additional feedback from the synthesis engine to debug errors and ultimately improve the system specification. To this end, we explore two further improvements. On the one hand, we consider output-sensitive synthesis metrics, which allow the synthesis of simple and well-structured solutions that help the developer understand and verify the underlying behavior quickly. On the other hand, we consider the extension with delay, the need for which is a frequent cause of unrealizability. With both methods at hand, we resolve the aforementioned problems and thereby help the developer to effectively create a safe and correct reactive system.
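
    To give a flavor of the logic (a hand-made toy example in approximate TSL-style notation, not a formula from this thesis; pressed, pauseBtn, pause, player, and track are arbitrary uninterpreted symbols), a TSL guarantee constrains when an update is applied while leaving the meaning of the function and predicate symbols open:

    \[
    \square\big(\mathit{pressed}\ \mathit{pauseBtn} \;\rightarrow\; \lozenge\, [\mathit{player} \leftarrowtail \mathit{pause}\ \mathit{track}]\big)
    \]

    That is, whenever the pause button is pressed, the player stream must eventually be updated by applying the black-box function pause to the current track; the synthesis engine decides only when this update fires, not what pause computes.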

    Prediction Methods for Structured Data: Graphs, Orders, and Time Series (構造化データに対する予測手法:グラフ,順序,時系列)

    Kyoto University doctoral dissertation (new-system course doctorate), Doctor of Informatics; degree numbers 甲第23439号 and 情博第769号; call number 新制||情||131 (University Library). Graduate School of Informatics, Department of Intelligence Science and Technology, Kyoto University. Examination committee: Professor Hisashi Kashima (chief examiner), Professor Akihiro Yamamoto, Professor Tatsuya Akutsu. Qualifies under Article 4, Paragraph 1 of the Degree Regulations. Doctor of Informatics, Kyoto University. DFA

    Learning to compare nodes in branch and bound with graph neural networks

    In computer science, solving NP-hard problems in a reasonable time is of great importance, as in supply chain optimization, scheduling, routing, multiple biological sequence alignment, inference in probabilistic graphical models, and even some problems in cryptography. In practice, we model many of them as mixed integer linear optimization problems, which we solve using the branch-and-bound framework. An algorithm of this style divides a search space to explore it recursively (branch) and obtains optimality bounds by solving linear relaxations in such sub-spaces (bound). To specify an algorithm, one must set several parameters, such as how to explore search spaces, how to divide a search space once it has been explored, or how to tighten these linear relaxations. These policies can significantly influence solving performance. This work focuses on a novel method for deriving a search policy, that is, a rule for selecting the next sub-space to explore given a current partitioning, using deep machine learning. First, we collect data summarizing which sub-spaces contain the optimum and which do not. By representing these sub-spaces as bipartite graphs encoding their characteristics, we train a graph neural network, by supervised learning, to determine the probability that a sub-space contains the optimal solution. This design is particularly useful because the machine learning model can adapt to problems of different sizes without modification. We show that our approach beats that of our competitors, consisting of simpler machine learning models trained from solver statistics, as well as the default policy of SCIP, a state-of-the-art open-source solver, on three NP-hard benchmarks: generalized independent set, fixed-charge multicommodity network flow, and maximum satisfiability problems
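
    A minimal sketch of where such a learned policy plugs into branch and bound (our illustration, not the thesis's implementation; score_node is a hypothetical stub standing in for the trained graph neural network over a node's bipartite-graph encoding, and relax_bound, branch, is_integral, and value are generic problem-specific callbacks):

    import heapq

    def score_node(node):
        # Hypothetical stub: a trained GNN would estimate here the probability
        # that this node's sub-space contains the optimal solution.
        return 0.5

    def branch_and_bound(root, relax_bound, branch, is_integral, value):
        """Minimization: `relax_bound` gives the LP lower bound of a node,
        `branch` splits a node into children, `is_integral` and `value`
        evaluate candidate solutions."""
        best_val, best_sol = float("inf"), None
        tie = 0                                    # tie-breaker for the heap
        frontier = [(-score_node(root), tie, root)]
        while frontier:
            _, _, node = heapq.heappop(frontier)   # next node chosen by learned score
            if relax_bound(node) >= best_val:      # bound: prune nodes that cannot improve
                continue
            if is_integral(node):
                if value(node) < best_val:
                    best_val, best_sol = value(node), node
                continue
            for child in branch(node):             # branch: partition the sub-space
                tie += 1
                heapq.heappush(frontier, (-score_node(child), tie, child))
        return best_val, best_sol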

    On The Multiparty Communication Complexity of Testing Triangle-Freeness

    In this paper we initiate the study of property testing in simultaneous and non-simultaneous multi-party communication complexity, focusing on testing triangle-freeness in graphs. We consider the coordinator model, where we have $k$ players receiving private inputs and a coordinator who receives no input; the coordinator can communicate with all the players, but the players cannot communicate with each other. In this model, we ask: if an input graph is divided between the players, with each player receiving some of the edges, how many bits do the players and the coordinator need to exchange to determine if the graph is triangle-free, or far from triangle-free? For general communication protocols, we show that $\tilde{O}(k(nd)^{1/4}+k^2)$ bits are sufficient to test triangle-freeness in graphs of size $n$ with average degree $d$ (the degree need not be known in advance). For simultaneous protocols, where there is only one communication round, we give a protocol that uses $\tilde{O}(k\sqrt{n})$ bits when $d = O(\sqrt{n})$ and $\tilde{O}(k(nd)^{1/3})$ bits when $d = \Omega(\sqrt{n})$; here, again, the average degree $d$ does not need to be known in advance. We show that for average degree $d = O(1)$, our simultaneous protocol is asymptotically optimal up to logarithmic factors. For higher degrees, we are not able to give lower bounds on testing triangle-freeness, but we give evidence that the problem is hard by showing that finding an edge that participates in a triangle is hard, even when promised that at least a constant fraction of the edges must be removed in order to make the graph triangle-free. Comment: To Appear in PODC 201
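
    As a quick consistency check of the two simultaneous bounds (our own algebra, not part of the abstract), they meet at the threshold $d = \Theta(\sqrt{n})$:

    \[
    \tilde{O}\big(k\,(nd)^{1/3}\big)\Big|_{d=\sqrt{n}}
    = \tilde{O}\big(k\,(n\cdot n^{1/2})^{1/3}\big)
    = \tilde{O}\big(k\,n^{1/2}\big)
    = \tilde{O}\big(k\sqrt{n}\big),
    \]

    so the two regimes match exactly at the crossover point between the low-degree and high-degree protocols.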