54 research outputs found

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Get PDF

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    Get PDF

    LIPIcs, Volume 274, ESA 2023, Complete Volume

    Get PDF

    LIPIcs, Volume 244, ESA 2022, Complete Volume

    Get PDF

    A Pragmatic Approach to Stateful Partial Order Reduction

    Full text link
    Partial order reduction (POR) is a classic technique for dealing with the state explosion problem in model checking of concurrent programs. Theoretical optimality, i.e., avoiding the enumeration of equivalent interleavings, does not necessarily guarantee optimal overall performance of the model checking algorithm. The computational overhead required to guarantee optimality can far outweigh any benefit an algorithm gains from exploring a smaller state space of interleavings. With a focus on overall performance, we propose new algorithms for stateful POR based on the recently proposed source sets, which are less precise but more efficient than the state of the art in practice. We evaluate efficiency using an implementation that extends Java Pathfinder in the context of verifying concurrent data structures.
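
    As a rough intuition for the kind of reduction at play, the Python sketch below explores the interleavings of a toy write-only program twice: once exhaustively, and once with a classic sleep-set reduction, a precursor of the source-set technique the paper builds on. The threads, the write-only operations, and the variable-based independence check are illustrative assumptions; this is not the paper's stateful source-set algorithm.

```python
# Toy illustration of partial order reduction via sleep sets.
# Each thread is a fixed sequence of writes (variable, value); two next
# operations are independent iff they touch different variables.
THREADS = [
    [("x", 1), ("y", 1)],   # thread 0
    [("x", 2)],             # thread 1
    [("z", 3)],             # thread 2
]

def step(pcs, mem, i):
    """Execute the next operation of thread i."""
    var, val = THREADS[i][pcs[i]]
    new_pcs = list(pcs); new_pcs[i] += 1
    new_mem = dict(mem); new_mem[var] = val
    return tuple(new_pcs), new_mem

def independent(pcs, i, j):
    """All operations here are writes, so same variable = dependent."""
    return THREADS[i][pcs[i]][0] != THREADS[j][pcs[j]][0]

def explore(pcs, mem, sleep, use_sleep, stats):
    enabled = [i for i in range(len(THREADS)) if pcs[i] < len(THREADS[i])]
    if not enabled:
        stats["runs"] += 1                       # one complete interleaving
        stats["finals"].add(frozenset(mem.items()))
        return
    done = set()
    for i in enabled:
        if use_sleep and i in sleep:
            continue
        # Transitions that commute with i stay asleep in the child state.
        child_sleep = {j for j in (sleep | done) if independent(pcs, i, j)}
        new_pcs, new_mem = step(pcs, mem, i)
        explore(new_pcs, new_mem, child_sleep, use_sleep, stats)
        done.add(i)

for use_sleep in (False, True):
    stats = {"runs": 0, "finals": set()}
    explore((0, 0, 0), {}, set(), use_sleep, stats)
    label = "sleep-set reduction" if use_sleep else "full exploration"
    print(f"{label}: {stats['runs']} interleavings, "
          f"{len(stats['finals'])} distinct final states")
```

    Both runs reach the same set of final states, but the reduced run visits fewer interleavings; the paper's concern is making this kind of pruning pay off when explored states are also cached, i.e., in the stateful setting.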

    Certifying Correctness for Combinatorial Algorithms: by Using Pseudo-Boolean Reasoning

    Get PDF
    Over the last decades, dramatic improvements in combinatorial optimisation algorithms have significantly impacted artificial intelligence, operations research, and other areas. These advances, however, are achieved through highly sophisticated algorithms that are difficult to verify and prone to implementation errors that can cause incorrect results. A promising approach to detect wrong results is to use certifying algorithms that produce not only the desired output but also a certificate or proof of correctness of the output. An external tool can then verify the proof to determine that the given answer is valid. In the Boolean satisfiability (SAT) community, this concept is well established in the form of proof logging, which has become the standard solution for generating trustworthy outputs. The problem is that there are still some SAT solving techniques for which proof logging is challenging and not yet used in practice. Additionally, there are many formalisms more expressive than SAT, such as constraint programming, various graph problems and maximum satisfiability (MaxSAT), for which efficient proof logging is out of reach for state-of-the-art techniques. This work develops a new proof system building on the cutting planes proof system and operating on pseudo-Boolean constraints (0-1 linear inequalities). We explain how such machine-verifiable proofs can be created for various problems, including parity reasoning, symmetry and dominance breaking, constraint programming, subgraph isomorphism and maximum common subgraph problems, and pseudo-Boolean problems. We implement and evaluate the resulting algorithms and a verifier for the proof format, demonstrating that the approach is practical for a wide range of problems. We are optimistic that the proposed proof system is suitable for designing certifying variants of algorithms in pseudo-Boolean optimisation, MaxSAT and beyond.
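
    To make the central objects concrete, here is a minimal Python sketch of normalized pseudo-Boolean constraints and three cutting-planes rules (addition, multiplication, and division with rounding) that such proof systems build on. The constraint encoding and the worked derivation are illustrative assumptions; negated literals and the actual proof-logging format are omitted.

```python
# Minimal cutting-planes sketch over pseudo-Boolean constraints.
# A constraint is (coeffs, rhs) meaning sum(c_i * x_i) >= rhs with
# nonnegative integer coefficients and 0/1 variables.
from math import ceil

def add(c1, c2):
    """Addition rule: sound because both inequalities hold simultaneously."""
    coeffs1, rhs1 = c1
    coeffs2, rhs2 = c2
    coeffs = dict(coeffs1)
    for var, c in coeffs2.items():
        coeffs[var] = coeffs.get(var, 0) + c
    return coeffs, rhs1 + rhs2

def multiply(c, k):
    """Multiplication by a positive integer k."""
    coeffs, rhs = c
    return {v: k * a for v, a in coeffs.items()}, k * rhs

def divide(c, d):
    """Division by d with rounding up: sound for 0/1 variables with
    nonnegative coefficients."""
    coeffs, rhs = c
    return {v: ceil(a / d) for v, a in coeffs.items()}, ceil(rhs / d)

def show(c):
    coeffs, rhs = c
    lhs = " + ".join(f"{a}*{v}" for v, a in sorted(coeffs.items()))
    return f"{lhs} >= {rhs}"

# From the three "at least one of each pair" constraints, derive that
# at least two of x1, x2, x3 must be true:
c12 = ({"x1": 1, "x2": 1}, 1)
c13 = ({"x1": 1, "x3": 1}, 1)
c23 = ({"x2": 1, "x3": 1}, 1)

summed = add(add(c12, c13), c23)   # 2*x1 + 2*x2 + 2*x3 >= 3
derived = divide(summed, 2)        # x1 + x2 + x3 >= ceil(3/2) = 2
print(show(summed))
print(show(derived))
```

    The division rule with rounding is what makes cutting planes stronger than plain linear combination: no nonnegative linear combination of the three input constraints yields a right-hand side better than 3/2, while rounding certifies the integer bound of 2.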

    LIPIcs, Volume 248, ISAAC 2022, Complete Volume

    Get PDF

    Scalable Algorithms for the Analysis of Massive Networks

    Get PDF
    Network analysis aims to unveil non-trivial insights from networked data by studying relationship patterns between the entities of a network. A popular such insight is to quantify the importance of an entity with respect to the others according to some criteria. Another is to find the most suitable matching partner for each participant of a network, given the pairwise preferences of the participants to be matched with each other, known as Maximum Weighted Matching (MWM). Since the notion of importance is tied to the application under consideration, numerous centrality measures have been introduced. Many of these measures, however, were conceived at a time when computing power was very limited and networks were much smaller than today's, so scalability to large datasets was not a concern. Today, massive networks with millions of edges are ubiquitous, and a complete exact computation of traditional centrality measures is often too time-consuming. This issue is amplified if the objective is to find the group of k vertices that is most central as a group. Scalable algorithms to identify highly central (groups of) vertices in massive graphs are thus of pivotal importance for large-scale network analysis. In addition to their size, today's networks often evolve over time, which poses the challenge of efficiently updating results after a change occurs. Hence, efficient dynamic algorithms are essential for modern network analysis pipelines. In this work, we propose scalable algorithms for identifying important vertices in a network, and for efficiently updating them in evolving networks. In real-world graphs with hundreds of millions of edges, most of our algorithms require seconds to a few minutes to perform these tasks, a substantial improvement over the state of the art. Further, we extend a state-of-the-art algorithm for MWM to dynamic graphs. Experiments show that our dynamic MWM algorithm handles updates in graphs with billions of edges in milliseconds.
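
    As a purely illustrative contrast to the scalable methods the thesis develops, the sketch below computes exact harmonic centrality with one BFS per vertex and reports the top-k vertices; the O(n(n+m)) cost of this naive approach is precisely what becomes infeasible on graphs with hundreds of millions of edges. The example graph and the choice of harmonic centrality are assumptions for illustration.

```python
# Naive exact harmonic centrality (sum of 1/d(u, v) over reachable v)
# with top-k selection; one BFS per vertex.
from collections import deque

def harmonic_centrality(adj):
    """O(n*(n+m)): runs a full BFS from every vertex."""
    scores = {}
    for source in adj:
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        scores[source] = sum(1 / d for d in dist.values() if d > 0)
    return scores

# A small undirected example graph as adjacency lists.
adj = {
    0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3],
    3: [1, 2, 4], 4: [3, 5], 5: [4],
}
scores = harmonic_centrality(adj)
k = 3
top_k = sorted(scores, key=scores.get, reverse=True)[:k]
print("top", k, "vertices:", top_k)
```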