
    TRIANGULATION PROBLEMS ON GEOMETRIC GRAPHS - SAMPLING OVER CONVEX TRIANGULATIONS

    A geometric graph is a set of points V in the plane together with a set of straight line segments (edges) E whose endpoints lie in V; it is naturally associated with the abstract graph G(V,E). When studying its thickness, i.e. the partition of its edges into crossing-free subsets (an NP-hard optimization problem), the problem of triangulation existence as a crossing-free subset T of the edges arises naturally, since a triangulation of V is the largest such set that can be defined on V. In this Thesis, we examine a family of triangulation existence problems and classify them with respect to the complexity of both their decision and their counting versions. The general decision problem is the only one previously studied in the literature (Lloyd, 1977, NP-hard), while we study both the restriction to convex geometric graphs and an "intermediate" problem of triangulated polygon existence, establishing a new 2 x 2 table of results. In the final chapter, we modify our framework in order to build an exact uniform sampling and optimal coding algorithm for convex triangulations, which outperforms every algorithm known to date.
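
    The thesis's own sampler is not spelled out in this abstract. As a point of reference, the textbook counting-based sampler below draws a uniformly random triangulation of a convex n-gon, using the standard fact that a convex m-gon has Catalan(m-2) triangulations; it is a minimal sketch of the classical baseline, not the algorithm the thesis proposes or the ones it claims to outperform.

    ```python
    import random
    from math import comb

    def catalan(n):
        return comb(2 * n, n) // (n + 1)

    def num_triangulations(m):
        # a convex m-gon has catalan(m - 2) triangulations;
        # a single edge (m == 2) counts as one empty triangulation
        return 1 if m == 2 else catalan(m - 2)

    def sample_triangulation(vertices, rng=random):
        """Uniformly random triangulation of the convex polygon whose
        vertices are listed in cyclic order; returns a list of triangles."""
        n = len(vertices)
        if n < 3:
            return []
        # the boundary edge (vertices[0], vertices[-1]) lies in exactly one
        # triangle (vertices[0], vertices[k], vertices[-1]); choose k with
        # probability proportional to the triangulations it leaves possible
        weights = [num_triangulations(k + 1) * num_triangulations(n - k)
                   for k in range(1, n - 1)]
        r = rng.randrange(sum(weights))
        for k, w in enumerate(weights, start=1):
            if r < w:
                break
            r -= w
        triangle = (vertices[0], vertices[k], vertices[-1])
        return ([triangle]
                + sample_triangulation(vertices[:k + 1], rng)
                + sample_triangulation(vertices[k:], rng))

    # e.g. sample_triangulation(list(range(8))) returns the 6 triangles
    # of a uniformly random triangulation of the convex octagon
    ```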

    On primal-dual schema for the minimum satisfiability problem

    The satisfiability problem was the first problem shown to be NP-complete [8, 28]. In this thesis, we study the minimization version of the satisfiability problem, called MINSAT: given a set of boolean variables and a set of clauses, each a disjunction of variables, the goal is to assign boolean values to the variables so that the number of satisfied clauses is minimized. We use linear programming and the primal-dual method to study the problem. We construct the linear program of MINSAT and of its restricted version, and we propose two combinatorial methods to solve the dual of the restricted primal of MINSAT. These two algorithms also obtain an integral solution to the dual of the MINSAT problem. Lastly, we compare our proposed algorithms against the simplex method.
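
    For concreteness, here is a minimal brute-force sketch of the MINSAT objective described above, using DIMACS-style signed-integer literals (a convention assumed here, not taken from the thesis). It only illustrates the problem definition; the thesis's primal-dual algorithms instead work on the linear programming relaxation.

    ```python
    from itertools import product

    def minsat_bruteforce(num_vars, clauses):
        """Exhaustive MINSAT solver for tiny instances. Clauses use
        DIMACS-style literals: +i for variable x_i, -i for its negation.
        Returns (fewest satisfied clauses, witnessing assignment)."""
        best_count, best_assignment = len(clauses) + 1, None
        for bits in product((False, True), repeat=num_vars):
            satisfied = sum(
                any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
                for clause in clauses
            )
            if satisfied < best_count:
                best_count, best_assignment = satisfied, bits
        return best_count, best_assignment

    # e.g. minsat_bruteforce(2, [[1], [-1], [1, 2]]) == (1, (False, False)):
    # setting both variables false satisfies only the clause [-1]
    ```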

    A polynomial upper bound for the mixing time of edge rotations on planar maps

    We consider a natural local dynamic on the set of all rooted planar maps with n edges that is in some sense analogous to "edge flip" Markov chains, which have been considered before on a variety of combinatorial structures (triangulations of the n-gon and quadrangulations of the sphere, among others). We provide the first polynomial upper bound for the mixing time of this "edge rotation" chain on planar maps: we show that the spectral gap of the edge rotation chain is bounded below by an appropriate constant times n^{-11/2}. In doing so, we provide a partially new proof of the fact that the same bound applies to the spectral gap of edge flips on quadrangulations, which makes it possible to generalise a recent result of the author and Stauffer to a chain that relates to edge rotations via Tutte's bijection.
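
    The paper's edge-rotation chain acts on rooted planar maps, which take more machinery to represent; the sketch below instead illustrates the simpler, analogous edge-flip chain on triangulations of a convex n-gon that the abstract cites as precedent. The representation and function names are illustrative, not taken from the paper.

    ```python
    import random
    from collections import defaultdict

    def fan_triangulation(n):
        # the fan triangulation of the convex n-gon, as sorted vertex triples
        return [(0, i, i + 1) for i in range(1, n - 1)]

    def flip_step(triangles, rng=random):
        """One step of the edge-flip chain: pick a uniformly random internal
        diagonal and replace it with the opposite diagonal of the
        quadrilateral formed by its two triangles (always legal when the
        polygon is convex)."""
        edge_to_tris = defaultdict(list)
        for idx, (a, b, c) in enumerate(triangles):
            for edge in ((a, b), (b, c), (a, c)):
                edge_to_tris[edge].append(idx)
        # internal diagonals are the edges shared by exactly two triangles
        diagonals = [e for e, owners in edge_to_tris.items()
                     if len(owners) == 2]
        a, b = rng.choice(diagonals)
        i, j = edge_to_tris[(a, b)]
        c, d = sorted((set(triangles[i]) | set(triangles[j])) - {a, b})
        triangles[i] = tuple(sorted((a, c, d)))
        triangles[j] = tuple(sorted((b, c, d)))
    ```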

    Proceedings of SAT Competition 2016 : Solver and Benchmark Descriptions

    Peer reviewed

    Logic learning and optimized drawing: two hard combinatorial problems

    Nowadays, information extraction from large datasets is a recurring operation in countless fields of application. The purpose guiding this thesis is to follow the data flow along its journey, describing some hard combinatorial problems that arise from two key processes, one consecutive to the other: information extraction and representation. The approaches considered here focus mainly on metaheuristic algorithms, to address the need for fast and effective optimization methods. The problems studied include data extraction instances, such as Supervised Learning in Logic Domains and the Max Cut-Clique Problem, as well as two different Graph Drawing Problems. Moreover, stemming from these main topics, additional themes are discussed, namely two different approaches to handling Information Variability in Combinatorial Optimization Problems (COPs), and Topology Optimization of lightweight concrete structures.

    Parallel and Flow-Based High Quality Hypergraph Partitioning

    Balanced hypergraph partitioning is a classic NP-hard optimization problem that is a fundamental tool in such diverse disciplines as VLSI circuit design, route planning, sharding distributed databases, optimizing communication volume in parallel computing, and accelerating the simulation of quantum circuits. Given a hypergraph and an integer k, the task is to divide the vertices into k disjoint blocks with bounded size, while minimizing an objective function on the hyperedges that span multiple blocks. In this dissertation we consider the most commonly used objective, the connectivity metric, where we aim to minimize the number of different blocks connected by each hyperedge.

    The most successful heuristic for balanced partitioning is the multilevel approach, which consists of three phases. In the coarsening phase, vertex clusters are contracted to obtain a sequence of structurally similar but successively smaller hypergraphs. Once sufficiently small, an initial partition is computed. Lastly, the contractions are successively undone in reverse order, and an iterative improvement algorithm is employed to refine the projected partition on each level.

    An important aspect in designing practical heuristics for optimization problems is the trade-off between solution quality and running time. The appropriate trade-off depends on the specific application, the size of the data sets, and the computational resources available to solve the problem. Existing algorithms are either slow, sequential, and offer high solution quality, or are simple, fast, easy to parallelize, and offer low quality. While this trade-off cannot be avoided entirely, our goal is to close the gaps as much as possible. We achieve this by improving the state of the art in all non-trivial areas of the trade-off landscape with only a few techniques, employed in two different ways. Furthermore, most research on parallelization has focused on distributed memory, which neglects the greater flexibility of shared-memory algorithms and the wide availability of commodity multi-core machines. In this thesis, we therefore design and revisit fundamental techniques for each phase of the multilevel approach, and develop highly efficient shared-memory parallel implementations thereof.

    We consider two iterative improvement algorithms, one based on the Fiduccia-Mattheyses (FM) heuristic and one based on label propagation. For these, we propose a variety of techniques to improve the accuracy of gains when moving vertices in parallel, as well as low-level algorithmic improvements. For coarsening, we present a parallel variant of greedy agglomerative clustering with a novel method to resolve cluster join conflicts on the fly. Combined with a preprocessing phase for coarsening based on community detection, a portfolio of from-scratch partitioning algorithms, and recursive partitioning with work stealing, we obtain our first parallel multilevel framework. It is the fastest partitioner known and achieves medium-high quality, beating all parallel partitioners and coming close to the highest-quality sequential partitioner.

    Our second contribution is a parallelization of an n-level approach, where only one vertex is contracted and uncontracted on each level. This extreme approach aims at high solution quality via very fine-grained, localized refinement, but seems inherently sequential. We devise an asynchronous n-level coarsening scheme based on a hierarchical decomposition of the contractions, as well as a batch-synchronous uncoarsening, and later a fully asynchronous uncoarsening. In addition, we adapt our refinement algorithms, and also use the preprocessing and portfolio. This scheme is highly scalable and achieves the same quality as the highest-quality sequential partitioner (which is based on the same components), but is of course slower than our first framework due to the fine-grained uncoarsening.

    The last ingredient for high quality is an iterative improvement algorithm based on maximum flows. In the sequential setting, we first improve an existing idea by solving incremental maximum flow problems, which leads to smaller cuts and is faster due to engineering efforts. Subsequently, we parallelize the maximum flow algorithm and schedule refinements in parallel.

    Beyond striving for the highest quality, we present a deterministically parallel partitioning framework. We develop deterministic versions of the preprocessing, coarsening, and label propagation refinement. Experimentally, we demonstrate that the penalties for determinism in terms of partition quality and running time are very small.

    All of our claims are validated through extensive experiments, comparing our algorithms with state-of-the-art solvers on large and diverse benchmark sets. To foster further research, we make our contributions available in our open-source framework Mt-KaHyPar. While it seems inevitable that, with ever-increasing problem sizes, we must transition to distributed-memory algorithms, the study of shared-memory techniques is not in vain. With the multilevel approach, even the inherently slow techniques have a role to play in fast systems, as they can be employed to boost quality on coarse levels at little expense. Similarly, techniques for shared-memory parallelism are important, both as soon as a coarse graph fits into memory, and as local building blocks in the distributed algorithm.
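
    As a small illustration of the objective discussed above, the sketch below computes the connectivity metric (commonly written as the sum over hyperedges e of w(e)(lambda(e) - 1), where lambda(e) is the number of blocks e connects) directly from its definition; it is a plain transcription of the metric, not Mt-KaHyPar's implementation.

    ```python
    def connectivity(hyperedges, block_of, weights=None):
        """Connectivity metric of a partition: every hyperedge pays
        (number of distinct blocks its pins touch - 1) times its weight."""
        total = 0
        for idx, pins in enumerate(hyperedges):
            spanned = len({block_of[v] for v in pins})
            total += (spanned - 1) * (1 if weights is None else weights[idx])
        return total

    # e.g. connectivity([[0, 1, 2], [2, 3]], {0: 0, 1: 0, 2: 1, 3: 1}) == 1:
    # the first hyperedge spans two blocks (pays 1), the second only one
    ```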

    Fifth Biennial Report : June 1999 - August 2001


    Combinatorial Optimization

    This report summarizes the meeting on Combinatorial Optimization where new and promising developments in the field were discussed.