1,637 research outputs found

    Query processing of spatial objects: Complexity versus Redundancy

    The management of complex spatial objects in applications such as geography and cartography imposes stringent new requirements on spatial database systems, in particular on efficient query processing. As shown in previous work, the performance of spatial query processing can be improved by decomposing complex spatial objects into simple components. Up to now, only decomposition techniques generating a linear number of very simple components, e.g. triangles or trapezoids, have been considered. In this paper, we investigate the natural trade-off between the complexity of the components and the redundancy, i.e. the number of components, with respect to its effect on efficient query processing. In particular, we present two new decomposition methods that achieve a better balance between the complexity and the number of components than previously known techniques. We compare these new decomposition methods to the traditional undecomposed representation, as well as to the well-known decomposition into convex polygons, with respect to their performance in spatial query processing. This comparison shows that, for a wide range of query selectivity, the new decomposition techniques clearly outperform both the undecomposed representation and the convex decomposition method. More important than the absolute gain in performance, by a factor of up to an order of magnitude, is the robust performance of our new decomposition techniques over the whole range of query selectivity.
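    The triangle end of this spectrum can be made concrete with classical ear clipping, which decomposes a simple polygon with n vertices into n - 2 triangles: maximal redundancy, minimal per-component complexity. Below is a minimal Python sketch of that baseline (assuming a simple polygon given in counter-clockwise order, with no repeated vertices and no three collinear vertices); it illustrates the kind of decomposition the paper's techniques trade off against, not the new methods themselves.

```python
def cross(o, a, b):
    """Signed area of triangle (o, a, b); > 0 iff the turn o->a->b is a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_triangle(p, a, b, c):
    """True if point p lies inside or on the boundary of triangle (a, b, c)."""
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return not ((d1 < 0 or d2 < 0 or d3 < 0) and (d1 > 0 or d2 > 0 or d3 > 0))

def ear_clip(poly):
    """Decompose a simple CCW polygon into len(poly) - 2 triangles."""
    verts = list(poly)
    triangles = []
    while len(verts) > 3:
        n = len(verts)
        for i in range(n):
            a, b, c = verts[i - 1], verts[i], verts[(i + 1) % n]
            if cross(a, b, c) <= 0:          # reflex corner: not an ear
                continue
            others = (p for p in verts if p not in (a, b, c))
            if any(in_triangle(p, a, b, c) for p in others):
                continue                      # another vertex blocks this ear
            triangles.append((a, b, c))
            del verts[i]
            break
    triangles.append(tuple(verts))
    return triangles

# A concave quadrilateral decomposes into 2 triangles.
print(ear_clip([(0, 0), (4, 0), (1, 1), (0, 4)]))
```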

    The Lazy Flipper: MAP Inference in Higher-Order Graphical Models by Depth-limited Exhaustive Search

    This article presents a new search algorithm for the NP-hard problem of optimizing functions of binary variables that decompose according to a graphical model. It can be applied to models of any order and structure. The main novelty is a technique to constrain the search space based on the topology of the model. When pursued to the full search depth, the algorithm is guaranteed to converge to a global optimum, passing through a series of monotonically improving local optima that are guaranteed to be optimal within a given and increasing Hamming distance. For a search depth of 1, it specializes to Iterated Conditional Modes. Between these extremes, a useful trade-off between approximation quality and runtime is established. Experiments on models derived from both illustrative and real problems show that approximations found with limited search depth match or improve those obtained by state-of-the-art methods based on message passing and linear programming. C++ source code is available from http://hci.iwr.uni-heidelberg.de/software.ph
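    For intuition on the depth-1 end, here is a minimal Python sketch of Iterated Conditional Modes on a pairwise binary energy, the special case the article mentions; the data layout (dicts of unary and pairwise cost tables) is an illustrative choice of mine, not the article's interface.

```python
def icm(unary, pairwise, x=None):
    """Iterated Conditional Modes: greedily flip single binary variables
    until no flip lowers the energy (a local optimum at Hamming radius 1).

    unary[i][v]            -- cost of variable i taking value v in {0, 1}
    pairwise[(i, j)][u][v] -- cost of (x_i, x_j) = (u, v), keyed with i < j
    """
    n = len(unary)
    x = list(x) if x is not None else [0] * n
    nbrs = {i: [] for i in range(n)}
    for (i, j), theta in pairwise.items():
        nbrs[i].append((j, theta, True))    # i indexes the first axis
        nbrs[j].append((i, theta, False))   # i indexes the second axis
    improved = True
    while improved:
        improved = False
        for i in range(n):
            def local(v):                   # sum of energy terms touching x_i
                e = unary[i][v]
                for j, theta, first in nbrs[i]:
                    e += theta[v][x[j]] if first else theta[x[j]][v]
                return e
            if local(1 - x[i]) < local(x[i]):
                x[i] = 1 - x[i]
                improved = True
    return x

# Two variables that prefer to disagree, with a bias on the second one.
print(icm(unary=[[0.0, 1.0], [0.5, 0.5]],
          pairwise={(0, 1): [[1.0, 0.0], [0.0, 1.0]]}))   # -> [0, 1]
```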

    Matheuristics: using mathematics for heuristic design

    Matheuristics are heuristic algorithms based on mathematical tools, such as those provided by mathematical programming, that are structurally general enough to be applied to different problems with little adaptation to their abstract structure. The result can be metaheuristic hybrids with components derived from the mathematical model of the problem of interest, but the mathematical techniques themselves can also define general heuristic solution frameworks. In this paper, we focus our attention on mathematical programming and its contributions to developing effective heuristics. We briefly describe the mathematical tools available and then some matheuristic approaches, reporting representative examples from the literature. We also take the opportunity to provide some ideas for possible future development.
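    As a small, concrete example of the kind of mathematical programming building block such heuristics use, the sketch below solves the LP relaxation of a 0-1 knapsack and rounds the result; it assumes the open-source PuLP modeller (with its bundled CBC solver) is installed, and illustrates the generic relax-and-round idea rather than any specific method from the paper.

```python
import pulp

def knapsack_lp_rounding(values, weights, capacity):
    """Solve the LP relaxation of a 0-1 knapsack, then round down.
    The knapsack relaxation leaves at most one variable fractional, so
    rounding it to 0 yields a feasible (if possibly suboptimal) solution."""
    n = len(values)
    prob = pulp.LpProblem("knapsack_relaxation", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x{i}", lowBound=0, upBound=1) for i in range(n)]
    prob += pulp.lpSum(values[i] * x[i] for i in range(n))              # objective
    prob += pulp.lpSum(weights[i] * x[i] for i in range(n)) <= capacity  # capacity
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [i for i in range(n) if x[i].value() > 0.999]

# Item 2 ends up fractional in the relaxation and is dropped.
print(knapsack_lp_rounding([60, 100, 120], [10, 20, 30], 50))   # -> [0, 1]
```

    A full matheuristic would then improve such a rounded solution, for example by local search over the items the relaxation left out.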

    Tree Diet: Reducing the Treewidth to Unlock FPT Algorithms in RNA Bioinformatics

    Hard graph problems are ubiquitous in bioinformatics, inspiring the design of specialized fixed-parameter tractable (FPT) algorithms, many of which rely on a combination of tree decomposition and dynamic programming. The time/space complexities of such approaches hinge critically on low values for the treewidth tw of the input graph. In order to extend their scope of applicability, we introduce the Tree-Diet problem, i.e. the removal of a minimal set of edges such that a given tree decomposition can be slimmed down to a prescribed treewidth tw'. Our rationale is that the time gained thanks to a smaller treewidth in a parameterized algorithm compensates for the extra post-processing needed to take deleted edges into account. Our core result is an FPT dynamic programming algorithm for Tree-Diet, using 2^{O(tw)}·n time and space. We complement this result with parameterized complexity lower bounds for stronger variants (e.g., NP-hardness when tw' or tw-tw' is constant). We propose a prototype implementation of our approach, which we apply to difficult instances of selected RNA-based problems: RNA design, sequence-structure alignment, and search for pseudoknotted RNAs in genomes, with very encouraging results. This work paves the way for a wider adoption of tree-decomposition-based algorithms in bioinformatics.
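    The rationale can be quantified with a toy cost model. In the Python sketch below, the 2^tw * n expression is a deliberate simplification of the 2^{O(tw)}·n bounds (constants suppressed); it only illustrates why shaving a few units of treewidth can outweigh the post-processing cost of the deleted edges.

```python
def width(bags):
    """Width of a tree decomposition: the largest bag size minus one."""
    return max(len(bag) for bag in bags) - 1

def modeled_dp_cost(n, tw):
    """Illustrative cost of a treewidth-parameterized DP: 2^tw * n."""
    return (2 ** tw) * n

# A path-like decomposition of width 2, then the modeled payoff of a "diet".
print(width([{0, 1, 2}, {1, 2, 3}, {2, 3, 4}]))   # -> 2
n = 10_000
for tw, target in [(12, 8), (20, 14)]:
    factor = modeled_dp_cost(n, tw) / modeled_dp_cost(n, target)
    print(f"tw {tw} -> {target}: modeled speedup x{factor:.0f}")   # x16, x64
```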

    A Literature Review On Combining Heuristics and Exact Algorithms in Combinatorial Optimization

    There are several approaches to solving hard optimization problems. Mathematical programming techniques, such as (integer) linear programming-based methods, and metaheuristic approaches are two extremely effective streams for combinatorial problems. The two were created by different research communities working more or less in isolation from one another. Only in recent years have many scholars noticed the advantages and enormous potential of hybrids combining mathematical programming methodologies and metaheuristics. In practice, many problems can be solved much better by exploiting synergies between these approaches than by “pure” classical algorithms. The key question is how to integrate mathematical programming methods and metaheuristics to achieve such benefits. This paper reviews existing techniques for such combinations and provides examples of their use for vehicle routing problems.
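    One recurring integration pattern in this literature is fix-and-optimize: a metaheuristic repeatedly fixes most variables and hands the small induced subproblem to an exact method. The Python sketch below illustrates the pattern on a toy binary objective; exhaustive enumeration stands in for the exact solver, and all names and the objective are illustrative assumptions, not taken from the reviewed papers.

```python
import itertools
import random

def fix_and_optimize(f, n, k=8, iters=50, seed=0):
    """Hybrid skeleton: keep most binary variables fixed and exactly
    re-optimize a random subset of k variables each iteration. Here the
    'exact' step is brute force over all 2^k assignments; in a real
    matheuristic it would be a MIP solver on the restricted model."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    best = f(x)
    for _ in range(iters):
        free = rng.sample(range(n), k)          # "destroy": unfix k variables
        for assign in itertools.product((0, 1), repeat=k):
            cand = list(x)
            for idx, v in zip(free, assign):
                cand[idx] = v
            val = f(cand)                        # "repair": exact over subset
            if val < best:
                best, x = val, cand
    return x, best

# Toy objective: penalize equal neighbouring bits (optimum: alternating, cost 0).
f = lambda x: sum(x[i] == x[i + 1] for i in range(len(x) - 1))
print(fix_and_optimize(f, n=20, k=6, iters=30))
```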

    In-Close, a fast algorithm for computing formal concepts

    This paper presents an algorithm, called In-Close, that uses incremental closure and matrix searching to quickly compute all formal concepts in a formal context. In-Close is based, conceptually, on a well-known algorithm called Close-By-One. The serial version of a recently published algorithm (Krajca, 2008) was shown to be on the order of 100 times faster than several well-known algorithms, and timings of other algorithms in reviews suggest that none of them are faster than Krajca. This paper compares In-Close to Krajca, discussing computational methods, data requirements and memory considerations. Based on experiments using several public data sets and random data, this paper shows that In-Close is on the order of 20 times faster than Krajca. In-Close is small, straightforward, requires no matrix pre-processing and is simple to implement.
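    For background, here is a compact Python sketch of the basic Close-By-One scheme that In-Close builds on conceptually; it shows the closure-plus-canonicity-test idea but none of In-Close's incremental closure or matrix-searching optimizations, and the set-based context encoding is my own choice.

```python
def close_by_one(context, n_attrs):
    """Enumerate all formal concepts (extent, intent) of a formal context.
    context[g] is the attribute set of object g; attributes are 0..n_attrs-1."""
    objects = range(len(context))

    def common_attrs(extent):            # derivation operator: extent -> intent
        attrs = set(range(n_attrs))
        for g in extent:
            attrs &= context[g]
        return attrs

    concepts = []

    def cbo(extent, intent, start):
        concepts.append((extent, intent))
        for j in range(start, n_attrs):
            if j in intent:
                continue
            new_extent = {g for g in extent if j in context[g]}
            new_intent = common_attrs(new_extent)
            # Canonicity test: the closure must not add attributes < j,
            # otherwise this concept is generated from a different branch.
            if any(a < j and a not in intent for a in new_intent):
                continue
            cbo(new_extent, new_intent, j + 1)

    extent = set(objects)
    cbo(extent, common_attrs(extent), 0)
    return concepts

# Tiny context: 3 objects over 3 attributes; yields 4 concepts.
ctx = [{0, 1}, {0, 2}, {0, 1, 2}]
for extent, intent in close_by_one(ctx, 3):
    print(sorted(extent), sorted(intent))
```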