    Bounding Search Space Size via (Hyper)tree Decompositions

    This paper develops a measure for bounding the performance of AND/OR search algorithms for solving a variety of queries over graphical models. We show how drawing a connection to the recent notion of hypertree decompositions allows us to exploit determinism in the problem specification and produce tighter bounds. We demonstrate on a variety of practical problem instances that we are often able to improve upon existing bounds by several orders of magnitude. Comment: Appears in Proceedings of the Twenty-Fourth Conference on Uncertainty in Artificial Intelligence (UAI 2008).
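
    As a point of reference for the kind of quantity being bounded: for a graphical model with n variables of maximum domain size k and induced width w along the search order, the context-minimal AND/OR search graph has at most n * k^w nodes. A minimal sketch of this classical baseline bound (the paper's hypertree-based, determinism-aware measure is tighter; this function is purely illustrative):

```python
def and_or_size_bound(domain_sizes, induced_width):
    """Classical (loose) bound on the context-minimal AND/OR search graph:
    n * k^w for n variables, maximum domain size k, and induced width w.
    The paper improves on baselines like this by exploiting determinism
    via hypertree decompositions."""
    n = len(domain_sizes)
    k = max(domain_sizes)
    return n * k ** induced_width

# 20 ternary variables with induced width 4: at most 20 * 3**4 = 1620 nodes
print(and_or_size_bound([3] * 20, 4))
```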

    Cooperative Optimization for Energy Minimization: A Case Study of Stereo Matching

    Oftentimes, individuals working together as a team can solve hard problems beyond the capability of any individual in the team. Cooperative optimization is a recently proposed general method for attacking hard optimization problems, inspired by the cooperation principles of team play. It has an established theoretical foundation and has demonstrated outstanding performance in solving real-world optimization problems. Under some general settings, a cooperative optimization algorithm has a unique equilibrium and converges to it at an exponential rate, regardless of initial conditions and insensitive to perturbations. It also possesses a number of global optimality conditions for identifying global optima, so that it can terminate its search process efficiently. This paper offers a general description of cooperative optimization, addresses a number of design issues, and presents a case study to demonstrate its power.
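
    A toy sketch of the cooperative flavor, for an objective that decomposes as E(x) = sum_i E_i(x_i, x_{i+1}): each term acts as an "agent" that lowers its own cost softened by its neighbors' current assessments. The update rule and the coupling weight lam below are loose illustrations of the idea, not the paper's exact scheme:

```python
import numpy as np

def cooperative_minimize(pairwise, m, lam=0.5, iters=50):
    """Toy cooperative update for E(x) = sum_i E_i(x_i, x_{i+1}),
    with each x_i in {0, ..., m-1} and pairwise[i] an (m x m) cost table.
    psi[i] is the running assessment of variable x_i; each agent blends
    its own cost with neighbours' assessments (weight lam) and minimizes."""
    n = len(pairwise) + 1
    psi = [np.zeros(m) for _ in range(n)]
    for _ in range(iters):
        new = [np.full(m, np.inf) for _ in range(n)]
        for i, E in enumerate(pairwise):
            cost = (1 - lam) * E + lam * (psi[i][:, None] + psi[i + 1][None, :]) / 2
            new[i] = np.minimum(new[i], cost.min(axis=1))          # assess x_i
            new[i + 1] = np.minimum(new[i + 1], cost.min(axis=0))  # assess x_{i+1}
        psi = new
    return [int(p.argmin()) for p in psi]  # read off a candidate solution

rng = np.random.default_rng(1)
print(cooperative_minimize([rng.standard_normal((4, 4)) for _ in range(5)], 4))
```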

    The Decision Tree Complexity for k-SUM is at most Nearly Quadratic

    Following a recent improvement of Cardinal et al. on the complexity of a linear decision tree for $k$-SUM, resulting in $O(n^3 \log^3 n)$ linear queries, we present a further improvement to $O(n^2 \log^2 n)$ such queries.
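
    For context, $k$-SUM asks whether some $k$ of the $n$ input numbers sum to zero, and the linear decision tree model counts only sign queries on linear combinations of the inputs, not computation time. A naive baseline for contrast (exponential in $k$, purely illustrative):

```python
from itertools import combinations

def k_sum(nums, k):
    """Naive k-SUM check over all k-subsets, using O(n^k) comparisons.
    The decision-tree result above says O(n^2 log^2 n) *linear sign
    queries* suffice to decide the same question, for any fixed k."""
    return any(sum(c) == 0 for c in combinations(nums, k))

print(k_sum([7, -3, 1, 2, -5], 3))  # True: -3 + 1 + 2 == 0
```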

    Optimal Decomposition and Recombination of Isostatic Geometric Constraint Systems for Designing Layered Materials

    Optimal recursive decomposition (or DR-planning) is crucial for analyzing, designing, solving, or finding realizations of geometric constraint systems. While the optimal DR-planning problem is NP-hard even for general 2D bar-joint constraint systems, we describe an O(n^3) algorithm for a broad class of constraint systems that are isostatic or underconstrained. The algorithm achieves optimality by using the new notion of a canonical DR-plan that also meets various desirable, previously studied criteria. In addition, we leverage recent results on Cayley configuration spaces to show that the indecomposable systems (those solved at the nodes of the optimal DR-plan by recombining solutions to child systems) can be minimally modified to become decomposable and have a small DR-plan, leading to efficient realization algorithms. We show formal connections to well-known problems such as completion of underconstrained systems. Well suited to these methods are classes of constraint systems that can be used to efficiently model, design, and analyze quasi-uniform (aperiodic) and self-similar, layered material structures. We illustrate formally by modeling silica bilayers as body-hyperpin systems and cross-linking microfibrils as pinned line-incidence systems. A software implementation of our algorithms and videos demonstrating the software are publicly available online at http://cise.ufl.edu/~tbaker/drp/index.html.

    On the treewidth of triangulated 3-manifolds

    In graph theory, as well as in 3-manifold topology, there exist several width-type parameters to describe how "simple" or "thin" a given graph or 3-manifold is. These parameters, such as pathwidth or treewidth for graphs, or the concept of thin position for 3-manifolds, play an important role when studying algorithmic problems; in particular, there is a variety of problems in computational 3-manifold topology, some of them known to be computationally hard in general, that become solvable in polynomial time as soon as the dual graph of the input triangulation has bounded treewidth. In view of these algorithmic results, it is natural to ask whether every 3-manifold admits a triangulation of bounded treewidth. We show that this is not the case, i.e., that there exists an infinite family of closed 3-manifolds not admitting triangulations of bounded pathwidth or treewidth (the latter implies the former, but we present two separate proofs). We derive these results from work of Agol, of Scharlemann and Thompson, and of Scharlemann, Schultens and Saito by exhibiting explicit connections between the topology of a 3-manifold M on the one hand and width-type parameters of the dual graphs of triangulations of M on the other hand, answering a question that had been raised repeatedly by researchers in computational 3-manifold topology. In particular, we show that if a closed, orientable, irreducible, non-Haken 3-manifold M has a triangulation of treewidth (resp. pathwidth) k, then the Heegaard genus of M is at most 24(k+1) (resp. 4(3k+1)). Comment: 25 pages, 6 figures, 1 table. An extended abstract of this paper appeared in the Proceedings of the 34th International Symposium on Computational Geometry (SoCG 2018), Budapest, June 11-14, 2018.
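
    The dual graph in question has one node per tetrahedron and one edge per pair of tetrahedra glued along a triangular face. A minimal sketch of building it and computing a treewidth upper bound with a standard heuristic (the vertex-label encoding of gluings is a simplification; real triangulations specify face gluings explicitly):

```python
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

def dual_graph(tetrahedra):
    """Nodes = tetrahedra (4-tuples of vertex labels); edges join two
    tetrahedra sharing 3 vertices, i.e. a triangular face. Toy encoding."""
    G = nx.Graph()
    G.add_nodes_from(range(len(tetrahedra)))
    for i, t in enumerate(tetrahedra):
        for j in range(i + 1, len(tetrahedra)):
            if len(set(t) & set(tetrahedra[j])) == 3:
                G.add_edge(i, j)
    return G

G = dual_graph([(0, 1, 2, 3), (1, 2, 3, 4), (2, 3, 4, 5)])
width, _tree = treewidth_min_degree(G)  # heuristic upper bound on treewidth
print(width)  # 1: the dual graph of this chain of tetrahedra is a path
```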

    High-Dimensional Bayesian Optimization via Additive Models with Overlapping Groups

    Bayesian optimization (BO) is a popular technique for sequential black-box function optimization, with applications including parameter tuning, robotics, environmental monitoring, and more. One of the most important challenges in BO is the development of algorithms that scale to high dimensions, which remains a key open problem despite recent progress. In this paper, we consider the approach of Kandasamy et al. (2015), in which the high-dimensional function decomposes as a sum of lower-dimensional functions on subsets of the underlying variables. In particular, we significantly generalize this approach by lifting the assumption that the subsets are disjoint, and consider additive models with arbitrary overlap among the subsets. By representing the dependencies via a graph, we deduce an efficient message passing algorithm for optimizing the acquisition function. In addition, we provide an algorithm for learning the graph from samples based on Gibbs sampling. We empirically demonstrate the effectiveness of our methods on both synthetic and real-world data.
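
    When the dependency graph is tree-structured, maximizing a sum of low-dimensional terms over a discretized grid reduces to exact dynamic programming, which is the essence of the message-passing step. A minimal sketch for a chain of groups overlapping in one variable (the chain layout and grids are illustrative; the paper handles general graphs and learns them from samples):

```python
import numpy as np

def max_sum_chain(phis):
    """Maximize sum_i phi_i(x_i, x_{i+1}) over discrete grids by forward
    dynamic programming, a special case of max-sum message passing.
    phis: list of (m x m) arrays, phis[i][a, b] = phi_i(x_i=a, x_{i+1}=b)."""
    msg = np.zeros(phis[0].shape[0])        # best prefix score per value of x_0
    back = []
    for phi in phis:
        scored = msg[:, None] + phi         # prefix score + local term
        back.append(scored.argmax(axis=0))  # best x_i for each value of x_{i+1}
        msg = scored.max(axis=0)            # message forwarded to x_{i+1}
    x = [int(msg.argmax())]                 # optimal value of the last variable
    for b in reversed(back):                # backtrack the argmax assignment
        x.append(int(b[x[-1]]))
    return float(msg.max()), x[::-1]

rng = np.random.default_rng(0)
print(max_sum_chain([rng.standard_normal((5, 5)) for _ in range(3)]))
```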

    The Stellar tree: a Compact Representation for Simplicial Complexes and Beyond

    We introduce the Stellar decomposition, a model for efficient topological data structures over a broad range of simplicial and cell complexes. A Stellar decomposition of a complex is a collection of regions indexing the complex's vertices and cells such that each region has sufficient information to locally reconstruct the star of its vertices, i.e., the cells incident to the region's vertices. Stellar decompositions are general in that they can compactly represent and efficiently traverse arbitrary complexes with a manifold or non-manifold domain. They are scalable to complexes in high dimension and of large size, and they enable users to easily construct tailored application-dependent data structures using a fraction of the memory required by the corresponding topological data structure on the global complex. As a concrete realization of this model for spatially embedded complexes, we introduce the Stellar tree, which combines a nested spatial tree with a simple tuning parameter to control the number of vertices in a region. Stellar trees exploit the complex's spatial locality by reordering vertex and cell indices according to the spatial decomposition and by compressing sequential ranges of indices. Stellar trees are competitive with state-of-the-art topological data structures for manifold simplicial complexes and offer significant improvements for cell complexes and non-manifold simplicial complexes. As a proxy for larger applications, we describe how Stellar trees can be used to generate existing state-of-the-art topological data structures. In addition to faster generation times, the reduced memory requirements of a Stellar tree enable generating these data structures over large and high-dimensional complexes even on machines with limited resources.
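
    In miniature, the decomposition idea looks like this: bucket vertices into regions, and let each region record every cell incident to one of its vertices, which is exactly the information needed to rebuild a vertex star without touching the rest of the complex. A toy sketch with a flat uniform grid standing in for the nested spatial tree (all names invented for illustration):

```python
from collections import defaultdict

def build_regions(points, cells, bucket=1.0):
    """Toy stellar-style index: vertices are bucketed by a uniform grid,
    and each region stores the ids of all cells incident to its vertices.
    The real Stellar tree uses a nested spatial tree, a vertex-count
    tuning parameter, and compressed index ranges."""
    region_of = {v: tuple(int(c // bucket) for c in p)
                 for v, p in enumerate(points)}
    region_cells = defaultdict(set)
    for ci, cell in enumerate(cells):
        for v in cell:
            region_cells[region_of[v]].add(ci)
    return region_of, region_cells

def star(v, region_of, region_cells, cells):
    """Cells incident to vertex v, recovered from v's region alone."""
    return sorted(ci for ci in region_cells[region_of[v]] if v in cells[ci])

points = [(0.1, 0.2), (0.4, 0.3), (1.5, 0.2), (1.7, 1.8)]
cells = [(0, 1, 2), (1, 2, 3)]  # two triangles sharing an edge
region_of, region_cells = build_regions(points, cells)
print(star(1, region_of, region_cells, cells))  # [0, 1]
```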

    Decomposing arrangements of hyperplanes: VC-dimension, combinatorial dimension, and point location

    We re-examine parameters for the two main space decomposition techniques, bottom-vertex triangulation and vertical decomposition, including their explicit dependence on the dimension $d$, and discover several unexpected phenomena, which show that, in both techniques, there are large gaps between the VC-dimension (and primal shatter dimension) and the combinatorial dimension. For vertical decomposition, the combinatorial dimension is only $2d$, the primal shatter dimension is at most $d(d+1)$, and the VC-dimension is at least $1 + d(d+1)/2$ and at most $O(d^3)$. For bottom-vertex triangulation, both the primal shatter dimension and the combinatorial dimension are $\Theta(d^2)$, but there seems to be a significant gap between them, as the combinatorial dimension is $\frac{1}{2}d(d+3)$, whereas the primal shatter dimension is at most $d(d+1)$, and the VC-dimension is between $d(d+1)$ and $5d^2 \log d$ (for $d \ge 9$). Our main application is to point location in an arrangement of $n$ hyperplanes in $\mathbb{R}^d$, in which we show that the query cost in Meiser's algorithm can be improved if one uses vertical decomposition instead of bottom-vertex triangulation, at the cost of some increase in the preprocessing cost and storage. The best query time that we can obtain is $O(d^3 \log n)$, instead of $O(d^4 \log d \log n)$ in Meiser's algorithm. For these bounds to hold, the preprocessing and storage are rather large (super-exponential in $d$). We discuss the tradeoff between query cost and storage (in both approaches, the one using bottom-vertex triangulation and the one using vertical decomposition).
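
    The stated bounds, tabulated for easier comparison (numbers exactly as given in the abstract):

```latex
\begin{tabular}{lll}
\hline
 & vertical decomposition & bottom-vertex triangulation \\
\hline
combinatorial dimension  & $2d$ & $\frac{1}{2}d(d+3)$ \\
primal shatter dimension & $\le d(d+1)$ & $\le d(d+1)$ \\
VC-dimension             & $\ge 1 + d(d+1)/2$, $O(d^3)$ & $\ge d(d+1)$, $\le 5d^2\log d$ ($d \ge 9$) \\
\hline
\end{tabular}
```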

    Deep Learning Assisted Heuristic Tree Search for the Container Pre-marshalling Problem

    The container pre-marshalling problem (CPMP) is concerned with the re-ordering of containers in container terminals during off-peak times so that containers can be quickly retrieved when the port is busy. The problem has received significant attention in the literature and is addressed by a large number of exact and heuristic methods. Existing methods for the CPMP rely heavily on problem-specific components (e.g., proven lower bounds) that need to be developed by domain experts with knowledge of optimization techniques and a deep understanding of the problem at hand. With the goal of automating the costly and time-intensive design of heuristics for the CPMP, we propose a new method called Deep Learning Heuristic Tree Search (DLTS). It uses deep neural networks to learn solution strategies and lower bounds customized to the CPMP solely through analyzing existing (near-)optimal solutions to CPMP instances. The networks are then integrated into a tree search procedure to decide which branch to choose next and to prune the search tree. DLTS produces the highest-quality heuristic solutions to the CPMP to date, with gaps to optimality below 2% on real-world sized instances.
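
    Schematically, the search couples two learned components: a policy that orders child branches and a predicted lower bound that prunes them. A minimal sketch with both networks stubbed out as parameters (policy_net, bound_net, and the state interface are assumptions for illustration, not the paper's actual code):

```python
import math

def dlts(state, policy_net, bound_net, best_cost=math.inf):
    """Depth-first heuristic tree search in the DLTS style.
    policy_net(state) -> {move: score} plays the learned branching rule;
    bound_net(state) -> float is a learned lower bound on the remaining
    cost to reach a goal, used for pruning. `state` is assumed to expose
    is_goal(), cost, legal_moves(), and apply(move)."""
    if state.is_goal():
        return state.cost
    if state.cost + bound_net(state) >= best_cost:
        return best_cost  # prune: this subtree cannot beat the incumbent
    scores = policy_net(state)  # assumed to score every legal move
    for move in sorted(state.legal_moves(), key=scores.get, reverse=True):
        best_cost = min(best_cost,
                        dlts(state.apply(move), policy_net, bound_net, best_cost))
    return best_cost
```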

    Flexible Caching in Trie Joins

    Traditional algorithms for multiway join computation are based on rewriting the order of joins and combining the results of intermediate subqueries. Recently, several approaches have been proposed for algorithms that are "worst-case optimal", wherein all relations are scanned simultaneously. An example is Veldhuizen's Leapfrog Trie Join (LFTJ). An important advantage of LFTJ is its small memory footprint, due to the fact that intermediate results are full tuples that can be dumped immediately. However, since the algorithm does not store intermediate results, recurring joins must be reconstructed from the source relations, resulting in excessive memory traffic. In this paper, we address this problem by incorporating caches into LFTJ. We do so by adopting recent developments on join optimization, tying variable ordering to tree decomposition. While the traditional usage of tree decomposition computes the result for each bag in advance, our proposed approach incorporates caching directly into LFTJ and can dynamically adjust the size of the cache. Consequently, our solution balances memory usage and repeated computation, as confirmed by our experiments over SNAP datasets.
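
    The flavor of the approach, in miniature: a backtracking multiway join over a fixed variable order, where the result of each subtree is memoized keyed only by the bound variables that can still influence it, so a recurring subjoin is looked up instead of recomputed. A toy sketch with set-typed relations in place of sorted tries (the real algorithm iterates tries and sizes its cache dynamically):

```python
def join(relations, var_order):
    """Toy backtracking multiway join with subtree caching, in the spirit
    of adding caches to Leapfrog Trie Join. Variables are bound one at a
    time; the result set for the remaining variables is memoized, keyed by
    the already-bound variables that still co-occur with an unbound one.
    relations: list of (vars_tuple, set_of_tuples). Illustration only."""
    cache = {}

    def consistent(vars_, t, binding):
        return all(binding.get(v, t[j]) == t[j] for j, v in enumerate(vars_))

    def candidates(var, binding):
        doms = [{t[vars_.index(var)] for t in tuples if consistent(vars_, t, binding)}
                for vars_, tuples in relations if var in vars_]
        return set.intersection(*doms) if doms else set()

    def key_vars(depth):
        future = set(var_order[depth:])
        keep = set()
        for vars_, _ in relations:
            if future & set(vars_):
                keep |= set(vars_) - future
        return tuple(sorted(keep))

    def recurse(depth, binding):
        if depth == len(var_order):
            return [{}]
        key = (depth, tuple(binding[v] for v in key_vars(depth)))
        if key not in cache:
            var, out = var_order[depth], []
            for val in candidates(var, binding):
                binding[var] = val
                out += [{var: val, **rest} for rest in recurse(depth + 1, binding)]
                del binding[var]
            cache[key] = out
        return cache[key]

    return recurse(0, {})

R = (("a", "b"), {(1, 2), (1, 3), (2, 3)})
S = (("b", "c"), {(2, 4), (3, 4)})
print(join([R, S], ("a", "b", "c")))  # 3 matching assignments
```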