
    Computing tolerance parameters for fixturing and feeding

    Fixtures and feeders are important components of automated assembly systems: fixtures accurately hold parts and feeders move parts into alignment. These components can fail when part shape varies. Parametric tolerance classes specify how much variation is allowable. In this paper we consider fixturing convex polygonal parts using right-angle brackets and feeding polygonal parts on conveyor belts using sequences of vertical fences. For both cases, we define new tolerance classes and give algorithms for computing the parameter specifications such that the fixture or feeder will work for all parts in the tolerance class. For fixturing we give an O(1) algorithm to compute the dimensions of rectangular tolerance zones. For feeding we give an O(n^2) algorithm to compute the radius of the largest allowable tolerance zone around each vertex. For each, we give an O(n) time algorithm for testing if an n-sided part is in the tolerance class.
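
    The O(n) membership test lends itself to a direct implementation. Below is a minimal sketch, assuming per-vertex circular tolerance zones around a nominal polygon; the function name, the nominal square, and the radii are illustrative assumptions, not the paper's actual specification procedure.

```python
import math

def in_tolerance_class(nominal, radii, measured):
    """Check whether a measured n-sided part lies in the tolerance class:
    every measured vertex must fall inside the disk of the given radius
    centred on the corresponding nominal vertex. Runs in O(n) time."""
    if len(measured) != len(nominal):
        return False
    for (nx, ny), (mx, my), r in zip(nominal, measured, radii):
        if math.hypot(mx - nx, my - ny) > r:
            return False
    return True

# Example: a nominal unit square with 0.05 tolerance zones around each vertex.
nominal = [(0, 0), (1, 0), (1, 1), (0, 1)]
radii = [0.05] * 4
measured = [(0.01, -0.02), (1.03, 0.0), (0.99, 1.01), (0.0, 0.98)]
print(in_tolerance_class(nominal, radii, measured))  # True
```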

    Fast Locality-Sensitive Hashing Frameworks for Approximate Near Neighbor Search

    The Indyk-Motwani Locality-Sensitive Hashing (LSH) framework (STOC 1998) is a general technique for constructing a data structure to answer approximate near neighbor queries by using a distribution H over locality-sensitive hash functions that partition space. For a collection of n points, after preprocessing, the query time is dominated by O(n^ρ log n) evaluations of hash functions from H and O(n^ρ) hash table lookups and distance computations, where ρ ∈ (0,1) is determined by the locality-sensitivity properties of H. It follows from a recent result by Dahlgaard et al. (FOCS 2017) that the number of locality-sensitive hash functions can be reduced to O(log^2 n), leaving the query time to be dominated by O(n^ρ) distance computations and O(n^ρ log n) additional word-RAM operations. We state this result as a general framework and provide a simpler analysis showing that the number of lookups and distance computations closely matches the Indyk-Motwani framework, making it a viable replacement in practice. Using ideas from another locality-sensitive hashing framework by Andoni and Indyk (SODA 2006), we are able to reduce the number of additional word-RAM operations to O(n^ρ).
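
    To make the framework concrete, here is a toy sketch of an Indyk-Motwani-style LSH index, assuming the random-hyperplane (SimHash) family for cosine similarity; the class, the parameters k and L, and the seed are illustrative assumptions rather than the paper's optimized construction.

```python
import random
from collections import defaultdict

class SimpleLSH:
    """Toy Indyk-Motwani-style LSH index using random-hyperplane (SimHash)
    hashes for cosine similarity. Parameters k (hashes per table) and
    L (number of tables) are illustrative, not tuned."""

    def __init__(self, dim, k=8, L=10, seed=0):
        rng = random.Random(seed)
        # Each table uses k random hyperplanes; a point's key is its k-bit sign pattern.
        self.planes = [[[rng.gauss(0, 1) for _ in range(dim)] for _ in range(k)]
                       for _ in range(L)]
        self.tables = [defaultdict(list) for _ in range(L)]

    def _key(self, table_idx, point):
        return tuple(1 if sum(w * x for w, x in zip(plane, point)) >= 0 else 0
                     for plane in self.planes[table_idx])

    def insert(self, point_id, point):
        for t in range(len(self.tables)):
            self.tables[t][self._key(t, point)].append(point_id)

    def query(self, point):
        # Union of colliding buckets; the caller filters candidates by true distance.
        candidates = set()
        for t in range(len(self.tables)):
            candidates.update(self.tables[t].get(self._key(t, point), []))
        return candidates
```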

    Bringing Order to Special Cases of Klee's Measure Problem

    Klee's Measure Problem (KMP) asks for the volume of the union of n axis-aligned boxes in d-space. Omitting logarithmic factors, the best algorithm has runtime O*(n^{d/2}) [Overmars, Yap'91]. There are faster algorithms known for several special cases: Cube-KMP (where all boxes are cubes), Unitcube-KMP (where all boxes are cubes of equal side length), Hypervolume (where all boxes share a vertex), and k-Grounded (where the projection onto the first k dimensions is a Hypervolume instance). In this paper we bring some order to these special cases by providing reductions among them. In addition to the trivial inclusions, we establish Hypervolume as the easiest of these special cases, and show that the runtimes of Unitcube-KMP and Cube-KMP are polynomially related. More importantly, we show that any algorithm for one of the special cases with runtime T(n,d) implies an algorithm for the general case with runtime T(n,2d), yielding the first non-trivial relation between KMP and its special cases. This allows us to transfer W[1]-hardness of KMP to all special cases, proving that no n^{o(d)} algorithm exists for any of the special cases under reasonable complexity theoretic assumptions. Furthermore, assuming that there is no improved algorithm for the general case of KMP (no algorithm with runtime O(n^{d/2 - eps})), this reduction shows that there is no algorithm with runtime O(n^{floor(d/2)/2 - eps}) for any of the special cases. Under the same assumption we show a tight lower bound for a recent algorithm for 2-Grounded [Yildiz, Suri'12].
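
    For intuition about the problem itself (not about the reductions or the O*(n^{d/2}) algorithm), the sketch below computes the measure of a union of axis-aligned boxes by coordinate compression; it is exponential in d and meant purely to make the problem statement concrete.

```python
from itertools import product

def union_volume(boxes):
    """Volume of the union of axis-aligned boxes in d-space by coordinate
    compression: split space into elementary grid cells and sum the volume of
    cells covered by at least one box. Exponential in d; illustration only."""
    d = len(boxes[0][0])
    # Sorted coordinate values per dimension define the elementary grid.
    coords = [sorted({b[0][i] for b in boxes} | {b[1][i] for b in boxes}) for i in range(d)]
    total = 0.0
    for cell in product(*(range(len(c) - 1) for c in coords)):
        lo = [coords[i][cell[i]] for i in range(d)]
        hi = [coords[i][cell[i] + 1] for i in range(d)]
        covered = any(all(b[0][i] <= lo[i] and hi[i] <= b[1][i] for i in range(d))
                      for b in boxes)
        if covered:
            vol = 1.0
            for i in range(d):
                vol *= hi[i] - lo[i]
            total += vol
    return total

# Two overlapping unit squares: union area is 1.75.
print(union_volume([((0, 0), (1, 1)), ((0.5, 0.5), (1.5, 1.5))]))
```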

    Computing the Fréchet Distance with a Retractable Leash

    All known algorithms for the Fréchet distance between curves proceed in two steps: first, they construct an efficient oracle for the decision version; second, they use this oracle to find the optimum from a finite set of critical values. We present a novel approach that avoids the detour through the decision version. This gives the first quadratic time algorithm for the Fréchet distance between polygonal curves in R^d under polyhedral distance functions (e.g., L_1 and L_∞). We also get a (1+ε)-approximation of the Fréchet distance under the Euclidean metric, in quadratic time for any fixed ε. For the exact Euclidean case, our framework currently yields an algorithm with running time (Formula presented.). However, we conjecture that it may eventually lead to a faster exact algorithm.
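
    As background, the standard dynamic program for the discrete Fréchet distance is sketched below; it is a simplified relative of the continuous problem addressed above and is not the retractable-leash algorithm of the paper.

```python
from functools import lru_cache
from math import dist

def discrete_frechet(P, Q):
    """Classic O(n*m) dynamic program for the *discrete* Fréchet distance
    between polygonal curves P and Q (lists of points)."""

    @lru_cache(maxsize=None)
    def c(i, j):
        d = dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        # The leash length is the larger of the current pair distance and the
        # best reachable predecessor state.
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)

    return c(len(P) - 1, len(Q) - 1)

print(discrete_frechet([(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)]))  # 1.0
```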

    Testing for Network and Spatial Autocorrelation

    Testing for dependence has been a well-established component of spatial statistical analyses for decades. In particular, several popular test statistics have desirable properties for testing for the presence of spatial autocorrelation in continuous variables. In this paper we propose two contributions to the literature on tests for autocorrelation. First, we propose a new test for autocorrelation in categorical variables. While some methods currently exist for assessing spatial autocorrelation in categorical variables, the most popular method is unwieldy, somewhat ad hoc, and fails to provide grounds for a single omnibus test. Second, we discuss the importance of testing for autocorrelation in data sampled from the nodes of a network, motivated by social network applications. We demonstrate that our proposed statistic for categorical variables can be used in both the spatial and network settings.
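
    For readers unfamiliar with autocorrelation statistics, the classical Moran's I for continuous variables is sketched below as background; it is not the paper's proposed categorical test, and the weight matrix is an illustrative assumption.

```python
def morans_i(values, weights):
    """Classical Moran's I for a continuous variable.
    values: list of n observations; weights: n x n spatial/network weight matrix
    (weights[i][j] > 0 if units i and j are neighbours, 0 otherwise)."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    s0 = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * dev[i] * dev[j] for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / s0) * (num / den)

# Four units on a chain graph; neighbouring values are similar, so Moran's I is positive.
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
print(morans_i([1.0, 1.2, 3.1, 3.0], w))  # about 0.35
```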

    Parallel Write-Efficient Algorithms and Data Structures for Computational Geometry

    In this paper, we design parallel write-efficient geometric algorithms that perform asymptotically fewer writes than standard algorithms for the same problem. This is motivated by emerging non-volatile memory technologies with read performance being close to that of random access memory but writes being significantly more expensive in terms of energy and latency. We design algorithms for planar Delaunay triangulation, k-d trees, and static and dynamic augmented trees. Our algorithms are designed in the recently introduced Asymmetric Nested-Parallel Model, which captures the parallel setting in which there is a small symmetric memory where reads and writes are unit cost as well as a large asymmetric memory where writes are ω times more expensive than reads. In designing these algorithms, we introduce several techniques for obtaining write-efficiency, including DAG tracing, prefix doubling, reconstruction-based rebalancing and α-labeling, which we believe will be useful for designing other parallel write-efficient algorithms.
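
    A toy sketch of the asymmetric cost accounting underlying such models is given below: reads are charged unit cost and writes are charged ω, so write-heavy and write-light variants of the same computation can be compared. The wrapper class and the value ω = 10 are illustrative assumptions, not the paper's model definition.

```python
class AsymmetricArray:
    """Array wrapper that charges 1 per read and omega per write, mimicking the
    asymmetric read/write costs of non-volatile memory. Illustrative only."""

    def __init__(self, data, omega=10):
        self._data = list(data)
        self.omega = omega
        self.cost = 0

    def read(self, i):
        self.cost += 1
        return self._data[i]

    def write(self, i, value):
        self.cost += self.omega
        self._data[i] = value

# Compare two ways of counting elements below a threshold: writing a flag per
# element (write-heavy) versus accumulating the count in a register (write-light).
a = AsymmetricArray(range(100))
flags = AsymmetricArray([0] * 100)
for i in range(100):
    flags.write(i, 1 if a.read(i) < 50 else 0)
write_heavy_cost = a.cost + flags.cost

b = AsymmetricArray(range(100))
count = sum(1 for i in range(100) if b.read(i) < 50)  # kept in a register
print(write_heavy_cost, b.cost)  # the second variant incurs far fewer write charges
```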

    Developing serious games for cultural heritage: a state-of-the-art review

    Although the widespread use of gaming for leisure purposes has been well documented, the use of games to support cultural heritage purposes, such as historical teaching and learning, or for enhancing museum visits, has been less well considered. The state-of-the-art in serious game technology is identical to the state-of-the-art in entertainment games technology. As a result, the field of serious heritage games concerns itself with recent advances in computer games, real-time computer graphics, virtual and augmented reality and artificial intelligence. On the other hand, the main strengths of serious gaming applications may be generalised as being in the areas of communication, visual expression of information, collaboration mechanisms, interactivity and entertainment. In this report, we focus on the state-of-the-art with respect to the theories, methods and technologies used in serious heritage games. We provide an overview of existing literature of relevance to the domain, discuss the strengths and weaknesses of the described methods and point out unsolved problems and challenges. In addition, several case studies illustrating the application of methods and technologies used in cultural heritage are presented.

    SeqAn: An efficient, generic C++ library for sequence analysis

    Background: The use of novel algorithmic techniques is pivotal to many important problems in life science. For example, the sequencing of the human genome [1] would not have been possible without advanced assembly algorithms. However, owing to the high speed of technological progress and the urgent need for bioinformatics tools, there is a widening gap between state-of-the-art algorithmic techniques and the actual algorithmic components of tools that are in widespread use. Results: To remedy this trend we propose the use of SeqAn, a library of efficient data types and algorithms for sequence analysis in computational biology. SeqAn comprises implementations of existing, practical state-of-the-art algorithmic components to provide a sound basis for algorithm testing and development. In this paper we describe the design and content of SeqAn and demonstrate its use by giving two examples. In the first example we show an application of SeqAn as an experimental platform by comparing different exact string matching algorithms. The second example is a simple version of the well-known MUMmer tool rewritten in SeqAn. Results indicate that our implementation is very efficient and versatile to use. Conclusion: We anticipate that SeqAn greatly simplifies the rapid development of new bioinformatics tools by providing a collection of readily usable, well-designed algorithmic components which are fundamental for the field of sequence analysis. This leverages not only the implementation of new algorithms, but also enables a sound analysis and comparison of existing algorithms.
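
    As a flavour of the exact string matching comparison mentioned in the first example, here is a minimal Python sketch contrasting naive matching with the Boyer-Moore-Horspool heuristic; it is generic illustration code and does not use the actual SeqAn C++ API.

```python
def naive_find_all(text, pattern):
    """Naive exact string matching: O(n*m) worst case."""
    return [i for i in range(len(text) - len(pattern) + 1)
            if text[i:i + len(pattern)] == pattern]

def horspool_find_all(text, pattern):
    """Boyer-Moore-Horspool: skips ahead using the last character of the window."""
    m, n = len(pattern), len(text)
    shift = {c: m for c in set(text)}
    for j, c in enumerate(pattern[:-1]):
        shift[c] = m - 1 - j
    hits, i = [], 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            hits.append(i)
        i += shift.get(text[i + m - 1], m)
    return hits

text = "acgtacgtacgacgtacgt"
print(naive_find_all(text, "acgt"))     # [0, 4, 11, 15]
print(horspool_find_all(text, "acgt"))  # same positions
```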

    A randomized attitude slew planning algorithm for autonomous spacecraft

    The ability to autonomously generate and execute large angle attitude maneuvers, while operating under a number of celestial and dynamical constraints, is a key factor in the development of several future space platforms. In this paper we propose a randomized attitude slew planning algorithm for autonomous spacecraft, which is able to address a variety of pointing constraints, including bright object avoidance and ground link maintenance, as well as constraints on the control inputs and spacecraft states, and integral constraints such as those deriving from thermal control requirements. Moreover, through the scheduling of feedback control policies, the algorithm provides a consistent decoupling between low-level control and attitude motion planning, and is robust with respect to uncertainties in the spacecraft dynamics and environmental disturbances. Simulation examples are presented and discussed.
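
    The randomized planning idea can be illustrated with a generic RRT sketch on a two-dimensional configuration space with a keep-out region; this is a toy under stated assumptions, not the paper's attitude planner, which schedules feedback control policies under full spacecraft dynamics.

```python
import math
import random

def rrt_plan(start, goal, is_allowed, step=0.1, iters=5000, goal_tol=0.15, seed=1):
    """Generic RRT sketch: grow a tree toward random samples, rejecting states
    that violate the constraint predicate, until a node lands near the goal."""
    rng = random.Random(seed)
    tree = {start: None}  # node -> parent
    for _ in range(iters):
        sample = (rng.uniform(0, 1), rng.uniform(0, 1))
        nearest = min(tree, key=lambda p: math.dist(p, sample))
        d = math.dist(nearest, sample)
        new = (nearest[0] + step * (sample[0] - nearest[0]) / d,
               nearest[1] + step * (sample[1] - nearest[1]) / d) if d > step else sample
        if not is_allowed(new):
            continue
        tree[new] = nearest
        if math.dist(new, goal) < goal_tol:
            path, node = [], new
            while node is not None:
                path.append(node)
                node = tree[node]
            return path[::-1]
    return None

# Keep-out zone (e.g., a "bright object" region) modelled as a forbidden disk.
forbidden = lambda p: math.dist(p, (0.5, 0.5)) < 0.2
path = rrt_plan((0.1, 0.1), (0.9, 0.9), is_allowed=lambda p: not forbidden(p))
print(len(path) if path else "no path found")
```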

    Every Large Point Set contains Many Collinear Points or an Empty Pentagon

    We prove the following generalised empty pentagon theorem: for every integer ℓ ≥ 2, every sufficiently large set of points in the plane contains ℓ collinear points or an empty pentagon. As an application, we settle the next open case of the "big line or big clique" conjecture of Kára, Pór, and Wood [Discrete Comput. Geom. 34(3):497-506, 2005].