
    Improved Implementation of Point Location in General Two-Dimensional Subdivisions

    We present a major revamp of the point-location data structure for general two-dimensional subdivisions via randomized incremental construction, implemented in CGAL, the Computational Geometry Algorithms Library. We can now guarantee that the constructed directed acyclic graph G is of linear size and provides logarithmic query time. Via the construction of the Voronoi diagram for a given point set S of size n, this also enables nearest-neighbor queries in guaranteed O(log n) time. Another major innovation is the support of general unbounded subdivisions as well as subdivisions of two-dimensional parametric surfaces such as spheres, tori, and cylinders. The implementation is exact, complete, and general, i.e., it can also handle non-linear subdivisions. Like the previous version, the data structure supports modifications of the subdivision, such as insertions and deletions of edges, after the initial preprocessing. A major challenge is to retain the expected O(n log n) preprocessing time while providing the above (deterministic) space and query-time guarantees. We describe an efficient preprocessing algorithm that explicitly verifies the length L of the longest query path in O(n log n) time. However, instead of using L, our implementation is based on the depth D of G. Although we prove that the worst-case ratio of D and L is Theta(n/log n), we conjecture, based on our experimental results, that this solution achieves expected O(n log n) preprocessing time.

    Comment: 21 pages
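    To make the query mechanism concrete, the following minimal Python sketch (illustrative only, not the CGAL C++ implementation) shows how a point-location query descends such a search DAG: internal nodes test the query point against a stored x-coordinate or segment, and the leaf reached identifies the containing trapezoid. The node classes and the one-segment example map are invented for illustration.

```python
# Illustrative sketch of querying a trapezoidal-map search DAG (not CGAL code).

class XNode:
    """Compares the query point against a stored x-coordinate."""
    def __init__(self, x, left, right):
        self.x, self.left, self.right = x, left, right
    def next(self, q):
        return self.left if q[0] < self.x else self.right

class YNode:
    """Tests whether the query point lies above or below a stored segment."""
    def __init__(self, seg, below, above):
        self.seg, self.below, self.above = seg, below, above
    def next(self, q):
        (x1, y1), (x2, y2) = self.seg
        # Sign of the cross product: positive means q is above the segment's line.
        is_above = (x2 - x1) * (q[1] - y1) - (y2 - y1) * (q[0] - x1) > 0
        return self.above if is_above else self.below

class Leaf:
    """Holds the trapezoid of the subdivision that contains the query point."""
    def __init__(self, trapezoid):
        self.trapezoid = trapezoid

def locate(root, q):
    """Walk from the root to the leaf whose trapezoid contains q; the number
    of steps is the query time, logarithmic when the DAG depth is bounded."""
    node = root
    while not isinstance(node, Leaf):
        node = node.next(q)
    return node.trapezoid

# Toy query against a single-segment subdivision.
root = YNode(((0.0, 0.0), (1.0, 0.0)), Leaf("lower trapezoid"), Leaf("upper trapezoid"))
print(locate(root, (0.5, 0.2)))  # -> upper trapezoid
```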

    Optimal randomized incremental construction for guaranteed logarithmic planar point location

    Given a planar map of n segments in which we wish to efficiently locate points, we present the first randomized incremental construction of the well-known trapezoidal-map search structure that requires only expected O(n log n) preprocessing time while deterministically guaranteeing worst-case linear storage space and worst-case logarithmic query time. This settles a long-standing open problem; the best previously known construction time of such a structure, which is based on a directed acyclic graph, the so-called history DAG, with the above worst-case space and query-time guarantees, was expected O(n log^2 n). The result is based on a deeper understanding of the structure of the history DAG, its depth in relation to the length of its longest search path, as well as its correspondence to the trapezoidal search tree. Our results immediately extend to planar maps induced by finite collections of pairwise interior-disjoint well-behaved curves.

    Comment: The article significantly extends the theoretical aspects of the work presented in http://arxiv.org/abs/1205.543
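    The standard way to turn the expected guarantees of a randomized incremental construction into deterministic worst-case ones is a verify-and-rebuild loop, which this work makes affordable by performing the verification within the O(n log n) budget. The Python sketch below shows only that outer loop under stated assumptions: build_ric, size_of, and depth_of are hypothetical stand-ins, and the constants are illustrative, not taken from the paper.

```python
import math
import random

def build_guaranteed(segments, build_ric, size_of, depth_of,
                     c_size=4.0, c_depth=3.0):
    """Rebuild-until-lucky sketch: a random insertion order yields linear size
    and logarithmic depth with constant probability, so the expected number of
    attempts is O(1) and the expected total time remains O(n log n), while the
    returned structure deterministically satisfies both bounds."""
    n = len(segments)
    depth_bound = c_depth * math.log2(n + 2)
    while True:
        order = random.sample(segments, n)   # fresh random permutation
        dag = build_ric(order)               # the usual incremental construction
        if size_of(dag) <= c_size * n and depth_of(dag) <= depth_bound:
            return dag                       # guarantees verified; keep this DAG
```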

    Multi-agent collaborative search: an agent-based memetic multi-objective optimization algorithm applied to space trajectory design

    This article presents an algorithm for multi-objective optimization that blends together a number of heuristics. A population of agents combines heuristics that aim at exploring the search space both globally and in a neighbourhood of each agent. These heuristics are complemented with a combination of a local and a global archive. The novel agent-based algorithm is first tested on a set of standard problems and then on three specific problems in space trajectory design. Its performance is compared against a number of state-of-the-art multi-objective optimization algorithms that use Pareto dominance as the selection criterion: non-dominated sorting genetic algorithm (NSGA-II), Pareto archived evolution strategy (PAES), multiple objective particle swarm optimization (MOPSO), and multiple trajectory search (MTS). The results demonstrate that the agent-based search can identify parts of the Pareto set that the other algorithms were not able to capture. Furthermore, convergence is statistically better, although the variance of the results is in some cases higher.
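    Since all of the compared algorithms use Pareto dominance as the selection criterion, a minimal Python sketch of that relation may help (illustrative only, assuming all objectives are minimized; not the paper's implementation):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization): a is no
    worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Filter a set of objective vectors down to its non-dominated subset."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# (1, 3) and (2, 2) are mutually non-dominated; (3, 3) is dominated by both.
print(non_dominated([(1, 3), (2, 2), (3, 3)]))  # -> [(1, 3), (2, 2)]
```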

    The Foundational Model of Anatomy Ontology

    Anatomy is the structure of biological organisms. The term also denotes the scientific discipline devoted to the study of anatomical entities and the structural and developmental relations that obtain among these entities during the lifespan of an organism. Anatomical entities are the independent continuants of biomedical reality on which physiological and disease processes depend, and which, in response to etiological agents, can transform themselves into pathological entities. For these reasons, hard-copy and in silico information resources in virtually all fields of biology and medicine, as a rule, make extensive reference to anatomical entities. Because of the lack of a generalizable, computable representation of anatomy, developers of computable terminologies and ontologies in clinical medicine and biomedical research have represented anatomy from their own more or less divergent viewpoints. The resulting heterogeneity presents a formidable impediment to correlating human anatomy not only across computational resources but also with the anatomy of model organisms used in biomedical experimentation. The Foundational Model of Anatomy (FMA) is being developed to fill the need for a generalizable anatomy ontology, which can be used and adapted by any computer-based application that requires anatomical information. Moreover, it is evolving into a standard reference for divergent views of anatomy and a template for representing the anatomy of animals. A distinction is made between the FMA ontology as a theory of anatomy and the implementation of this theory as the FMA artifact. In either sense of the term, the FMA is a spatial-structural ontology of the entities and relations which together form the phenotypic structure of the human organism at all biologically salient levels of granularity. Making use of explicit ontological principles and sound methods, it is designed to be understandable by human beings and navigable by computers. The FMA’s ontological structure provides for machine-based inference, enabling powerful computational tools of the future to reason with biomedical data.
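    As a toy illustration of the machine-based inference such an ontology makes possible (the terms and the single-parent part-of table below are invented for the example and are not the FMA's actual content or API), consider computing the transitive closure of a part-of relation:

```python
# Hypothetical part-of facts; a real anatomy ontology holds many relation types.
PART_OF = {
    "left ventricle": "heart",
    "heart": "thoracic cavity",
    "thoracic cavity": "thorax",
}

def part_of_closure(entity, rel=PART_OF):
    """Everything the entity is directly or transitively part of."""
    chain = []
    while entity in rel:
        entity = rel[entity]
        chain.append(entity)
    return chain

print(part_of_closure("left ventricle"))  # -> ['heart', 'thoracic cavity', 'thorax']
```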

    Efficient adaptive integration of functions with sharp gradients and cusps in n-dimensional parallelepipeds

    In this paper, we study the efficient numerical integration of functions with sharp gradients and cusps. An adaptive integration algorithm is presented that systematically improves the accuracy of the integration of a set of functions. The algorithm is based on a divide-and-conquer strategy and is independent of the location of the sharp gradient or cusp. The error analysis reveals that for a C^0 function (derivative discontinuity at a point), a rate of convergence of n+1 is obtained in R^n. Two applications of the adaptive integration scheme are studied. First, we use the adaptive quadratures for the integration of the regularized Heaviside function, a strongly localized function that is used for modeling sharp gradients. Then, the adaptive quadratures are employed in the enriched finite element solution of the all-electron Coulomb problem in crystalline diamond. The source term and enrichment functions of this problem have sharp gradients and cusps at the nuclei. We show that the optimal rate of convergence is obtained with only a marginal increase in the number of integration points with respect to the pure finite element solution with the same number of elements. The adaptive integration scheme is simple, robust, and directly applicable to any generalized finite element method employing enrichments with sharp local variations or cusps in n-dimensional parallelepiped elements.

    Comment: 22 pages
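    A one-dimensional Python sketch conveys the divide-and-conquer refinement idea: compare a coarse and a refined estimate on each interval and subdivide where they disagree, so integration points cluster around the sharp feature wherever it lies. This toy version (trapezoid versus Simpson) is an illustration only; the paper's scheme operates on n-dimensional parallelepipeds.

```python
def adaptive(f, a, b, tol=1e-8):
    """Recursive adaptive quadrature: accept the refined estimate when it
    agrees with the coarse one, otherwise split at the midpoint and recurse."""
    m = 0.5 * (a + b)
    coarse = 0.5 * (b - a) * (f(a) + f(b))              # trapezoid rule
    fine = (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b))   # Simpson's rule
    if abs(fine - coarse) < tol:
        return fine
    return adaptive(f, a, m, tol / 2) + adaptive(f, m, b, tol / 2)

# Cusp at x = 0.3: the recursion refines there automatically, independent of
# where the cusp happens to sit in the interval.
print(adaptive(lambda x: abs(x - 0.3) ** 0.5, 0.0, 1.0))
```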

    An Efficient Interpolation Technique for Jump Proposals in Reversible-Jump Markov Chain Monte Carlo Calculations

    Selection among alternative theoretical models given an observed data set is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, but cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the MCMC algorithm, and convergence is correspondingly slow. Here we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose inter-model jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest dimensionality. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature to improve the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique as a way to construct efficient "global" proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used efficiently in higher-dimensional spaces.

    Comment: Minor revision to match published version
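    The flavor of the proposal construction can be sketched in a few lines of Python (a hedged illustration of the idea, not the paper's exact algorithm): store single-model MCMC samples in a kD-tree, draw one at random, and jitter it on the scale of its local neighbourhood so the jump lands where the target posterior has support. A full RJMCMC would also need the density of this proposal for the acceptance ratio, which the sketch omits.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
samples = rng.normal(size=(5000, 3))   # stand-in for stored single-model samples
tree = cKDTree(samples)

def propose_jump(k=16):
    """Pick a stored posterior sample, estimate the local density scale from
    its k nearest neighbours, and return a proposal jittered on that scale."""
    s = samples[rng.integers(len(samples))]
    dists, _ = tree.query(s, k=k)
    scale = dists[-1] / np.sqrt(k)     # crude local bandwidth estimate
    return s + rng.normal(scale=scale, size=s.shape)

print(propose_jump())
```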