70 research outputs found

    A Generic Lazy Evaluation Scheme for Exact Geometric Computations

    We present a generic C++ design to perform efficient and exact geometric computations using lazy evaluations. Exact geometric computations are critical for the robustness of geometric algorithms, and their efficiency is critical for most applications, hence the need to delay the exact computations at run time until they are actually needed. Our approach is generic and extensible in the sense that it can be made a library which users extend to their own geometric objects or primitives. It involves techniques such as generic functor adaptors, dynamic polymorphism, reference counting for the management of directed acyclic graphs, and exception handling for detecting cases where exact computations are needed. It also relies on multiple precision arithmetic as well as interval arithmetic. We apply our approach to the whole geometric kernel of CGAL.

    A Generic Lazy Evaluation Scheme for Exact Geometric Computations

    We present a generic C++ design to perform exact geometric computations efficiently using lazy evaluations. Exact geometric computations are critical for the robustness of geometric algorithms. Their efficiency is also important for many applications, hence the need for delaying the costly exact computations at run time until they are actually needed, if at all. Our approach is generic and extensible in the sense that it is possible to make it a library that users can apply to their own geometric objects and primitives. It involves techniques such as generic functor-adaptors, static and dynamic polymorphism, reference counting for the management of directed acyclic graphs, and exception handling for triggering exact computations when needed. It also relies on multi-precision arithmetic as well as interval arithmetic. We apply our approach to the whole geometry kernel of CGAL.

    CGAL - The Computational Geometry Algorithms Library

    See http://hal.archives-ouvertes.fr/docs/00/59/26/85/ANNEX/r_3NE2KWB7.pd

    Scalable algorithms for bichromatic line segment intersection problems on coarse grained multicomputers

    We present output-sensitive scalable parallel algorithms for bichromatic line segment intersection problems in the coarse grained multicomputer model. Under the assumption that n ≥ p^2, where n is the number of line segments and p the number of processors, we obtain an intersection counting algorithm with a time complexity of O((n log n log p)/p + Ts(n log p, p)), where Ts(m, p) is the time used to sort m items on a p-processor machine. The first term captures the time spent in sequential computation performed locally by each processor; the second captures the interprocessor communication time. An additional O(k/p) time of sequential computation is spent on reporting the k intersections. As the sequential time complexity is O(n log n) for counting, plus O(k) for reporting, we obtain a speedup of p/log p in the sequential part of the algorithm. The speedup in the communication part depends on the underlying architecture; for a hypercube, for example, it ranges between p/log^2 p and p/log p, depending on the ratio of n and p. As reporting involves no more interprocessor communication than counting, the algorithm achieves a full speedup of p for k ≥ O(max(n log n log p, n log^3 p)), even on a hypercube.

    On the multisearching problem for hypercubes

    We build on the work of Dehne and Rau-Chaplin and give improved bounds for the multisearch problem on a hypercube. This is a parallel search problem where the elements in the structure S to be searched are totally ordered, but where it is not possible to compare any two given queries q and q' in constant time. This problem is fundamental in computational geometry; for example, it models planar point location in a slab. More precisely, we are given on an n-processor hypercube a sorted n-element sequence S and a set Q of n queries, and we need to find for each query q ∈ Q its location in the sorted S. Note that one cannot solve this problem by sorting S ∪ Q, because every comparison-based parallel sorting algorithm needs to compare a pair q, q' ∈ Q in constant time. We present an improved algorithm for the multisearch problem, one that takes O(log n (log log n)^3) time on an n-processor hypercube. This essentially replaces a logarithmic factor in the time complexities of previous schemes by a (log log n)^3 factor. The hypercube model for which we claim our bounds is the standard one, with O(1) memory registers per processor and one-port communication. Each register can store O(log n) bits, so that a processor knows its ID.

    Remote Monitoring of Railway Equipment Using Internet Technologies

    This paper outlines the main benefits of using Internet technologies for the remote monitoring of railway equipment. We present two prototypes of a remote monitoring tool for railway equipment. The first has a 2-tier architecture and is based on Java technology, with Java RMI as the communication protocol. The second has a 3-tier architecture and is based on XML/XSL technology, with HTTP as the communication protocol. We compare both systems and draw some conclusions from the work so far. This paper is intended for people concerned with industrial applications of the Internet, especially those developing remote monitoring tools for embedded systems.

    Scalable parallel geometric algorithms for coarse grained multicomputers

    Whereas most of the literature assumes that the number of processors p is a function of the problem size n, in scalable algorithms p becomes a parameter of the time complexity. This is a more realistic model of real parallel machines and yields optimal algorithms for the case n ≥ H(p), where H is a function depending on the architecture of the interconnection network. In this paper we present scalable algorithms for a number of geometric problems, namely the lower envelope of line segments, 2D-nearest neighbour, 3D-maxima, 2D-weighted dominance counting, area of the union of rectangles, and 2D-convex hull. The main idea of these algorithms is to decompose the problem into p subproblems of size O(F(n,p) + f(p)), with f(p) ≤ F(n,p), which can be solved independently using optimal sequential algorithms. For each problem we present a spatial decomposition scheme based on some geometric observations. The decomposition schemes have in common that they can be computed by globally sorting the entire data set at most twice. The data redundancy of f(p) duplicates of data elements per processor does not increase the asymptotic time complexity, and for the algorithms presented in this paper it ranges from p to p^2. The algorithms do not depend on a specific architecture; they are easy to implement and efficient in practice, as experiments show.

    Sub-micron sized saccharide fibres via electrospinning

    In this work, the production of continuous submicron diameter saccharide fibres is shown to be possible using the electrospinning process. The mechanism for the formation of electrospun polymer fibres is usually attributed to the physical entanglement of long molecular chains, so the ability to electrospin continuous fibres from low molecular weight saccharides was an unexpected phenomenon. The formation of sub-micron diameter “sugar syrup” fibres was observed in situ using high-speed video, and the trajectory of the electrospun saccharide fibre was observed to follow that typical of electrospun polymers. Based on initial food-grade glucose syrup tests, various solutions based on combinations of syrup components, i.e. mono-, di- and tri-saccharides, were investigated to map out the materials and electrospinning conditions that lead to the formation of fibres. This work demonstrated that, amongst the various types of saccharide solutions studied, sucrose exhibits the highest propensity for fibre formation during electrospinning. The possibility of electrospinning low molecular weight saccharides into sub-micron fibres has implications for the electrospinnability of supramolecular polymers and other biomaterials.

    SeqAn - An efficient, generic C++ library for sequence analysis

    Background: The use of novel algorithmic techniques is pivotal to many important problems in life science. For example, the sequencing of the human genome [1] would not have been possible without advanced assembly algorithms. However, owing to the high speed of technological progress and the urgent need for bioinformatics tools, there is a widening gap between state-of-the-art algorithmic techniques and the actual algorithmic components of tools that are in widespread use. Results: To remedy this trend we propose the use of SeqAn, a library of efficient data types and algorithms for sequence analysis in computational biology. SeqAn comprises implementations of existing, practical state-of-the-art algorithmic components to provide a sound basis for algorithm testing and development. In this paper we describe the design and content of SeqAn and demonstrate its use by giving two examples. In the first example we show an application of SeqAn as an experimental platform by comparing different exact string matching algorithms. The second example is a simple version of the well-known MUMmer tool rewritten in SeqAn. Results indicate that our implementation is very efficient and versatile to use. Conclusion: We anticipate that SeqAn greatly simplifies the rapid development of new bioinformatics tools by providing a collection of readily usable, well-designed algorithmic components which are fundamental for the field of sequence analysis. This leverages not only the implementation of new algorithms, but also enables a sound analysis and comparison of existing algorithms.

    On the Multisearching Problem for Hypercubes

    In this paper we give improved bounds for the multisearch problem on a hypercube. This is a parallel search problem where the elements in the structure S to be searched are totally ordered, but where it is not possible to compare any two given queries q and q' in constant time. More precisely, we are given on an n-processor hypercube a sorted n-element sequence S and a set Q of n queries, and we need to find for each query q ∈ Q its location in the sorted S. We present an improved algorithm for the multisearch problem, one that takes O(log n (log log n)^3) time on an n-processor hypercube. This problem is fundamental in computational geometry; for example, it models planar point location in a slab. As an application we give a trapezoidal decomposition algorithm with the same time complexity on an (n log n)-processor hypercube. The hypercube model for which we claim our bounds is the standard one, SIMD, with O(1) memory registers per processor and one-port communication. Each register can store O(log n) bits, so that a processor knows its ID.