    Computing Fast and Scalable Table Cartograms for Large Tables

    Given an m x n table T of positive weights and a rectangle R with an area equal to the sum of the weights, a table cartogram computes a partition of R into m x n convex quadrilateral faces such that each face has the same adjacencies as its corresponding cell in T, and has an area equal to the cell's weight. In this thesis, we explored different table cartogram algorithms for large tables with thousands of cells and investigated the potential applications of large table cartograms. We implemented Evans et al.'s table cartogram algorithm, which guarantees zero area error, and adapted a diffusion-based cartographic transformation approach, FastFlow, to produce large table cartograms. We introduced a constraint-optimization-based table cartogram generation technique, TCarto, leveraging the concept of force-directed layout. We implemented TCarto with column-based and quadtree-based parallelization to compute table cartograms for tables with thousands of cells. We presented several potential applications of large table cartograms for creating diagrammatic representations in various real-life scenarios, e.g., for analyzing spatial correlations between geospatial variables, understanding clusters and densities in scatterplots, and creating visual effects in images (e.g., expanding illumination, mosaic art effects). We presented an empirical comparison among these three table cartogram techniques with two different real-life datasets: a meteorological weather dataset and a US State-to-State migration flow dataset. FastFlow and TCarto both performed well on the weather data table. However, for the US State-to-State migration flow data, where the table contained many local optima with large value differences among adjacent cells, FastFlow generated concave quadrilateral faces. We also investigated potential relationships among different measurement metrics such as cartographic error (accuracy), average aspect ratio (the readability of the visualization), computational speed, and the grid size of the table. Furthermore, we augmented our proposed TCarto with an angle constraint to enhance the readability of the visualization, conceding some cartographic error, and also inspected the potential relationship of the restricted angles with the accuracy and the readability of the visualization. In the output of the angle-constrained TCarto algorithm on the US State-to-State migration dataset, the rows and columns of a cell were difficult to identify with angle constraints of up to 20 degrees, but became identifiable with angle constraints above 40 degrees.
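
    The accuracy metric referred to above, cartographic (area) error, can be made concrete with a short sketch. The following is a minimal illustration rather than the thesis's implementation: it assumes each face arrives as a convex quadrilateral with its four (x, y) vertices in order, and the function names are ours, not TCarto's.

    # Minimal sketch of the cartographic (area) error of a table
    # cartogram. Faces are convex quadrilaterals given as four (x, y)
    # vertices in order; names are illustrative assumptions.

    def face_area(face):
        """Area of a polygon via the shoelace formula."""
        s = 0.0
        for i in range(len(face)):
            x1, y1 = face[i]
            x2, y2 = face[(i + 1) % len(face)]
            s += x1 * y2 - x2 * y1
        return abs(s) / 2.0

    def cartographic_error(weights, faces):
        """Mean relative mismatch between each cell's weight and the
        area of its face; 0.0 corresponds to zero area error."""
        total_w = sum(sum(row) for row in weights)
        total_a = sum(face_area(f) for row in faces for f in row)
        err, cells = 0.0, 0
        for wrow, frow in zip(weights, faces):
            for w, f in zip(wrow, frow):
                # Normalise both sides so units and the absolute size
                # of the rectangle R drop out of the comparison.
                err += abs(w / total_w - face_area(f) / total_a)
                cells += 1
        return err / cells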

    Artist-Driven Fracturing of Polyhedral Surface Meshes

    This paper presents a robust, artist-driven method for fracturing a surface polyhedral mesh via fracture maps. A fracture map is an undirected simple graph with nodes representing positions in UV-space and fracture lines along the surface of a mesh. Fracture maps allow artists to concisely and rapidly define, edit, and apply fracture patterns onto the surface of their mesh. The method projects a fracture map onto a polyhedral surface and splits its triangles accordingly. The polyhedral mesh is then segmented along the fracture lines to produce a set of independent surfaces called fracture components, each containing the visible surface of a fractured mesh fragment. Subsequently, we utilize a Voronoi-based approximation of the input polyhedral mesh's medial axis to derive a hidden surface for each fragment. The result is a new watertight polyhedral mesh representing the full fracture component. Results are acquired after a delay brief enough for interactive design. As the size of the input mesh increases, the computation time has been shown to grow linearly. A large mesh of 41,000 triangles requires approximately 3.4 seconds to perform a complete fracture with a complex pattern. For a wide variety of applications, the resulting fractures allow realistic feedback when external forces are applied.
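
    To make the fracture-map representation concrete, the sketch below stores a fracture map as an undirected graph over UV-space nodes and maps a UV point onto a mesh triangle by barycentric interpolation, the geometric core of projecting fracture lines onto a surface. It is a hypothetical sketch; the paper's actual data structures may differ.

    # Hypothetical sketch: a fracture map as an undirected simple graph
    # of UV-space nodes, plus barycentric mapping of a UV point onto a
    # triangle whose vertices carry both UV and 3D coordinates.

    class FractureMap:
        def __init__(self):
            self.pos = {}        # node id -> (u, v) position in UV-space
            self.lines = set()   # frozenset({a, b}) fracture lines

        def add_node(self, nid, u, v):
            self.pos[nid] = (u, v)

        def add_line(self, a, b):
            self.lines.add(frozenset((a, b)))

    def barycentric(p, a, b, c):
        """Barycentric weights of 2D point p in triangle (a, b, c),
        assumed non-degenerate."""
        det = (b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1])
        w1 = ((b[0]-p[0])*(c[1]-p[1]) - (c[0]-p[0])*(b[1]-p[1])) / det
        w2 = ((c[0]-p[0])*(a[1]-p[1]) - (a[0]-p[0])*(c[1]-p[1])) / det
        return w1, w2, 1.0 - w1 - w2

    def uv_to_surface(uv, tri_uv, tri_xyz):
        """Project a UV point onto the 3D triangle via its UV chart."""
        w1, w2, w3 = barycentric(uv, *tri_uv)
        return tuple(w1*tri_xyz[0][i] + w2*tri_xyz[1][i] + w3*tri_xyz[2][i]
                     for i in range(3))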

    Mesh generation using a correspondence distance field

    The central tool of this work is a correspondence distance field to discrete surface points, embedded within a quadtree data structure. The theory, development, and implementation of the distance field tool are described, and two main applications to two-dimensional mesh generation are presented, with extension to three-dimensional capabilities in mind. First is a method for surface-oriented mesh generation from a sufficiently dense set of discrete surface points without connectivity information. Contour levels of distance from the body are specified, and correspondences oriented normally to the contours are created. Regions of merging fronts inside and between objects are detected in the correspondence distance field and incorporated automatically. Second, the boundaries in a Voronoi diagram between specified coordinates are detected adaptively and used to construct a Delaunay tessellation. Tessellation of regions with holes is performed using ghost nodes. Images of meshes for each method are given for a sample set of test cases. Possible extensions, future work, and CFD applications are also discussed.
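
    The defining property of a correspondence distance field is that each query location records not only the distance to the nearest discrete surface point but also which point it corresponds to. The sketch below illustrates this with SciPy's KD-tree standing in for the quadtree described above, an assumption made purely for brevity.

    # Minimal sketch of a correspondence distance field: each query
    # location stores the distance to, and the index of, its nearest
    # discrete surface point. A KD-tree stands in for the quadtree.
    import numpy as np
    from scipy.spatial import cKDTree

    def correspondence_field(surface_pts, query_pts):
        """Return (distances, indices) of nearest surface points."""
        tree = cKDTree(surface_pts)
        return tree.query(query_pts)

    # Example: a unit circle sampled as unconnected surface points.
    theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
    surface = np.column_stack([np.cos(theta), np.sin(theta)])
    xs, ys = np.meshgrid(np.linspace(-2, 2, 9), np.linspace(-2, 2, 9))
    grid = np.column_stack([xs.ravel(), ys.ravel()])
    dist, nearest = correspondence_field(surface, grid)
    # Contour levels of dist approximate offsets of the body; the vector
    # from a grid point to surface[nearest] is normal to its contour.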

    Parallel generalized Delaunay mesh refinement

    The modeling of physical phenomena in computational fracture mechanics, computational fluid dynamics and other fields is based on solving systems of partial differential equations (PDEs). When PDEs are defined over geometrically complex domains, they often do not admit closed-form solutions. In such cases, they are solved approximately using discretizations of domains into simple elements like triangles and quadrilaterals in two dimensions (2D), and tetrahedra and hexahedra in three dimensions (3D). These discretizations are called finite element meshes. Many applications, for example, real-time computer-assisted surgery, or crack propagation from fracture mechanics, impose time and/or mesh size constraints that cannot be met on a single sequential machine. As a result, the development of parallel mesh generation algorithms is required. In this dissertation, we describe a complete solution for both sequential and parallel construction of guaranteed-quality Delaunay meshes for 2D and 3D geometries. First, we generalize the existing 2D and 3D Delaunay refinement algorithms along with theoretical proofs of mesh quality in terms of element shape and mesh gradation. Existing algorithms are constrained to just one or two specific positions for the insertion of a Steiner point inside the circumscribed disk of a poorly shaped element. We derive an entire 2D or 3D region for the selection of a Steiner point (i.e., infinitely many choices) inside the circumscribed disk. Second, we develop a novel theory which extends both the 2D and the 3D Generalized Delaunay Refinement methods for the concurrent and mathematically guaranteed independent insertion of Steiner points. Previous parallel algorithms either are reactive, relying on implementation heuristics to resolve dependencies in parallel mesh generation computations, or require the solution of a very difficult geometric optimization problem (the domain decomposition problem), which is still open for general 3D geometries. Our theory addresses both of these drawbacks. Third, using our generalization of both the sequential and the parallel algorithms, we implemented prototypes of practical and efficient parallel generalized guaranteed-quality Delaunay refinement codes for both 2D and 3D geometries, building on existing state-of-the-art sequential codes for traditional Delaunay refinement methods. On a heterogeneous cluster of more than 100 processors our implementation can generate a uniform mesh with about a billion elements in less than 5 minutes. Even on a workstation with a few cores, we achieve a significant performance improvement over the corresponding state-of-the-art sequential 3D code for graded meshes.
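
    The refinement step being generalized can be sketched briefly: test a triangle's circumradius-to-shortest-edge ratio and, if it exceeds a quality bound, insert a Steiner point inside its circumscribed disk. The classic choice shown below is the circumcenter itself; the dissertation's point is that an entire region of the disk yields equally valid insertions. The bound and names here are illustrative.

    # Sketch of the classic 2D Delaunay refinement quality test: a
    # triangle is poorly shaped when its circumradius-to-shortest-edge
    # ratio exceeds a bound B (sqrt(2) is Ruppert's bound). The
    # circumcenter is the traditional Steiner point choice.
    import math

    def circumcenter(a, b, c):
        ax, ay = a; bx, by = b; cx, cy = c
        d = 2.0 * (ax*(by - cy) + bx*(cy - ay) + cx*(ay - by))
        ux = ((ax*ax + ay*ay)*(by - cy) + (bx*bx + by*by)*(cy - ay)
              + (cx*cx + cy*cy)*(ay - by)) / d
        uy = ((ax*ax + ay*ay)*(cx - bx) + (bx*bx + by*by)*(ax - cx)
              + (cx*cx + cy*cy)*(bx - ax)) / d
        return ux, uy

    def needs_refinement(a, b, c, bound=math.sqrt(2.0)):
        cc = circumcenter(a, b, c)
        r = math.dist(cc, a)                      # circumradius
        lmin = min(math.dist(a, b), math.dist(b, c), math.dist(c, a))
        return r / lmin > bound

    # A thin sliver fails the test and gets a Steiner point.
    tri = ((0.0, 0.0), (1.0, 0.0), (0.5, 0.05))
    if needs_refinement(*tri):
        steiner = circumcenter(*tri)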

    Automated interpretation of digital images of hydrographic charts.

    Details of research into the automated generation of a digital database of hydrographic charts are presented. Low-level processing of digital images of hydrographic charts provides image line feature segments which serve as input to a semi-automated feature extraction system (SAFE). This system is able to perform a great deal of the building of chart features from the image segments simply on the basis of proximity of the segments. The system solicits user interaction when ambiguities arise. The creation of an intelligent knowledge-based system (IKBS), implemented in the form of a backward-chained production rule system which cooperates with the SAFE system, is described. The IKBS attempts to resolve ambiguities using domain knowledge coded in the form of production rules. The two systems communicate by the passing of goals from SAFE to the IKBS and the return of a certainty factor by the IKBS for each goal submitted. The SAFE system can make additional feature-building decisions on the basis of collected sets of certainty factors, thus reducing the need for user interaction. This thesis establishes that the cooperating IKBS approach to image interpretation offers an effective route to automated image understanding.
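
    The goal/certainty-factor protocol between SAFE and the IKBS can be illustrated with a toy backward-chaining engine. The rules, facts, and combination scheme below (minimum over a rule's premises, scaled by the rule's own certainty, merged MYCIN-style across rules) are invented for illustration; the real rule base encodes hydrographic domain knowledge.

    # Toy backward-chaining production system that returns a certainty
    # factor (CF) for a goal, in the spirit of the IKBS above. Rules
    # and facts are illustrative assumptions.
    RULES = [
        # (goal, premises, rule certainty)
        ("is_depth_contour", ["is_line_feature", "near_soundings"], 0.8),
        ("is_depth_contour", ["labelled_with_depth"], 0.9),
    ]
    FACTS = {"is_line_feature": 0.9, "near_soundings": 0.7,
             "labelled_with_depth": 0.0}

    def certainty(goal):
        """Backward-chain on the goal: each rule that concludes it
        contributes rule_cf * min(premise CFs); contributions are
        merged MYCIN-style."""
        if goal in FACTS:
            return FACTS[goal]
        total = 0.0
        for head, premises, rule_cf in RULES:
            if head == goal:
                support = rule_cf * min(certainty(p) for p in premises)
                total += support * (1.0 - total)
            # goals with no facts and no rules fall through to CF 0.0
        return total

    print(certainty("is_depth_contour"))   # ~0.56 = 0.8 * min(0.9, 0.7)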

    Field D* Pathfinding in Weighted Simplicial Complexes

    The development of algorithms to efficiently determine an optimal path through a complex environment is a continuing area of research within Computer Science. When such environments can be represented as a graph, established graph search algorithms, such as Dijkstra’s shortest path and A*, can be used. However, many environments are constructed from a set of regions that do not conform to a discrete graph. The Weighted Region Problem was proposed to address the problem of finding the shortest path through a set of such regions, weighted with values representing the cost of traversing each region. Robust solutions to this problem are computationally expensive, since finding shortest paths across a region requires expensive minimisation. Sampling approaches construct graphs by introducing extra points on region edges and connecting them with edges criss-crossing the region; Dijkstra or A* are then applied to compute shortest paths. The connectivity of these graphs is high, and such techniques are thus not particularly well suited to environments where the weights and representation frequently change. The Field D* algorithm, by contrast, computes the shortest path across a grid of weighted square cells and has replanning capabilities that cater for environmental changes. However, representing an environment as a weighted grid (an image) is not space-efficient, since high resolution is required to produce accurate paths through areas containing features sensitive to noise. In this work, we extend Field D* to weighted simplicial complexes – specifically, triangulations in 2D and tetrahedral meshes in 3D. Such representations offer benefits in terms of space over a weighted grid, since fewer triangles can represent polygonal objects with greater accuracy than a large number of grid cells. By exploiting these savings, we show that Triangulated Field D* can produce an equivalent path cost to grid-based Multi-resolution Field D*, using up to an order of magnitude fewer triangles than grid cells and visiting an order of magnitude fewer nodes. Finally, as a practical demonstration of the utility of our formulation, we show how Field D* can be used to approximate a distance field on the nodes of a simplicial complex, and how this distance field can be used to weight the simplicial complex to produce contour-following behaviour by shortest paths computed with Field D*.
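
    The computation that distinguishes Field D* from node-to-node search deserves a short sketch: a path may cross the edge opposite the current node at any interior point, with the cost-to-goal linearly interpolated between the edge's endpoints, so each local update is a one-dimensional minimisation. The sketch below solves it numerically by ternary search (valid because the objective is convex in the crossing parameter); the positions, g-values, and weight are illustrative inputs.

    # Minimal sketch of the interpolated Field D* edge-crossing cost:
    # from node position p, cross edge (a, b) at parameter t in [0, 1];
    # the cost-to-goal g is interpolated linearly between g_a and g_b,
    # and traversal costs weight w times the straight-line distance.
    import math

    def edge_cost(p, a, b, g_a, g_b, w):
        def cost(t):
            x = (a[0] + t*(b[0]-a[0]), a[1] + t*(b[1]-a[1]))
            return g_a + t*(g_b - g_a) + w * math.dist(p, x)
        lo, hi = 0.0, 1.0
        for _ in range(60):            # ternary search on a convex cost
            m1, m2 = lo + (hi - lo)/3, hi - (hi - lo)/3
            if cost(m1) < cost(m2):
                hi = m2
            else:
                lo = m1
        return cost((lo + hi) / 2)

    # The optimal crossing balances the cheaper g-value at one endpoint
    # against the longer traversal distance needed to reach it.
    print(edge_cost((0.0, 0.0), (1.0, -0.5), (1.0, 0.5), 3.0, 2.5, 2.0))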

    Creating a virtual slide map from sputum smear images for region-of-interest localisation in automated microscopy

    Automated microscopy for the detection of tuberculosis (TB) in sputum smears seeks to address the strain on technicians in busy TB laboratories and to achieve faster diagnosis in countries with a heavy TB burden. As a step in the development of an automated microscope, the project described here was concerned with microscope auto-positioning; this primarily involves generating a point of reference on a slide, which can be used to automatically bring desired fields on the slide to the field-of-view of the microscope for re-examination. The study was carried out using a conventional microscope and Ziehl-Neelsen (ZN) stained sputum smear slides. All images were captured at 40x magnification. A digital replication of an actual slide, the virtual slide map, was constructed by combining the manually acquired images of the different fields of the slide. The geometric hashing scheme was found to be suitable for auto-stitching a large number of images (over 300) to form a virtual slide map. An object recognition algorithm, also based on the geometric hashing technique, was used to localise a query image (the current field-of-view) on the virtual slide map. This localised field-of-view then served as the point of reference. The true positive rate (correct localisation of a query image on the virtual slide map) achieved by the algorithm was above 88%, even for noisy query images captured at slide orientations of up to 26°. The image registration error, computed as the average mean square error, was less than 14 pixel² (corresponding to 1.02 μm² and a 0.001% error in an image measuring 1030 x 1300 pixels), i.e., a root mean square registration error of 3.7 pixels. Superior image registration accuracy was obtained, at the expense of time, using the scale-invariant feature transform (SIFT), with an image registration error of 1 pixel² (0.07 μm²). The object recognition algorithm is inherently robust to changes in slide orientation and placement, which are likely to occur in practice as it is impossible to place the slide in exactly the same position on the microscope at different times. Moreover, the algorithm showed high tolerance to illumination changes and robustness to noise.
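
    Geometric hashing, the technique behind both the auto-stitching and the query localisation above, can be sketched compactly: model points are recorded in coordinate frames (bases) defined by ordered point pairs, and a query votes for the model basis that best explains its own points, which is what makes the method invariant to slide rotation, translation, and scale. The bin size and the simplified single-basis query below are illustrative choices.

    # Compact sketch of geometric hashing for similarity-invariant
    # point matching, as used to localise a field-of-view on the
    # virtual slide map. BIN and the toy query handling are
    # illustrative assumptions.
    from collections import defaultdict

    BIN = 0.25  # quantisation of basis-relative coordinates

    def basis_coords(p, o, q):
        """Coordinates of p in the frame with origin o and the segment
        o->q mapped to the unit x-axis (normalising rotation, shift,
        and scale)."""
        dx, dy = q[0] - o[0], q[1] - o[1]
        s = dx*dx + dy*dy
        rx, ry = p[0] - o[0], p[1] - o[1]
        return ((rx*dx + ry*dy) / s, (ry*dx - rx*dy) / s)

    def build_table(model):
        """Hash every model point against every ordered basis pair."""
        table = defaultdict(list)
        for i, o in enumerate(model):
            for j, q in enumerate(model):
                if i == j:
                    continue
                for k, p in enumerate(model):
                    if k in (i, j):
                        continue
                    u, v = basis_coords(p, o, q)
                    table[(round(u/BIN), round(v/BIN))].append((i, j))
        return table

    def locate(table, query):
        """Vote for the model basis consistent with the query points
        (a single query basis is used here for simplicity)."""
        votes = defaultdict(int)
        o, q = query[0], query[1]
        for p in query[2:]:
            u, v = basis_coords(p, o, q)
            for b in table.get((round(u/BIN), round(v/BIN)), ()):
                votes[b] += 1
        return max(votes, key=votes.get) if votes else None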