571 research outputs found

    Efficient permutation-based range-join algorithms on N-dimensional meshes using data-shifting

    Get PDF
    ©2001 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
    In this paper, we present two efficient parallel algorithms for computing a non-equijoin, the range-join, of two relations on N-dimensional mesh-connected computers. The proposed algorithms use the data-shifting approach to efficiently permute every sorted subset of relation S to each processor in turn, recursively in dimensions from low to high, where it is joined with the local subset of relation R.
    Shao Dong Chen, Hong Shen, Rodney Topor
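    A minimal sketch of the local join step the abstract alludes to (not the mesh-parallel data-shifting itself): once a sorted subset of S arrives at a processor, it is range-joined against the resident sorted subset of R. The symmetric predicate |r - s| <= eps and all names are illustrative assumptions, since the abstract does not give the join condition.

```python
# Hedged sketch of the per-processor join step only, not the authors'
# mesh-parallel algorithm.
from bisect import bisect_left, bisect_right

def local_range_join(R, S, eps):
    """Join sorted key lists: emit (r, s) pairs with |r - s| <= eps."""
    S = sorted(S)                      # the arriving subset of relation S
    out = []
    for r in sorted(R):                # the resident subset of relation R
        lo = bisect_left(S, r - eps)   # first s >= r - eps
        hi = bisect_right(S, r + eps)  # one past last s <= r + eps
        out.extend((r, s) for s in S[lo:hi])
    return out

print(local_range_join([1, 5, 9], [2, 4, 8, 14], eps=1))
# [(1, 2), (5, 4), (9, 8)]
```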

    Coarse-to-fine approximation of range images with bounded error adaptive triangular meshes

    Full text link
    Copyright 2007 Society of Photo-Optical Instrumentation Engineers. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited.
    A new technique for approximating range images with adaptive triangular meshes ensuring a user-defined approximation error is presented. This technique is based on an efficient coarse-to-fine refinement algorithm that avoids iterative optimization stages. The algorithm first maps the pixels of the given range image to 3D points defined in a curvature space. Those points are then tetrahedralized with a 3D Delaunay algorithm. Finally, an iterative process starts digging up the convex hull of the obtained tetrahedralization, progressively removing the triangles that do not fulfill the specified approximation error. This error is assessed in the original 3D space. The introduction of the aforementioned curvature space makes it possible for both convex and nonconvex object surfaces to be approximated with adaptive triangular meshes, thus improving the behavior of previous coarse-to-fine sculpturing techniques. The proposed technique is evaluated on real range images and compared to two simplification techniques that also ensure a user-defined approximation error: a fine-to-coarse approximation algorithm based on iterative optimization (Jade) and an optimization-free, fine-to-coarse algorithm (Simplification Envelopes).
    This work has been partially supported by the Spanish Ministry of Education and Science under projects TRA2004-06702/AUT and DPI2004-07993-C03-03. The first author was supported by the Ramón y Cajal Program.
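    A hedged sketch of the acceptance test that drives such sculpturing: a candidate triangle survives only if every original range point it covers lies within the user-defined tolerance of the triangle's plane, with the error assessed in the original 3D space as the abstract states. The function names and the point-coverage bookkeeping are assumptions, not the paper's data structures.

```python
# Hedged sketch of the per-triangle approximation-error test.
import numpy as np

def plane_distances(tri, pts):
    """Unsigned distances from pts (n, 3) to the plane of triangle tri (3, 3)."""
    n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
    n = n / np.linalg.norm(n)
    return np.abs((pts - tri[0]) @ n)

def fulfils_error(tri, covered_pts, tol):
    """True if the triangle may stay; False means it gets dug away."""
    return plane_distances(tri, covered_pts).max() <= tol

tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
pts = np.array([[0.2, 0.2, 0.05], [0.5, 0.1, 0.2]])
print(fulfils_error(tri, pts, tol=0.1))   # False: one point deviates by 0.2
```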

    Shared memory with hidden latency on a family of mesh-like networks

    Get PDF

    Mesh Compression

    Get PDF
    Mesh compression is a broad research area with applications in many different fields, such as the handling of very large models, the exchange of three-dimensional content over the internet, electronic commerce, and the flexible representation of volumetric data. In this thesis the mesh compression method of the Cut-Border Machine is described. The Cut-Border Machine encodes meshes by growing a region through the mesh and encoding the way in which the mesh elements are incorporated into the growing region. The Cut-Border Machine can be applied to both triangular and tetrahedral meshes. Despite the simple structure of the method, it achieves very good compression rates; in the tetrahedral case the Cut-Border Machine performs best among all known methods. The simple nature of the Cut-Border Machine allows, on the one hand, for a direct hardware implementation, while software implementations are also extremely fast. On the other hand, the simplicity allows for a theoretical analysis of the algorithm. It is shown that for planar triangulations a slightly modified version of the Cut-Border Machine runs in time linear in the number of vertices and that the compressed representation consumes only linear storage space, i.e. no more than five bits per vertex.
    Besides the detailed description of the Cut-Border Machine with several improvements and optimizations, the thesis gives an introduction to meshes and appropriate data structures, develops several coding techniques useful for mesh compression, and gives a broad overview of related work. Furthermore, the author improves the encoding efficiency of several other compression techniques. In particular, the algorithmically achieved upper bound for the encoding of planar triangulations is improved to within ten percent of the theoretical limit, which is the best result known to date.
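    A toy region-growing encoder in the spirit of the Cut-Border Machine, assuming an orientable manifold triangle mesh: for each cut-border gate edge it records whether the third vertex of the adjacent triangle is new or already known. This is a simplified stand-in; the real machine distinguishes more operations and entropy-codes them.

```python
# Simplified region-growing encoder; not the thesis's actual operation set.
from collections import deque

def encode(triangles):
    tri_of = {}                               # oriented edge -> third vertex
    for a, b, c in triangles:
        tri_of[(a, b)] = c
        tri_of[(b, c)] = a
        tri_of[(c, a)] = b
    a, b, c = triangles[0]
    seen = {a, b, c}
    gates = deque([(b, a), (c, b), (a, c)])   # cut-border edges (reversed)
    done = {frozenset((a, b, c))}
    ops = [("start", (a, b, c))]
    while gates:
        u, v = gates.popleft()
        w = tri_of.get((u, v))
        if w is None or frozenset((u, v, w)) in done:
            continue                          # mesh border or visited face
        done.add(frozenset((u, v, w)))
        ops.append(("new" if w not in seen else "connect", w))
        seen.add(w)
        gates.extend([(w, v), (u, w)])        # the two fresh cut-border edges
    return ops

print(encode([(0, 1, 2), (2, 1, 3), (0, 2, 4)]))
# [('start', (0, 1, 2)), ('new', 3), ('new', 4)]
```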

    Efficient parallel processing with optical interconnections

    Get PDF
    With the advances in VLSI technology, it is now possible to build chips which can each contain thousands of processors. The efficiency of such chips in executing parallel algorithms heavily depends on the interconnection topology of the processors. It is not possible to build a fully interconnected network of processors with constant fan-in/fan-out using electrical interconnections. Free-space optics is a remedy to this limitation. Qualities exclusive to the optical medium are its ability to be directed for propagation in free space and the property that optical channels can cross in space without any interference. In this thesis, we present an electro-optical interconnected architecture named Optical Reconfigurable Mesh (ORM). It is based on an existing optical model of computation. There are two layers in the architecture. The processing layer is a reconfigurable mesh and the deflecting layer contains optical devices to deflect light beams. ORM provides three types of communication mechanisms. The first is for arbitrary planar connections among sets of locally connected processors using the reconfigurable mesh. The second is for arbitrary connections among N of the processors using the electrical buses on the processing layer and N² fixed passive deflecting units on the deflection layer. The third is for arbitrary connections among any of the N² processors using the N² mechanically reconfigurable deflectors in the deflection layer. The third type of communication mechanism is significantly slower than the other two. Therefore, it is desirable to avoid reconfiguring this type of communication during the execution of the algorithms; instead, the optical reconfiguration can be done before the execution of each algorithm begins. Determining a configuration suitable for the entire execution of a task is studied in this thesis. The basic data movements for each of the mechanisms are studied. Finally, to show the power of ORM, we use all three types of communication mechanisms in the first O(log N)-time algorithm for finding the convex hulls of all figures in an N x N binary image, presented in this thesis.
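    A minimal sketch of one basic data movement on the processing layer: a row broadcast over a segmented bus of a reconfigurable mesh. The switch encoding and names are assumptions, and the optical deflection layer is not modelled.

```python
# Generic reconfigurable-mesh primitive, not ORM's optical mechanisms.
def row_broadcast(values, connected, writer):
    """connected[i]: bus switch between processors i-1 and i is closed;
    the single writer's value reaches every processor on its segment."""
    n = len(values)
    lo = writer
    while lo > 0 and connected[lo]:
        lo -= 1
    hi = writer
    while hi + 1 < n and connected[hi + 1]:
        hi += 1
    out = list(values)
    for i in range(lo, hi + 1):
        out[i] = values[writer]
    return out

#                  P0 P1 P2 P3 P4    switch between P2 and P3 is open
print(row_broadcast([9, 0, 0, 0, 0], [False, True, True, False, True], writer=0))
# [9, 9, 9, 0, 0] -- the broadcast stops at the open switch before P3
```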

    Statistical Medial Model for Cardiac Segmentation and Morphometry

    Get PDF
    In biomedical image analysis, shape information can be utilized for many purposes. For example, irregular shape features can help identify diseases; shape features can help match different instances of anatomical structures for statistical comparison; and prior knowledge of the mean and possible variation of an anatomical structure's shape can help segment a new example of this structure in noisy, low-contrast images. A good shape representation helps to improve the performance of the above techniques. The overall goal of the proposed research is to develop and evaluate methods for representing shapes of anatomical structures. The medial model is a shape representation method that models a 3D object by explicitly defining its skeleton (medial axis) and deriving the object's boundary via "inverse skeletonization". This model represents shape compactly, and naturally expresses descriptive global shape features like "thickening", "bending", and "elongation". However, its application in biomedical image analysis has been limited, and it has not yet been applied to the heart, which has a complex shape. In this thesis, I focus on developing efficient methods to construct the medial model, and apply it to solve biomedical image analysis problems. I propose a new 3D medial model which can be efficiently applied to complex shapes. The proposed medial model closely approximates the medial geometry along medial edge curves and medial branching curves by soft-penalty optimization and local correction. I further develop a scheme to perform model-based segmentation using a statistical medial model which incorporates prior shape and appearance information. The proposed medial models are applied to a series of image analysis tasks. The 2D medial model is applied to the corpus callosum, which results in an improved alignment of the patterns of commissural connectivity compared to a volumetric registration method. The 3D medial model is used to describe the myocardium of the left and right ventricles, which provides detailed thickness maps characterizing different disease states. The model-based myocardium segmentation scheme is tested on a heterogeneous adult MRI dataset. Our segmentation experiments demonstrate that the statistical medial model can accurately segment the ventricular myocardium and provide useful parameters to characterize heart function.
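    A hedged 2D illustration of inverse skeletonization using the standard medial geometry relation b = m + r(-r'T ± sqrt(1 - r'²)N), where T and N are the unit tangent and normal of the medial curve and r' is the arc-length derivative of the radius. This is textbook medial geometry, not the thesis's 3D soft-penalty construction; all names are mine.

```python
# Recover the two boundary curves of a 2D object from its medial curve.
import numpy as np

def boundary_from_medial(m, r):
    """m: (n, 2) medial curve samples, r: (n,) radii -> two boundary curves."""
    dm = np.gradient(m, axis=0)                   # dm/dt
    speed = np.linalg.norm(dm, axis=1)            # |m'(t)|
    T = dm / speed[:, None]                       # unit tangent
    N = np.stack([-T[:, 1], T[:, 0]], axis=1)     # unit normal
    dr = np.gradient(r) / speed                   # dr/ds, arc-length derivative
    root = np.sqrt(np.clip(1.0 - dr**2, 0.0, None))
    up = -dr[:, None] * T + root[:, None] * N
    um = -dr[:, None] * T - root[:, None] * N
    return m + r[:, None] * up, m + r[:, None] * um

# constant-radius straight medial axis -> the two sides of a ribbon
m = np.stack([np.linspace(0.0, 4.0, 9), np.zeros(9)], axis=1)
bp, bm = boundary_from_medial(m, np.ones(9))
print(bp[0], bm[0])                               # [0. 1.] [0. -1.]
```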

    The 3D-3D registration problem revisited

    Get PDF
    We describe a new framework for globally solving the 3D-3D registration problem with unknown point correspondences. This problem is significant as it is frequently encountered in many applications. Existing methods are not fully satisfactory, mainly due to the risk of local minima. Our framework is grounded in Lipschitz global optimization theory. It achieves guaranteed global optimality without any initialization. By exploiting the special structure of the problem itself and of the 3D rotation space SO(3), we propose a Box-and-Ball algorithm, which solves the problem efficiently. The main idea of the work can be applied to many other problems as well.
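    A hedged sketch of the kind of Lipschitz branch-and-bound over the rotation ball that such frameworks build on. This is not the paper's Box-and-Ball algorithm; it only shows the standard bound that any rotation within angular radius gamma of a box centre moves a point p by at most gamma·|p|. The cost function, names, and angle-axis cube subdivision are assumptions.

```python
# Branch-and-bound over SO(3) via the angle-axis cube (illustrative only).
import heapq
import numpy as np
from scipy.spatial.transform import Rotation

def nn_dists(R, P, Q):
    """Nearest-neighbour distance of each rotated point R*p into Q."""
    d = np.linalg.norm(R.apply(P)[:, None, :] - Q[None, :, :], axis=2)
    return d.min(axis=1)

def bnb_rotation(P, Q, tol=1e-3, max_pops=2000):
    """Globally minimise the sum of NN distances over SO(3), no initial guess."""
    norms = np.linalg.norm(P, axis=1)
    best, best_R = np.inf, Rotation.identity()
    heap = [(0.0, (0.0, 0.0, 0.0), np.pi)]    # (lower bound, centre, half-width)
    for _ in range(max_pops):
        if not heap:
            break
        lb, c, hw = heapq.heappop(heap)
        if lb >= best - tol:
            break                             # best is certified optimal to tol
        for corner in np.ndindex(2, 2, 2):    # split the box into 8 children
            cc = np.asarray(c) + (np.asarray(corner) - 0.5) * hw
            d = nn_dists(Rotation.from_rotvec(cc), P, Q)
            if d.sum() < best:
                best, best_R = d.sum(), Rotation.from_rotvec(cc)
            gamma = np.sqrt(3.0) * hw / 2.0   # angular radius of the child box
            child_lb = np.maximum(d - gamma * norms, 0.0).sum()
            heapq.heappush(heap, (child_lb, tuple(cc), hw / 2.0))
    return best_R, best

rng = np.random.default_rng(0)
Q = rng.normal(size=(8, 3))
P = Rotation.from_rotvec([0.3, -0.5, 0.2]).inv().apply(Q)
R_est, err = bnb_rotation(P, Q)
print(err, R_est.as_rotvec())   # err near 0, rotvec near [0.3, -0.5, 0.2]
```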

    Hypersweeps, Convective Clouds and Reeb Spaces

    Get PDF
    Isosurfaces are one of the most prominent tools in scientific data visualisation. An isosurface is a surface that defines the boundary of a feature of interest in space for a given threshold. This is integral in analysing data from the physical sciences, which observe and simulate three- or four-dimensional phenomena. However, it is time-consuming and impractical to discover surfaces of interest by manually selecting different thresholds. The systematic way to discover significant isosurfaces in data is with a topological data structure called the contour tree. The contour tree encodes the connectivity and shape of each isosurface at all possible thresholds. The first part of this work has been devoted to developing algorithms that use the contour tree to discover significant features in data using high performance computing systems. Those algorithms provided a clear speedup over previous methods and were used to visualise physical plasma simulations. A major limitation of isosurfaces and contour trees is that they are only applicable when a single property is associated with data points. However, scientific data sets often take multiple properties into account. A recent breakthrough generalised isosurfaces to fiber surfaces. Fiber surfaces define the boundary of a feature where the threshold is defined in terms of multiple parameters, instead of just one. In this work we used fiber surfaces together with isosurfaces and the contour tree to create a novel application that helps atmosphere scientists visualise convective cloud formation. Using this application, they were able, for the first time, to visualise the physical properties of certain structures that trigger cloud formation. Contour trees can also be generalised to handle multiple parameters. The natural extension of the contour tree is called the Reeb space and it comes from the pure mathematical field of fiber topology. The Reeb space is not yet fully understood mathematically and algorithms for computing it have significant practical limitations. A key difficulty is that while the contour tree is a traditional one-dimensional data structure made up of points and lines between them, the Reeb space is far more complex: it is made up of two-dimensional sheets, attached to each other in intricate ways. The last part of this work focuses on understanding the structure of Reeb spaces and the rules that are followed when sheets are combined. This theory builds towards developing robust combinatorial algorithms to compute and use Reeb spaces for practical data analysis.
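    A hedged sketch of one half of contour tree construction: the join tree of a scalar field on a graph, built with a union-find sweep from high to low values. The full contour tree combines this with the analogous split tree; vertex names and values below are made up.

```python
# Join tree of a scalar field on a graph via union-find (illustrative).
def join_tree(values, edges):
    parent = {}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]     # path halving
            v = parent[v]
        return v
    adj = {v: [] for v in values}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    lowest = {}                               # component root -> lowest vertex
    arcs = []                                 # (upper end, lower end) of tree arcs
    for v in sorted(values, key=values.get, reverse=True):
        parent[v] = v
        lowest[v] = v
        for u in adj[v]:
            if u in parent:                   # u was swept earlier (higher value)
                ru, rv = find(u), find(v)
                if ru != rv:
                    arcs.append((lowest[ru], v))
                    parent[ru] = rv
                    lowest[rv] = v
        lowest[find(v)] = v
    return arcs

vals = {'a': 5, 'b': 4, 'c': 3, 'd': 2}       # two peaks merging at a saddle
print(join_tree(vals, [('a', 'c'), ('b', 'c'), ('c', 'd')]))
# [('a', 'c'), ('b', 'c'), ('c', 'd')]
```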

    Numerical solution of 3-D electromagnetic problems in exploration geophysics and its implementation on massively parallel computers

    Get PDF
    The growing significance, technical development and employment of electromagnetic (EM) methods in exploration geophysics have led to an increasing need for reliable and fast techniques of interpretation of 3-D EM data sets acquired in complex geological environments. The first and most important step towards creating an inversion method is the development of a solver for the forward problem. In order to create an efficient, reliable and practical 3-D EM inversion, it is necessary to have a 3-D EM modelling code that is highly accurate, robust and very fast. This thesis focuses precisely on this crucial and very demanding step to building a 3-D EM interpretation method. The thesis presents as its main contribution a highly accurate, robust, very fast and extremely scalable numerical method for 3-D EM modelling in geophysics that is based on finite elements (FE) and designed to run on massively parallel computing platforms. Thanks to the fact that the FE approach supports completely unstructured tetrahedral meshes as well as local mesh refinements, the presented solver is able to represent complex geometries of subsurface structures very precisely and thus improve the solution accuracy and avoid misleading artefacts in images. Consequently, it can be successfully used in geological environments of arbitrary geometrical complexity. The parallel implementation of the method, which is based on domain decomposition and a hybrid MPI-OpenMP scheme, has proved to be highly scalable: the achieved speed-up is close to linear for more than a thousand processors. Thanks to this, the code is able to deal with extremely large problems, which may have hundreds of millions of degrees of freedom, in a very efficient way. The importance of having this forward-problem solver lies in the fact that it is now possible to create a 3-D EM inversion that can deal with data obtained in extremely complex geological environments in a way that is realistic for practical use in industry. So far, no such imaging tool has been proposed, due to a lack of efficient parallel FE solutions as well as the limitations of efficient solvers based on finite differences. In addition, the thesis discusses physical, mathematical and numerical aspects and challenges of 3-D EM modelling, which have been studied during my research in order to properly design the presented software for EM field simulations on 3-D areas of the Earth. Through this work, a physical problem formulation based on the secondary Coulomb-gauged EM potentials has been validated, proving that it can be successfully used with the standard nodal FE method to give highly accurate numerical solutions. Also, this work has shown that Krylov subspace iterative methods are the best solution for solving the linear systems that arise after FE discretisation of the problem under consideration. More precisely, it has been discovered empirically that the best iterative method for this kind of problem is the biconjugate gradient stabilised (BiCGStab) method with an elaborate preconditioner. Since the most commonly used preconditioners proved to be either unable to improve the convergence of the implemented solvers to the desired extent, or impractical in the parallel context, I have proposed a preconditioning technique for Krylov methods that is based on algebraic multigrid.
    Tests for various problems with different conductivity structures and characteristics have shown that the new preconditioner greatly improves the convergence of different Krylov subspace methods, which significantly reduces the total execution time of the program and improves the solution quality. Furthermore, the preconditioner is very practical for parallel implementation. Finally, it has been concluded that there are no restrictions on employing the classical parallel programming models, MPI and OpenMP, for parallelisation of the presented FE solver; moreover, they have proved sufficient to provide excellent scalability for it.
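    A small sketch of the solver pairing the thesis arrives at, BiCGStab preconditioned with algebraic multigrid, using the real scipy and pyamg APIs. The Poisson matrix is a stand-in: assembling the actual complex-valued FE system from the Coulomb-gauged potentials is beyond a sketch.

```python
# BiCGStab + AMG preconditioning with scipy/pyamg (stand-in matrix).
import numpy as np
import pyamg
from scipy.sparse.linalg import bicgstab

A = pyamg.gallery.poisson((50, 50), format='csr')   # stand-in FE matrix
b = np.random.default_rng(0).random(A.shape[0])

ml = pyamg.smoothed_aggregation_solver(A)           # build the AMG hierarchy
M = ml.aspreconditioner()                           # one V-cycle acts as M^-1

x, info = bicgstab(A, b, M=M, rtol=1e-8)            # rtol: scipy >= 1.11 keyword
print("converged:", info == 0,
      "residual:", np.linalg.norm(b - A @ x))
```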

    Diamond-based models for scientific visualization

    Get PDF
    Hierarchical spatial decompositions are a basic modeling tool in a variety of application domains including scientific visualization, finite element analysis and shape modeling and analysis. A popular class of such approaches is based on the regular simplex bisection operator, which bisects simplices (e.g. line segments, triangles, tetrahedra) along the midpoint of a predetermined edge. Regular simplex bisection produces adaptive simplicial meshes of high geometric quality, while simplifying the extraction of crack-free, or conforming, approximations to the original dataset. Efficient multiresolution representations for such models have been achieved in 2D and 3D by clustering sets of simplices sharing the same bisection edge into structures called diamonds. In this thesis, we introduce several diamond-based approaches for scientific visualization. We first formalize the notion of diamonds in arbitrary dimensions in terms of two related simplicial decompositions of hypercubes. This enables us to enumerate the vertices, simplices, parents and children of a diamond. In particular, we identify the number of simplices involved in conforming updates to be factorial in the dimension and group these into a linear number of subclusters of simplices that are generated simultaneously. The latter form the basis for a compact pointerless representation for conforming meshes generated by regular simplex bisection and for efficiently navigating the topological connectivity of these meshes. Secondly, we introduce the supercube as a high-level primitive on such nested meshes based on the atomic units within the underlying triangulation grid. We propose the use of supercubes to associate information with coherent subsets of the full hierarchy and demonstrate the effectiveness of such a representation for modeling multiresolution terrain and volumetric datasets. Next, we introduce Isodiamond Hierarchies, a general framework for spatial access structures on a hierarchy of diamonds that exploits the implicit hierarchical and geometric relationships of the diamond model. We use an isodiamond hierarchy to encode irregular updates to a multiresolution isosurface or interval volume in terms of regular updates to diamonds. Finally, we consider nested hypercubic meshes, such as quadtrees, octrees and their higher dimensional analogues, through the lens of diamond hierarchies. This allows us to determine the relationships involved in generating balanced hypercubic meshes and to propose a compact pointerless representation of such meshes. We also provide a local diamond-based triangulation algorithm to generate high-quality conforming simplicial meshes.
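    A minimal 2D sketch of the regular simplex bisection operator the abstract describes: each triangle is split at the midpoint of its refinement edge, and the children's refinement edges follow the newest-vertex convention. The vertex-ordering convention is an assumption.

```python
# Regular (newest-vertex) bisection of triangles in 2D.
import numpy as np

def bisect(tri):
    """tri: (3, 2) array whose first two rows span the refinement edge."""
    v0, v1, v2 = tri
    m = (v0 + v1) / 2.0                        # midpoint of the refinement edge
    # children are listed with their own refinement edge in rows 0-1,
    # i.e. the old edge opposite the newly created vertex m
    return np.array([v0, v2, m]), np.array([v1, v2, m])

def refine(tris, levels):
    """Uniformly bisect every triangle `levels` times."""
    for _ in range(levels):
        tris = [child for t in tris for child in bisect(t)]
    return tris

# right triangle with the hypotenuse as the initial refinement edge
root = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
print(len(refine([root], 3)), "triangles")     # 8 conforming triangles
```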