3,286 research outputs found

    Analysis of the Spatial Distribution of Galaxies by Multiscale Methods

    Galaxies are arranged in interconnected walls and filaments forming a cosmic web encompassing huge, nearly empty regions between the structures. Many statistical methods have been proposed in the past in order to describe the galaxy distribution and discriminate between different cosmological models. We present in this paper results relative to the use of new statistical tools based on the 3D isotropic undecimated wavelet transform, the 3D ridgelet transform and the 3D beamlet transform. We show that such multiscale methods provide a new way to measure, in a coherent and statistically reliable way, the degree of clustering, filamentarity, sheetedness, and voidedness of a dataset.
    Comment: 26 pages, 20 figures. Submitted to EURASIP Journal on Applied Signal Processing (special issue on "Applications of Signal Processing in Astrophysics and Cosmology").
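    As a rough illustration of the kind of multiscale analysis described above, the sketch below implements a 3D isotropic undecimated ("à trous" / starlet) wavelet transform with the B3-spline kernel and reports per-scale activity. It is a minimal sketch, not the authors' code: it assumes the galaxy catalogue has already been binned onto a 3D density cube, and the `starlet_3d` function name and toy data are illustrative.

```python
# Minimal sketch of a 3D isotropic undecimated (starlet) wavelet transform.
# Assumes a galaxy catalogue already binned onto a 3D density grid `density`.
import numpy as np
from scipy.ndimage import convolve1d

B3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # B3-spline scaling kernel

def starlet_3d(density, n_scales=4):
    """Return the wavelet (detail) cubes w_1..w_n and the coarse residual c_n."""
    c = density.astype(float)
    details = []
    for j in range(n_scales):
        step = 2 ** j                      # "holes" between kernel taps at scale j
        kernel = np.zeros(4 * step + 1)
        kernel[::step] = B3                # dilated B3 kernel (a trous)
        smooth = c
        for axis in range(3):              # separable 3D smoothing
            smooth = convolve1d(smooth, kernel, axis=axis, mode='mirror')
        details.append(c - smooth)         # detail = structure lost by smoothing
        c = smooth
    return details, c

# Toy usage: per-scale dispersion gives a crude multiscale clustering signature.
density = np.random.poisson(0.2, size=(64, 64, 64)).astype(float)
w, c_coarse = starlet_3d(density, n_scales=4)
print([float(np.std(wj)) for wj in w])
```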

    Skeletonization and segmentation of binary voxel shapes

    Preface. This dissertation is the result of research that I conducted between January 2005 and December 2008 in the Visualization research group of the Technische Universiteit Eindhoven. I am pleased to have the opportunity to thank a number of people who made this work possible. I owe my sincere gratitude to Alexandru Telea, my supervisor and first promotor. I did not consider pursuing a PhD until my Master's project, which he also supervised. Due to our pleasant collaboration, from which I learned quite a lot, I became convinced that becoming a doctoral student would be the right thing to do for me. Indeed, I can say it has greatly increased my knowledge and professional skills. Alex, thank you for our interesting discussions and the freedom you gave me in conducting my research. You made these four years a pleasant experience. I am further grateful to Jack van Wijk, my second promotor. Our monthly discussions were insightful, and he continuously encouraged me to take a more formal and scientific stance. I would also like to thank Prof. Jan de Graaf from the Department of Mathematics for our discussions on some of my conjectures. His mathematical rigor was inspiring. I am greatly indebted to the Netherlands Organisation for Scientific Research (NWO) for funding my PhD project (grant number 612.065.414). I thank Prof. Kaleem Siddiqi, Prof. Mark de Berg, and Dr. Remco Veltkamp for taking part in the core doctoral committee and Prof. Deborah Silver and Prof. Jos Roerdink for participating in the extended committee. Our Visualization group provides a great atmosphere to do research in. In particular, I would like to thank my fellow doctoral students Frank van Ham, Hannes Pretorius, Lucian Voinea, Danny Holten, Koray Duhbaci, Yedendra Shrinivasan, Jing Li, Niels Willems, and Romain Bourqui. They enabled me to take my mind off research from time to time by discussing political and economic affairs, and more trivial topics. Furthermore, I would like to thank the senior researchers of our group, Huub van de Wetering, Kees Huizing, and Michel Westenberg. In particular, I thank Andrei Jalba for our fruitful collaboration in the last part of my work. On a personal level, I would like to thank my parents and sister for their love and support over the years, my friends for providing distractions outside of the office, and Michelle for her unconditional love and ability to light up my mood when needed.

    Geometric data for testing implementations of point reduction algorithms : case study using Mapshaper v 0.2.28 and previous versions

    There are several open source and commercial implementations of the Visvalingam algorithm for line generalisation. The algorithm provides scope for implementation-specific interpretations, with different outcomes. This is inevitable and sometimes necessary, and such differences do not imply that an implementation is flawed. The only restriction is that the output must not be so inconsistent with the intent of the algorithm that it becomes inappropriate. The aim of this paper is to place the algorithm within the literature and to demonstrate the value of the teragon test for evaluating the appropriateness of implementations; Mapshaper v 0.2.28 and earlier versions are used for illustrative purposes. Data pertaining to natural features, such as coastlines, are insufficient for establishing whether deviations in output are significant. The teragon test produced an unexpected loss of symmetry from both the Visvalingam and Douglas-Peucker options, making the tested versions unsuitable for some applications outside of cartography. This paper describes the causes and discusses their implications. Mapshaper 0.3.17 passes the teragon test. Other developers and users should check their implementations using contrived geometric data, such as the teragon data provided in this paper, especially when the source code is not available. The teragon test is also useful for evaluating other point reduction algorithms.
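    For readers checking their own implementations, the sketch below shows the basic Visvalingam point reduction idea: repeatedly remove the interior vertex whose triangle with its two current neighbours has the smallest area, until all remaining vertices exceed an area threshold. It is a minimal sketch, not Mapshaper's code; the `min_area` threshold and the toy polyline are illustrative, and production implementations differ in how they track and update effective areas.

```python
# Minimal Visvalingam point reduction: drop the least significant vertex
# (smallest triangle area with its current neighbours) until the threshold
# is met. Endpoints are always kept.
def triangle_area(a, b, c):
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def visvalingam(points, min_area):
    pts = list(points)
    while len(pts) > 2:
        # effective area of every interior vertex against its current neighbours
        areas = [triangle_area(pts[i - 1], pts[i], pts[i + 1])
                 for i in range(1, len(pts) - 1)]
        smallest = min(range(len(areas)), key=areas.__getitem__)
        if areas[smallest] >= min_area:
            break
        del pts[smallest + 1]              # remove that interior vertex
    return pts

# Toy usage: a nearly straight line collapses to its endpoints.
line = [(0, 0), (1, 0.01), (2, -0.01), (3, 0.02), (4, 0)]
print(visvalingam(line, min_area=0.5))
```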

    Geometry Modeling for Unstructured Mesh Adaptation

    The quantification and control of discretization error is critical to obtaining reliable simulation results. Adaptive mesh techniques have the potential to automate discretization error control, but have made limited impact on production analysis workflows. Recent progress has matured a number of independent implementations of flow solvers, error estimation methods, and anisotropic mesh adaptation mechanics. However, the poor integration of initial mesh generation and adaptive mesh mechanics with typical sources of geometry has hindered adoption of adaptive mesh techniques, where these geometries are often created in Mechanical Computer-Aided Design (MCAD) systems. The difficulty of this coupling is compounded by two factors: the inherent complexity of the model (e.g., large range of scales, bodies in proximity, details not required for analysis) and unintended geometry construction artifacts (e.g., translation, uneven parameterization, degeneracy, self-intersection, sliver faces, gaps, large tolerances between topological elements, local high curvature to enforce continuity). Manual preparation of geometry is commonly employed to enable fixed-grid and adaptive-grid workflows by reducing the severity and negative impacts of these construction artifacts, but manual process interaction inhibits workflow automation. Techniques to permit the use of complex geometry models and to reduce the impact of geometry construction artifacts on unstructured grid workflows are presented; models from the AIAA Sonic Boom and High Lift Prediction workshops are shown to demonstrate the utility of the current approach.

    Digital Analytical Geometry: How do I define a digital analytical object?

    This paper is meant as a short survey on analytically defined digital geometric objects. We will start by giving some elements on digitization and its relation to continuous geometry. We will then explain how, from simple assumptions about the properties a digital object should have, one can build mathematically sound digital objects. We will end with open problems and challenges for the future.
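    As a concrete example of an analytically defined digital object, the sketch below tests membership in the Réveillès arithmetical line D(a, b, μ, ω) = {(x, y) ∈ Z² : μ ≤ ax − by < μ + ω}, where ω = max(|a|, |b|) gives the so-called naive (8-connected) digital line. This is a classical definition from the digital geometry literature, not code from the paper, and the grid bounds are illustrative.

```python
# Membership test for a Reveilles arithmetical line:
# D(a, b, mu, omega) = { (x, y) in Z^2 : mu <= a*x - b*y < mu + omega }.
def in_digital_line(x, y, a, b, mu, omega):
    return mu <= a * x - b * y < mu + omega

# Trace the naive digital line of slope 2/5 through the origin on a small grid.
a, b, mu = 2, 5, 0
omega = max(abs(a), abs(b))            # naive line: exactly one pixel per column
pixels = [(x, y) for x in range(10) for y in range(5)
          if in_digital_line(x, y, a, b, mu, omega)]
print(pixels)
```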

    Automated delineation of roof planes from LIDAR data

    In this paper, we describe an algorithm for roof line delineation from LIDAR data which aims at achieving models of a high level of detail. Roof planes are initially extracted by segmentation based on the local homogeneity of surface normal vectors of a digital surface model (DSM). A case analysis then reveals which of these roof planes intersect and which of them are separated by a step edge. The positions of the step edges are determined precisely by a new algorithm taking into account domain specific information. Finally, all step edges and intersection lines are combined to form consistent polyhedral models. In all phases of this workflow, decision making is based upon statistical reasoning about geometrical relations between neighbouring entities in order to reduce the number of control parameters and to increase the robustness of the method
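    A minimal sketch of the first stage described above (segmenting roof planes by region-growing over locally homogeneous surface normals of a gridded DSM) is given below. It is not the authors' implementation: the grid spacing, the angle threshold, and the toy gable-roof DSM are assumptions for illustration, and the later step-edge and polyhedral-model stages are not shown.

```python
# Sketch: extract candidate roof planes from a gridded DSM by region-growing
# over cells whose surface normals agree with the seed normal.
import numpy as np
from collections import deque

def unit_normals(dsm, cell=1.0):
    dzdy, dzdx = np.gradient(dsm, cell)            # slopes along rows / columns
    n = np.dstack((-dzdx, -dzdy, np.ones_like(dsm)))
    return n / np.linalg.norm(n, axis=2, keepdims=True)

def segment_planes(dsm, cell=1.0, max_angle_deg=10.0):
    normals = unit_normals(dsm, cell)
    cos_thresh = np.cos(np.radians(max_angle_deg))
    labels = np.zeros(dsm.shape, dtype=int)
    rows, cols = dsm.shape
    next_label = 0
    for seed in zip(*np.nonzero(labels == 0)):      # visit every cell once
        if labels[seed]:
            continue
        next_label += 1
        labels[seed] = next_label
        seed_normal = normals[seed]
        queue = deque([seed])
        while queue:                                 # 4-connected region growing
            r, c = queue.popleft()
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and not labels[rr, cc] \
                        and np.dot(normals[rr, cc], seed_normal) > cos_thresh:
                    labels[rr, cc] = next_label
                    queue.append((rr, cc))
    return labels

# Toy usage: a simple gable roof splits into its two slope facets
# plus two one-column strips where the gradient straddles the ridge.
x = np.arange(20.0)
dsm = np.tile(np.minimum(x, 19 - x), (20, 1))
print(segment_planes(dsm).max(), "segments")
```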

    Segmentation-based mesh design for motion estimation

    Get PDF
    In most standard video codecs, motion estimation between two frames is generally performed with the block matching algorithm (BMA). BMA represents the evolution of image content by decomposing each frame into 2D blocks in translational motion. This prediction technique usually leads to severe blocking artefacts when the motion is large. Moreover, the systematic decomposition into regular blocks takes no account of the image content, and some block parameters that carry no useful information must still be transmitted, which increases the bit rate. To overcome these shortcomings of BMA, we consider the two main objectives of video coding: obtaining good quality on the one hand and transmitting at a very low bit rate on the other. To combine these nearly contradictory requirements, a motion compensation technique is needed that, as a transformation, gives good subjective characteristics and requires only the motion information to be transmitted. This thesis proposes a motion compensation technique that designs 2D triangular meshes from a segmentation of the image. The mesh is built from nodes spread irregularly along the contours in the image, so the resulting decomposition is based on the image content. Moreover, since the same node selection method is applied at the encoder and the decoder, the only information required is the nodes' motion vectors, and a very low transmission bit rate can thus be achieved. Compared with BMA, our approach improves both subjective and objective quality with much less motion information. Chapter 1 presents an introduction to the project. Chapter 2 analyses some compression techniques used in the standard codecs and, above all, the popular BMA and its shortcomings. Chapter 3 discusses in detail the proposed algorithm, called segmentation-based active mesh design. Chapter 4 then describes motion estimation and compensation. Finally, Chapter 5 presents the simulation results and the conclusion.
    Abstract: In most video compression standards today, the generally accepted method for temporal prediction is motion compensation using the block matching algorithm (BMA). BMA represents the scene content evolution with 2-D rigid translational moving blocks. This kind of predictive scheme usually leads to distortions such as block artefacts, especially when the motion is important. The two most important aims in video coding are to receive a good quality on one hand and a low bit-rate on the other. This thesis proposes a motion compensation scheme using a segmentation-based 2-D triangular mesh design method. The mesh is constructed from irregularly spread nodal points selected along image contours. Based on this, the generated mesh is, to a great extent, image content based. Moreover, the nodes are selected with the same method on the encoder and decoder sides, so that the only information that has to be transmitted is their motion vectors, and thus a very low bit-rate can be achieved. Compared with BMA, our approach could improve subjective and objective quality with much less motion information. (Abridged abstract by UM)
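    For reference, the sketch below shows the block matching baseline discussed above: a full-search BMA that minimises the sum of absolute differences (SAD) over a small search window. It is a minimal illustration, not the thesis' mesh-based method; the block size, search range, and toy frames are assumptions.

```python
# Full-search block matching: for each NxN block of the current frame, find the
# displacement into the reference frame with the smallest SAD.
import numpy as np

def block_matching(current, reference, block=8, search=7):
    rows, cols = current.shape
    vectors = np.zeros((rows // block, cols // block, 2), dtype=int)
    for br in range(rows // block):
        for bc in range(cols // block):
            r0, c0 = br * block, bc * block
            target = current[r0:r0 + block, c0:c0 + block].astype(int)
            best, best_sad = (0, 0), np.inf
            for dr in range(-search, search + 1):
                for dc in range(-search, search + 1):
                    r, c = r0 + dr, c0 + dc
                    if r < 0 or c < 0 or r + block > rows or c + block > cols:
                        continue                       # candidate outside the frame
                    cand = reference[r:r + block, c:c + block].astype(int)
                    sad = np.abs(target - cand).sum()
                    if sad < best_sad:
                        best, best_sad = (dr, dc), sad
            vectors[br, bc] = best
    return vectors

# Toy usage: a frame shifted by (2, 3) pixels is recovered by the search.
ref = np.random.randint(0, 256, (32, 32))
cur = np.roll(ref, (2, 3), axis=(0, 1))
print(block_matching(cur, ref)[1, 1])  # -> [-2 -3]: content came from 2 rows up, 3 cols left
```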