
    The persistent cosmic web and its filamentary structure I: Theory and implementation

    We present DisPerSE, a novel approach to the coherent multi-scale identification of all types of astrophysical structures, in particular the filaments, in the large-scale distribution of matter in the Universe. The method and its accompanying software allow a genuinely scale-free and parameter-free identification of the voids, walls, filaments and clusters, and of their configuration within the cosmic web, directly from the discrete distribution of particles in N-body simulations or galaxies in sparse observational catalogues. To achieve that goal, the method works directly on the Delaunay tessellation of the discrete sample and uses the DTFE density computed at each tracer particle; no further sampling, smoothing or processing of the density field is required. The idea is based on recent advances in distinct sub-domains of computational topology, which allow a rigorous application of topological principles to astrophysical data sets, taking into account uncertainties and Poisson noise. In practice, the user defines a persistence level in terms of robustness with respect to noise (expressed as a "number of sigmas"), and the algorithm returns the structures of the corresponding significance as sets of critical points, lines, surfaces and volumes corresponding to the clusters, filaments, walls and voids: filaments, connected at cluster nodes, crawl along the edges of the walls bounding the voids. The method is also interesting in that it allows a robust quantification of the topological properties of a discrete distribution in terms of Betti numbers or Euler characteristics, without resorting to smoothing or defining a particular scale. 
In this paper, we introduce the necessary mathematical background and describe the method and its implementation, deferring the application to 3D simulated and observed data sets to the companion paper. Comment: A higher resolution version is available at http://www.iap.fr/users/sousbie together with complementary material. Submitted to MNRAS.
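The DTFE step mentioned in the abstract can be illustrated in miniature. Below is a minimal 2D sketch, not the DisPerSE code: the function name is ours, and the estimate simply takes each point's density to be inversely proportional to the total area of the Delaunay triangles incident on it, which is the core of the DTFE idea.

```python
import numpy as np
from scipy.spatial import Delaunay

def dtfe_density(points):
    """DTFE-like density estimate at each 2D point (illustrative sketch)."""
    tri = Delaunay(points)
    p = points[tri.simplices]  # (n_tri, 3, 2) triangle vertex coordinates
    # Triangle areas via the 2D cross-product formula.
    area = 0.5 * np.abs((p[:, 1, 0] - p[:, 0, 0]) * (p[:, 2, 1] - p[:, 0, 1])
                        - (p[:, 1, 1] - p[:, 0, 1]) * (p[:, 2, 0] - p[:, 0, 0]))
    # Accumulate, for each vertex, the area of its incident triangles ("star").
    star_area = np.zeros(len(points))
    for k in range(3):
        np.add.at(star_area, tri.simplices[:, k], area)
    # In d dimensions the DTFE estimate is (d + 1) / star_volume; here d = 2.
    # (Convex-hull points have truncated stars, so their estimate is biased high.)
    return 3.0 / star_area

rng = np.random.default_rng(0)
pts = rng.random((200, 2))
rho = dtfe_density(pts)
print(rho.shape)  # one density value per input point
```

Dense regions have small Delaunay stars and hence high density, which is why no smoothing scale ever needs to be chosen.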

    Coarse-to-fine approximation of range images with bounded error adaptive triangular meshes

    Copyright 2007 Society of Photo-Optical Instrumentation Engineers. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited. A new technique for approximating range images with adaptive triangular meshes ensuring a user-defined approximation error is presented. This technique is based on an efficient coarse-to-fine refinement algorithm that avoids iterative optimization stages. The algorithm first maps the pixels of the given range image to 3D points defined in a curvature space. Those points are then tetrahedralized with a 3D Delaunay algorithm. Finally, an iterative process starts digging up the convex hull of the obtained tetrahedralization, progressively removing the triangles that do not fulfill the specified approximation error. This error is assessed in the original 3D space. The introduction of the aforementioned curvature space makes it possible for both convex and nonconvex object surfaces to be approximated with adaptive triangular meshes, thus improving on previous coarse-to-fine sculpturing techniques. The proposed technique is evaluated on real range images and compared to two simplification techniques that also ensure a user-defined approximation error: a fine-to-coarse approximation algorithm based on iterative optimization (Jade) and an optimization-free, fine-to-coarse algorithm (Simplification Envelopes). This work has been partially supported by the Spanish Ministry of Education and Science under projects TRA2004-06702/AUT and DPI2004-07993-C03-03. The first author was supported by the Ramón y Cajal Program.
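The notion of a user-defined approximation bound can be demonstrated with a much simpler scheme than the paper's curvature-space sculpturing: plain greedy point insertion into a 2.5D piecewise-linear interpolant, refining wherever the vertical error still exceeds the tolerance. This is a deliberately different (and cruder) strategy; the function name and tolerance below are illustrative only.

```python
import numpy as np
from scipy.spatial import ConvexHull
from scipy.interpolate import LinearNDInterpolator

def greedy_mesh(xy, z, tol):
    """Pick a subset of samples whose linear interpolant stays within tol of z."""
    sel = list(ConvexHull(xy).vertices)  # seed with the domain boundary corners
    while True:
        interp = LinearNDInterpolator(xy[sel], z[sel])
        err = np.abs(interp(xy) - z)     # vertical error at every sample
        worst = int(np.nanargmax(err))
        if err[worst] <= tol:            # bounded error achieved everywhere
            return np.array(sel)
        sel.append(worst)                # refine at the worst-approximated pixel

# Toy "range image": a smooth bump sampled on a 20x20 grid.
g = np.linspace(0.0, 1.0, 20)
xx, yy = np.meshgrid(g, g)
xy = np.column_stack([xx.ravel(), yy.ravel()])
z = np.exp(-((xx - 0.5) ** 2 + (yy - 0.5) ** 2) / 0.05).ravel()
sel = greedy_mesh(xy, z, tol=0.05)
print(len(sel), "of", len(xy), "samples kept")
```

The adaptive behaviour is the same as in the abstract: flat regions end up with few triangles, curved regions with many, and the error bound holds by construction.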

    Analytical perturbative theories of motion in highly inhomogeneous gravitational fields: Ariadna AO/1-6790/11/NL/CBI

    In this report we show that modern computer performance and state-of-the-art algebraic manipulation software are sufficiently developed to carry out our generalised analytical perturbative theory. The report addresses three technical aspects of developing a general perturbative theory and illustrates its power by applying it to the inhomogeneous gravitational fields of asteroids.

    Alpha, Betti and the Megaparsec Universe: on the Topology of the Cosmic Web

    We study the topology of the Megaparsec Cosmic Web in terms of the scale-dependent Betti numbers, which formalize the topological information content of the cosmic mass distribution. While the Betti numbers do not fully quantify topology, they extend the information beyond conventional cosmological studies of topology in terms of genus and Euler characteristic. The richer information content of Betti numbers goes along with the availability of fast algorithms to compute them. For continuous density fields, we determine the scale-dependence of Betti numbers by invoking the cosmologically familiar filtration of sublevel or superlevel sets defined by density thresholds. For the discrete galaxy distribution, however, the analysis is based on the alpha shapes of the particles. These simplicial complexes constitute an ordered sequence of nested subsets of the Delaunay tessellation, a filtration defined by the scale parameter α. As they are homotopy equivalent to the sublevel sets of the distance field, they are an excellent tool for assessing the topological structure of a discrete point distribution. In order to develop an intuitive understanding of the behavior of Betti numbers as a function of α, and of their relation to the morphological patterns in the Cosmic Web, we first study them within the context of simple heuristic Voronoi clustering models. Subsequently, we address the topology of structures emerging in the standard LCDM scenario and in cosmological scenarios with alternative dark energy content. The evolution and scale-dependence of the Betti numbers are shown to reflect the hierarchical evolution of the Cosmic Web and yield a promising measure of cosmological parameters. We also discuss the expected Betti numbers as a function of the density threshold for superlevel sets of a Gaussian random field. Comment: 42 pages, 14 figures.
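The idea of a scale-dependent Betti number can be made concrete for β₀, the number of connected components. The toy sketch below admits Delaunay edges shorter than a growing scale and counts components with union-find; note this edge-length threshold is a simplified stand-in for the true alpha-complex filtration, and the function name is ours.

```python
import numpy as np
from scipy.spatial import Delaunay

def betti0(points, alpha):
    """Number of connected components of the length-thresholded Delaunay graph."""
    tri = Delaunay(points)
    # Collect the unique edges of the Delaunay tessellation.
    e = np.vstack([tri.simplices[:, [0, 1]],
                   tri.simplices[:, [1, 2]],
                   tri.simplices[:, [0, 2]]])
    e = np.unique(np.sort(e, axis=1), axis=0)
    length = np.linalg.norm(points[e[:, 0]] - points[e[:, 1]], axis=1)
    # Union-find over the edges admitted at scale alpha.
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for a, b in e[length <= 2 * alpha]:
        parent[find(a)] = find(b)
    return len({find(i) for i in range(len(points))})

# Two well-separated clumps merge into one component as alpha grows.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                 rng.normal(5.0, 0.1, (50, 2))])
print(betti0(pts, 0.3), betti0(pts, 5.0))  # components at small vs. large alpha
```

Sweeping α and recording β₀ (and, with more machinery, β₁ and β₂ for loops and voids) is exactly the kind of scale-dependent topological signature the abstract describes.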

    The persistent cosmic web and its filamentary structure II: Illustrations

    The recently introduced discrete persistent structure extractor (DisPerSE, Sousbie 2010, paper I) is implemented on realistic 3D cosmological simulations and observed redshift catalogues (SDSS); it is found that DisPerSE traces equally well the observed filaments, walls and voids in both cases. In either setting, filaments are shown to connect onto halos and to run along the outskirts of the walls that circumvent voids. Indeed, the algorithm operates directly on the particles, without assuming anything about the distribution, and yields a natural (topologically motivated) self-consistent criterion for selecting the significance level of the identified structures. It is shown that this extraction is possible even for very sparsely sampled point processes, as a function of the persistence ratio. Hence astrophysicists should be in a position to trace and measure precisely the filaments, walls and voids from such samples and to assess the confidence of the post-processed sets as a function of this threshold, which can be expressed relative to the expected amplitude of shot noise. In a cosmic framework, this criterion is comparable to friend-of-friend for the identification of peaks, while it also identifies the connected filaments and walls, and quantitatively recovers the full set of topological invariants (Betti numbers) directly from the particles as a function of the persistence threshold. This criterion is found to be sufficient even if one particle out of two is noise, when the persistence ratio is set to 3-sigma or more. The algorithm is also applied to the SDSS catalogue and used to locate interesting configurations of the filamentary structure. In this context we carried out the identification of an "optically faint" cluster at the intersection of filaments through the recent observation of its X-ray counterpart by SUZAKU. 
The corresponding filament catalogue will be made available online. Comment: A higher resolution version is available at http://www.iap.fr/users/sousbie together with complementary material (movie and data). Submitted to MNRAS.
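The persistence-ratio selection described above reduces, schematically, to filtering pairs of critical points by the ratio of their density values. The few lines below are only a schematic of that filtering step; the data and the cut value are placeholders, not DisPerSE's calibrated noise model.

```python
def significant_pairs(pairs, ratio_cut):
    """Keep critical-point pairs (low_density, high_density) whose
    persistence ratio hi/lo reaches the significance cut."""
    return [(lo, hi) for lo, hi in pairs if hi / lo >= ratio_cut]

# Hypothetical (lo, hi) density pairs; a real run would take them from
# the discrete Morse complex of the particle distribution.
pairs = [(1.0, 1.2), (1.0, 4.0), (2.0, 2.1), (0.5, 3.0)]
print(significant_pairs(pairs, 2.0))  # [(1.0, 4.0), (0.5, 3.0)]
```

Raising the cut (e.g. to the quoted 3-sigma level) discards low-persistence pairs attributable to shot noise while retaining the robust filaments and walls.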

    Polygon Feature Extraction from Satellite Imagery Based on Colour Image Segmentation and Medial Axis

    Areal features are of great importance in applications such as shoreline mapping, boundary delineation and change detection. This research is an attempt to automate the process of extracting feature boundaries from satellite imagery, with the aim of eventually replacing manual digitization by computer-assisted boundary detection and conversion to a vector layer in a Geographic Information System. Another potential application is the use of the extracted linear features in image-matching algorithms. In multi-spectral satellite imagery, various features can be distinguished based on their colour. Considerable work has already been done on boundary detection and skeletonization, but this work differs from previous approaches in that it uses the Delaunay graph and the Voronoi tessellation to extract boundaries and skeletons that are guaranteed to be topologically equivalent to the segmented objects. The features thus extracted as object borders can be stored as vector maps in a Geographic Information System after labelling and editing. Here we present a complete methodology of the skeletonization process from satellite imagery using a colour image segmentation algorithm, with examples of road networks and hydrographic networks.
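The Voronoi route to a skeleton rests on a classical fact: for points densely sampled on a shape's boundary, the Voronoi vertices that fall inside the shape approximate its medial axis. A minimal sketch (a toy ellipse standing in for a segmented image object) is:

```python
import numpy as np
from scipy.spatial import Voronoi

# Sample the boundary of an ellipse with semi-axes 2 and 1; its medial
# axis is a segment of the x-axis.
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
boundary = np.column_stack([2.0 * np.cos(theta), np.sin(theta)])

vor = Voronoi(boundary)
v = vor.vertices
# Keep only Voronoi vertices lying inside the ellipse: these approximate
# the medial axis (the skeleton) of the shape.
inside = v[(v[:, 0] / 2.0) ** 2 + v[:, 1] ** 2 < 1.0]
print(len(inside), "interior Voronoi vertices near the medial axis")
```

In the paper's pipeline the boundary samples come from the colour segmentation, and keeping the Voronoi (skeleton) and Delaunay (boundary) structures together is what guarantees topological equivalence with the segmented object.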

    Visual Analysis of High-Dimensional Point Clouds using Topological Abstraction

    This thesis is about visualizing a kind of data that is trivial for computers to process but difficult for humans to imagine, because nature offers no intuition for it: high-dimensional data. Such data often result from representing observations of objects under various aspects or with different properties. In many applications, a typical, laborious task is to find related objects or to group those that are similar to each other. One classic solution is to imagine the data as vectors in a Euclidean space with object variables as dimensions. Using Euclidean distance as a measure of similarity, objects with similar properties and values accumulate into groups, so-called clusters, which are exposed by cluster analysis on the high-dimensional point cloud. Because similar vectors can be thought of as objects that are alike in terms of their attributes, the point cloud's structure and individual cluster properties, like their size or compactness, summarize data categories and their relative importance. The contribution of this thesis is a novel analysis approach for the visual exploration of high-dimensional point clouds that does not suffer from structural occlusion. The work is based on two key concepts. The first idea is to discard those geometric properties that cannot be preserved and thus lead to the typical artifacts, and to use topological concepts instead to shift the focus from a point-centered view of the data to a more structure-centered perspective. The advantage is that topology-driven clustering information can be extracted in the data's original domain and preserved without loss in low dimensions. The second idea is to split the analysis into a topology-based global overview and a subsequent geometric local refinement. 
The occlusion-free overview enables the analyst to identify features and to link them to other visualizations that permit analysis of the properties not captured by the topological abstraction, e.g. cluster shape or value distributions in particular dimensions or subspaces. The advantage of separating structure from data-point analysis is that restricting local analysis to data subsets significantly reduces artifacts and the visual complexity of standard techniques. That is, the additional topological layer enables the analyst to identify structure that was hidden before and to focus on particular features by suppressing irrelevant points during local feature analysis. The thesis addresses the topology-based visual analysis of high-dimensional point clouds for both the time-invariant and the time-varying case. Time-invariant means that the points change neither in number nor in position, i.e. the analyst explores the clustering of a fixed and constant set of points. The extension to the time-varying case implies the analysis of a varying clustering, in which clusters newly appear, merge or split, or vanish. Especially for high-dimensional data, both tracking (relating features over time) and visualizing the changing structure are difficult problems to solve.
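A compact way to see how a topological abstraction summarizes a high-dimensional point cloud is mode-seeking clustering on a k-nearest-neighbour graph: each point is assigned to the density peak it hill-climbs to, so clusters are read off from the structure of peaks rather than from a projection. This simple scheme (in the spirit of merge-tree-based clustering) only stands in for the thesis's full pipeline; the function name and parameters are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def mode_cluster(x, k=10):
    """Assign each point to the kNN-density peak it drains to."""
    tree = cKDTree(x)
    dist, nbr = tree.query(x, k=k + 1)   # column 0 is the point itself
    density = 1.0 / dist[:, -1]          # kNN distance as a density proxy
    # Each point steps to its densest neighbour (itself when it is a peak).
    parent = nbr[np.arange(len(x)), np.argmax(density[nbr], axis=1)]
    # Pointer jumping until every point has reached its peak.
    while not np.array_equal(parent, parent[parent]):
        parent = parent[parent]
    return parent                        # cluster label = index of its peak

# Two Gaussian clumps in 5D: trivial for the algorithm, hard to "see" directly.
rng = np.random.default_rng(2)
x = np.vstack([rng.normal(0.0, 0.3, (100, 5)),
               rng.normal(4.0, 0.3, (100, 5))])
labels = mode_cluster(x)
print(len(np.unique(labels)), "density peaks found")
```

The peak labels play the role of the global topological overview; per-cluster geometric analysis (shape, value distributions) can then be restricted to each label's point subset, which is exactly the global-overview-plus-local-refinement split described above.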