
    Alpha, Betti and the Megaparsec Universe: on the Topology of the Cosmic Web

    We study the topology of the Megaparsec Cosmic Web in terms of the scale-dependent Betti numbers, which formalize the topological information content of the cosmic mass distribution. While the Betti numbers do not fully quantify topology, they extend the information beyond conventional cosmological studies of topology in terms of genus and Euler characteristic. The richer information content of Betti numbers goes along with the availability of fast algorithms to compute them. For continuous density fields, we determine the scale-dependence of Betti numbers by invoking the cosmologically familiar filtration of sublevel or superlevel sets defined by density thresholds. For the discrete galaxy distribution, however, the analysis is based on the alpha shapes of the particles. These simplicial complexes constitute an ordered sequence of nested subsets of the Delaunay tessellation, a filtration defined by the scale parameter α. As they are homotopy equivalent to the sublevel sets of the distance field, they are an excellent tool for assessing the topological structure of a discrete point distribution. In order to develop an intuitive understanding of the behavior of Betti numbers as a function of α, and their relation to the morphological patterns in the Cosmic Web, we first study them within the context of simple heuristic Voronoi clustering models. Subsequently, we address the topology of structures emerging in the standard LCDM scenario and in cosmological scenarios with alternative dark energy content. The evolution and scale-dependence of the Betti numbers are shown to reflect the hierarchical evolution of the Cosmic Web and yield a promising measure of cosmological parameters. We also discuss the expected Betti numbers as a function of the density threshold for superlevel sets of a Gaussian random field. Comment: 42 pages, 14 figures
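    At each scale of such a filtration one has a finite simplicial complex, and its Betti numbers can be read off from the ranks of the boundary matrices. The following is a minimal pure-Python sketch of that computation over GF(2); the function names and the encoding of simplices as vertex tuples are illustrative, not from the paper:

```python
from itertools import combinations

def boundary_matrix(k_simplices, km1_simplices):
    """Boundary map from k-simplices to (k-1)-simplices over GF(2).
    Each column is returned as the set of row indices holding a 1."""
    idx = {s: i for i, s in enumerate(km1_simplices)}
    cols = []
    for s in k_simplices:
        col = set()
        for face in combinations(sorted(s), len(s) - 1):
            col.add(idx[frozenset(face)])
        cols.append(col)
    return cols

def rank_gf2(cols):
    """Rank of a GF(2) matrix via column reduction."""
    pivots = {}  # pivot row -> reduced column owning that pivot
    rank = 0
    for col in cols:
        col = set(col)
        while col:
            p = max(col)
            if p in pivots:
                col ^= pivots[p]  # cancel the existing pivot
            else:
                pivots[p] = col
                rank += 1
                break
    return rank

def betti_numbers(simplices, max_dim):
    """b_k = dim C_k - rank d_k - rank d_(k+1) for k = 0..max_dim."""
    by_dim = [sorted({frozenset(s) for s in simplices if len(s) == d + 1},
                     key=sorted) for d in range(max_dim + 2)]
    ranks = [0]  # d_0 is the zero map
    for d in range(1, max_dim + 2):
        ranks.append(rank_gf2(boundary_matrix(by_dim[d], by_dim[d - 1])))
    return [len(by_dim[d]) - ranks[d] - ranks[d + 1] for d in range(max_dim + 1)]
```

For example, the hollow triangle (three vertices and three edges) has Betti numbers (1, 1): one connected component and one loop; adding the filled triangle kills the loop.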

    Multiresolutional Fault-Tolerant Sensor Integration and Object Recognition in Images

    This dissertation applies multiresolution methods to two important problems in signal analysis. The problem of fault-tolerant sensor integration in distributed sensor networks is addressed, and an efficient multiresolutional algorithm for estimating the sensors' effective output is proposed. The problem of object/shape recognition in images is addressed in a multiresolutional setting using pyramidal decomposition of images with respect to an orthonormal wavelet basis. A new approach to efficient template matching to detect objects using computational geometric methods is put forward. An efficient paradigm for object recognition is described
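    A pyramidal decomposition of the kind described can be sketched with the Haar basis, the simplest orthonormal wavelet; the function names below are illustrative, not taken from the dissertation:

```python
def haar_level(img):
    """One level of a 2D Haar decomposition: returns the coarse
    approximation (LL) plus horizontal/vertical/diagonal detail bands."""
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4.0
            LH[i // 2][j // 2] = (a - b + c - d) / 4.0
            HL[i // 2][j // 2] = (a + b - c - d) / 4.0
            HH[i // 2][j // 2] = (a - b - c + d) / 4.0
    return LL, LH, HL, HH

def pyramid(img, levels):
    """Recursively decompose the approximation band into a
    multiresolution pyramid; detail bands per level, coarse image last."""
    out = []
    for _ in range(levels):
        img, *details = haar_level(img)
        out.append(details)
    out.append(img)
    return out
```

Template matching can then proceed coarse-to-fine: match cheaply on the small approximation band and refine candidate locations only where the coarse match succeeds.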

    A Multiscale Framework for Predicting the Performance of Fiber/Matrix Composites

    To enable higher fidelity studies of laminated and 3D textile composites, a scalable finite element framework was developed for predicting the performance of fiber/matrix composites across scales. Effective design paradigms and lessons learned are presented. Using the developed framework, new insights into the behavior of laminated and woven composites were discovered. For [0/90]s and [±45/0/90]s laminated composites, the classical free-edge problem was revisited with the heterogeneous microstructure directly modeled, which showed that the local heterogeneity greatly affects the predicted stresses along the ply interface. Accounting for the microscale heterogeneity removed the singularity at the ply interface and dramatically reduced the predicted interlaminar stresses near a free-edge. However, the heterogeneous microstructure was also shown to induce a complex stress distribution away from the free-edge due to the interaction of fibers near the ply interface, since closely spaced fibers were shown to induce compressive stress concentrations. The fiber arrangement had a significant effect on the local stresses, with a more uniform fiber arrangement resulting in lower peak stresses. Finally, the region needed to accurately predict the microscale stresses near the ply interface was shown to be much smaller than the entire ply. For two types of orthogonally woven textile composites, nonidealized textile models were created and subjected to a variety of loads, providing insight into how load is distributed throughout the complex tow architecture and the locations of critical stresses. By comparing the stresses of a textile model with and without binders, the binders were shown to greatly affect the distributions of stress under a tensile load but not under in-plane shear. Variations in the local fiber volume fraction within the tows were shown to significantly affect the magnitude of critical stress concentrations but did not change where the critical stresses occurred.
Finally, accounting for plasticity in the neat matrix pocket of the textile was shown to only affect the localized region near where binders traverse the thickness of the textile
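    As a toy illustration of why the local fiber volume fraction matters for homogenized ply properties, the Voigt and Reuss rule-of-mixtures bounds give the crudest possible fiber/matrix homogenization. This is a sketch of the general idea, not the thesis's finite element framework:

```python
def rule_of_mixtures(E_f, E_m, V_f):
    """Voigt (axial, iso-strain) and Reuss (transverse, iso-stress)
    bounds on the effective stiffness of a unidirectional composite."""
    E_axial = V_f * E_f + (1.0 - V_f) * E_m          # fibers carry load in parallel
    E_trans = 1.0 / (V_f / E_f + (1.0 - V_f) / E_m)  # fibers and matrix in series
    return E_axial, E_trans
```

With illustrative carbon/epoxy values (E_f = 230 GPa, E_m = 3.5 GPa, V_f = 0.6), the axial bound is about 139 GPa while the transverse bound is under 9 GPa, so even small local variations in V_f shift the transverse response noticeably.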

    Efficient rendering for three-dimensional displays

    This thesis explores more efficient methods for visualizing point data sets on three-dimensional (3D) displays. Point data sets are used in many scientific applications, e.g. cosmological simulations. Visualizing these data sets in 3D is desirable because it can more readily reveal structure and unknown phenomena. However, cutting-edge scientific point data sets are very large and producing/rendering even a single image is expensive. Furthermore, current literature suggests that the ideal number of views for 3D (multiview) displays can be in the hundreds, which compounds the costs. The accepted notion that many views are required for 3D displays is challenged by carrying out a novel human factor trials study. The results suggest that humans are actually surprisingly insensitive to the number of viewpoints with regard to their task performance, when occlusion in the scene is not a dominant factor. Existing stereoscopic rendering algorithms can have high set-up costs, which limits their use, and none are tuned for uncorrelated 3D point rendering. This thesis shows that it is possible to improve rendering speeds for a low number of views by perspective reprojection. The novelty of the approach described lies in delaying the reprojection and generation of the viewpoints until the fragment stage of the pipeline and streamlining the rendering pipeline for points only. Theoretical analysis suggests a fragment reprojection scheme will render at least 2.8 times faster than naïvely re-rendering the scene from multiple viewpoints. Building upon the fragment reprojection technique, further rendering performance is shown to be possible (at the cost of some rendering accuracy) by restricting the amount of reprojection required according to the stereoscopic resolution of the display. A significant benefit is that the scene depth can be mapped arbitrarily to the perceived depth range of the display at no extra cost over a single region mapping approach.
    Using an average case-study (rendering 500k points for a 9-view High Definition 3D display), theoretical analysis suggests that this new approach is capable of twice the performance gain of simply reprojecting every single fragment, and quantitative measures show the algorithm to be 5 times faster than a naïve rendering approach. Further detailed quantitative results, under varying scenarios, are provided and discussed
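    The per-fragment reprojection idea can be sketched as a horizontal parallax shift that vanishes at the display's convergence (zero-parallax) depth. The parameterization below is an illustrative assumption, not the thesis's exact camera model:

```python
def reproject_x(x_center, depth, view_offset, focal, convergence):
    """Shift a fragment's screen x for a horizontally offset viewpoint.
    Disparity scales with the offset and vanishes at the convergence depth."""
    parallax = view_offset * focal * (1.0 / convergence - 1.0 / depth)
    return x_center + parallax

def reproject_views(x_center, depth, n_views, spacing, focal, convergence):
    """Generate the fragment's x position in every view of an n-view
    display, with views laid out symmetrically around the central camera."""
    half = (n_views - 1) / 2.0
    return [reproject_x(x_center, depth, (k - half) * spacing, focal, convergence)
            for k in range(n_views)]
```

Because only a shift per view is computed, the scene geometry is processed once and the per-view work is cheap, which is the essence of doing the reprojection at the fragment stage.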

    COMPOSE: Compacted object sample extraction, a framework for semi-supervised learning in nonstationary environments

    An increasing number of real-world applications are associated with streaming data drawn from drifting and nonstationary distributions. These applications demand new algorithms that can learn and adapt to such changes, also known as concept drift. Proper characterization of such data with existing approaches typically requires a substantial amount of labeled instances, which may be difficult, expensive, or even impractical to obtain. In this thesis, compacted object sample extraction (COMPOSE) is introduced - a computational geometry-based framework to learn from nonstationary streaming data - where labels are unavailable (or presented very sporadically) after initialization. The feasibility and performance of the algorithm are evaluated on several synthetic and real-world data sets, which present various scenarios of initially labeled streaming environments. On carefully designed synthetic data sets, we also compare the performance of COMPOSE against the optimal Bayes classifier, as well as the arbitrary subpopulation tracker algorithm, which addresses a similar environment referred to as extreme verification latency. Furthermore, using the real-world National Oceanic and Atmospheric Administration weather data set, we demonstrate that COMPOSE is competitive even with a well-established and fully supervised nonstationary learning algorithm that receives labeled data in every batch
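    The core COMPOSE loop — compact each currently labeled class region, then use the surviving core samples to label the next batch — can be caricatured in a few lines. The centroid-shrink compaction below is a hypothetical stand-in for the α-shape-based compaction of the actual framework:

```python
def compact(points, alpha):
    """Shrink a class's labeled 2D points toward their centroid, keeping a
    'core' region assumed to remain valid under limited drift (0<alpha<1)."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    return [((1 - alpha) * x + alpha * cx, (1 - alpha) * y + alpha * cy)
            for x, y in points]

def label_batch(cores, batch):
    """Assign each unlabeled point the class of its nearest core sample.
    `cores` is a list of (label, point) pairs."""
    def nearest(p):
        lbl, _ = min(cores, key=lambda c: (c[1][0] - p[0]) ** 2
                                        + (c[1][1] - p[1]) ** 2)
        return lbl
    return [nearest(p) for p in batch]
```

In the streaming setting these two steps alternate: the newly labeled batch becomes the input to the next compaction, which is how labels propagate through the drifting distribution.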

    On-the-Fly Workspace Visualization for Redundant Manipulators

    This thesis explores the possibilities of on-line workspace rendering for redundant robotic manipulators via parallelized computation on the graphics card. Several visualization schemes for different workspace types are devised, implemented and evaluated. Possible applications are visual support for the operation of manipulators, fast workspace analyses in time-critical scenarios and interactive workspace exploration for design and comparison of robots and tools
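    The per-thread work that such a GPU workspace rendering parallelizes is essentially forward-kinematics sampling: each thread evaluates one joint configuration and marks the reached cell. A CPU sketch for a planar serial arm (illustrative, not the thesis's implementation):

```python
import math
import random

def workspace_grid(link_lengths, n_samples, grid_n, extent, seed=0):
    """Monte Carlo workspace estimate for a planar serial arm: sample
    joint configurations, run forward kinematics, mark reached grid cells.
    The grid covers [-extent, extent] in both x and y."""
    rng = random.Random(seed)
    grid = [[False] * grid_n for _ in range(grid_n)]
    for _ in range(n_samples):
        theta, x, y = 0.0, 0.0, 0.0
        for L in link_lengths:
            theta += rng.uniform(-math.pi, math.pi)  # unlimited revolute joints
            x += L * math.cos(theta)
            y += L * math.sin(theta)
        i = int((x + extent) / (2 * extent) * grid_n)
        j = int((y + extent) / (2 * extent) * grid_n)
        if 0 <= i < grid_n and 0 <= j < grid_n:
            grid[i][j] = True
    return grid
```

On a GPU each sample is independent, so the loop maps directly onto one thread per configuration with the grid held in device memory, which is what makes on-line workspace rendering feasible.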

    Real-time hybrid cutting with dynamic fluid visualization for virtual surgery

    It is widely accepted that a reform in medical teaching must be made to meet today's high-volume training requirements. Virtual simulation offers a potential method of providing such training, and some current medical training simulations integrate haptic and visual feedback to enhance procedure learning. The purpose of this project is to explore the capability of Virtual Reality (VR) technology to develop a training simulator for surgical cutting and bleeding in general surgery

    Numerical simulation of fracture pattern development and implications for fluid flow

    Simulations are instrumental to understanding flow through discrete fracture geometric representations that capture the large-scale permeability structure of fractured porous media. The contribution of this thesis is threefold: an efficient finite-element finite-volume discretisation of the advection/diffusion flow equations, a geomechanical fracture propagation algorithm to create fractured rock analogues, and a study of the effect of growth on hydraulic conductivity. We describe an iterative geomechanics-based finite-element model to simulate quasi-static crack propagation in a linear elastic matrix from an initial set of random flaws. The cornerstones are a failure and propagation criterion as well as a geometric kernel for dynamic shape housekeeping and automatic remeshing. Two-dimensional patterns exhibit connectivity, spacing, and density distributions reproducing en echelon crack linkage, tip hooking, and polygonal shrinkage forms. Differential stresses at the boundaries yield fracture curving. A stress field study shows that curvature can be suppressed by layer interaction effects. Our method is appropriate to model layered media where interaction with neighbouring layers does not dominate deformation. Geomechanically generated fracture patterns are the input to single-phase flow simulations through fractures and matrix. Thus, results are applicable to fractured porous media in addition to crystalline rocks. Stress state and deformation history control emergent local fracture apertures. Results depend on the number of initial flaws, their initial random distribution, and the permeability of the matrix. Straight-path fracture pattern simplifications yield a lower effective permeability in comparison to their curved counterparts. Fixed apertures overestimate the conductivity of the rock by up to six orders of magnitude. Local sample percolation effects are representative of the entire model flow behaviour for geomechanical apertures.
    Effective permeabilities in fracture dataset subregions are higher than the overall conductivity of the system. The presented methodology captures emerging patterns due to evolving geometric and flow properties essential to the realistic simulation of subsurface processes
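    One way to see why fixed apertures can overestimate conductivity so severely is the cubic law combined with harmonic averaging along a fracture: segments act in series, so the tightest local aperture dominates the flow. A minimal sketch of that effect (not the thesis's finite-element finite-volume scheme):

```python
def cubic_law_T(aperture):
    """Fracture transmissivity per unit width from the cubic law,
    T = a^3 / 12 (viscosity and pressure gradient factored out)."""
    return aperture ** 3 / 12.0

def series_effective_T(apertures):
    """Equal-length segments in series: the harmonic mean of segment
    transmissivities, so the smallest aperture controls the result."""
    n = len(apertures)
    return n / sum(1.0 / cubic_law_T(a) for a in apertures)
```

For a fracture that is 1 mm open along most of its length but pinches to 10 µm in one segment, the series transmissivity falls several orders of magnitude below what a single fixed (mean) aperture would predict.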

    A statistical approach for fracture property realization and macroscopic failure analysis of brittle materials

    Lacking energy-dissipative mechanisms such as the plastic deformation that rebalances localized stresses in their ductile counterparts, brittle material fracture mechanics is associated with the catastrophic failure of purely brittle and quasi-brittle materials at immeasurable and measurable deformation scales, respectively. This failure, in the form of macroscale sharp cracks, is highly dependent on the composition of the material microstructure. Further, the complexity of this relationship and the resulting crack patterns is exacerbated under highly dynamic loading conditions. A robust brittle material model must account for the multiscale inhomogeneity as well as the probabilistic distribution of the constituents which cause material heterogeneity and influence the complex mechanisms of the material's dynamic fracture response. Continuum-based homogenization is carried out via finite element-based micromechanical analysis of material neighborhoods, each geometrically described as a sampling window (i.e., a statistical volume element). These volume elements are well-defined such that they are representative of the material while propagating material randomness from the inherent microscale defects. Homogenization yields spatially defined elastic and fracture-related effective properties, utilized to statistically characterize the material in terms of these properties. This spatial characterization is made possible by performing homogenization at prescribed spatial locations which collectively comprise a non-uniform spatial grid, allowing each effective material property to be mapped to an associated spatial location. Through stochastic decomposition of the derived empirical covariance of the sampled effective material property, the Karhunen-Loeve method is used to generate realizations of a continuous and spatially-correlated random field approximation that preserves the statistics of the material from which it is derived.
    Aspects of modeling both isotropic and anisotropic brittle materials, from a statistical viewpoint, are investigated to determine how each influences the macroscale fracture response of these materials under highly dynamic conditions. The effects of modeling a material explicitly by representations of discrete multiscale constituents and/or implicitly by continuum representation of material properties are studied to determine how each model influences the resulting material fracture response. For the implicit material representations, both white-noise (i.e., Weibull-based, spatially uncorrelated) and colored-noise (i.e., Karhunen-Loeve, spatially correlated) random fields are employed herein
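    The Karhunen-Loeve construction mentioned above amounts to an eigendecomposition of the covariance followed by a weighted sum of eigenmodes with independent standard-normal coefficients. A small pure-Python sketch using power iteration with deflation on an assumed exponential covariance (illustrative, not the dissertation's discretization or covariance model):

```python
import math
import random

def exp_cov(n, corr_len):
    """Exponential covariance matrix on a uniform unit-interval grid."""
    xs = [i / (n - 1) for i in range(n)]
    return [[math.exp(-abs(a - b) / corr_len) for b in xs] for a in xs]

def leading_eigs(C, k, iters=300, seed=0):
    """Top-k eigenpairs of a symmetric PSD matrix via power iteration,
    deflating each converged mode out of the matrix."""
    rng = random.Random(seed)
    n = len(C)
    C = [row[:] for row in C]
    pairs = []
    for _ in range(k):
        v = [rng.uniform(-1.0, 1.0) for _ in range(n)]
        lam = 0.0
        for _ in range(iters):
            w = [sum(C[i][j] * v[j] for j in range(n)) for i in range(n)]
            lam = math.sqrt(sum(x * x for x in w))
            v = [x / lam for x in w]
        pairs.append((lam, v))
        for i in range(n):            # deflate: C <- C - lam * v v^T
            for j in range(n):
                C[i][j] -= lam * v[i] * v[j]
    return pairs

def kl_realization(pairs, rng):
    """Truncated KL expansion: sum_j sqrt(lam_j) * xi_j * phi_j,
    with xi_j independent standard normals."""
    n = len(pairs[0][1])
    field = [0.0] * n
    for lam, phi in pairs:
        xi = rng.gauss(0.0, 1.0)
        for i in range(n):
            field[i] += math.sqrt(lam) * xi * phi[i]
    return field
```

Each call to `kl_realization` produces one spatially correlated property field; truncating to the leading modes is what makes the representation compact while preserving the dominant correlation structure.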