5 research outputs found

    Segmentation of 3D pore space from CT images using curvilinear skeleton: application to numerical simulation of microbial decomposition

    Recent advances in 3D X-ray Computed Tomography (CT) sensors have stimulated research efforts to unveil the extremely complex micro-scale processes that control the activity of soil microorganisms. A voxel-based description (up to hundreds of millions of voxels) of the pore space can be extracted from grey-level 3D CT scanner images by means of simple image-processing tools. Classical methods for numerical simulation of biological dynamics on a mesh of voxels, such as the Lattice Boltzmann Model (LBM), are too time-consuming. Thus, the use of more compact and reliable geometrical representations of the pore space can drastically decrease the computational cost of the simulations. Several recent works propose basic analytic volume primitives (e.g. spheres, generalized cylinders, ellipsoids) to define a piecewise approximation of the pore space for numerical simulation of drainage, diffusion and microbial decomposition. Such approaches work well, but their drawback is that they introduce approximation errors. In the present work, we study an alternative in which the pore space is described by means of geometrically relevant connected subsets of voxels (regions) computed from the curvilinear skeleton. Indeed, many works use the curvilinear skeleton (3D medial axis) for analysing and partitioning 3D shapes in various domains (medicine, material sciences, petroleum engineering, etc.), but only a few in soil science. Within soil science, most studies dealing with the 3D medial axis focus on the determination of pore throats. Here, we segment the pore space using the curvilinear skeleton in order to achieve numerical simulation of microbial decomposition (including diffusion processes). We validate the simulation outputs by comparison with other methods using different geometrical representations of the pore space (balls, voxels). Comment: preprint, submitted to Computers & Geosciences 202
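A minimal sketch of the kind of preprocessing this abstract describes: extracting connected subsets of voxels (regions) from a binary 3D pore mask. This is only a toy 6-connected component labelling, not the authors' skeleton-based partition; the function name and the nested-list mask representation are invented for illustration.

```python
from collections import deque

def connected_pore_regions(mask):
    """Label 6-connected components in a binary 3D voxel mask.

    `mask` is a nested list mask[z][y][x] of 0/1.  Returns a dict
    mapping region id -> list of (z, y, x) voxels.  A toy precursor
    to the skeleton-based partition described in the abstract.
    """
    nz, ny, nx = len(mask), len(mask[0]), len(mask[0][0])
    seen = set()
    regions = {}
    rid = 0
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if mask[z][y][x] and (z, y, x) not in seen:
                    rid += 1
                    regions[rid] = []
                    q = deque([(z, y, x)])
                    seen.add((z, y, x))
                    while q:
                        cz, cy, cx = q.popleft()
                        regions[rid].append((cz, cy, cx))
                        # 6-connectivity: face-adjacent neighbours only
                        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                            w = (cz + dz, cy + dy, cx + dx)
                            if (0 <= w[0] < nz and 0 <= w[1] < ny and
                                    0 <= w[2] < nx and
                                    mask[w[0]][w[1]][w[2]] and w not in seen):
                                seen.add(w)
                                q.append(w)
    return regions
```

In a real pipeline the regions would then be reduced further (e.g. to balls around skeleton points) before feeding the decomposition simulation.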

    Quantification of two Gestalt Laws using curve reconstruction

    Visual perception is the ability to interpret, process, and comprehend the information received through the sense of sight by association with earlier experience. Researchers have long struggled to explain what visual processing does to create what we actually see, and have proposed many theoretical approaches to how human beings see the world. These approaches differ widely, ranging from early theories such as Gestalt theory to recent computational theories in the field of Artificial Intelligence. According to the characteristics of visual perception, human beings tend to classify objects in the ambient environment into different categories described by various symbols or objects. Similar symbols, or even quite dissimilar ones, may be perceived as belonging together or to different groups according to people's judgement. Human beings must follow certain rules when setting up relationships between those objects and symbols, finally obtaining unambiguous perceptual results through the process of visual perception. To find out the mechanisms underlying these properties of visual perception, the present thesis conducts experiments on perception using curve reconstruction as a test case. The perception model developed through the experiments is implemented in a curve reconstruction algorithm; the assumption is that a good perception model will reconstruct curves in the same manner as human beings perceive them. In the present thesis, a series of methods, from Design of Experiments (DOE) and ANOVA to a multivariate nonlinear regression model, is applied to investigate the relationships between the points and curves. The results show that our perception model conforms to the pattern in which humans perceive the points.
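The thesis's regression-based perception model is not given in the abstract, but the simplest curve-reconstruction baseline it would be compared against can be sketched: greedy nearest-neighbour chaining, a crude stand-in for the Gestalt law of proximity. The function name and greedy strategy here are illustrative assumptions, not the thesis's method.

```python
import math

def reconstruct_curve(points, start=0):
    """Order points into a polyline by greedy nearest-neighbour
    chaining: from the current endpoint, always connect to the
    closest unused point (a naive proximity-law heuristic).
    Returns the visiting order as a list of indices."""
    remaining = list(range(len(points)))
    order = [remaining.pop(start)]
    while remaining:
        last = points[order[-1]]
        # pick the unused point closest to the current endpoint
        nxt = min(remaining, key=lambda i: math.dist(last, points[i]))
        remaining.remove(nxt)
        order.append(nxt)
    return order
```

A perception model of the kind the thesis describes would replace the plain distance criterion with a learned function that also weighs continuity and other Gestalt cues.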

    Identification of Change in a Dynamic Dot Pattern and its use in the Maintenance of Footprints

    Examples of spatio-temporal data that can be represented as sets of points (called dot patterns) are pervasive in many applications, for example when tracking herds of migrating animals, ships in busy shipping channels, and crowds of people in everyday life. The use of this type of data extends beyond the standard remit of Geographic Information Science (GISc), as classification and optimisation problems can often be visualised in the same manner. A common task within these fields is the assignment of a region (called a footprint) that is representative of the underlying pattern. The ways in which this footprint can be generated have been the subject of much research, and many algorithms have been produced. Much of this research has treated the dot patterns and footprints as static entities; however, for many of the applications the data is prone to change. This thesis proposes that the footprint need not necessarily be updated each time the dot pattern changes; the footprint can remain an appropriate representation of the pattern if the amount of change is slight. To ascertain the appropriate times at which to update the footprint, and when to leave it as it is, this thesis introduces the concept of change identifiers as simple measures of change between two dot patterns. Underlying the change identifiers is an in-depth examination of the data inherent in the dot pattern and the creation of descriptors that represent this data. The experimentation performed in this thesis shows that change identifiers are able to distinguish between different types of change across dot patterns from different sources. In doing so, the change identifiers reduce the number of updates of the footprint while maintaining a measurably good representation of the dot pattern.
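The abstract's core idea, recompute the footprint only when a cheap change measure says the pattern has moved enough, can be sketched as follows. The specific identifier here (centroid drift plus relative count change) and its weighting are invented for illustration; the thesis develops more principled descriptors.

```python
def centroid(pts):
    """Mean position of a 2D dot pattern."""
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def change_identifier(old_pts, new_pts):
    """Toy change identifier: centroid drift plus relative change
    in point count.  Illustrative only -- not the thesis's measure."""
    ox, oy = centroid(old_pts)
    nx, ny = centroid(new_pts)
    drift = ((nx - ox) ** 2 + (ny - oy) ** 2) ** 0.5
    growth = abs(len(new_pts) - len(old_pts)) / len(old_pts)
    return drift + growth

def footprint_needs_update(old_pts, new_pts, threshold=0.5):
    # Recompute the (expensive) footprint only when measured
    # change exceeds the threshold; otherwise keep the old one.
    return change_identifier(old_pts, new_pts) > threshold
```

The pay-off is exactly the one the abstract claims: most frames of a slowly drifting pattern skip the footprint recomputation entirely.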

    Visual analytics of geo-related multidimensional data

    In recent years, both the volume and the availability of urban data related to various social issues, such as real estate, crime and population, have been rapidly increasing. Analysing such urban data can help the government make evidence-based decisions leading to better-informed policies; citizens can also benefit in many scenarios such as home-seeking. However, the analytic design process can be challenging since (i) urban data often has multiple attributes (e.g., the distance to supermarkets, the distance to work, and school zones in real estate data) that are highly related to geography; and (ii) users might have various analysis/exploration tasks that are hard to define (e.g., different home-buyers have different requirements for housing properties, and many of them might not know what they want before they understand the local real estate market). In this thesis, we use visual analytics techniques to study such geo-related multidimensional urban data and answer the following research questions. In the first research question, we propose a visual analytics framework/system for geo-related multidimensional data. Since visual analytics and visualization designs are highly domain-specific, we use the real estate domain as an example to study the problem. Specifically, we first propose a problem abstraction to satisfy the requirements of users (e.g., home buyers, investors). Second, we collect, integrate and clean the last ten years' real estate sales records in Australia, as well as their location-related education, facility and transportation profiles, to generate a real multidimensional data repository. Third, we propose an interactive visual analytic procedure to help less-informed users gradually learn about the local real estate market, after which users exploit this learned knowledge to specify their personalized requirements in property seeking. Fourth, we propose a series of designs to visualize properties/suburbs in different dimensions and at different granularities. Finally, we implement a system prototype for public access (http://115.146.89.158), and present case studies based on real-world datasets and real scenarios to demonstrate the usefulness and effectiveness of our system. Our second research question extends the first and studies the scalability problem of supporting cluster-based visualization for large-scale geo-related multidimensional data. In particular, we first propose a design space for cluster-based geographic visualization. To calculate the geographic boundary of each cluster, we propose a concave hull algorithm which avoids complex shapes, large empty areas inside the boundary, and overlaps among different clusters. Supported by the concave hull algorithm, we design a cluster-based data structure named ConcaveCubes to efficiently support interactive responses to users' visual exploration of large-scale geo-related multidimensional data. Finally, we build a demo system (http://115.146.89.158/ConcaveCubes) to demonstrate the cluster-based geographic visualization, and present extensive experiments using real-world datasets, comparing ConcaveCubes with state-of-the-art cube-based structures to verify its efficiency and effectiveness. The last research question studies the problem of visual analytics of urban areas of interest (AOIs), where we visualize geographic points that satisfy a user query as a limited number of regions (AOIs) instead of a large number of individual points (POIs). After proposing a design space for AOI visualization, we design a parameter-free footprint method named AOI-shapes to effectively capture the region of an AOI based on the POIs that satisfy the user query and those that do not. We also propose two incremental methods which generate the AOI-shapes by reusing previous calculations as users update their AOI query. Finally, we implement an online demo (http://www.aoishapes.com) and conduct extensive experiments to demonstrate the efficiency and effectiveness of the proposed AOI-shapes.
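The concave-hull and AOI-shapes algorithms themselves are not specified in the abstract, but both refine the simplest possible cluster footprint: the convex hull. As a hedged baseline, here is Andrew's monotone chain convex hull, the shape a concave-hull method would then carve empty space out of.

```python
def convex_hull(points):
    """Andrew's monotone chain convex hull.  Returns the hull
    vertices in counter-clockwise order.  This is only the baseline
    footprint; the thesis's concave hull additionally removes large
    empty areas inside the boundary."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower = []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper = []
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # drop the duplicated endpoints where the two chains meet
    return lower[:-1] + upper[:-1]
```

One common route from this baseline to a concave footprint is to iteratively "dig" long hull edges inward toward nearby interior points, which is broadly the family of methods the abstract's parameter-free AOI-shapes belongs to.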

    An Incremental Algorithm for Betti Numbers of Simplicial Complexes

    A general and direct method for computing the Betti numbers of the homology groups of a finite simplicial complex is given. For subcomplexes of a triangulation of S³ this method has implementations that run in time O(nα(n)) and O(n), where n is the number of simplices in the triangulation. If applied to the family of α-shapes of a finite point set in R³ it takes time O(nα(n)) to compute the Betti numbers of all α-shapes.
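The incremental idea behind such methods can be shown in its simplest setting, a 1-dimensional complex (a graph): add simplices one at a time, tracking components with a union-find structure. Each vertex raises β₀; each edge either merges two components (β₀ decreases) or closes a cycle (β₁ increases). This sketch covers only dimension 1; the full method handles 2- and 3-simplices as well.

```python
def betti_incremental(vertices, edges):
    """Incremental Betti numbers beta_0, beta_1 of a graph, via
    union-find.  Adding a vertex raises beta_0; adding an edge
    either merges two components (beta_0 -= 1) or closes a
    cycle (beta_1 += 1)."""
    parent = {v: v for v in vertices}

    def find(v):
        # path-halving find
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    b0, b1 = len(vertices), 0
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            b1 += 1          # endpoints already connected: a new cycle
        else:
            parent[ru] = rv  # merge two components
            b0 -= 1
    return b0, b1
```

With near-constant-time union-find operations this runs in O(nα(n)) for n simplices, matching the complexity quoted in the abstract for the 1-dimensional part.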