
    Reducing artifacts in surface meshes extracted from binary volumes

    We present a mesh filtering method for surfaces extracted from binary volume data that guarantees a smooth and correct representation of the original binary-sampled surface, even if the original volume data is inaccessible or unknown. The method reduces the typical block and staircase artifacts while adhering to the underlying binary volume data, yielding an accurate and smooth representation. The proposed method is closest to the technique of Constrained Elastic Surface Nets (CESN). CESN is a specialized surface extraction method with a subsequent iterative smoothing process that uses the binary input data as a set of constraints. In contrast to CESN, our method processes surface meshes extracted by means of Marching Cubes and does not require the binary volume. It acts directly and solely on the surface mesh and is thus feasible even for surface meshes of inaccessible or unknown volume data. This is made possible by reconstructing information about the binary volume from artifacts in the extracted mesh and applying a relaxation method constrained to the reconstructed information.
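
    As a rough illustration of the kind of constrained relaxation the abstract describes, here is a minimal sketch (not the authors' method): it applies Laplacian smoothing to the mesh vertices and clamps each vertex to stay within a fixed distance of its original position, standing in for the constraint reconstructed from the binary volume. The function and parameter names (constrained_relaxation, max_dist) are assumptions for this sketch.

```python
# Sketch of constrained mesh relaxation: smooth, then keep every vertex
# within `max_dist` of where it started (a stand-in for the binary-volume
# constraint reconstructed from the Marching Cubes artifacts).
import numpy as np

def constrained_relaxation(vertices, faces, max_dist=0.5, iterations=20, step=0.5):
    verts = np.asarray(vertices, dtype=float).copy()
    original = verts.copy()

    # Vertex adjacency from the triangle faces.
    neighbours = [set() for _ in range(len(verts))]
    for a, b, c in faces:
        neighbours[a].update((b, c))
        neighbours[b].update((a, c))
        neighbours[c].update((a, b))
    neighbours = [sorted(n) for n in neighbours]

    for _ in range(iterations):
        # Laplacian step: move each vertex toward the average of its neighbours.
        smoothed = np.array([verts[n].mean(axis=0) if n else verts[i]
                             for i, n in enumerate(neighbours)])
        verts += step * (smoothed - verts)

        # Constraint step: clamp the total displacement of each vertex.
        offset = verts - original
        dist = np.linalg.norm(offset, axis=1, keepdims=True)
        too_far = dist[:, 0] > max_dist
        verts[too_far] = original[too_far] + offset[too_far] * (max_dist / dist[too_far])
    return verts
```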

    Particle-based Sampling and Meshing of Surfaces in Multimaterial Volumes


    Noninvasive optical estimation of CSF thickness for brain-atrophy monitoring

    Dementia disorders are an increasingly common source of a broad range of problems, strongly interfering with the normal daily tasks of a growing number of individuals. Such neurodegenerative diseases are often accompanied by progressive brain atrophy that, at late stages, leads to drastically reduced brain dimensions. At present, this structural involution can be followed with XCT or MRI measurements, which share numerous disadvantages in terms of usability, invasiveness and cost. In this work, we aim to retrieve information on the stage of brain atrophy and its evolution, proposing a novel approach based on non-invasive time-resolved Near Infra-Red (tr-NIR) measurements. For this purpose, we created a set of human-head atlases in which we eroded the brain as would happen during clinical brain-atrophy progression. With these realistic meshes, we reproduced a longitudinal tr-NIR study, exploiting a Monte-Carlo photon propagation algorithm to model the varying cerebrospinal fluid (CSF). The study of the time-resolved reflectance curve at late photon arrival times exhibited characteristic slope changes as the CSF layer increased, which were confirmed under several measurement conditions. The performance of the technique suggests good sensitivity to CSF variation, useful for fast and non-invasive observation of dementia progression.
    Comment: 32 pages, double spaced, 11 figures
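
    As a rough illustration of the late-photon analysis the abstract refers to, the sketch below (not the paper's pipeline; the function name and window values are assumptions) fits a line to the logarithm of a time-resolved reflectance curve over a late time window and returns its slope, the quantity whose change with CSF thickness is of interest.

```python
# Minimal sketch: late-time slope of a time-resolved reflectance curve,
# estimated by a linear fit of log(counts) versus time over a late window.
import numpy as np

def late_time_slope(times_ns, counts, window=(2.0, 4.0)):
    times_ns = np.asarray(times_ns, dtype=float)
    counts = np.asarray(counts, dtype=float)
    mask = (times_ns >= window[0]) & (times_ns <= window[1]) & (counts > 0)
    slope, _intercept = np.polyfit(times_ns[mask], np.log(counts[mask]), 1)
    return slope  # in 1/ns; more negative means faster late-time decay

# Synthetic example: a single-exponential tail with a 0.8 ns decay constant.
t = np.linspace(0.0, 5.0, 500)
curve = 1e6 * np.exp(-t / 0.8)
print(late_time_slope(t, curve))  # close to -1/0.8 = -1.25
```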

    Solid modelling for manufacturing: from Voelcker's boundary evaluation to discrete paradigms

    Herb Voelcker and his research team laid the foundations of Solid Modelling, on which Computer-Aided Design is based. He founded the ambitious Production Automation Project, which included Constructive Solid Geometry (CSG) as the basic 3D geometric representation. CSG trees were compact and robust, saving memory space that was scarce in those times. But the main computational problem was Boundary Evaluation: the process of converting CSG trees to Boundary Representations (BReps) with explicit faces, edges and vertices for manufacturing and visualization purposes. This paper presents some glimpses of the history and evolution of ideas that started with Herb Voelcker. We briefly describe the path from “localization and boundary evaluation” to “localization and printing”, with many intermediate steps driven by hardware, software and new mathematical tools: voxel and volume representations, triangle meshes, and many others, observing also that in some applications voxel models no longer require Boundary Evaluation. For this last case, we consider the current research challenges and discuss several avenues for further research.
    Project TIN2017-88515-C2-1-R funded by MCIN/AEI/10.13039/501100011033/FEDER “A way to make Europe”. Peer Reviewed. Postprint (published version).
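
    For readers unfamiliar with the representation, the sketch below shows what a CSG tree looks like in code, with point-membership classification by recursive evaluation of the Boolean operators. It is purely illustrative (not code from the Production Automation Project), and the class and method names are assumptions.

```python
# Illustrative CSG tree: implicit primitives combined by Boolean nodes,
# queried with point-membership classification.
from dataclasses import dataclass

@dataclass
class Sphere:
    center: tuple
    radius: float
    def contains(self, p):
        return sum((a - b) ** 2 for a, b in zip(p, self.center)) <= self.radius ** 2

@dataclass
class Box:
    lo: tuple
    hi: tuple
    def contains(self, p):
        return all(l <= x <= h for x, l, h in zip(p, self.lo, self.hi))

@dataclass
class CSGNode:
    op: str       # 'union', 'intersection' or 'difference'
    left: object
    right: object
    def contains(self, p):
        a, b = self.left.contains(p), self.right.contains(p)
        if self.op == 'union':
            return a or b
        if self.op == 'intersection':
            return a and b
        return a and not b  # difference

# A 2x2x2 box with a spherical cavity at its centre.
solid = CSGNode('difference', Box((0, 0, 0), (2, 2, 2)), Sphere((1, 1, 1), 0.5))
print(solid.contains((0.1, 0.1, 0.1)), solid.contains((1.0, 1.0, 1.0)))  # True False
```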

    Single-picture reconstruction and rendering of trees for plausible vegetation synthesis

    State-of-the-art approaches for tree reconstruction either put limiting constraints on the input side (requiring multiple photographs, a scanned point cloud or intensive user input) or provide a representation only suitable for front views of the tree. In this paper we present a complete pipeline for synthesizing and rendering detailed trees from a single photograph with minimal user effort. Since the overall shape and appearance of each tree is recovered from a single photograph of the tree crown, artists can use georeferenced images to populate landscapes with native tree species. A key element of our approach is a compact representation of dense tree crowns through a radial distance map. Our first contribution is an automatic algorithm for generating such representations from a single exemplar image of a tree. We create a rough estimate of the crown shape by solving a thin-plate energy minimization problem, and then add detail through a simplified shape-from-shading approach. The use of seamless texture synthesis results in an image-based representation that can be rendered from arbitrary view directions at different levels of detail. Distant trees benefit from an output-sensitive algorithm inspired by relief mapping. For close-up trees we use a billboard cloud where leaflets are distributed inside the crown shape through a space colonization algorithm. In both cases our representation ensures efficient preservation of the crown shape. Major benefits of our approach include: it recovers the overall shape from a single tree image, involves no tree-modeling knowledge and minimal authoring effort, and the associated image-based representation is easy to compress and thus suitable for network streaming.
    Peer Reviewed. Postprint (author's final draft).
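
    The radial distance map mentioned above can be sketched in a few lines. The code below is an assumption about its general form (a 2D simplification, not the paper's exact 3D representation): it stores, per angular bin around the crown centroid, the distance to the outermost silhouette point, and can resample the outline from that map.

```python
# Sketch of a 2D radial distance map for a crown silhouette: one radius per
# angular bin around the centroid, plus a reconstruction of the outline.
import numpy as np

def radial_distance_map(silhouette_points, num_bins=64):
    pts = np.asarray(silhouette_points, dtype=float)
    centroid = pts.mean(axis=0)
    rel = pts - centroid
    angles = np.arctan2(rel[:, 1], rel[:, 0])               # angle of each point
    radii = np.linalg.norm(rel, axis=1)                     # distance to centroid
    bins = np.floor((angles + np.pi) / (2 * np.pi) * num_bins).astype(int) % num_bins
    rmap = np.zeros(num_bins)
    for b, r in zip(bins, radii):
        rmap[b] = max(rmap[b], r)                           # keep the outermost radius
    return centroid, rmap

def reconstruct_outline(centroid, rmap):
    # Sample the stored map back to points on the crown outline.
    angles = -np.pi + (np.arange(len(rmap)) + 0.5) * 2 * np.pi / len(rmap)
    return centroid + np.stack([rmap * np.cos(angles), rmap * np.sin(angles)], axis=1)
```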