
    SurfelMeshing: Online Surfel-Based Mesh Reconstruction

    We address the problem of mesh reconstruction from live RGB-D video, assuming a calibrated camera with poses provided externally (e.g., by a SLAM system). In contrast to most existing approaches, we do not fuse depth measurements in a volume but in a dense surfel cloud. We asynchronously (re)triangulate the smoothed surfels to reconstruct a surface mesh. This novel approach makes it possible to maintain a dense surface representation of the scene during SLAM that can quickly adapt to loop closures, by deforming the surfel cloud and asynchronously remeshing the surface where necessary. The surfel-based representation also naturally supports strongly varying scan resolution; in particular, it reconstructs colors at the input camera's resolution. Moreover, in contrast to many volumetric approaches, ours can reconstruct thin objects, since objects do not need to enclose a volume. We demonstrate our approach in a number of experiments, showing that it produces reconstructions competitive with the state of the art, and we discuss its advantages and limitations. The algorithm (excluding loop closure functionality) is available as open source at https://github.com/puzzlepaint/surfelmeshing. (Comment: version accepted to IEEE Transactions on Pattern Analysis and Machine Intelligence.)
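    To make the surfel representation concrete, below is a minimal Python sketch (illustrative names, not the authors' code) of how one depth measurement could be fused into a surfel by a confidence-weighted running average, the kind of accumulation that replaces volumetric fusion here.

```python
# Minimal sketch of surfel-based fusion (illustrative, not SurfelMeshing's
# actual code): each surfel keeps a confidence-weighted running average of
# position, normal, and color, refined by every new RGB-D measurement.
from dataclasses import dataclass
import numpy as np

@dataclass
class Surfel:
    position: np.ndarray  # 3D point
    normal: np.ndarray    # unit normal
    color: np.ndarray     # RGB in [0, 1]
    radius: float         # disk extent in world units
    weight: float         # accumulated observation confidence

def fuse(surfel: Surfel, point: np.ndarray, normal: np.ndarray,
         color: np.ndarray, w: float = 1.0) -> None:
    """Fold one new RGB-D measurement into the surfel as a weighted average."""
    total = surfel.weight + w
    surfel.position = (surfel.weight * surfel.position + w * point) / total
    surfel.color = (surfel.weight * surfel.color + w * color) / total
    blended = surfel.weight * surfel.normal + w * normal
    surfel.normal = blended / np.linalg.norm(blended)  # re-normalize
    surfel.weight = total
```

    The smoothed surfels would then be (re)triangulated asynchronously into the output mesh, per the abstract.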

    A parallel Heap-Cell Method for Eikonal equations

    Numerous applications of Eikonal equations have prompted the development of many efficient numerical algorithms. The Heap-Cell Method (HCM) is a recent serial two-scale technique that has been shown to have advantages over other serial state-of-the-art solvers for a wide range of problems. This paper presents a parallelization of HCM for a shared-memory architecture. Numerical experiments in R^3 show that the parallel HCM exhibits good algorithmic behavior and scales well, resulting in a very fast and practical solver. We further explore the influence of data precision, early termination criteria, and the hardware architecture on performance and scaling. A shorter version of this manuscript (omitting these more detailed tests) was submitted to the SIAM Journal on Scientific Computing in 2012. (Comment: a minor update to address the reviewers' comments; 31 pages; 15 figures; this is an expanded version of a paper accepted by the SIAM Journal on Scientific Computing.)
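    For context, the single-scale building block that HCM organizes into coarse cells is a Dijkstra-like Eikonal solver. The hedged Python sketch below (illustrative, not the paper's implementation) shows such a fast-marching update on a 2D grid; HCM adds a coarse cell level with heap-based cell ordering on top of updates like these.

```python
# A Fast-Marching-style solver for the Eikonal equation |grad u| * F = 1
# on a 2D grid, with F > 0 assumed. For brevity this version updates from
# current (possibly tentative) neighbor values, keeping a Dijkstra-like
# structure; stale heap entries are skipped on pop.
import heapq
import numpy as np

def fast_march(speed: np.ndarray, sources: list, h: float = 1.0) -> np.ndarray:
    """Arrival times u with u = 0 at the source grid points."""
    u = np.full(speed.shape, np.inf)
    heap = []
    for s in sources:
        u[s] = 0.0
        heapq.heappush(heap, (0.0, s))
    accepted = np.zeros(speed.shape, dtype=bool)
    while heap:
        _, (i, j) = heapq.heappop(heap)
        if accepted[i, j]:
            continue                      # stale heap entry
        accepted[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if not (0 <= ni < u.shape[0] and 0 <= nj < u.shape[1]):
                continue
            if accepted[ni, nj]:
                continue
            # Smallest neighbor value along each axis (upwind direction).
            ux = min(u[ni - 1, nj] if ni > 0 else np.inf,
                     u[ni + 1, nj] if ni + 1 < u.shape[0] else np.inf)
            uy = min(u[ni, nj - 1] if nj > 0 else np.inf,
                     u[ni, nj + 1] if nj + 1 < u.shape[1] else np.inf)
            f = h / speed[ni, nj]
            if abs(ux - uy) < f:          # two-sided quadratic update
                new = 0.5 * (ux + uy + np.sqrt(2.0 * f * f - (ux - uy) ** 2))
            else:                         # one-sided update
                new = min(ux, uy) + f
            if new < u[ni, nj]:
                u[ni, nj] = new
                heapq.heappush(heap, (new, (ni, nj)))
    return u
```

    The heap operations here are what give the O(N^n log N^n) cost quoted for such solvers; HCM's cell level reduces the constant by processing whole cells between heap updates.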

    Isosurface Extraction in the Visualization Toolkit Using the Extrema Skeleton Algorithm

    Generating isosurfaces is a very useful technique in data visualization for understanding the distribution of scalar data. Often, when the data set is very large, as is the case with data produced by medical imaging applications, engineering simulations, or geographic information systems, the use of traditional methods like Marching Cubes makes repeated generation of isosurfaces a very time-consuming task. This thesis investigated the use of the Extrema Skeleton algorithm to speed up repeated isosurface generation in the Visualization Toolkit (VTK). The objective was to reduce the number of non-isosurface cells visited when generating isosurfaces, and to compare the Extrema Skeleton method with the Marching Cubes method by monitoring the time taken for the isosurfacing process and the number of cells visited. The results showed that the Extrema Skeleton method was faster for most of the datasets tested. For simple datasets with less than 10% isosurface cells and complex datasets with less than 5% isosurface cells, the Extrema Skeleton method was significantly faster than the Marching Cubes method; the time gained for datasets with more than 15% isosurface cells was insignificant. Based on these results, implementing the Extrema Skeleton method in VTK is a change worth making, because typical VTK users deal both with datasets for which the Extrema Skeleton method is significantly faster and with datasets for which it is only marginally faster than the Marching Cubes method.
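    The speed-up rests on skipping cells that cannot intersect the isosurface. The Python sketch below (illustrative, not the thesis code, and not the Extrema Skeleton algorithm itself) shows the basic min/max straddle test that identifies candidate isosurface cells; seed-and-propagation methods like the Extrema Skeleton visit only cells passing this test instead of scanning every cell as Marching Cubes does.

```python
# A cell of a structured grid can contain part of the isosurface only if
# the isovalue lies within the [min, max] range of its 8 corner samples.
import numpy as np

def isosurface_cells(volume: np.ndarray, isovalue: float) -> np.ndarray:
    """Boolean mask of cells whose corner samples straddle the isovalue."""
    # Stack the 8 corner values of every cell of the structured grid.
    corners = np.stack([volume[i:volume.shape[0] - 1 + i,
                               j:volume.shape[1] - 1 + j,
                               k:volume.shape[2] - 1 + k]
                        for i in (0, 1) for j in (0, 1) for k in (0, 1)])
    return (corners.min(axis=0) <= isovalue) & (isovalue <= corners.max(axis=0))
```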

    Discrete curvature approximations and segmentation of polyhedral surfaces

    The segmentation of digitized data to divide a free-form surface into patches is one of the key steps required to perform a reverse engineering process of an object. To this end, discrete curvature approximations are introduced as the basis of a segmentation process that leads to a decomposition of digitized data into areas that support the construction of parametric surface patches. The proposed approach relies on a polyhedral representation of the object built from the digitized input data. It is then shown how noise reduction, edge swapping techniques, and adapted remeshing schemes can contribute to different preparation phases to provide a geometry that highlights useful characteristics for the segmentation process. The segmentation is performed with various approximations of discrete curvatures evaluated on the polyhedron produced during the preparation phases. The proposed segmentation process involves two phases: the identification of characteristic polygonal lines and the identification of polyhedral areas useful for a patch construction process. Discrete curvature criteria are adapted to each phase, and the concept of invariant evaluation of curvatures is introduced to generate criteria that are constant over equivalent meshes. A description of the segmentation procedure is provided together with examples of results for free-form object surfaces.
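    As one concrete instance of the kind of discrete curvature approximation involved, the Python sketch below (illustrative notation, not the paper's formulation) estimates Gaussian curvature at a mesh vertex from its angle deficit over an ordered, closed 1-ring of neighbors.

```python
# Classical angle-deficit estimate of discrete Gaussian curvature:
# K = (2*pi - sum of incident triangle angles at v) / A, where A is a
# local area (here one third of the incident triangle areas).
import numpy as np

def angle_deficit_curvature(v: np.ndarray, ring: list) -> float:
    """Discrete Gaussian curvature at vertex v from its ordered 1-ring."""
    angle_sum, area = 0.0, 0.0
    for a, b in zip(ring, ring[1:] + ring[:1]):  # wrap around the ring
        e1, e2 = a - v, b - v
        cos = np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2))
        angle_sum += np.arccos(np.clip(cos, -1.0, 1.0))
        area += 0.5 * np.linalg.norm(np.cross(e1, e2)) / 3.0
    return (2.0 * np.pi - angle_sum) / area
```

    Mean-curvature analogues (e.g., from edge dihedral angles) fit the same per-vertex pattern; the paper's invariant evaluation normalizes such estimates so they remain constant over equivalent meshes.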

    Surface Shape Perception in Volumetric Stereo Displays

    In complex volume visualization applications, understanding the displayed objects and their spatial relationships is challenging for several reasons. One of the most important obstacles is that these objects can be translucent and can overlap spatially, making it difficult to understand their spatial structures. However, in many applications, for example medical visualization, it is crucial to have an accurate understanding of the spatial relationships among objects. The addition of visual cues has the potential to aid human perception in these visualization tasks. Descriptive line elements, in particular, have been found to be effective in conveying shape information in surface-based graphics, as they sparsely cover a geometrical surface while consistently following the geometry. We present two approaches for applying such line elements to a volume rendering process and verify their effectiveness in volume-based graphics. This thesis reviews our progress to date in this area and discusses its effects and limitations. Specifically, it examines the volume renderer implementation that formed the foundation of this research, the design of the pilot study conducted to investigate the effectiveness of this technique, and the results obtained. It further discusses improvements designed to address the issues revealed by the statistical analysis. The improved approach is able to handle visualization targets with general shapes, making it more applicable to real visualization applications involving complex objects.

    A low complexity algorithm for non-monotonically evolving fronts

    A new algorithm is proposed to describe the propagation of fronts advected in the normal direction with a prescribed speed function F. The assumptions on F are that it does not depend on the front itself, but may depend on space and time; moreover, it can vanish and change sign. To solve this problem the Level-Set Method [Osher, Sethian; 1988] is widely used, and the Generalized Fast Marching Method [Carlini et al.; 2008] has recently been introduced. The novelty of our method is that its overall computational complexity is predicted to be comparable to that of the Fast Marching Method [Sethian; 1996], [Vladimirsky; 2006] in most instances. The latter algorithm is O(N^n log N^n) if the computational domain comprises N^n points. Our strategy is to use it in regions where the speed is bounded away from zero, and to switch to a different formalism where F is approximately 0. To this end, a collection of so-called sideways partial differential equations is introduced. Their solutions locally describe the evolving front and depend on both space and time. The well-posedness of these equations, as well as their geometric properties, is addressed. We then propose a convergent and stable discretization of these PDEs. These alternative representations are used to augment the standard Fast Marching Method. The resulting algorithm is presented together with a thorough discussion of its features. The accuracy of the scheme is tested when F depends on both space and time; each example yields an O(1/N) global truncation error. We conclude with a discussion of the advantages and limitations of our method. (Comment: 30 pages, 12 figures, 1 table.)
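    For reference, the two standard formulations the paper mediates between can be written as follows (standard background material, not notation taken from the paper itself):

```latex
% Time-dependent level-set formulation versus the stationary Eikonal
% form solved by Fast Marching.
\begin{align}
  \phi_t + F(x,t)\,\lvert \nabla \phi \rvert &= 0,
    & \text{front at time } t &= \{\, x : \phi(x,t) = 0 \,\}, \\
  F(x)\,\lvert \nabla u(x) \rvert &= 1,
    & \text{front at time } t &= \{\, x : u(x) = t \,\}.
\end{align}
% The stationary form presumes a monotone evolution with F bounded away
% from zero; where F vanishes or changes sign, the arrival time u is no
% longer well defined, which is why a different local description (the
% "sideways" PDEs) is needed there.
```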

    Cells in Silico – introducing a high-performance framework for large-scale tissue modeling

    Background: Discoveries in cellular dynamics and tissue development constantly reshape our understanding of fundamental biological processes such as embryogenesis, wound healing, and tumorigenesis. High-quality microscopy data and an ever-improving understanding of single-cell effects rapidly accelerate new discoveries. Still, many computational models either describe a few cells in high detail or larger cell ensembles and tissues more coarsely. Here, we connect these two scales in a joint theoretical model.
    Results: We developed a highly parallel version of the cellular Potts model that can be flexibly applied and provides an agent-based model driving cellular events. The model can be modularly extended to a multi-model simulation on both scales. Based on the NAStJA framework, a scaling implementation running efficiently on high-performance computing systems was realized. We demonstrate the absence of bias in our approach as well as excellent scaling behavior.
    Conclusions: Our model scales approximately linearly beyond 10,000 cores and thus enables the simulation of large-scale three-dimensional tissues, confined only by available computational resources. The strict modular design allows arbitrary models to be configured flexibly and enables applications to a wide range of research questions. Cells in Silico (CiS) can easily be molded to different model assumptions and can help computational scientists expand their simulations into a new area of tissue simulation. As an example, we highlight a 1000^3-voxel cancerous tissue simulation at sub-cellular resolution.
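    To illustrate the kernel such a parallel implementation must execute enormously many times, here is a hedged Python sketch of a single cellular Potts Metropolis update on a 2D lattice (the adhesion-only energy and all names are illustrative; this is not the NAStJA/CiS API).

```python
# Elementary cellular Potts update: propose copying a random neighbor's
# cell label onto a lattice site and accept with Metropolis probability.
import numpy as np

rng = np.random.default_rng(0)

def metropolis_step(labels: np.ndarray, temperature: float) -> None:
    """Attempt one label copy on a 2D lattice with adhesion energy only."""
    nx, ny = labels.shape
    x = int(rng.integers(1, nx - 1))          # stay off the boundary
    y = int(rng.integers(1, ny - 1))
    dx, dy = [(1, 0), (-1, 0), (0, 1), (0, -1)][rng.integers(4)]
    src = labels[x + dx, y + dy]              # label proposed for copying
    if src == labels[x, y]:
        return

    def boundary_energy(site_label) -> int:
        # One unit of adhesion energy per unlike neighbor pair.
        nbrs = (labels[x + 1, y], labels[x - 1, y],
                labels[x, y + 1], labels[x, y - 1])
        return sum(1 for n in nbrs if n != site_label)

    dE = boundary_energy(src) - boundary_energy(labels[x, y])
    if dE <= 0 or rng.random() < np.exp(-dE / temperature):
        labels[x, y] = src                    # accept the copy
```

    A full CPM adds per-cell volume and surface-area constraints to the energy; the contribution described in the abstract is running such updates at scale, distributed across many cores.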

    Ground truth determination for segmentation of tomographic volumes using interpolation

    Dissertation submitted for the degree of Master of Science in Biomedical Engineering. Optical projection tomographic microscopy allows for a 3D analysis of individual cells, making it possible to study their morphology. The 3D imaging technique used in this thesis uses white-light excitation to image stained cells and is referred to as single-cell optical computed tomography (cell CT). Studies have shown that morphological characteristics of the cell and its nucleus are determining factors in cancer diagnosis. For a more complete and accurate analysis of these characteristics, a fully automated analysis of the single-cell 3D tomographic images can be performed. The first step is segmenting the image into the different cell components. To assess how accurate the automated segmentation is, a ground truth must be determined. This dissertation presents a method of obtaining ground truth for 3D segmentation of single cells. This was achieved by developing software in C#. The software allows the user to input a visual segmentation of each 2D slice of a 3D volume by using a pen to trace the visually identified boundary of a cell component on a tablet. With this information, the software creates a segmentation of a 3D tomographic image that is the result of human visual segmentation. To speed up this process, interpolation algorithms can be used: since it is very time-consuming to draw on every slice, the user can skip slices, and interpolation algorithms fill in the skipped slices. Five different interpolation algorithms were written: Linear Interpolation, Gaussian Splat, Marching Cubes, Unorganized Points, and Delaunay Triangulation. To evaluate the performance of each interpolation algorithm, the following evaluation metrics were used: Jaccard Similarity, Dice Coefficient, Specificity, and Sensitivity. After evaluating each interpolation method, we concluded that linear interpolation was the most accurate, producing the best segmented volume and the fastest route to ground truth determination.
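    As an illustration of the slice-interpolation idea, the Python sketch below implements shape-based linear interpolation between two hand-traced binary masks via signed distance transforms; this is one common reading of "Linear Interpolation" in this setting, and the thesis implementation may differ.

```python
# Interpolate signed distance transforms of the two traced masks and
# threshold at zero to obtain the mask on a skipped slice.
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask: np.ndarray) -> np.ndarray:
    """Positive inside the traced region, negative outside."""
    mask = mask.astype(bool)
    return distance_transform_edt(mask) - distance_transform_edt(~mask)

def interpolate_slice(mask_a: np.ndarray, mask_b: np.ndarray,
                      t: float) -> np.ndarray:
    """Interpolated mask at fraction t between slice A (t = 0) and B (t = 1)."""
    d = (1.0 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return d > 0.0
```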

    Particlization in hybrid models

    In hybrid models, which combine hydrodynamical and transport approaches to describe different stages of heavy-ion collisions, the conversion of fluid to individual particles, known as particlization, is a non-trivial technical problem. We describe in detail how to find the particlization hypersurface in a 3+1 dimensional model, and how to sample the particle distributions evaluated using the Cooper-Frye procedure to create an ensemble of particles as an initial state for the transport stage. We also discuss the role and magnitude of the negative contributions in the Cooper-Frye procedure. (Comment: 18 pages, 28 figures; EPJA topical issue on "Relativistic Hydro- and Thermodynamics"; version accepted for publication; typos and an error in Eq. (1) corrected; the purpose of sampling and the change from UrQMD to fluid clarified; added a discussion of why attempts to cancel negative contributions in Cooper-Frye are not applicable here.)
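    For reference, the Cooper-Frye procedure mentioned above evaluates the invariant momentum spectrum as the flux of the phase-space distribution through the particlization hypersurface; its standard form is:

```latex
% Cooper-Frye formula: the invariant spectrum of particle species i is the
% flux of its distribution function f_i(x, p) through the particlization
% hypersurface sigma.
\begin{equation}
  E \frac{dN_i}{d^3 p}
    = \int_{\sigma} f_i(x, p)\, p^{\mu}\, \mathrm{d}\sigma_{\mu} .
\end{equation}
% Negative contributions arise wherever p^{\mu} d\sigma_{\mu} < 0, i.e.
% where particles would re-enter the fluid through the hypersurface.
```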