
    Distributed Connected Component Filtering and Analysis in 2-D and 3-D Tera-Scale Data Sets

    Connected filters and multi-scale tools are region-based operators acting on the connected components of an image. Component trees are image representations that allow these operations to be performed efficiently, as they represent the inclusion relationships of the connected components hierarchically. This paper presents disccofan (DIStributed Connected COmponent Filtering and ANalysis), a new method that extends the previous 2-D implementation of Distributed Component Forests (DCFs) to handle 3-D processing and higher-dynamic-range data sets. disccofan combines shared- and distributed-memory techniques to efficiently compute component trees, user-defined attribute filters, and multi-scale analyses. Compared to similar methods, disccofan is faster and scales better on low and moderate dynamic range images, and it is the only method with a speed-up larger than 1 on a realistic, astronomical floating-point data set. It achieves a speed-up of 11.20 using 48 processes to compute the DCF of a 162-gigapixel, single-precision floating-point 3-D data set, while reducing the memory used by a factor of 22. This approach is suitable for performing attribute filtering and multi-scale analysis on very large 2-D and 3-D data sets, up to single-precision floating-point values.
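
    As a concrete illustration of the kind of operation disccofan performs, the following minimal sketch applies an area filter to a max-tree given in canonical parent-array form. This is an assumed, single-node Python illustration, not disccofan's distributed implementation; the names (values, parent, order, min_area) are hypothetical.

    import numpy as np

    def area_filter(values, parent, order, min_area):
        """Area filtering on a canonical max-tree (direct rule).

        values: flat pixel intensities; parent: canonical parent array;
        order: pixel indices sorted so every parent precedes its children."""
        area = np.ones(values.size, dtype=np.int64)
        for p in reversed(order):            # children before parents
            if parent[p] != p:
                area[parent[p]] += area[p]   # accumulate component areas
        out = values.copy()
        for p in order:                      # parents before children
            q = parent[p]
            if values[p] == values[q]:       # p is not a canonical node
                out[p] = out[q]
            elif area[p] < min_area:         # component fails the criterion:
                out[p] = out[q]              # merge it into its parent
        return out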

    A Fast, Memory-Efficient Alpha-Tree Algorithm using Flooding and Tree Size Estimation

    The alpha-tree represents an image as a hierarchical set of alpha-connected components. Computing alpha-trees suffers from high computational and memory requirements compared with similar component tree algorithms such as the max-tree. Here we introduce a novel alpha-tree algorithm that uses 1) a flooding algorithm for computational efficiency and 2) tree size estimation (TSE) for memory efficiency. In TSE, an exponential decay model is fitted to normalized tree sizes as a function of the normalized root mean squared deviation (NRMSD) of the edge-dissimilarity distribution, and the model is used to estimate the optimal memory allocation size for alpha-tree construction. An experiment on 1256 images shows that our algorithm runs 2.27 times faster than Ouzounis and Soille's thanks to the flooding algorithm, and that TSE reduces the average memory allocation of the proposed algorithm by 40.4%, eliminating 86.0% of unused allocated memory at negligible computational cost.
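
    The TSE step can be pictured with a short sketch: evaluate an exponential-decay model that maps the NRMSD of the edge-dissimilarity distribution to a normalized tree size, and pre-allocate accordingly. The coefficients a and b below are illustrative placeholders, not the fit reported in the paper.

    import numpy as np

    def nrmsd(dissimilarities):
        """Root mean squared deviation, normalized by the value range."""
        d = np.asarray(dissimilarities, dtype=np.float64)
        rmsd = np.sqrt(np.mean((d - d.mean()) ** 2))
        return rmsd / (d.max() - d.min() + 1e-12)

    def estimated_allocation(num_pixels, dissimilarities, a=1.0, b=4.0):
        """Nodes to pre-allocate: num_pixels * a * exp(-b * NRMSD).

        a and b are hypothetical model coefficients; the paper fits them to
        normalized tree sizes measured over a set of training images."""
        ratio = a * np.exp(-b * nrmsd(dissimilarities))
        return int(np.ceil(num_pixels * np.clip(ratio, 0.05, 1.0)))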

    Distributed Component Forests in 2-D: Hierarchical Image Representations Suitable for Tera-Scale Images

    The standard representations known as component trees, used in morphological connected attribute filtering and multi-scale analysis, are unsuitable for cases in which either the image itself or the tree does not fit in the memory of a single compute node. Recently, a new structure has been developed that consists of a collection of modified component trees, one for each image tile. To date it has only been applied to fairly simple image filtering based on area. In this paper, we explore other applications of these distributed component forests, in particular multi-scale analyses such as pattern spectra, morphological attribute profiles, and multi-scale leveling segmentations.
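
    For readers unfamiliar with pattern spectra, the following sketch computes an area pattern spectrum from a component tree in canonical parent-array form. It is a minimal, single-node Python illustration under assumed inputs, not the paper's distributed-forest algorithm.

    import numpy as np

    def area_pattern_spectrum(values, parent, order, bin_edges):
        """Each component adds area * (level step to parent) to an area bin."""
        area = np.ones(values.size, dtype=np.int64)
        for p in reversed(order):            # children before parents
            if parent[p] != p:
                area[parent[p]] += area[p]
        spectrum = np.zeros(len(bin_edges) - 1)
        for p in order:
            q = parent[p]
            if p == q or values[p] == values[q]:
                continue                     # skip root and non-canonical pixels
            b = np.searchsorted(bin_edges, area[p], side="right") - 1
            if 0 <= b < spectrum.size:
                spectrum[b] += area[p] * (values[p] - values[q])
        return spectrum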

    Time-varying volume visualization

    Volume rendering is a very active research field in computer graphics because of its wide range of applications in various sciences, from medicine to fluid mechanics. In this report, we survey the state of the art in time-varying volume rendering. We introduce several basic concepts and then establish several criteria to classify the studied works: IVR versus DVR, 4D versus 3D+time, compression techniques, the architectures involved, use of parallelism, and image-space versus object-space coherence. We also address related problems such as transfer functions and the computation of 2-D cross-sections of time-varying volume data. All the papers reviewed are classified into several tables based on these criteria, and finally several conclusions are presented.

    Parallel Attribute Computation for Distributed Component Forests

    Component trees are powerful image processing tools for analyzing the connected components of an image. One attractive strategy is to build the nested relations first and derive the components' attributes afterwards, so that the user can switch between different attribute functions without having to re-compute the entire tree. Until now, only sequential algorithms have allowed such an approach; no parallel algorithm was available. In this paper, we extend a recent method using distributed-memory techniques to enable posterior attribute computation in a parallel or distributed manner. This novel approach significantly reduces the computational time needed to combine several attribute functions interactively on giga- and tera-scale data sets.
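
    The idea of deferring attribute computation can be sketched in a few lines: once the parent array is fixed, any merge-based attribute is a bottom-up fold over the tree, so the attribute function can be swapped without rebuilding anything. This is a hypothetical single-node illustration, not the paper's parallel algorithm.

    import numpy as np

    def accumulate_attribute(parent, order, init, merge):
        """Fold a user-defined attribute bottom-up over a fixed tree."""
        attr = init.copy()
        for p in reversed(order):            # children before parents
            if parent[p] != p:
                attr[parent[p]] = merge(attr[parent[p]], attr[p])
        return attr

    # Switching attributes reuses the same tree structure, e.g.:
    #   area = accumulate_attribute(parent, order, np.ones(n), lambda a, b: a + b)
    #   vmax = accumulate_attribute(parent, order, values.astype(float), max)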

    Towards Real-Time Detection and Tracking of Spatio-Temporal Features: Blob-Filaments in Fusion Plasma

    A novel algorithm and implementation for the real-time identification and tracking of blob-filaments in fusion reactor data is presented. Similar spatio-temporal features are important in many other applications, for example ignition kernels in combustion and tumor cells in medical images. This work presents an approach for extracting these features by dividing the overall task into three steps: local identification of feature cells, grouping feature cells into extended features, and tracking the movement of features through their overlap in space. Through our extensive work on parallelization, we demonstrate that this approach can effectively make use of a large number of compute nodes to detect and track blob-filaments in fusion plasma in real time. On a 30 GB set of fusion simulation data, we observed linear speedup on 1024 processes and completed blob detection in less than three milliseconds using Edison, a Cray XC30 system at NERSC.
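
    The three-step decomposition described above can be sketched as follows; the threshold, helper names, and overlap criterion are illustrative assumptions, not the paper's parallel implementation.

    import numpy as np
    from scipy import ndimage

    def detect_blobs(frame, threshold):
        """Steps 1-2: flag feature cells, then group them into blobs."""
        mask = frame > threshold                # step 1: local identification
        labels, n_blobs = ndimage.label(mask)   # step 2: connected grouping
        return labels, n_blobs

    def track_by_overlap(labels_prev, labels_cur):
        """Step 3: match blobs whose footprints overlap between frames."""
        overlap = {}
        both = (labels_prev > 0) & (labels_cur > 0)
        for a, b in zip(labels_prev[both], labels_cur[both]):
            overlap[(int(a), int(b))] = overlap.get((int(a), int(b)), 0) + 1
        return overlap                          # (prev_id, cur_id) -> shared cells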

    Sparse Volumetric Deformation

    Volume rendering is becoming increasingly popular as applications require realistic solid shape representations with seamless texture mapping and accurate filtering. However, rendering sparse volumetric data is difficult because of the limited memory and processing capabilities of current hardware. To address these limitations, the volumetric information can be stored at progressive resolutions in the hierarchical branches of a tree structure and sampled according to the region of interest. This means that only a partial region of the full data set is processed, and therefore massive volumetric scenes can be rendered efficiently. The problem with this approach is that it currently only supports static scenes, because it is difficult to accurately deform massive numbers of volume elements and reconstruct the scene hierarchy in real time. Another problem is that deformation operations distort the shape where more than one volume element tries to occupy the same location, and, similarly, gaps occur where deformation stretches the elements farther than one discrete location. It is also challenging to efficiently support sophisticated deformations at hierarchical resolutions, such as character skinning or physically based animation. These types of deformation are expensive and require a control structure (for example, a cage or skeleton) that maps to a set of features to accelerate the deformation process. The problem with this technique is that the varying volume hierarchy reflects different feature sizes, and manipulating the features at the original resolution is too expensive; the control structure must therefore also capture features hierarchically, according to the varying volumetric resolution. This thesis investigates the deformation and rendering of massive amounts of dynamic volumetric content. The proposed approach efficiently deforms hierarchical volume elements without introducing artifacts and supports both ray-casting and rasterization renderers. This enables light transport to be modeled both accurately and efficiently, with applications in real-time rendering and computer animation. Sophisticated volumetric deformation, including character animation, is also supported in real time. This is achieved by automatically generating a control skeleton that is mapped to the varying feature resolution of the volume hierarchy. The output deformations are demonstrated in massive dynamic volumetric scenes.
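
    A minimal sketch of the hierarchical storage the abstract describes, assuming an octree in which children are allocated only where finer detail exists; the node layout and sampling rule are illustrative assumptions, not the thesis's data structure.

    class OctreeNode:
        __slots__ = ("value", "children")
        def __init__(self, value, children=None):
            self.value = value        # coarse (averaged) sample for this region
            self.children = children  # None for leaves, else a list of 8 nodes

    def sample(node, x, y, z, size):
        """Return the finest stored value at local coordinates in [0, size)."""
        while node.children is not None:
            half = size / 2.0
            i = (x >= half) + 2 * (y >= half) + 4 * (z >= half)
            child = node.children[i]
            if child is None:         # no finer data here: use the coarse value
                break
            x, y, z, size, node = x % half, y % half, z % half, half, child
        return node.value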

    Visualization and Correction of Automated Segmentation, Tracking and Lineaging from 5-D Stem Cell Image Sequences

    Results: We present an application that enables the quantitative analysis of multichannel 5-D (x, y, z, t, channel) and large-montage confocal fluorescence microscopy images. The image sequences show stem cells together with blood vessels, enabling quantification of the dynamic behaviors of stem cells in relation to their vascular niche, with applications in developmental and cancer biology. Our application automatically segments, tracks, and lineages the image sequence data, and then allows the user to view and edit the results of the automated algorithms in a stereoscopic 3-D window while simultaneously viewing the stem cell lineage tree in a 2-D window. Using the GPU to store and render the image sequence data enables a hybrid computational approach: an inference-based approach that uses user-provided edits to automatically correct related mistakes executes interactively on the CPU, while the GPU handles the 3-D visualization tasks. Conclusions: By exploiting commodity computer gaming hardware, we have developed an application that can be run in the laboratory to facilitate rapid iteration through biological experiments. There is a pressing need for visualization and analysis tools for 5-D live cell image data. We combine accurate unsupervised processes with an intuitive visualization of the results. Our validation interface allows each data set to be corrected to 100% accuracy, ensuring that downstream data analysis is accurate and verifiable. Our tool is the first to combine all of these aspects, leveraging the synergies obtained by using validation information from stereo visualization to improve the low-level image processing tasks.