Reducing artifacts in surface meshes extracted from binary volumes
We present a mesh filtering method for surfaces extracted from binary volume data which guarantees a smooth
and correct representation of the original binary sampled surface, even if the original volume data is inaccessible
or unknown. This method reduces the typical block and staircase artifacts but adheres to the underlying binary
volume data yielding an accurate and smooth representation. The proposed method is closest to the technique of
Constrained Elastic Surface Nets (CESN). CESN is a specialized surface extraction method with a subsequent
iterative smoothing process, which uses the binary input data as a set of constraints. In contrast to CESN, our
method processes surface meshes extracted by means of Marching Cubes and does not require the binary volume.
It acts directly and solely on the surface mesh and is thus feasible even for surface meshes of inaccessible
or unknown volume data. This is made possible by reconstructing information about the binary volume from artifacts in the extracted mesh and applying a relaxation method constrained to the reconstructed information.
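To make the idea concrete, here is a minimal Python sketch of constrained Laplacian relaxation in this spirit; it is not the paper's algorithm, and the `neighbors`, `cell_centers`, and `cell_radius` inputs are hypothetical stand-ins for the per-vertex constraint region that would be reconstructed from the Marching Cubes artifacts.

```python
import numpy as np

def constrained_smooth(verts, neighbors, cell_centers, cell_radius,
                       iters=50, lam=0.5):
    """Laplacian relaxation with a per-vertex position constraint.

    Sketch only: `neighbors[i]` lists the 1-ring of vertex i, and the ball
    around `cell_centers[i]` stands in for the voxel cell reconstructed
    from the Marching Cubes artifacts.
    """
    v = verts.astype(float).copy()
    for _ in range(iters):
        for i, nbrs in enumerate(neighbors):
            target = v[nbrs].mean(axis=0)        # umbrella Laplacian target
            v[i] += lam * (target - v[i])        # relaxation step
            d = v[i] - cell_centers[i]
            n = np.linalg.norm(d)
            if n > cell_radius:                  # clamp back into the cell
                v[i] = cell_centers[i] + d * (cell_radius / n)
    return v
```

The clamp is what preserves fidelity in such a scheme: vertices relax toward smoothness but never leave their reconstructed constraint region.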
Tool for 3D analysis and segmentation of retinal layers in volumetric SD-OCT images
With the development of optical coherence tomography in the spectral domain
(SD-OCT), it is now possible to quickly acquire large volumes of images. These images are typically analyzed by a specialist, and the processing is quite slow, consisting of the manual marking of features of interest in the retina, including the determination of the position and thickness of its different layers. This process is not consistent: the results depend on the clinician's perception and do not take full advantage of the technology, since the volumetric information it provides is ignored.
It is therefore of medical and technological interest to develop automatic, three-dimensional processing of images produced by OCT technology. Only then will we be able to extract all the information these images can provide and thus improve the diagnosis and early detection of eye pathologies. In addition to the 3D analysis, it is also important to develop visualization tools for the 3D data.
This thesis proposes applying 3D graphical processing methods to SD-OCT retinal images in order to segment the retinal layers. To analyze the 3D retinal images and the segmentation results, a visualization interface is also proposed that displays the images in 3D and from different perspectives. The work is based on the Medical Imaging Interaction Toolkit (MITK), which incorporates other open-source toolkits.
The study uses a public database of SD-OCT retinal images containing about 360 volumetric images of healthy and pathological subjects.
The software prototype allows the user to interact with the images, apply 3D filters for segmentation and noise reduction, and render the volume. The detection of three surfaces of the retina is achieved through intensity-based edge detection methods, with a mean error in the overall retina thickness of 3.72 ± 0.3 pixels.
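As a rough illustration of intensity-based surface detection along A-scans (a simplified sketch, not the thesis implementation; the array layout and parameter names are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def detect_surface(volume, axis=1, sigma=2.0):
    """Per-A-scan surface localisation by the strongest axial gradient.

    `volume` is assumed to be a (B-scans, depth, width) SD-OCT array;
    the result holds, for each A-scan, the depth index of the largest
    intensity step (e.g. the vitreous/ILM transition).
    """
    g = gaussian_filter1d(volume.astype(float), sigma, axis=axis)
    grad = np.diff(g, axis=axis)        # axial intensity derivative
    return np.argmax(grad, axis=axis)   # depth of the strongest bright edge
```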
3D time series analysis of cell shape using Laplacian approaches
Background:
Fundamental cellular processes such as cell movement, division or food uptake critically depend on cells being able to change shape. Fast acquisition of three-dimensional image time series has now become possible, but we lack efficient tools for analysing shape deformations in order to understand the real three-dimensional nature of shape changes.
Results:
We present a framework for 3D+time cell shape analysis. The main contribution is three-fold: First, we develop a fast, automatic random walker method for cell segmentation. Second, a novel topology fixing method is proposed to fix segmented binary volumes without spherical topology. Third, we show that algorithms used for each individual step of the analysis pipeline (cell segmentation, topology fixing, spherical parameterization, and shape representation) are closely related to the Laplacian operator. The framework is applied to the shape analysis of neutrophil cells.
Conclusions:
The method we propose for cell segmentation is faster than the traditional random walker method or the level set method, and performs better on 3D time series of neutrophil cells, which are comparatively noisy because the stacks have to be acquired quickly enough to account for cell motion. Our method for topology fixing outperforms the tools provided by SPHARM-MAT and SPHARM-PDM in terms of successful fixing rates. The different tasks in the presented pipeline for 3D+time shape analysis of cells can all be solved using Laplacian approaches, opening the possibility of eventually combining individual steps in order to speed up computations.
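To illustrate the connection to the Laplacian operator, the following sketch implements the classical random-walker formulation (Grady, 2006) as a graph-Laplacian linear system; the paper's faster variant differs, and the argument names are illustrative.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def random_walker(n, edges, weights, seed_idx, seed_prob):
    """Random-walker probabilities via the weighted graph Laplacian.

    Probabilities at unseeded nodes solve L_U x_U = -B x_S, where the
    Laplacian L is split into seeded (S) and unseeded (U) blocks.
    """
    W = sp.coo_matrix((weights, (edges[:, 0], edges[:, 1])), shape=(n, n))
    W = (W + W.T).tocsr()                         # symmetric adjacency
    L = sp.csr_matrix(sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W)
    unseeded = np.setdiff1d(np.arange(n), seed_idx)
    A = L[unseeded][:, unseeded]                  # L_U block
    B = L[unseeded][:, seed_idx]                  # coupling to the seeds
    x = spla.spsolve(sp.csc_matrix(A), -B @ seed_prob)
    prob = np.empty(n)
    prob[seed_idx] = seed_prob                    # seeds keep their labels
    prob[unseeded] = x
    return prob                                   # foreground probability
```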
Validating Stereoscopic Volume Rendering
The evaluation of stereoscopic displays for surface-based renderings is well established in terms of accurate depth perception and tasks that require an understanding of the spatial layout of the scene. In comparison, direct volume rendering (DVR), which typically produces images with a high number of low-opacity, overlapping features, is only beginning to be critically studied on stereoscopic displays. The properties of the resulting images and the choice of parameters for DVR algorithms make assessing the effectiveness of stereoscopic displays for DVR particularly challenging, and as a result the existing literature is sparse, with inconclusive results.
In this thesis, stereoscopic volume rendering is analysed for tasks that require depth perception, including stereo-acuity tasks, spatial search tasks, and observer preference ratings. The evaluations focus on aspects of the DVR rendering pipeline and assess how the parameters of volume resolution, reconstruction filter, and transfer function may alter task performance and the perceived quality of the produced images.
The results of the evaluations suggest that the transfer function and the choice of reconstruction filter can affect task performance on stereoscopic displays when all other parameters are kept consistent. Further, these were found to affect the sensitivity and response bias of the participants. The studies also show that properties of the reconstruction filters such as post-aliasing and smoothing do not correlate well with either task performance or quality ratings.
Included in the contributions are guidelines and recommendations on the choice of parameters for increased task performance and quality scores, as well as image-based methods of analysing stereoscopic DVR images.
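For readers unfamiliar with the DVR pipeline being evaluated, a minimal sketch of front-to-back emission-absorption compositing shows where the transfer function enters; `tf` is a hypothetical callable mapping a reconstructed scalar sample to an RGBA tuple.

```python
import numpy as np

def composite_ray(samples, tf, step=1.0):
    """Front-to-back emission-absorption compositing of one DVR ray.

    `tf` maps a scalar sample to (r, g, b, alpha); `samples` are the
    values reconstructed along the ray by the chosen filter.
    """
    color, alpha = np.zeros(3), 0.0
    for s in samples:
        r, g, b, a = tf(s)
        a = 1.0 - (1.0 - a) ** step       # opacity correction for step size
        color += (1.0 - alpha) * a * np.array([r, g, b])
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                  # early ray termination
            break
    return color, alpha
```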
Particle-Based Sampling and Meshing of Surfaces in Multimaterial Volumes
Methods that faithfully and robustly capture the geometry of complex material interfaces in labeled volume data are important for generating realistic and accurate visualizations and simulations of real-world objects. The generation of such multimaterial models from measured data poses two unique challenges: first, the surfaces must be well-sampled with regular, efficient tessellations that are consistent across material boundaries; and second, the resulting meshes must respect the nonmanifold geometry of the multimaterial interfaces. This paper proposes a strategy for sampling and meshing multimaterial volumes using dynamic particle systems, including a novel, differentiable representation of the material junctions that allows the particle system to explicitly sample corners, edges, and surfaces of material intersections. The distributions of particles are controlled by fundamental sampling constraints, allowing Delaunay-based meshing algorithms to reliably extract watertight meshes of consistently high quality.
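A hedged sketch of one simple flavour of dynamic particle relaxation (not the paper's energy or junction handling): particles repel nearby neighbours with a compact-support force and are re-projected onto the interface by a caller-supplied `project` function, a hypothetical stand-in for the implicit surface constraint.

```python
import numpy as np

def relax_particles(pts, project, radius, iters=100, step=0.1):
    """Inter-particle repulsion with re-projection onto the surface.

    `pts` is an (n, 3) array; `project` maps a point back onto the
    material interface (hypothetical stand-in for the paper's implicit
    junction/surface representation).
    """
    for _ in range(iters):
        for i in range(len(pts)):
            d = pts[i] - pts                      # vectors from neighbours
            r = np.linalg.norm(d, axis=1)
            near = (r > 0) & (r < radius)         # compact support kernel
            if near.any():
                f = (d[near].T / r[near] * (radius - r[near])).T.sum(axis=0)
                pts[i] = project(pts[i] + step * f)
    return pts
```

As the repulsion equilibrates, nearby particles spread toward an even spacing, which is what makes Delaunay-based meshing of the resulting samples well-conditioned.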
Improving Filtering for Computer Graphics
When drawing images onto a computer screen, the information in the scene is typically
more detailed than can be displayed. Most objects, however, will not be close to the
camera, so details have to be filtered out, or anti-aliased, when the objects are drawn on
the screen. I describe new methods for filtering images and shapes with high fidelity while
using computational resources as efficiently as possible.
Vector graphics are everywhere, from drawing 3D polygons to 2D text and maps for
navigation software. Because of their numerous applications, having a fast, high-quality
rasterizer is important. I developed a method for analytically rasterizing shapes using
wavelets. This approach allows me to produce accurate 2D rasterizations of images and
3D voxelizations of objects, which is the first step in 3D printing. I later improved my
method to handle more filters. The resulting algorithm creates higher-quality images than
commercial software such as Adobe Acrobat and is several times faster than the most
highly optimized commercial products.
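As a simplified illustration of analytic rasterization (using a plain box filter rather than the thesis's wavelet machinery): clipping a polygon to a pixel square and taking the clipped area gives the exact box-filtered coverage, with no sampling and hence no sampling noise.

```python
def box_coverage(poly, px, py):
    """Exact box-filter coverage of pixel (px, py) by a polygon.

    `poly` is a list of (x, y) vertices in counter-clockwise order; the
    polygon is clipped to the unit pixel square and the clipped area is
    the anti-aliased coverage value.
    """
    def clip(pts, keep, cut):                 # Sutherland-Hodgman, one plane
        out = []
        for a, b in zip(pts, pts[1:] + pts[:1]):
            if keep(b):
                if not keep(a):
                    out.append(cut(a, b))
                out.append(b)
            elif keep(a):
                out.append(cut(a, b))
        return out

    def lerp(a, b, t):
        return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

    planes = [
        (lambda p: p[0] >= px,     lambda a, b: (px - a[0]) / (b[0] - a[0])),
        (lambda p: p[0] <= px + 1, lambda a, b: (px + 1 - a[0]) / (b[0] - a[0])),
        (lambda p: p[1] >= py,     lambda a, b: (py - a[1]) / (b[1] - a[1])),
        (lambda p: p[1] <= py + 1, lambda a, b: (py + 1 - a[1]) / (b[1] - a[1])),
    ]
    for keep, t_of in planes:
        poly = clip(poly, keep, lambda a, b, t_of=t_of: lerp(a, b, t_of(a, b)))
        if not poly:
            return 0.0
    area = 0.0                                # shoelace formula
    for (x0, y0), (x1, y1) in zip(poly, poly[1:] + poly[:1]):
        area += x0 * y1 - x1 * y0
    return abs(area) * 0.5
```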
The quality of texture filtering also has a dramatic impact on the quality of a rendered
image. Textures are images that are applied to 3D surfaces, which typically cannot be
mapped to the 2D space of an image without introducing distortions. For situations in
which it is impossible to change the rendering pipeline, I developed a method for precomputing
image filters over 3D surfaces. If I can also change the pipeline, I show that it
is possible to improve the quality of texture sampling significantly in real-time rendering
while using the same memory bandwidth as traditional methods.
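As background for the baseline being improved, a sketch of conventional mipmap level-of-detail selection (standard trilinear filtering math, not the proposed method):

```python
import math

def mip_level(du_dx, dv_dx, du_dy, dv_dy, tex_w, tex_h):
    """Mipmap level-of-detail from screen-space UV derivatives.

    The level is the log2 of the longest screen-space footprint of one
    pixel measured in texels, as in conventional trilinear filtering.
    """
    fx = math.hypot(du_dx * tex_w, dv_dx * tex_h)   # footprint along x
    fy = math.hypot(du_dy * tex_w, dv_dy * tex_h)   # footprint along y
    return math.log2(max(fx, fy, 1.0))              # clamp to level 0
```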
- …