EWA Splatting
In this paper, we present a framework for high quality splatting based on elliptical Gaussian kernels. To avoid aliasing artifacts, we introduce the concept of a resampling filter, combining a reconstruction kernel with a low-pass filter. Because of the similarity to Heckbert's EWA (elliptical weighted average) filter for texture mapping, we call our technique EWA splatting. Our framework allows us to derive EWA splat primitives for volume data and for point-sampled surface data. It provides high image quality without aliasing artifacts or excessive blurring for volume data and, additionally, features anisotropic texture filtering for point-sampled surfaces. It also handles nonspherical volume kernels efficiently; hence, it is suitable for regular, rectilinear, and irregular volume datasets. Moreover, our framework introduces a novel approach to compute the footprint function, facilitating efficient perspective projection of arbitrary elliptical kernels at very little additional cost. Finally, we show that EWA volume reconstruction kernels can be reduced to surface reconstruction kernels. This makes our splat primitive universal in rendering surface and volume data.
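As an illustration of the resampling-filter idea above: a Gaussian reconstruction kernel with covariance V, warped by the locally affine approximation of the projection (Jacobian J) and convolved with a Gaussian low-pass filter of covariance Vh, is again a Gaussian with covariance J V J^T + Vh. The following NumPy sketch shows that construction; the function names and example matrices are illustrative, not taken from the paper.

import numpy as np

def ewa_resampling_covariance(V, J, Vh=None):
    # Screen-space covariance of an EWA-style resampling filter:
    # warped reconstruction kernel convolved with a low-pass filter.
    if Vh is None:
        Vh = np.eye(J.shape[0])  # unit (one-pixel) low-pass filter
    return J @ V @ J.T + Vh

def ewa_weight(x, center, cov):
    # Evaluate the (unnormalized) elliptical Gaussian footprint at pixel x.
    d = np.asarray(x, float) - np.asarray(center, float)
    return np.exp(-0.5 * d @ np.linalg.solve(cov, d))

# Illustrative 3D volume kernel projected to a 2x2 screen-space covariance
V = np.diag([0.5, 0.5, 0.5])            # reconstruction kernel covariance
J = np.array([[1.2, 0.0, 0.3],
              [0.0, 1.2, 0.1]])          # 2x3 Jacobian of the projective mapping
cov = ewa_resampling_covariance(V, J)
w = ewa_weight([0.5, -0.3], [0.0, 0.0], cov)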
Image interpolation using Shearlet based iterative refinement
This paper proposes an image interpolation algorithm exploiting sparse
representation for natural images. It involves three main steps: (a) obtaining
an initial estimate of the high resolution image using linear methods like FIR
filtering, (b) promoting sparsity in a selected dictionary through iterative
thresholding, and (c) extracting high frequency information from the
approximation to refine the initial estimate. For the sparse modeling, a
shearlet dictionary is chosen to yield a multiscale directional representation.
The proposed algorithm is compared to several state-of-the-art methods to
assess its objective as well as subjective performance. Compared to the cubic
spline interpolation method, an average PSNR gain of around 0.8 dB is observed
over a dataset of 200 images.
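The three steps (a)-(c) above can be sketched generically. The Python sketch below assumes an integer scale factor and takes placeholder analysis/synthesis transforms in place of the shearlet dictionary; the threshold schedule and the consistency step are illustrative assumptions, not the paper's exact algorithm.

import numpy as np
from scipy.ndimage import zoom

def soft_threshold(c, t):
    # Soft-thresholding: shrink coefficients toward zero to promote sparsity.
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def sparse_refine(lr_image, scale, analysis, synthesis, n_iters=20, t0=0.1):
    # analysis / synthesis: forward and inverse transforms of the chosen
    # dictionary (a shearlet transform in the paper; any multiscale
    # directional transform can be plugged in here).
    # (a) initial high-resolution estimate from a linear interpolator
    x = zoom(lr_image, scale, order=3)
    for k in range(n_iters):
        # (b) promote sparsity in the transform domain (decreasing threshold)
        coeffs = analysis(x)
        thr = t0 * (1.0 - k / n_iters)
        x_sparse = synthesis(soft_threshold(coeffs, thr))
        # (c) keep only the high-frequency detail of the sparse approximation
        # and add it back onto the linear estimate (assumes an integer scale
        # so the resampled shapes line up)
        lowpass = zoom(zoom(x_sparse, 1.0 / scale, order=3), scale, order=3)
        x = zoom(lr_image, scale, order=3) + (x_sparse - lowpass)
    return x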
Tool for 3D analysis and segmentation of retinal layers in volumetric SD-OCT images
With the development of optical coherence tomography in the spectral domain
(SD-OCT), it is now possible to quickly acquire large volumes of images. The images
are typically analyzed by a specialist, and their processing is quite slow, consisting
of the manual marking of features of interest in the retina, including the determination
of the position and thickness of its different layers. This process is not consistent:
the results depend on the clinician's perception and do not take advantage of the
technology, since the volumetric information it provides is ignored.
It is therefore of medical and technological interest to process the images produced
by OCT technology automatically and in three dimensions. Only then will we be able to
collect all the information these images can provide and thus improve the diagnosis
and early detection of eye pathologies. In addition to the 3D analysis, it is also
important to develop visualization tools for the 3D data.
This thesis proposes to apply 3D graphical processing methods to SD-OCT retinal
images in order to segment retinal layers. In addition, to analyze the 3D retinal
images and the segmentation results, a visualization interface is proposed that
displays the images in 3D and from different perspectives. The work is based on
the Medical Imaging Interaction Toolkit (MITK), which integrates other open-source
toolkits.
For this study, a public database of SD-OCT retinal images is used, containing
about 360 volumetric images of healthy and pathological subjects.
The software prototype allows the user to interact with the images, apply 3D
filters for segmentation and noise reduction, and render the volume. The detection
of three surfaces of the retina is achieved through intensity-based edge detection
methods, with a mean error in the overall retina thickness of 3.72 ± 0.3 pixels.
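As an illustration of intensity-based edge detection along each A-scan, the following NumPy/SciPy sketch locates one surface per A-scan at the strongest axial intensity gradient. It is a simplified stand-in for the thesis's pipeline, with an assumed (B-scan, depth, A-scan) volume layout.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def detect_surface(volume, axis=1, sigma=2.0):
    # volume: assumed (B-scans, depth, A-scans) layout; the surface is taken
    # as the depth of the strongest axial intensity gradient in each A-scan.
    smoothed = gaussian_filter1d(volume.astype(float), sigma, axis=axis)
    gradient = np.gradient(smoothed, axis=axis)
    return np.argmax(gradient, axis=axis)  # depth index per A-scan

# Synthetic example: a bright layer starting at depth 40
vol = np.zeros((8, 128, 64))
vol[:, 40:, :] = 1.0
surface = detect_surface(vol)  # array of shape (8, 64), values near 40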
Uni-COAL: A Unified Framework for Cross-Modality Synthesis and Super-Resolution of MR Images
Cross-modality synthesis (CMS), super-resolution (SR), and their combination
(CMSR) have been extensively studied for magnetic resonance imaging (MRI).
Their primary goals are to enhance the imaging quality by synthesizing the
desired modality and reducing the slice thickness. Despite the promising
synthetic results, these techniques are often tailored to specific tasks,
thereby limiting their adaptability to complex clinical scenarios. Therefore,
it is crucial to build a unified network that can handle various image
synthesis tasks with arbitrary requirements of modality and resolution
settings, so that the resources for training and deploying the models can be
greatly reduced. However, none of the previous works is capable of performing
CMS, SR, and CMSR using a unified network. Moreover, these MRI reconstruction
methods often treat alias frequencies improperly, resulting in suboptimal
detail restoration. In this paper, we propose a Unified Co-Modulated Alias-free
framework (Uni-COAL) to accomplish the aforementioned tasks with a single
network. The co-modulation design of the image-conditioned and stochastic
attribute representations ensures the consistency between CMS and SR, while
simultaneously accommodating arbitrary combinations of input/output modalities
and thickness. The generator of Uni-COAL is also designed to be alias-free
based on the Shannon-Nyquist signal processing framework, ensuring effective
suppression of alias frequencies. Additionally, we leverage the semantic prior
of the Segment Anything Model (SAM) to guide Uni-COAL, ensuring a more authentic
preservation of anatomical structures during synthesis. Experiments on three
datasets demonstrate that Uni-COAL outperforms the alternatives in CMS, SR, and
CMSR tasks for MR images, which highlights its generalizability to a wide range
of applications.
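The co-modulation idea mentioned above, deriving the generator's per-layer styles jointly from an image-conditioned code and a stochastic latent, can be sketched as a modulated convolution. The PyTorch sketch below follows the generic CoModGAN-style mechanism; the class name, dimensions, and layer choices are illustrative assumptions, not Uni-COAL's actual architecture, and it omits the alias-free design and SAM guidance.

import torch
import torch.nn as nn

class CoModulatedConv(nn.Module):
    # Minimal co-modulated convolution: the per-sample style that scales the
    # convolution weights is produced jointly from an image-conditioned code
    # and a stochastic latent code.
    def __init__(self, in_ch, out_ch, cond_dim, latent_dim, ksize=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, ksize, ksize))
        self.to_style = nn.Linear(cond_dim + latent_dim, in_ch)
        self.ksize = ksize

    def forward(self, x, cond_code, latent_code):
        b, in_ch, _, _ = x.shape
        # joint style from image-conditioned and stochastic representations
        style = self.to_style(torch.cat([cond_code, latent_code], dim=1))
        # modulate weights per sample, then demodulate for stable magnitudes
        w = self.weight[None] * style[:, None, :, None, None]
        demod = torch.rsqrt((w ** 2).sum(dim=[2, 3, 4], keepdim=True) + 1e-8)
        w = (w * demod).reshape(-1, in_ch, self.ksize, self.ksize)
        # grouped convolution applies each sample's modulated weights
        x = x.reshape(1, -1, *x.shape[2:])
        out = nn.functional.conv2d(x, w, padding=self.ksize // 2, groups=b)
        return out.reshape(b, -1, *out.shape[2:])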