Adaptive transfer functions: improved multiresolution visualization of medical models
The final publication is available at Springer via http://dx.doi.org/10.1007/s00371-016-1253-9
Medical datasets are continuously increasing in size. Although larger models may be available for certain research purposes, in common clinical practice the models are usually up to 512x512x2000 voxels. These resolutions exceed the capabilities of the conventional GPUs usually found in medical doctors' desktop PCs. Commercial solutions typically reduce the data by downsampling the dataset iteratively until it fits the available target specifications. The data loss reduces visualization quality, and this is not commonly compensated by other actions that might alleviate its effects. In this paper, we propose adaptive transfer functions, an algorithm that improves the transfer function in downsampled multiresolution models so that the quality of the renderings is greatly improved. The technique is simple and lightweight, and it is suitable not only for visualizing huge models that would not fit in a GPU, but also for rendering not-so-large models on mobile GPUs, which are less capable than their desktop counterparts. Moreover, it can also be used to accelerate rendering frame rates by using lower levels of the multiresolution hierarchy while still maintaining high-quality results in a focus-and-context approach. We also show an evaluation of these results based on perceptual metrics.
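As a rough illustration of the iterative downsampling step the abstract attributes to commercial tools (not the authors' adaptive-transfer-function algorithm itself), the sketch below halves a volume by 2x2x2 average pooling until it fits a voxel budget. The shapes, the budget, and the pooling scheme are all assumptions chosen for the example:

```python
import numpy as np

def downsample_once(vol):
    """Halve each axis by 2x2x2 average pooling (one common reduction step)."""
    x, y, z = (d - d % 2 for d in vol.shape)  # trim odd edges
    v = vol[:x, :y, :z]
    return (v[0::2, 0::2, 0::2] + v[1::2, 0::2, 0::2] +
            v[0::2, 1::2, 0::2] + v[0::2, 0::2, 1::2] +
            v[1::2, 1::2, 0::2] + v[1::2, 0::2, 1::2] +
            v[0::2, 1::2, 1::2] + v[1::2, 1::2, 1::2]) / 8.0

def fit_to_budget(vol, max_voxels):
    """Downsample iteratively until the volume fits the target budget."""
    while vol.size > max_voxels:
        vol = downsample_once(vol)
    return vol

vol = np.random.rand(64, 64, 128).astype(np.float32)   # toy volume
small = fit_to_budget(vol, 64 * 64 * 64)               # hypothetical GPU budget
print(small.shape)  # (32, 32, 64)
```

Each pooling pass divides the voxel count by eight, which is why the quality loss the paper targets accumulates quickly on large clinical volumes.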
Multiresolution hierarchy co-clustering for semantic segmentation in sequences with small variations
This paper presents a co-clustering technique that, given a collection of
images and their hierarchies, clusters nodes from these hierarchies to obtain a
coherent multiresolution representation of the image collection. We formalize
the co-clustering as a Quadratic Semi-Assignment Problem and solve it with a
linear programming relaxation approach that makes effective use of information
from hierarchies. Initially, we address the problem of generating an optimal,
coherent partition per image and, afterwards, we extend this method to a
multiresolution framework. Finally, we particularize this framework to an
iterative multiresolution video segmentation algorithm in sequences with small
variations. We evaluate the algorithm on the Video Occlusion/Object Boundary
Detection Dataset, showing that it produces state-of-the-art results in these
scenarios.
Comment: International Conference on Computer Vision (ICCV) 201
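The Quadratic Semi-Assignment objective mentioned above can be made concrete on a toy instance: each hierarchy node gets exactly one cluster, and the cost sums pairwise disagreement between co-clustered nodes. The brute-force search below is a stand-in for the paper's linear-programming relaxation and is tractable only for tiny instances; the cost matrix is invented for illustration:

```python
from itertools import product

def qsap_cost(assign, cost):
    """Quadratic semi-assignment cost: sum pairwise costs of nodes
    that were placed in the same cluster."""
    return sum(cost[i][j]
               for i in range(len(assign))
               for j in range(i + 1, len(assign))
               if assign[i] == assign[j])

def brute_force_qsap(cost, n_clusters):
    """Exhaustive search over all assignments (toy substitute for the
    LP relaxation used in the paper)."""
    n = len(cost)
    best = min(product(range(n_clusters), repeat=n),
               key=lambda a: qsap_cost(a, cost))
    return best, qsap_cost(best, cost)

# Toy pairwise disagreement costs between 4 hierarchy nodes
# (negative = the pair benefits from sharing a cluster).
cost = [[0, -2,  3,  3],
        [-2, 0,  3,  3],
        [3,  3,  0, -1],
        [3,  3, -1,  0]]
assign, c = brute_force_qsap(cost, 2)
print(assign, c)  # (0, 0, 1, 1) -3
```

The optimum groups nodes {0, 1} and {2, 3}, exactly the coherent-partition behavior the relaxation is designed to recover at scale.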
Quality assessment by region in SPOT images fused by means of the dual-tree complex wavelet transform
This work provides and evaluates a fusion algorithm for remotely sensed images, i.e. the fusion of a high-spatial-resolution panchromatic (PAN) image with a multi-spectral (MS) image (also known as pansharpening), using the dual-tree complex wavelet transform (DT-CWT), an effective approach for conducting an analytic and oversampled wavelet transform that reduces aliasing and, in turn, the shift dependence of the wavelet transform. The proposed scheme includes the definition of a model establishing how information will be extracted from the PAN band and how that information will be injected into the low-spatial-resolution MS bands. The approach was applied to SPOT 5 images, where some bands fall outside the PAN band's spectrum. We propose an optional step in the quality evaluation protocol: studying the quality of the fusion by regions, where each region represents a specific feature of the image. The results show that the DT-CWT-based approach offers good spatial quality while retaining the spectral information of the original images in the SPOT 5 case. The additional step facilitates the identification of the regions most affected by the fusion process.
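The extract-and-inject model the abstract describes can be sketched in simplified form: take the PAN high-frequency detail (PAN minus a low-pass version) and add it to each upsampled MS band. A crude box blur stands in for the DT-CWT approximation band here, and the gain, block size, and array shapes are assumptions, not the paper's actual injection model:

```python
import numpy as np

def box_blur(img, k=2):
    """Crude low-pass: k x k block averaging, then nearest-neighbor
    upsampling back to full size (stand-in for a wavelet approximation)."""
    h, w = (d - d % k for d in img.shape)
    small = img[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, k, axis=0), k, axis=1)

def inject_detail(pan, ms_up, gain=1.0):
    """Additive injection: add PAN high-frequency detail (PAN minus its
    low-pass) into each upsampled MS band."""
    detail = pan - box_blur(pan)
    return np.stack([band + gain * detail for band in ms_up])

pan = np.random.rand(8, 8)                       # toy PAN band
ms_up = [np.random.rand(8, 8) for _ in range(3)] # toy upsampled MS bands
fused = inject_detail(pan, ms_up)
print(fused.shape)  # (3, 8, 8)
```

Because the injected detail has zero mean over each block, the per-band average intensity (a proxy for spectral content) is preserved, which is the property the region-wise quality evaluation checks.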
Autoencoding the Retrieval Relevance of Medical Images
Content-based image retrieval (CBIR) of medical images is a crucial task that
can contribute to a more reliable diagnosis if applied to big data. Recent
advances in feature extraction and classification have enormously improved CBIR
results for digital images. However, considering the increasing accessibility
of big data in medical imaging, we are still in need of reducing both memory
requirements and computational expenses of image retrieval systems. This work
proposes to exclude the features of image blocks that exhibit a low encoding
error when learned by an autoencoder. We examine the
histogram of autoencoding errors of the image blocks for each image class to
facilitate deciding which image regions, or roughly what percentage of an
image, should be declared relevant for the retrieval task. This leads to a
reduction of feature dimensionality and speeds up the retrieval process. To
validate the proposed scheme, we employ local binary patterns (LBP) and support
vector machines (SVM), which are both well-established approaches in the CBIR
research community. We also use the IRMA dataset with 14,410 x-ray images as
test data. The results show that the dimensionality of annotated feature
vectors can be reduced by up to 50%, resulting in speedups greater than 27% at
the expense of less than a 1% decrease in retrieval accuracy when validating
the precision and recall of the top 20 hits.
Comment: To appear in proceedings of The 5th International Conference on Image
Processing Theory, Tools and Applications (IPTA'15), Nov 10-13, 2015,
Orleans, France
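The block-relevance idea above can be sketched with a linear stand-in for the autoencoder: project blocks onto a few principal components, reconstruct, and treat the residual as the encoding error; blocks that reconstruct too well are discarded as uninformative. The toy data, the PCA substitute, and the median threshold are all assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image blocks" as flattened 4x4 patches: 40 low-rank blocks encode
# well, 10 noisy blocks do not.
low_rank = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 16))
noisy = rng.normal(size=(10, 16)) * 3.0
blocks = np.vstack([low_rank, noisy])

# Linear "autoencoder": project onto the top-k principal components and
# reconstruct; the residual norm plays the role of the encoding error.
k = 2
centered = blocks - blocks.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
recon = centered @ vt[:k].T @ vt[:k]
errors = np.linalg.norm(centered - recon, axis=1)

# Keep only blocks with high encoding error; the paper instead thresholds
# on the per-class error histogram.
threshold = np.median(errors)
relevant = errors > threshold
print(relevant.sum(), "of", len(blocks), "blocks kept")
```

Dropping the well-encoded half of the blocks is what yields the feature-dimensionality reduction and retrieval speedup the abstract reports.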
Wavelets and their use
This review paper is intended to give a useful guide for those who want to
apply discrete wavelets in their practice. The notion of wavelets and their use
in practical computing and various applications are briefly described, but
rigorous proofs of mathematical statements are omitted, and the reader is just
referred to corresponding literature. The multiresolution analysis and fast
wavelet transform became a standard procedure for dealing with discrete
wavelets. The proper choice of a wavelet and use of nonstandard matrix
multiplication are often crucial for achieving a goal. Analysis of various
functions with the help of wavelets allows one to reveal fractal structures,
singularities, etc. Wavelet transforms of operator expressions help solve some
equations. In practical applications one deals often with the discretized
functions, and the problem of stability of wavelet transform and corresponding
numerical algorithms becomes important. After discussing all these topics we
turn to practical applications of the wavelet machinery. They are so numerous
that we have to limit ourselves to only a few examples. The authors would be
grateful for any comments that improve this review and move us closer to
the goal proclaimed in the first sentence of the abstract.
Comment: 63 pages with 22 ps-figures, to be published in Physics-Uspekhi
Enhancement of Single and Composite Images Based on Contourlet Transform Approach
Image enhancement is an imperative step in almost every image processing algorithms.
Numerous image enhancement algorithms have been developed for gray scale images
despite their absence in many applications lately. This thesis proposes new image
enhancement techniques for 8-bit single and composite digital color images. Recently, it
has become evident that wavelet transforms are not necessarily best suited for images.
Therefore, the enhancement approaches are based on a new 'true' two-dimensional
transform called contourlet transform. The proposed enhancement techniques discussed
in this thesis are developed based on an understanding of the working mechanisms of the
multiresolution property of the contourlet transform. This research also investigates the
effects of using different color space representations for color image enhancement
applications. Based on this investigation an optimal color space is selected for both single
image and composite image enhancement approaches. The objective evaluation steps
show that the new method of enhancement is superior not only to the commonly used
transform methods (e.g. the wavelet transform) but also to various spatial models (e.g.
histogram equalization). The results found are encouraging, and the enhancement
algorithms have proved to be more robust and reliable.
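The color-space investigation above rests on separating luminance from chrominance so only the luminance channel is enhanced. The sketch below illustrates that separation with BT.601-style luma/chroma coefficients and a simple gamma stretch standing in for the contourlet-domain enhancement; the space and gamma value are assumptions, not the thesis's chosen optimum:

```python
import numpy as np

def rgb_to_ycc(rgb):
    """Split RGB into a luminance channel and two chroma differences
    (BT.601-style luma coefficients)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, (b - y), (r - y)

def ycc_to_rgb(y, cb, cr):
    """Exact inverse of rgb_to_ycc."""
    r = y + cr
    b = y + cb
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return np.stack([r, g, b], axis=-1)

def enhance_luminance(rgb, gamma=0.7):
    """Enhance only the luminance channel (gamma stretch as a stand-in
    for the contourlet-domain enhancement), preserving chroma."""
    y, cb, cr = rgb_to_ycc(rgb)
    y = np.clip(y, 0, 1) ** gamma
    return ycc_to_rgb(y, cb, cr)

img = np.random.rand(4, 4, 3)   # toy color image in [0, 1]
out = enhance_luminance(img)
print(out.shape)  # (4, 4, 3)
```

With gamma set to 1 the round trip is the identity, confirming that the chroma channels pass through untouched; that is the property that lets the objective evaluation attribute quality changes to the luminance enhancement alone.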