
    Four-Dimensional Wavelet Compression of Arbitrarily Sized Echocardiographic Data

    Wavelet-based methods have become the most popular approach for the compression of two-dimensional medical images and sequences. The standard implementations consider data sizes that are powers of two. There is also a large body of literature treating issues such as the choice of the "optimal" wavelets and the performance comparison of competing algorithms. With the advent of telemedicine, there is a strong incentive to extend these techniques to higher-dimensional data such as dynamic three-dimensional (3-D) echocardiography [four-dimensional (4-D) datasets]. One of the practical difficulties is that the size of this data is often not a multiple of a power of two, which can lead to increased computational complexity and impaired compression power. Our contribution in this paper is to present a genuine 4-D extension of the well-known zerotree algorithm for arbitrarily sized data. The key component of our method is a one-dimensional wavelet algorithm that can handle arbitrarily sized input signals. The method uses a pair of symmetric/antisymmetric wavelets (10/6) together with appropriate midpoint-symmetry boundary conditions that reduce border artifacts. The zerotree structure is also adapted so that it can accommodate non-even data splitting. We have applied our method to the compression of real 3-D dynamic sequences from clinical cardiac ultrasound examinations. Our new algorithm compares very favorably with other more ad hoc adaptations (image extension and tiling) of the standard powers-of-two methods, in terms of both compression performance and computational cost. It is vastly superior to slice-by-slice wavelet encoding. This was seen not only in numerical image quality parameters but also in expert ratings, where significant improvement using the new approach could be documented. Our validation experiments show that one can safely compress 4-D data sets at ratios of 128:1 without compromising the diagnostic value of the images. We also display some more extreme compression results at ratios of 2000:1 where key diagnostically relevant features are preserved.
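
    As an illustration of the arbitrary-length wavelet step that the method is built on, the sketch below decomposes and reconstructs an odd-length signal with PyWavelets. The paper's 10/6 symmetric/antisymmetric pair and midpoint-symmetry extension are not stock PyWavelets options, so a biorthogonal wavelet with symmetric boundary handling stands in; the wavelet name, signal length, and trimming step are illustrative assumptions rather than details taken from the paper.

    # Minimal sketch, assuming PyWavelets: one-level analysis/synthesis of a
    # signal whose length is not a power of two. 'bior4.4' and mode='symmetric'
    # stand in for the paper's 10/6 pair and midpoint-symmetry boundary.
    import numpy as np
    import pywt

    x = np.random.rand(37)                                   # length deliberately not a power of two
    cA, cD = pywt.dwt(x, 'bior4.4', mode='symmetric')        # symmetric boundary extension
    y = pywt.idwt(cA, cD, 'bior4.4', mode='symmetric')       # may carry one extra trailing sample
    print(len(cA), len(cD), np.max(np.abs(y[:len(x)] - x)))  # reconstruction error is ~0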

    Ultrafast Ultrasound Imaging

    Among medical imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI), ultrasound imaging stands out for its temporal resolution. Owing to the nature of medical ultrasound imaging, it has been used not only for observation of the morphology of living organs but also for functional imaging, such as blood flow imaging and evaluation of cardiac function. Ultrafast ultrasound imaging, which has recently become widely available, significantly increases the opportunities for medical functional imaging. Ultrafast ultrasound imaging typically enables frame rates of up to ten thousand frames per second (fps). This extremely high temporal resolution enables visualization of rapid dynamic responses of biological tissues that cannot be observed or analyzed by conventional ultrasound imaging. This Special Issue includes various studies of improvements to the performance of ultrafast ultrasound imaging.
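
    As a rough worked example of where frame rates of this order come from in single-transmit (plane-wave) imaging, the frame rate is bounded by the acoustic round-trip time to the imaging depth. The sound speed and depth below are illustrative assumptions, not values quoted in this Special Issue.

    # Back-of-the-envelope bound on single-transmit frame rate (assumed values).
    c = 1540.0                   # speed of sound in soft tissue, m/s
    depth = 0.077                # imaging depth, m
    max_fps = c / (2 * depth)    # one plane-wave transmit per frame
    print(f"{max_fps:.0f} fps")  # ~10000 fps at this depth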

    Scalable video compression with optimized visual performance and random accessibility

    This thesis is concerned with maximizing the coding efficiency, random accessibility and visual performance of scalable compressed video. The unifying theme behind this work is the use of finely embedded localized coding structures, which govern the extent to which these goals may be jointly achieved. The first part focuses on scalable volumetric image compression. We investigate 3D transform and coding techniques which exploit inter-slice statistical redundancies without compromising slice accessibility. Our study shows that the motion-compensated temporal discrete wavelet transform (MC-TDWT) practically achieves an upper bound to the compression efficiency of slice transforms. From a video coding perspective, we find that most of the coding gain is attributed to offsetting the learning penalty in adaptive arithmetic coding through 3D code-block extension, rather than inter-frame context modelling. The second aspect of this thesis examines random accessibility. Accessibility refers to the ease with which a region of interest is accessed (subband samples needed for reconstruction are retrieved) from a compressed video bitstream, subject to spatiotemporal code-block constraints. We investigate the fundamental implications of motion compensation for random access efficiency and the compression performance of scalable interactive video. We demonstrate that inclusion of motion compensation operators within the lifting steps of a temporal subband transform incurs a random access penalty which depends on the characteristics of the motion field. The final aspect of this thesis aims to minimize the perceptual impact of visible distortion in scalable reconstructed video. We present a visual optimization strategy based on distortion scaling which raises the distortion-length slope of perceptually significant samples. This alters the codestream embedding order during post-compression rate-distortion optimization, thus allowing visually sensitive sites to be encoded with higher fidelity at a given bit-rate. For visual sensitivity analysis, we propose a contrast perception model that incorporates an adaptive masking slope. This versatile feature provides a context which models perceptual significance. It enables scene structures that otherwise suffer significant degradation to be preserved at lower bit-rates. The novelty in our approach derives from a set of "perceptual mappings" which account for quantization noise shaping effects induced by motion-compensated temporal synthesis. The proposed technique reduces wavelet compression artefacts and improves the perceptual quality of video.
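
    A minimal sketch of the motion-compensated temporal lifting idea discussed above, with the motion-compensation operator left as an identity placeholder; this illustrates a 5/3-style temporal lifting structure under stated assumptions, not the thesis implementation or its code-block machinery.

    # Sketch: temporal 5/3 lifting across a group of frames. mc() is a placeholder
    # for a motion-compensated warp (identity here, which is an assumption).
    import numpy as np

    def mc(frame):
        return frame  # placeholder motion compensation

    def temporal_53_lift(frames):
        even, odd = frames[0::2], frames[1::2]
        # predict step: high-pass frames from motion-compensated even neighbours
        high = [o - 0.5 * (mc(even[i]) + mc(even[min(i + 1, len(even) - 1)]))
                for i, o in enumerate(odd)]
        # update step: low-pass frames absorb part of the prediction residual
        low = [e + 0.25 * (mc(high[max(i - 1, 0)]) + mc(high[min(i, len(high) - 1)]))
               for i, e in enumerate(even)]
        return low, high

    frames = [np.random.rand(16, 16) for _ in range(8)]
    low, high = temporal_53_lift(frames)
    print(len(low), len(high))  # 4 low-pass and 4 high-pass temporal subbands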

    Efficient Learning-based Image Enhancement : Application to Compression Artifact Removal and Super-resolution

    Many computer vision and computational photography applications essentially solve an image enhancement problem. The image has been deteriorated by a specific noise process, such as aberrations from camera optics or compression artifacts, that we would like to remove. We describe a framework for learning-based image enhancement. At the core of our algorithm lies a generic regularization framework that comprises a prior on natural images as well as an application-specific conditional model based on Gaussian processes. In contrast to prior learning-based approaches, our algorithm can instantly learn task-specific degradation models from sample images, which enables users to easily adapt the algorithm to a specific problem and data set of interest. This is facilitated by our efficient approximation scheme for large-scale Gaussian processes. We demonstrate the efficiency and effectiveness of our approach by applying it to example enhancement applications, including single-image super-resolution as well as artifact removal in JPEG- and JPEG 2000-encoded images.
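
    As an illustrative sketch of learning a task-specific mapping from degraded to clean data with Gaussian process regression, the toy example below uses scikit-learn. The patch setup, kernel choice, and synthetic degradation are assumptions, and the paper's natural-image prior and large-scale GP approximation scheme are not reproduced here.

    # Toy sketch: fit a GP regressor from degraded patches to clean patches.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    clean = rng.random((200, 9))                                 # toy 3x3 "clean" patches
    degraded = clean + 0.1 * rng.standard_normal(clean.shape)    # synthetic degradation

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-2))
    gp.fit(degraded, clean)                                      # learn degraded -> clean mapping
    restored = gp.predict(degraded[:5])
    print(restored.shape)                                        # (5, 9)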

    Acoustic Interrogations Of Complex Seabeds


    Decision-based data fusion of complementary features for the early diagnosis of Alzheimer's disease

    As the average life expectancy increases, particularly in developing countries, the prevalence of Alzheimer's disease (AD), which is the most common form of dementia worldwide, has increased dramatically. As there is no cure to stop or reverse the effects of AD, early diagnosis and detection are of the utmost concern. Recent pharmacological advances have shown the ability to slow the progression of AD; however, the efficacy of these treatments depends on the ability to detect the disease at the earliest stage possible. Many patients are limited to small community clinics by geographic and/or financial constraints. Making diagnosis possible at these clinics through an accurate, inexpensive, and noninvasive tool is of great interest. Many tools have been shown to be effective for the early diagnosis of AD. Three in particular are focused upon in this study: event-related potentials (ERPs) in electroencephalogram (EEG) recordings, magnetic resonance imaging (MRI), and positron emission tomography (PET). These biomarkers have been shown to contain diagnostically useful information regarding the development of AD in an individual. The combination of these biomarkers, if they provide complementary information, can boost the overall diagnostic accuracy of an automated system. EEG data acquired from an auditory oddball paradigm, along with volumetric T2-weighted MRI data and PET imagery representative of metabolic glucose activity in the brain, was collected from a cohort of 447 patients, along with other biomarkers and metrics relating to neurodegenerative disease. This study focuses on AD-versus-control diagnostic ability within the cohort, in addition to AD severity analysis. An assortment of feature extraction methods was employed to extract diagnostically relevant information from the raw data. EEG signals were decomposed into frequency bands of interest through the discrete wavelet transform (DWT). MRI images were reprocessed to provide volumetric representations of specific regions of interest in the cranium. The PET imagery was segmented into regions of interest representing glucose metabolic rates within the brain. Multi-layer perceptron neural networks were used as the base classifier for the augmented stacked generalization algorithm, creating three overall biomarker experts for AD diagnosis. The features extracted from each biomarker were used to train classifiers on various subsets of the cohort data; the decisions from these classifiers were then combined to achieve decision-based data fusion. This study found that EEG, MRI and PET data each hold complementary information for the diagnosis of AD. The use of all three in tandem provides greater diagnostic accuracy than using any single biomarker alone. The highest accuracy obtained by the EEG expert was 86.1 ±3.2%, with MRI and PET reaching 91.1 ±3.2% and 91.2 ±3.9%, respectively. The maximum diagnostic accuracy of these systems averaged 95.0 ±3.1% when all three biomarkers were combined through the decision fusion algorithm described in this study. The severity analysis for AD showed similar results, with combined performance exceeding that of any biomarker expert alone.
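
    The following toy sketch illustrates decision-level fusion of modality-specific MLP classifiers by averaging their class probabilities. The feature dimensions and data are synthetic placeholders, and the study's augmented stacked-generalization combiner is more elaborate than this simple soft vote.

    # Toy sketch: one MLP "expert" per modality, fused by averaging probabilities.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)
    n = 300
    y = rng.integers(0, 2, n)                      # AD vs. control labels (synthetic)
    modalities = {"EEG": rng.random((n, 24)),      # e.g. DWT subband features (synthetic)
                  "MRI": rng.random((n, 10)),      # e.g. regional volumes (synthetic)
                  "PET": rng.random((n, 12))}      # e.g. regional metabolic rates (synthetic)

    experts = {m: MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
               for m, X in modalities.items()}
    probs = np.mean([experts[m].predict_proba(modalities[m]) for m in modalities], axis=0)
    fused = probs.argmax(axis=1)                   # fused decision per subject
    print((fused == y).mean())                     # agreement on the toy training data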

    Exploring scatterer anisotropy in synthetic aperture radar via sub-aperture analysis

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001. Includes bibliographical references (p. 189-193). Scattering from man-made objects in SAR imagery exhibits aspect and frequency dependencies which are not always well modeled by standard SAR imaging techniques based on the ideal point scattering model. This is particularly the case for high-resolution wide-band and wide-aperture data, where model deviations are even more pronounced. If ignored, these deviations will reduce recognition performance due to the model mismatch, but when appropriately accounted for, these deviations from the ideal point scattering model can be exploited as attributes to better distinguish scatterers and their respective targets. With this in mind, this thesis develops an efficient modeling framework based on a sub-aperture pyramid to utilize scatterer anisotropy for the purpose of target classification. Two approaches are presented to exploit scatterer anisotropy using the sub-aperture pyramid. The first is a nonparametric classifier that learns the azimuthal dependencies within an image and makes a classification decision based on the learned dependencies. The second approach is a parametric attribution of the observed anisotropy characterizing the azimuthal location and concentration of the scattering response. Working from the sub-aperture scattering model, we develop a hypothesis test to characterize anisotropy. We start with an isolated scatterer model which produces a test with an intuitive interpretation. We then address the problem of robustness to interfering scatterers by extending the model to account for neighboring scatterers which corrupt the anisotropy attribution. The development of the anisotropy attribution culminates with an iterative attribution approach that identifies and compensates for neighboring scatterers. In the course of the development of the anisotropy attribution, we also study the relationship between scatterer phenomenology and our anisotropy attribution. This analysis reveals the information provided by the anisotropy attribution for two common sources of anisotropy. Furthermore, the analysis explicitly demonstrates the benefit of using wide-aperture data to produce more stable and more descriptive characterizations of scatterer anisotropy. By Andrew J. Kim. Ph.D.
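
    As a sketch of the sub-aperture idea, the snippet below partitions the azimuth spectrum of a complex image chip into sub-apertures and scores per-pixel anisotropy as the spread of magnitude across them. This illustrates sub-aperture decomposition only; the thesis's pyramid structure, hypothesis test, and attribution procedure are not reproduced.

    # Sketch: split the azimuth spectrum into sub-bands, image each, and compare
    # per-pixel magnitudes across the resulting sub-aperture images.
    import numpy as np

    def subaperture_images(img, n_sub=4):
        spec = np.fft.fftshift(np.fft.fft(img, axis=1), axes=1)      # azimuth spectrum
        subs = []
        for cols in np.array_split(np.arange(img.shape[1]), n_sub):
            masked = np.zeros_like(spec)
            masked[:, cols] = spec[:, cols]
            subs.append(np.fft.ifft(np.fft.ifftshift(masked, axes=1), axis=1))
        return np.stack(subs)                                         # (n_sub, rows, cols)

    chip = np.random.randn(64, 64) + 1j * np.random.randn(64, 64)     # toy complex SAR chip
    subs = subaperture_images(chip)
    mags = np.abs(subs)
    anisotropy = mags.std(axis=0) / (mags.mean(axis=0) + 1e-9)        # per-pixel spread score
    print(anisotropy.shape)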

    Improving reconstructions of digital holograms

    Digital holography is a two-step process of recording a hologram on an electronic sensor and reconstructing it numerically. This thesis makes a number of contributions to the second step of this process. These can be split into two distinct parts: A) speckle reduction in reconstructions of digital holograms (DHs), and B) modeling and overcoming partial occlusion effects in reconstructions of DHs, and using occlusions to reduce the effects of the twin image in reconstructions of DHs. Part A represents the major part of this thesis. Speckle reduction forms an important step in many digital holographic applications, and we have developed a number of techniques that can be used to reduce its corruptive effect in reconstructions of DHs. These techniques range from 3D filtering of DH reconstructions to a technique that filters in the Fourier domain of the reconstructed DH. We have also investigated the most commonly used industrial speckle reduction technique - wavelet filters. In Part B, we investigate the nature of opaque and non-opaque partial occlusions. We motivate this work by trying to find a subset of pixels that overcome the effects of a partial occlusion, thus revealing otherwise hidden features on an object captured using digital holography. Finally, we have used an occlusion at the twin image plane to completely remove the corrupting effect of the out-of-focus twin image on reconstructions of DHs.
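
    As an illustration of the wavelet filtering approach mentioned above, the sketch below soft-thresholds the detail coefficients of a log-transformed intensity image using PyWavelets. The wavelet, decomposition depth, and threshold rule are assumptions for illustration, not the techniques developed in the thesis.

    # Sketch: wavelet-domain speckle reduction on a reconstruction's intensity image.
    import numpy as np
    import pywt

    field = np.random.randn(128, 128) + 1j * np.random.randn(128, 128)  # toy speckled field
    intensity = np.abs(field) ** 2
    logI = np.log1p(intensity)                        # multiplicative speckle -> roughly additive
    coeffs = pywt.wavedec2(logI, 'db4', level=3)
    thr = 0.1 * np.max(np.abs(coeffs[-1][0]))         # ad hoc threshold (assumption)
    denoised = [coeffs[0]] + [tuple(pywt.threshold(c, thr, mode='soft') for c in lvl)
                              for lvl in coeffs[1:]]
    despeckled = np.expm1(pywt.waverec2(denoised, 'db4'))
    print(despeckled.shape)                           # same size as the input image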