
    The quest for "diagnostically lossless" medical image compression using objective image quality measures

    Given the explosive growth of digital image data being generated, medical communities worldwide have recognized the need for increasingly efficient methods of storing, displaying and transmitting medical images. For this reason, lossy image compression is inevitable. Furthermore, it is essential to be able to determine the degree to which a medical image can be compressed before its “diagnostic quality” is compromised. This work aims to achieve “diagnostically lossless compression”, i.e., compression with no loss in either visual quality or diagnostic accuracy. Recent research by Koff et al. has shown that at higher compression levels lossy JPEG is more effective than JPEG2000 in some cases of brain and abdominal CT images. We have investigated the effects of the sharp skull edges in CT neuro images on JPEG and JPEG2000 lossy compression and provide an explanation of why JPEG performs better than JPEG2000 for certain types of CT images. Another aspect of this study concerns improved methods of assessing the diagnostic quality of compressed medical images. We have compared the performance of the structural similarity (SSIM) index, mean squared error (MSE), compression ratio and JPEG quality factor, based on data collected in a subjective experiment involving radiologists. Receiver operating characteristic (ROC) and Kolmogorov-Smirnov analyses indicate that compression ratio is not always a good indicator of visual quality, and that SSIM demonstrates the best performance. We have also shown that a weighted Youden index can provide SSIM and MSE thresholds for acceptable compression. Finally, we have proposed two approaches to modifying L2-based approximations so that they conform to Weber’s model of perception. We show that imposing a condition of perceptual invariance in greyscale space according to Weber’s model leads to the unique (unnormalized) measure with density function ρ(t) = 1/t. This result implies that the logarithmic L1 distance is the most natural “Weberized” image metric. We provide numerical implementations of the intensity-weighted approximation methods for natural and medical images.
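    The contrast between MSE and the Weberized metric can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: it compares plain MSE against the logarithmic L1 distance implied by the density ρ(t) = 1/t, showing that the same additive error is penalized more heavily in dark regions, as Weber's model of perception demands. The small `eps` offset is an assumption added here to guard against log(0).

```python
import numpy as np

def mse(x, y):
    """Mean squared error between two greyscale images."""
    return float(np.mean((x - y) ** 2))

def log_l1(x, y, eps=1e-6):
    """Logarithmic L1 distance, the 'Weberized' metric implied by
    rho(t) = 1/t: the mean of |log x - log y| over all pixels."""
    return float(np.mean(np.abs(np.log(x + eps) - np.log(y + eps))))

# Toy example: the same additive intensity error on a dark image
# and on a bright image. MSE treats them identically; the
# Weberized metric penalizes the error on the dark image more.
a = np.full((8, 8), 10.0)    # dark image
b = a + 1.0                  # dark image with error of +1
c = np.full((8, 8), 100.0)   # bright image
d = c + 1.0                  # bright image with the same error

assert mse(a, b) == mse(c, d)        # MSE sees identical distortion
assert log_l1(a, b) > log_l1(c, d)   # Weber: dark-region error matters more
```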

    Scalable three-dimensional intact human organs labeling and clearing by tissue clearing technologies


    Acoustic modelling of bat pinnae utilising the TLM method

    This thesis describes the numerical modelling of bioacoustic structures, focusing on the outer ear, or pinna, of the Rufous Horseshoe bat (Rhinolophus rouxii). Several novel developments have been derived from this work, including:
    • A method of calculating directionality based on a sphere with a distribution of measuring points such that each lies in an equal-area segment.
    • Performance estimation of the pinna by considering the directionality of an equivalent radiating aperture.
    • A simple synthetic geometry that appears to give similar performance to a bat pinna.
    Applying these methods has yielded results that agree with measurements; indeed, this work is the first time the TLM (transmission line matrix) method has been applied to a structure of this kind. It paves the way towards a greater understanding of bioacoustics and, ultimately, towards generating synthetic structures that can perform as well as those found in the natural world.
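    The thesis's exact equal-area construction is not reproduced here, but the golden-angle (Fibonacci) spiral below is one standard way to place N measuring points on a sphere so that each occupies a roughly equal-area segment; it is offered only as an illustrative sketch of the idea, not as the method used in the thesis.

```python
import math

def fibonacci_sphere(n):
    """Place n points on the unit sphere so that each occupies a
    roughly equal area, using the golden-angle spiral: z-coordinates
    are spaced uniformly (equal-area latitude bands), and azimuths
    advance by the golden angle to spread points within each band."""
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))
    pts = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n          # uniform in z => equal-area bands
        r = math.sqrt(max(0.0, 1.0 - z * z))   # radius of the latitude circle
        theta = golden_angle * i
        pts.append((r * math.cos(theta), r * math.sin(theta), z))
    return pts

pts = fibonacci_sphere(500)
# every point lies on the unit sphere
assert all(abs(x*x + y*y + z*z - 1.0) < 1e-9 for x, y, z in pts)
```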

    Recent Advances in Signal Processing

    Signal processing is a critical task in the majority of new technological inventions and challenges, across a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five areas depending on the application at hand, ordered to address image processing, speech processing, communication systems, time-series analysis, and educational packages, respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained; the interested reader can therefore choose any chapter and skip to another without losing continuity.

    3D multiresolution statistical approaches for accelerated medical image and volume segmentation

    Medical volume segmentation has attracted the attention of many researchers, and many techniques have been implemented for medical imaging, including segmentation and other imaging processes. This research focuses on the implementation of a segmentation system that uses several techniques, together or on their own, to segment medical volumes; the system takes a stack of 2D slices or a full 3D volume acquired from medical scanners as input. Two main approaches have been implemented for segmenting medical volumes: multi-resolution analysis and statistical modelling. Multi-resolution analysis has been employed mainly for feature extraction; higher dimensions of discontinuity (line or curve singularities) have been extracted from medical images using modified multi-resolution transforms such as the ridgelet and curvelet transforms. The second approach implemented in this thesis is the use of statistical modelling in medical image segmentation: Hidden Markov models have been enhanced to segment medical slices automatically, accurately, reliably and without loss. The problem with Markov models, however, is their long computational time. This has been addressed with feature reduction and dimensionality reduction techniques, also implemented in this thesis, to accelerate the slowest block in the proposed system; these include Principal Component Analysis and Gaussian pyramids. The feature reduction techniques have been employed efficiently alongside 3D volume segmentation techniques such as 3D wavelets and 3D Hidden Markov models. The system has been tested and validated using several procedures: comparison with predefined results, validation by specialists, and a survey of end users covering the techniques and the results. 
    The conclusion is that Markovian model segmentation outperformed all other techniques in most patients’ cases. The curvelet transform also produced promising segmentation results; end users rated it above the Markovian models because of the long run time required by Hidden Markov models.
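    As an illustration of the feature-reduction step, the sketch below projects per-voxel feature vectors onto their top principal components via an SVD-based PCA. It is a generic sketch, not the thesis's implementation; the feature dimensionality and sample count are made-up placeholders standing in for, e.g., wavelet coefficients per voxel.

```python
import numpy as np

def pca_reduce(features, k):
    """Project feature vectors (n_samples x n_dims) onto their top-k
    principal components: centre the data, take the SVD, and keep the
    k right singular vectors with the largest singular values."""
    centred = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:k].T

rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 64))   # placeholder: 64 features per voxel
reduced = pca_reduce(feats, 8)        # keep only 8 components
assert reduced.shape == (1000, 8)
```

    Feeding the 8-dimensional vectors rather than the original 64-dimensional ones into the model-fitting stage is what shrinks the computational cost of the slowest block.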

    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application where extracting information from images is required. The analysis requires highly sophisticated numerical and analytical methods, particularly for applications in medicine, security, and other fields where the results of the processing consist of data of vital importance. This fact is evident from all the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In reading the present volume, the reader will appreciate the richness of the methods and applications, in particular for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas.

    Fast Near-Lossless or Lossless Compression of Large 3D Neuro-Anatomical Images

    3D neuro-anatomical images and other volumetric data sets are important in many scientific and biomedical fields. Since such data sets may be extremely large, a scalable compression method is critical to store, process and transmit them. To achieve a high compression rate, most existing volume compression methods are lossy, which is usually unacceptable in biomedical applications. Our near-lossless or lossless compression algorithm uses a Hilbert traversal to produce a data stream from the original image. This data stream enjoys relatively slow image context change, which helps the subsequent DPCM prediction to reduce the source entropy. An extremely fast linear DPCM is used, and the prediction error is further encoded using a Huffman code. In order to provide efficient data access, the source image is divided into blocks and indexed by an octree data structure. The Huffman coding overhead is effectively reduced using a novel binning algorithm. Our compression method is designed for performance-critical digital brain atlas applications, which often require very fast data access without prior decompression and for which a modest compression rate is acceptable.
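    The entropy-reducing effect of linear DPCM on a slowly varying stream (such as one produced by a Hilbert traversal) can be shown with a small sketch. This is not the paper's implementation: the signal below is synthetic and the predictor is the simplest possible one (the previous sample), but it demonstrates both that the residuals are losslessly invertible and that their empirical entropy drops.

```python
import numpy as np

def dpcm_residuals(stream):
    """Simplest linear DPCM: predict each sample as the previous one.
    For a slowly varying stream the residuals cluster near zero,
    lowering the source entropy seen by the entropy coder."""
    stream = np.asarray(stream, dtype=np.int64)
    res = np.empty_like(stream)
    res[0] = stream[0]                 # first sample passes through
    res[1:] = stream[1:] - stream[:-1]
    return res

def entropy(symbols):
    """Empirical Shannon entropy in bits per symbol."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# synthetic slowly varying stream, standing in for a Hilbert
# traversal of image data with slow context change
x = (100 + 20 * np.sin(np.linspace(0, 6, 4096))).astype(np.int64)
r = dpcm_residuals(x)

assert np.array_equal(np.cumsum(r), x)   # DPCM is losslessly invertible
assert entropy(r) < entropy(x)           # prediction reduces the entropy
```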

    Reconstructing neural circuits using multiresolution correlated light and electron microscopy

    Correlated light and electron microscopy (CLEM) can be used to combine functional and molecular characterizations of neurons with detailed anatomical maps of their synaptic organization. Here we describe a multiresolution approach to CLEM (mrCLEM) that efficiently targets electron microscopy (EM) imaging to optically characterized cells while maintaining optimal tissue preparation for high-throughput EM reconstruction. This approach hinges on the ease with which arrays of sections collected on a solid substrate can be repeatedly imaged at different scales using scanning electron microscopy. We match this multiresolution EM imaging with multiresolution confocal mapping of the aldehyde-fixed tissue. Features visible in lower-resolution EM correspond well to features visible in densely labeled optical maps of fixed tissue. Iterative feature matching, starting with gross anatomical correspondences and ending with subcellular structure, can then be used to target high-resolution EM image acquisition and annotation to cells of interest. To demonstrate this technique and the range of images used to link live optical imaging to EM reconstructions, we provide a walkthrough of a light-to-EM experiment in mouse retina as well as some examples from mouse brain slices.