
    Perceptually lossless coding of medical images - from abstraction to reality

    This work explores a novel vision-model-based coding approach for encoding medical images at perceptually lossless quality within the framework of the JPEG 2000 coding engine. Perceptually lossless encoding offers the best of both worlds: it delivers images free of visual distortions while providing significantly greater compression gains than its information-lossless counterparts. This is achieved through a visual pruning function, built around an advanced model of the human visual system, that accurately identifies and efficiently removes visually irrelevant or insignificant information. The coder maintains bit-stream compliance with the JPEG 2000 coding framework and is consequently compliant with the Digital Imaging and Communications in Medicine (DICOM) standard. The pruning function is equally applicable to other Discrete Wavelet Transform based image coders, e.g., Set Partitioning in Hierarchical Trees (SPIHT). Further significant coding gains are obtained through an artificial edge segmentation algorithm and a novel arithmetic pruning algorithm. The coding effectiveness and qualitative consistency of the algorithm are evaluated through a double-blind subjective assessment with 31 medical experts, performed using a novel two-stage forced-choice protocol devised for medical experts, which offers greater robustness and accuracy in measuring subjective responses. The assessment showed that no statistically significant differences were perceivable between the original images and the images encoded by the proposed coder.
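    To make the pruning idea concrete, here is a minimal sketch (Python with PyWavelets) that zeroes wavelet detail coefficients below a fixed threshold before reconstruction. The `prune_subbands` helper, the wavelet choice, and the `visibility_threshold` value are illustrative assumptions; the actual coder derives its thresholds from a human visual system model and remains compliant with the JPEG 2000 code-stream.

```python
# Hedged sketch of coefficient pruning in a DWT coder (hypothetical threshold,
# not the paper's vision-model-based pruning function).
import numpy as np
import pywt

def prune_subbands(image, wavelet="bior4.4", levels=3, visibility_threshold=2.0):
    """Zero out detail coefficients whose magnitude falls below a fixed
    threshold, then reconstruct. A perceptually lossless coder would instead
    derive per-subband thresholds from a human visual system model."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    approx, details = coeffs[0], coeffs[1:]
    pruned = [approx]
    for (cH, cV, cD) in details:
        pruned.append(tuple(np.where(np.abs(c) < visibility_threshold, 0.0, c)
                            for c in (cH, cV, cD)))
    return pywt.waverec2(pruned, wavelet)

if __name__ == "__main__":
    img = np.random.default_rng(0).normal(128, 20, (256, 256))
    out = prune_subbands(img)
    print("max abs reconstruction error:", np.max(np.abs(out[:256, :256] - img)))
```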

    Wavelets and Imaging Informatics: A Review of the Literature

    Modern medicine is a field that has been revolutionized by the emergence of computer and imaging technology. It is increasingly difficult, however, to manage the ever-growing amount of medical imaging information available in digital formats. Numerous techniques have been developed to make this imaging information more easily accessible and to perform analysis automatically. Among these techniques, wavelet transforms have proven prominently useful, not only for biomedical imaging but also for signal and image processing in general. Wavelet transforms decompose a signal into frequency bands whose widths are determined by a dyadic scheme. This particular way of dividing frequency bands matches the statistical properties of most images very well. During the past decade, there has been active research in applying wavelets to various aspects of imaging informatics, including compression, enhancement, analysis, classification, and retrieval. This review surveys the most significant practical and theoretical advances in the field of wavelet-based imaging informatics.
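    As a quick illustration of the dyadic scheme described above, the sketch below (PyWavelets; the image size, wavelet, and level count are arbitrary choices) prints the subband shapes of a three-level 2-D wavelet decomposition, showing how the band dimensions roughly halve from one scale to the next.

```python
# Illustrative sketch of the dyadic subband structure of a 2-D wavelet
# decomposition (parameters are arbitrary, for demonstration only).
import numpy as np
import pywt

image = np.zeros((512, 512))
coeffs = pywt.wavedec2(image, "db2", level=3)

# coeffs[0] is the coarsest approximation band; each following tuple holds the
# horizontal, vertical and diagonal detail bands for one scale (coarsest first).
print("approximation band:", coeffs[0].shape)
for i, (cH, cV, cD) in enumerate(coeffs[1:]):
    print(f"detail bands, scale {i} (coarsest first):", cH.shape, cV.shape, cD.shape)
```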

    Combined Industry, Space and Earth Science Data Compression Workshop

    The sixth annual Space and Earth Science Data Compression Workshop and the third annual Data Compression Industry Workshop were held as a single combined workshop. The workshop was held April 4, 1996 in Snowbird, Utah in conjunction with the 1996 IEEE Data Compression Conference, which was held at the same location March 31 - April 3, 1996. The Space and Earth Science Data Compression sessions seek to explore opportunities for data compression to enhance the collection, analysis, and retrieval of space and earth science data. Of particular interest is data compression research that is integrated into, or has the potential to be integrated into, a particular space or earth science data information system. Preference is given to data compression research that takes into account the scientist's data requirements, and the constraints imposed by the data collection, transmission, distribution and archival systems.

    Entropy in Image Analysis III

    Image analysis can be applied to rich and varied scenarios; the aim of this young research field is therefore not only to mimic the human visual system. Image analysis is among the main methods computers use today, and there is a growing body of knowledge that they will be able to manage in a fully unsupervised manner in the future, thanks to artificial intelligence. The articles published in this book clearly point toward such a future.

    Progressive transmission of medical images

    A novel adaptive source-channel coding scheme for progressive transmission of medical images with a feedback system is proposed in this dissertation. The overall design includes the Discrete Wavelet Transform (DWT), Embedded Zerotree Wavelet (EZW) coding, Joint Source-Channel Coding (JSCC), prioritization of a region of interest (RoI), variable parity length based on feedback, and a corresponding hardware design in Simulink. The JSCC achieves efficient transmission by incorporating unequal error protection (UEP) and rate allocation. An algorithm is also developed to estimate the amount of erroneous data at the receiver: it detects the address in which the number of symbols for each sub-block is indicated and, if erroneous data are detected, reassigns an estimated correct value according to a decision-making criterion. The proposed system is designed in Simulink, which can be used to generate netlists for portable devices. A newer compression method, Compressive Sensing (CS), is also revisited in this work; our experimental results show that CS exhibits many advantages over EZW. DICOM JPEG 2000 is an efficient coding standard for lossy or lossless multi-component image coding; however, it provides no mechanism for automatic RoI definition and is more complex than the proposed scheme. With the features above, the proposed system significantly reduces transmission time, lowers computational cost, and maintains an error-free state in the RoI. A MATLAB-based TCP/IP connection is established to demonstrate the efficacy of the proposed interactive and adaptive progressive transmission system. The proposed system is simulated for both the binary symmetric channel (BSC) and the Rayleigh channel. The experimental results confirm the effectiveness of the design.
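    A rough sketch of the unequal error protection idea is given below: parity bytes are allocated so that RoI segments and early passes of the embedded bitstream receive stronger protection. The segment layout, weighting factors, and `allocate_parity` helper are hypothetical and are not the dissertation's actual rate-allocation algorithm.

```python
# Hedged sketch of unequal error protection (UEP): more parity is assigned to
# region-of-interest (RoI) data and to early (coarse) passes of an embedded
# bitstream. All names and budgets here are illustrative assumptions.
def allocate_parity(segments, total_parity_bytes):
    """segments: list of dicts with 'name', 'size', 'is_roi', 'pass_index'.
    Returns a parity-byte budget per segment, weighted so that RoI data and
    earlier passes receive stronger protection."""
    weights = []
    for seg in segments:
        w = seg["size"]
        w *= 3.0 if seg["is_roi"] else 1.0     # protect the RoI more heavily
        w *= 1.0 / (1 + seg["pass_index"])     # earlier passes matter more
        weights.append(w)
    total_w = sum(weights)
    return {seg["name"]: int(total_parity_bytes * w / total_w)
            for seg, w in zip(segments, weights)}

segments = [
    {"name": "roi_pass0", "size": 4096, "is_roi": True,  "pass_index": 0},
    {"name": "roi_pass1", "size": 4096, "is_roi": True,  "pass_index": 1},
    {"name": "bg_pass0",  "size": 8192, "is_roi": False, "pass_index": 0},
    {"name": "bg_pass1",  "size": 8192, "is_roi": False, "pass_index": 1},
]
print(allocate_parity(segments, total_parity_bytes=2048))
```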

    Assessing the effects of data compression in simulations using physically motivated metrics

    Abstract not provided.

    Recent Advances in Signal Processing

    Signal processing is a critical component of most new technological developments and of a wide variety of applications across science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and they have always favored closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be grouped into five areas depending on the application at hand; these five categories cover image processing, speech processing, communication systems, time-series analysis, and educational packages, respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.

    Robust density modelling using the student's t-distribution for human action recognition

    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as the base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly affect recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM that uses mixtures of t-distributions as observation probabilities and show, through experiments on two well-known datasets (Weizmann, MuHAVi), a remarkable improvement in classification accuracy. © 2011 IEEE
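    The robustness argument can be illustrated numerically: on synthetic data contaminated with a few outliers, a heavy-tailed Student's t density penalises the outliers far less than a Gaussian fitted to the same data. The sketch below uses SciPy; the data, the choice of degrees of freedom, and the simple moment-based fits are illustrative assumptions, not the paper's HMM training procedure.

```python
# Gaussian vs Student's t log-likelihood on data with a few outliers.
# Synthetic example only; not the paper's HMM implementation.
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(1)
inliers = rng.normal(loc=0.0, scale=1.0, size=200)
outliers = rng.normal(loc=8.0, scale=0.5, size=5)   # a few corrupted features
data = np.concatenate([inliers, outliers])

mu, sigma = data.mean(), data.std()
gauss_ll = norm.logpdf(data, loc=mu, scale=sigma).sum()
t_ll = t.logpdf(data, df=3, loc=np.median(data), scale=sigma).sum()

# The heavy tails of the t-distribution penalise the outliers far less, so the
# fitted density (and any HMM built on it) is distorted less severely.
print(f"Gaussian log-likelihood:          {gauss_ll:.1f}")
print(f"Student's t (df=3) log-likelihood: {t_ll:.1f}")
```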

    Graph-based transforms for the compression of new image modalities

    Due to the wide availability of new camera types capturing extra geometric information, as well as the emergence of new image modalities such as light fields and omnidirectional images, a huge amount of high-dimensional data has to be stored and delivered. The ever-growing streaming and storage requirements of these new image modalities call for novel image coding tools that exploit the complex structure of the data. This thesis explores novel graph-based approaches for adapting traditional image transform coding techniques to emerging data types in which the sampled information lies on irregular structures. In a first contribution, novel local graph-based transforms are designed for compact light field representations. By carefully designing the local transform supports and optimizing the local basis functions, significant improvements in energy compaction are obtained. Nevertheless, the locality of the supports does not allow long-term dependencies of the signal to be exploited. This leads to a second contribution, in which different sampling strategies are investigated; coupled with novel prediction methods, they yield very strong results for quasi-lossless compression of light fields. The third part of the thesis focuses on the definition of rate-distortion optimized sub-graphs for the coding of omnidirectional content. If we go further and give more degrees of freedom to the graphs we wish to use, we can learn or define a model (a set of weights on the edges) that may not be entirely reliable for transform design. The last part of the thesis is devoted to a theoretical analysis of the effect of this uncertainty on the efficiency of graph transforms.
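    As a minimal illustration of the graph transforms the thesis builds on, the sketch below computes a graph Fourier transform by projecting a signal onto the eigenvectors of a graph Laplacian. The tiny path graph, its weights, and the test signal are arbitrary assumptions; the thesis designs and optimises the graphs and their edge weights for the actual image data.

```python
# Minimal graph Fourier transform sketch: project a signal onto the
# eigenvectors of the graph Laplacian of a small, arbitrary path graph.
import numpy as np

# Adjacency (edge weights) of a 4-node path graph: 0 - 1 - 2 - 3
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(W.sum(axis=1))          # degree matrix
L = D - W                           # combinatorial graph Laplacian

eigvals, eigvecs = np.linalg.eigh(L)  # eigenvectors act as a graph "frequency" basis

signal = np.array([1.0, 1.1, 0.9, 5.0])   # smooth except at one node
gft = eigvecs.T @ signal                  # forward graph Fourier transform
reconstructed = eigvecs @ gft             # inverse transform

print("GFT coefficients:", np.round(gft, 3))
print("reconstruction error:", np.max(np.abs(reconstructed - signal)))
```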