287 research outputs found

    JPEG2000 Image Compression on Solar EUV Images

    For future solar missions as well as ground-based telescopes, efficient ways to return and process data have become increasingly important. Solar Orbiter, for example, the next ESA/NASA mission to explore the Sun and the heliosphere, is a deep-space mission with a limited telemetry rate, which makes efficient onboard data compression a necessity to achieve the mission science goals. Missions like the Solar Dynamics Observatory (SDO) and future ground-based telescopes such as the Daniel K. Inouye Solar Telescope, on the other hand, face the challenge of making petabyte-sized solar data archives accessible to the solar community. New image compression standards address these challenges by implementing efficient and flexible compression algorithms that can be tailored to user requirements. We analyse solar images from the Atmospheric Imaging Assembly (AIA) instrument onboard SDO to study the effect of lossy JPEG2000 (from the Joint Photographic Experts Group 2000) image compression at different bit rates. To assess the quality of compressed images, we use the mean structural similarity (MSSIM) index as well as the widely used peak signal-to-noise ratio (PSNR) as metrics and compare the two in the context of solar EUV images. In addition, we perform tests to validate the scientific use of the lossily compressed images by analysing examples of an on-disk and an off-limb coronal-loop oscillation time-series observed by AIA/SDO. Comment: 25 pages, published in Solar Physics.
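
    As a minimal illustration of the two metrics used above, the sketch below computes PSNR and the mean SSIM (MSSIM) for an original/compressed image pair with scikit-image; the synthetic test data and the choice of data range are assumptions, not taken from the paper.

    ```python
    # Sketch: compare an original and a compressed image with PSNR and MSSIM.
    # Assumes scikit-image is installed; the test images are synthetic placeholders.
    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def compare_images(original: np.ndarray, compressed: np.ndarray) -> tuple[float, float]:
        """Return (PSNR in dB, mean SSIM) for two images of the same shape."""
        data_range = float(original.max() - original.min())
        psnr = peak_signal_noise_ratio(original, compressed, data_range=data_range)
        # structural_similarity averages the local SSIM map, i.e. it returns the MSSIM index.
        mssim = structural_similarity(original, compressed, data_range=data_range)
        return psnr, mssim

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        original = rng.uniform(0.0, 1.0, size=(256, 256))
        compressed = original + rng.normal(scale=0.01, size=original.shape)  # stand-in for codec loss
        psnr, mssim = compare_images(original, compressed)
        print(f"PSNR = {psnr:.2f} dB, MSSIM = {mssim:.4f}")
    ```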

    A reduced-reference perceptual image and video quality metric based on edge preservation

    In image and video compression and transmission, it is important to rely on an objective image/video quality metric which accurately represents the subjective quality of processed images and video sequences. In some scenarios, it is also important to evaluate the quality of the received video sequence with minimal reference to the transmitted one. For instance, for quality improvement of video transmission through closed-loop optimisation, the video quality measure can be evaluated at the receiver and provided as feedback information to the system controller. The original image/video sequence, prior to compression and transmission, is not usually available at the receiver side, so it is important to rely there on an objective video quality metric that needs no reference, or only minimal reference, to the original video sequence. The observation that the human eye is very sensitive to the edge and contour information of an image underpins the proposal of our reduced-reference (RR) quality metric, which compares edge information between the distorted and the original image. Results highlight that the metric correlates well with subjective observations, also in comparison with commonly used full-reference metrics and with a state-of-the-art RR metric. © 2012 Martini et al.
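
    The paper's exact reduced-reference metric is not reproduced here; the sketch below only illustrates the underlying idea of comparing edge information between the reference and the distorted image, using a Sobel edge map and a simple correlation score as stand-ins for the authors' definitions.

    ```python
    # Sketch of an edge-preservation quality score: NOT the metric from the paper,
    # just the general reduced-reference idea of comparing edge information.
    import numpy as np
    from skimage.filters import sobel

    def edge_preservation_score(reference: np.ndarray, distorted: np.ndarray) -> float:
        """Correlation between the edge maps of reference and distorted images (1.0 = identical edges)."""
        e_ref = sobel(reference.astype(float))
        e_dis = sobel(distorted.astype(float))
        e_ref = (e_ref - e_ref.mean()) / (e_ref.std() + 1e-12)
        e_dis = (e_dis - e_dis.mean()) / (e_dis.std() + 1e-12)
        return float(np.mean(e_ref * e_dis))

    # In a reduced-reference setting, only a compact descriptor of the reference edge map
    # (e.g. a histogram or a downsampled map) would be sent alongside the video, not the full frame.
    ```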

    Algorithms for compression of high dynamic range images and video

    Recent advances in sensor and display technologies have brought about High Dynamic Range (HDR) imaging capability. Modern multiple-exposure HDR sensors can achieve a dynamic range of 100-120 dB, and LED and OLED display devices have contrast ratios of 10^5:1 to 10^6:1. Despite these advances, image/video compression algorithms and the associated hardware are still based on Standard Dynamic Range (SDR) technology, i.e. they operate within an effective dynamic range of up to 70 dB for 8-bit gamma-corrected images. Furthermore, the existing infrastructure for content distribution is also designed for SDR, which creates interoperability problems with true HDR capture and display equipment. Current solutions to this problem include tone mapping the HDR content to fit SDR; however, this approach leads to image-quality problems when strong dynamic range compression is applied. Even though some HDR-only solutions have been proposed in the literature, they are not interoperable with the current SDR infrastructure and are thus typically used in closed systems. Given these observations, a research gap was identified: the need for efficient algorithms for the compression of still images and video that are capable of storing the full dynamic range and colour gamut of HDR images while remaining backward compatible with the existing SDR infrastructure. To improve the usability of the SDR content, it is vital that any such algorithm accommodate different tone mapping operators, including those that are spatially non-uniform. In the course of the research presented in this thesis, a novel two-layer CODEC architecture is introduced for both HDR image and video coding. Further, a universal and computationally efficient approximation of the tone mapping operator is developed and presented. It is shown that the use of perceptually uniform colourspaces for the internal representation of pixel data enables improved compression efficiency. The proposed novel approaches to the compression of metadata for the tone mapping operator are shown to improve compression performance for low-bitrate video content. Multiple compression algorithms are designed, implemented and compared, and quality-complexity trade-offs are identified. Finally, practical aspects of implementing the developed algorithms are explored by automating the design-space exploration flow and integrating the high-level systems design framework with domain-specific tools for synthesis and simulation of multiprocessor systems. Directions for further work are also presented.
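
    A minimal sketch of the two-layer idea described above: a tone-mapped SDR base layer that legacy decoders can use, plus an enhancement layer from which HDR-capable decoders rebuild the full dynamic range. The global log tone map and the ratio-based enhancement layer are illustrative assumptions; the thesis's actual codecs, TMO approximation and metadata coding are far more elaborate.

    ```python
    # Sketch of a two-layer, backward-compatible HDR representation (illustrative only).
    import numpy as np

    def tone_map(hdr: np.ndarray, log_max: float) -> np.ndarray:
        """Simple global log tone map producing an 8-bit SDR base layer (an assumed TMO)."""
        return np.round(np.log1p(hdr) / log_max * 255.0).astype(np.uint8)

    def inverse_tone_map(base: np.ndarray, log_max: float) -> np.ndarray:
        return np.maximum(np.expm1(base.astype(float) / 255.0 * log_max), 1e-6)

    def split_layers(hdr: np.ndarray) -> tuple[np.ndarray, np.ndarray, float]:
        """Return (SDR base layer, ratio enhancement layer, TMO metadata)."""
        log_max = float(np.log1p(hdr).max())
        base = tone_map(hdr, log_max)                   # backward-compatible SDR stream
        ratio = hdr / inverse_tone_map(base, log_max)   # enhancement layer, coded separately
        return base, ratio, log_max

    def reconstruct(base: np.ndarray, ratio: np.ndarray, log_max: float) -> np.ndarray:
        return inverse_tone_map(base, log_max) * ratio  # HDR-capable decoders use both layers

    if __name__ == "__main__":
        hdr = np.random.default_rng(1).uniform(0.0, 1e4, size=(64, 64))  # linear luminance
        base, ratio, log_max = split_layers(hdr)
        print("max reconstruction error:", np.abs(reconstruct(base, ratio, log_max) - hdr).max())
    ```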

    Influence of study design on digital pathology image quality evaluation : the need to define a clinical task

    Despite the current rapid advance in technologies for whole slide imaging, there is still no scientific consensus on the recommended methodology for image quality assessment of digital pathology slides. For medical images in general, it has been recommended to assess image quality in terms of doctors' success rates in performing a specific clinical task while using the images (clinical image quality, cIQ). However, digital pathology is a new modality, and even identifying the appropriate task is difficult. In an alternative common approach, humans are asked to do a simpler task such as rating overall image quality (perceived image quality, pIQ), but that involves the risk of non-clinically relevant findings due to an unknown relationship between pIQ and cIQ. In this study, we explored three different experimental protocols: (1) conducting a clinical task (detecting inclusion bodies), (2) rating image similarity and preference, and (3) rating the overall image quality. Additionally, within protocol 1, overall quality ratings were also collected (task-aware pIQ). The experiments were performed by diagnostic veterinary pathologists in the context of evaluating the quality of hematoxylin and eosin-stained digital pathology slides of animal tissue samples under several common image alterations: additive noise, blurring, change in gamma, change in color saturation, and JPEG compression. While the size of our experiments was small, which prevents drawing strong conclusions, the results suggest the need to define a clinical task. Importantly, the pIQ data collected under protocols 2 and 3 did not always rank the image alterations in the same order as their cIQ from protocol 1, warning against using conventional pIQ to predict cIQ. At the same time, there was a correlation between the cIQ and the task-aware pIQ ratings from protocol 1, suggesting that the clinical experiment context (set by specifying the clinical task) may affect human visual attention and bring focus to their criteria of image quality. Further research is needed to assess whether and for which purposes (e.g., preclinical testing) task-aware pIQ ratings could substitute for cIQ for a given clinical task.
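
    For reference, the sketch below applies the five image alterations listed above with Pillow and NumPy; the distortion strengths are illustrative placeholders, not the parameters used in the study.

    ```python
    # Sketch of the five image alterations (noise, blur, gamma, saturation, JPEG compression)
    # with illustrative, not the study's, parameters. Expects an RGB Pillow image.
    import io
    import numpy as np
    from PIL import Image, ImageEnhance, ImageFilter

    def alter(img: Image.Image, kind: str) -> Image.Image:
        if kind == "noise":
            arr = np.asarray(img, dtype=float)
            arr += np.random.default_rng(0).normal(scale=10.0, size=arr.shape)
            return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
        if kind == "blur":
            return img.filter(ImageFilter.GaussianBlur(radius=2))
        if kind == "gamma":
            arr = np.asarray(img, dtype=float) / 255.0
            return Image.fromarray((np.power(arr, 1.4) * 255).astype(np.uint8))
        if kind == "saturation":
            return ImageEnhance.Color(img).enhance(0.5)   # 0.5 = half the original saturation
        if kind == "jpeg":
            buf = io.BytesIO()
            img.save(buf, format="JPEG", quality=30)
            buf.seek(0)
            return Image.open(buf).convert("RGB")
        raise ValueError(f"unknown alteration: {kind}")
    ```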

    Compresión Digital en Imágenes Médicas (Digital Compression in Medical Images)

    Imaging technology has long played a principal role in the medical domain, and as such, its use is widespread in the diagnosis and treatment of numerous health conditions. Concurrently, new developments in imaging techniques and sensor technology make it possible to acquire increasingly detailed images of several organs of the human body. This improvement is indeed advantageous for medical practitioners; however, it comes at a cost in the form of the storage and telecommunication infrastructure needed to handle high-resolution images reliably. Ordinarily, digital compression is a mainstay of the efficient management of digital media, including still images and video. From a technical point of view, medical imaging could take full advantage of digital compression technology; however, nuances unique to medical data impose constraints on the application of digital compression to medical images. This paper presents an overview of digital compression in the context of still medical images, along with a brief discussion of the related regulatory and legal implications.

    A New Watermarking Algorithm Based on Human Visual System for Content Integrity Verification of Region of Interest

    This paper proposes a semi-fragile watermarking method, robust to JPEG2000 compression, which is based on the Human Visual System (HVS). The method is designed to verify the content integrity of the Region of Interest (ROI) in tele-radiology images. Designing watermarking systems based on the HVS makes it possible to embed watermarks in places that are not obvious to the human eye; in this way, besides increased capacity and robustness, it becomes possible to hide more watermark data. Based on a perceptual model of the HVS, we propose a new watermarking scheme that embeds the watermarks using a replacement method; thus, the proposed method not only detects the watermarks but also extracts them. The novelty of our ROI-based method lies in the way we interpret the coefficients obtained from the HVS perceptual model: instead of interpreting these coefficients as weights, we treat them as embedding locations. In our method, the information to be embedded is extracted from inter-subband statistical relations of the ROI. The semi-fragile watermarks are then embedded at the obtained locations in level 3 of the DWT decomposition of the Region of Background (ROB). The agreement between the embedded signatures and the extracted watermarks is used to verify the content of the ROI. Our simulations confirm improved fidelity and robustness.
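
    The sketch below only illustrates the mechanics of embedding bits into the level-3 DWT decomposition of a background region with PyWavelets; the paper's HVS-based selection of embedding locations and its replacement-based embedding rule are replaced by a simple largest-coefficient heuristic and additive embedding, which are assumptions.

    ```python
    # Sketch: embed watermark bits in level-3 DWT coefficients of a background region.
    # Location selection and embedding rule are simplified stand-ins, not the paper's method.
    import numpy as np
    import pywt

    def embed_bits(region: np.ndarray, bits: np.ndarray, strength: float = 8.0) -> np.ndarray:
        """Embed a 0/1 bit array into the level-3 approximation subband of `region`."""
        coeffs = pywt.wavedec2(region.astype(float), "haar", level=3)
        approx = coeffs[0]                                      # level-3 approximation subband
        flat = approx.ravel()
        locs = np.argsort(np.abs(flat))[::-1][: bits.size]      # placeholder "perceptual" locations
        flat[locs] += strength * (2 * bits.astype(float) - 1)   # +strength for 1, -strength for 0
        coeffs[0] = flat.reshape(approx.shape)
        return pywt.waverec2(coeffs, "haar")

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        rob = rng.uniform(0, 255, size=(128, 128))   # stand-in for the Region of Background
        bits = rng.integers(0, 2, size=64)
        watermarked = embed_bits(rob, bits)
        print("max pixel change:", np.abs(watermarked - rob).max())
    ```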

    Multiple image watermarking using the SILE approach

    Digital copyright protection has attracted a broad spectrum of studies, and one of the most promising techniques is digital watermarking. Many digital watermarking algorithms have been proposed in the recent literature, and one of the most frequently addressed issues is robustness against attacks. With this issue in mind, we propose a new robust image watermarking scheme. The proposed scheme achieves robustness by watermarking several images simultaneously: it first splits the watermark (a binary logo) into multiple pieces and then embeds each piece in a separate image, hence the technique is termed 'Multiple Image Watermarking'. The binary logo is generated by extracting unique features from all the images to be watermarked. This watermark is first permuted and then embedded using the SILE algorithm [7]; the permutation is an important step for uniformly distributing the unique characteristics acquired from the multiple images. The proposed watermarking scheme is robust against a variety of attacks, including gamma correction, JPEG, JPEG2000, blur, median filtering, histogram equalisation, contrast change, salt-and-pepper noise, resizing, cropping, rotation by 90 and 180 degrees, projective transformation, row/column blanking, row/column copying, and counterfeit attacks.
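
    The SILE embedding itself is cited as [7] and not reproduced here; the sketch below only illustrates the permute-and-split step with a keyed permutation so that the logo pieces can later be reassembled. The function names and the key handling are assumptions.

    ```python
    # Sketch of the permute-and-split step only; the SILE embedding (ref. [7]) is not
    # reproduced and is only mentioned as a placeholder in the final comment.
    import numpy as np

    def permute_and_split(logo_bits: np.ndarray, n_images: int, key: int = 1234) -> list[np.ndarray]:
        """Permute a flat binary logo with a keyed permutation, then split it into n_images pieces."""
        order = np.random.default_rng(key).permutation(logo_bits.size)
        permuted = logo_bits.ravel()[order]
        return np.array_split(permuted, n_images)

    def reassemble(pieces: list[np.ndarray], key: int = 1234) -> np.ndarray:
        permuted = np.concatenate(pieces)
        order = np.random.default_rng(key).permutation(permuted.size)
        logo = np.empty_like(permuted)
        logo[order] = permuted               # invert the keyed permutation
        return logo

    if __name__ == "__main__":
        logo = np.random.default_rng(0).integers(0, 2, size=64 * 64)
        pieces = permute_and_split(logo, n_images=4)
        assert np.array_equal(reassemble(pieces), logo)
        # each piece would then be embedded in its own host image, e.g. sile_embed(host_i, pieces[i])
    ```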

    A human visual system based image coder

    Over the years, society has changed considerably due to technological advances, and digital images have become part and parcel of everyday life. Across applications (e.g., digital cameras) and services (information sharing, e.g., YouTube; archiving and storage), there is a need for high image quality at high compression ratios, and considerable effort has therefore been invested in image compression. Traditional image compression systems exploit the statistical redundancies inherent in the image data. However, the development and adaptation of vision models, which take into account the properties of the human visual system (HVS), into picture coders has shown promising results. The objective of this thesis is to implement a vision model in two different ways in the JPEG2000 coding system: (a) a Perceptual Colour Distortion Measure (PCDM) for colour images in the encoding stage, and (b) a Perceptual Post Filtering (PPF) algorithm for colour images in the decoding stage. Both implementations are embedded into the JPEG2000 coder. The vision model exploits the contrast sensitivity, inter-orientation masking and intra-band masking properties of the HVS. Extensive calibration work has been undertaken to fine-tune the 42 model parameters of the PCDM and the Just-Noticeable-Difference thresholds of the PPF for colour images. Evaluation with subjective assessments of the PCDM-based coder has shown perceived quality improvement over the JPEG2000 benchmark with the MSE (mean square error) and CVIS criteria. For the PPF-adapted JPEG2000 decoder, performance evaluation has also shown promising results against the JPEG2000 benchmarks. Based on subjective evaluation, when both the PCDM and the PPF are used in the JPEG2000 coding system, the overall perceived image quality is superior to that of the stand-alone JPEG2000 coder with the PCDM.
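
    Neither the PCDM nor the PPF is reproduced here; the sketch below only illustrates the general idea of weighting wavelet-subband distortion by contrast-sensitivity-style factors, with made-up per-level weights standing in for the thesis's calibrated parameters.

    ```python
    # Sketch of perceptually weighted wavelet-subband error (illustrative assumptions only;
    # NOT the thesis's PCDM/PPF models or their calibrated parameters).
    import numpy as np
    import pywt

    # Assumed CSF-style weights: coarser levels (lower spatial frequency) weighted more.
    LEVEL_WEIGHTS = {1: 0.4, 2: 0.7, 3: 1.0}

    def perceptual_subband_error(original: np.ndarray, coded: np.ndarray, level: int = 3) -> float:
        """Weighted mean-squared error accumulated over wavelet subbands."""
        c_org = pywt.wavedec2(original.astype(float), "db2", level=level)
        c_cod = pywt.wavedec2(coded.astype(float), "db2", level=level)
        error = np.mean((c_org[0] - c_cod[0]) ** 2)             # approximation subband
        for i, (d_org, d_cod) in enumerate(zip(c_org[1:], c_cod[1:])):
            lvl = level - i                                     # wavedec2 lists coarsest details first
            w = LEVEL_WEIGHTS.get(lvl, 1.0)
            for s_org, s_cod in zip(d_org, d_cod):              # H, V, D orientation subbands
                error += w * np.mean((s_org - s_cod) ** 2)
        return float(error)
    ```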