
    An Adaptive Coding Pass Scanning Algorithm for Optimal Rate Control in Biomedical Images

    High-efficiency, high-quality biomedical image compression is especially desirable for telemedicine applications. This paper presents an adaptive coding pass scanning (ACPS) algorithm for optimal rate control. It identifies the significant portions of an image and discards insignificant ones as early as possible, avoiding wasted computational power and memory. We replace the benchmark algorithm, post-compression rate-distortion (PCRD) optimization, with ACPS. Experimental results show that ACPS is preferable to PCRD in terms of both the rate-distortion curve and computation time.
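
    The abstract gives no implementation detail for ACPS, but the PCRD baseline it replaces is a standard Lagrangian rate-allocation scheme. The following minimal Python sketch of that baseline makes assumptions about its inputs: for each code-block, a list of cumulative (rate, distortion) pairs after each coding pass, beginning at (0, initial distortion) and already reduced to its convex hull so slopes are decreasing.

        def truncation_point(passes, lam):
            """Pick the last coding pass whose rate-distortion slope -dD/dR
            still exceeds the Lagrange multiplier lam (the price per bit)."""
            (r_prev, d_prev), chosen = passes[0], passes[0]
            for r, d in passes[1:]:
                if (d_prev - d) / (r - r_prev) >= lam:
                    chosen = (r, d)
                r_prev, d_prev = r, d
            return chosen

        def allocate(blocks, budget, iters=60):
            """Bisect lam so the summed truncated rate meets the byte budget."""
            lo, hi = 0.0, 1e12
            for _ in range(iters):
                lam = 0.5 * (lo + hi)
                total = sum(truncation_point(b, lam)[0] for b in blocks)
                if total > budget:
                    lo = lam          # over budget: raise the price per bit
                else:
                    hi = lam
            return hi

        # Two hypothetical code-blocks; entries are (cumulative rate, distortion).
        blocks = [[(0, 100.0), (10, 60.0), (25, 45.0), (60, 40.0)],
                  [(0, 80.0), (8, 30.0), (30, 22.0)]]
        lam = allocate(blocks, budget=50)
        print([truncation_point(b, lam) for b in blocks])

    ACPS, as described, instead prunes insignificant coding passes during scanning rather than after all passes have been coded, which is where the reported savings in computation and memory come from.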

    Image Quality Estimation: Software for Objective Evaluation

    Digital images are widely used in our daily lives, and image quality is important to the viewing experience. Low-quality images may be blurry or contain noise or compression artifacts. Humans can easily estimate image quality, but it is not practical to use human subjects to measure it in real applications. Image quality estimators (QEs) are algorithms that evaluate image quality automatically, computing a score for any input image to represent its quality. This thesis focuses on evaluating the performance of QEs. Two approaches are used in this work: objective software analysis and subjective database design. For the first, we create software consisting of functional modules to test QE performance. These modules can load images from subjective databases or generate distorted images from any input images. QE scores are computed and analyzed by a statistical-method module so that they can be easily interpreted and reported. Some modules of this software are combined into a published package: Stress Testing Image Quality Estimators (STIQE). In addition to the QE analysis software, a new subjective database is designed and implemented using both online and in-lab subjective tests. The database is built around the pairwise comparison method, and the subjective quality scores are computed using the Bradley-Terry model and maximum likelihood estimation (MLE). While four testing phases are designed for this database, only phase 1 is reported in this work.
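
    The abstract names the Bradley-Terry model fit by maximum likelihood as the scoring method for the pairwise-comparison database. A minimal sketch of that computation, using the classic minorization-maximization update on a hypothetical win-count matrix:

        import numpy as np

        def bradley_terry(wins, iters=200):
            """wins[i, j] = times image i was preferred over image j.
            Returns the MLE quality scores, normalized to sum to 1."""
            n = wins.shape[0]
            p = np.ones(n) / n
            total = wins + wins.T          # comparisons per pair (0 on diagonal)
            for _ in range(iters):
                denom = (total / (p[:, None] + p[None, :])).sum(axis=1)
                p = wins.sum(axis=1) / denom
                p /= p.sum()               # fix the arbitrary scale
            return p

        # Hypothetical counts: 3 images, 10 comparisons per pair.
        wins = np.array([[0., 8., 9.],
                         [2., 0., 6.],
                         [1., 4., 0.]])
        print(bradley_terry(wins))         # largest score for image 0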

    Statistical Atmospheric Parameter Retrieval Largely Benefits from Spatial-Spectral Image Compression

    The Infrared Atmospheric Sounding Interferometer (IASI) flies on board the Metop satellite series, which is part of the EUMETSAT Polar System (EPS). Products obtained from IASI data represent a significant improvement in the accuracy and quality of the measurements used for meteorological models. Notably, IASI collects rich spectral information to derive temperature and moisture profiles, among other relevant trace gases, which are essential for atmospheric forecasts and for the understanding of weather. Here, we investigate the impact of near-lossless and lossy compression on IASI L1C data when statistical retrieval algorithms are later applied. We search for the compression ratios that yield a positive impact on the accuracy of the statistical retrievals. The compression techniques help reduce a certain amount of noise in the original data and, at the same time, indirectly incorporate spatial-spectral feature relations without increasing computational complexity. We observed that compressing the images at relatively low bitrates improves the prediction of temperature and dew point temperature, and we advocate applying some amount of compression prior to model inversion. This research can benefit the development of current and upcoming retrieval chains for infrared sounding and hyperspectral sensors.
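
    The paper applies actual near-lossless and lossy codecs to IASI L1C spectra; as a rough stand-in for the denoising effect it describes, the sketch below uses truncated-PCA reconstruction as a toy lossy spectral compressor on purely synthetic data and compares a least-squares retrieval on raw versus compressed spectra. The dimensions, noise level, and the PCA stand-in are all assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        n, d, k = 2000, 200, 5        # samples, spectral channels, kept components

        t = rng.normal(size=n)                        # quantity to retrieve
        sig = rng.normal(size=(2, d))                 # fixed spectral signatures
        clean = np.outer(t, sig[0]) + 0.1 * np.outer(t**2, sig[1])
        x = clean + 0.5 * rng.normal(size=(n, d))     # noisy observed spectra

        def lossy_compress(x, k):
            """Toy lossy compressor: reconstruct from the top-k principal
            components, discarding the rest."""
            mu = x.mean(axis=0)
            _, _, vt = np.linalg.svd(x - mu, full_matrices=False)
            return (x - mu) @ vt[:k].T @ vt[:k] + mu

        def retrieval_rmse(x, t, ntr=n // 2):
            """Least-squares retrieval trained on the first half of the data,
            evaluated on the second half."""
            a = np.linalg.lstsq(x[:ntr], t[:ntr], rcond=None)[0]
            return np.sqrt(np.mean((x[ntr:] @ a - t[ntr:]) ** 2))

        print("raw spectra:       ", retrieval_rmse(x, t))
        print("compressed spectra:", retrieval_rmse(lossy_compress(x, k), t))

    With these synthetic settings the compressed spectra typically give a slightly lower retrieval error, mirroring the reported effect, though the outcome depends on the noise level and the number of retained components.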

    No-reference image and video quality assessment: a classification and review of recent approaches


    Sparse Modeling for Image and Vision Processing

    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection, that is, automatically selecting a simple model from a large collection of candidates. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. The corresponding tools have subsequently been widely adopted by several scientific communities, such as neuroscience, bioinformatics, and computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts. (205 pages; to appear in Foundations and Trends in Computer Graphics and Vision.)
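
    As a concrete instance of the sparse-coding formulation above (with a fixed rather than learned dictionary), here is a minimal sketch of orthogonal matching pursuit, one classical greedy solver; all names and sizes are illustrative.

        import numpy as np

        def omp(D, x, n_atoms):
            """Greedy sparse coding: pick the dictionary atom most correlated
            with the residual, refit coefficients by least squares, repeat."""
            residual, support = x.copy(), []
            for _ in range(n_atoms):
                support.append(int(np.argmax(np.abs(D.T @ residual))))
                coef = np.linalg.lstsq(D[:, support], x, rcond=None)[0]
                residual = x - D[:, support] @ coef
            code = np.zeros(D.shape[1])
            code[support] = coef
            return code

        rng = np.random.default_rng(0)
        D = rng.normal(size=(64, 256))
        D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
        x = D[:, [3, 100, 200]] @ np.array([1.0, -2.0, 0.5])
        print(np.nonzero(omp(D, x, 3))[0])        # typically [3, 100, 200]

    Dictionary learning, the monograph's focus, alternates a sparse-coding step like this with an update of the dictionary D itself.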

    Statistical Tools for Digital Image Forensics

    A digitally altered image, often leaving no visual clues of having been tampered with, can be indistinguishable from an authentic image. The tampering, however, may disturb some underlying statistical properties of the image. Under this assumption, we propose five techniques that quantify and detect statistical perturbations found in different forms of tampered images: (1) re-sampled images (e.g., scaled or rotated); (2) manipulated color filter array interpolated images; (3) double JPEG compressed images; (4) images with duplicated regions; and (5) images with inconsistent noise patterns. These techniques work in the absence of any embedded watermarks or signatures. For each technique we develop the theoretical foundation, show its effectiveness on credible forgeries, and analyze its sensitivity and robustness to simple counter-attacks.
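
    As a toy illustration of technique (4), duplicated-region detection: published copy-move detectors typically sort dimension-reduced block features so that near-duplicates survive noise and compression, but the simplified sketch below only catches exact copies, via block hashing. All sizes and names are illustrative.

        import numpy as np
        from collections import defaultdict

        def find_duplicates(img, b=8):
            """Return groups of top-left corners whose b-by-b blocks match."""
            seen = defaultdict(list)
            h, w = img.shape
            for y in range(h - b + 1):
                for x in range(w - b + 1):
                    seen[img[y:y+b, x:x+b].tobytes()].append((y, x))
            return [locs for locs in seen.values() if len(locs) > 1]

        rng = np.random.default_rng(1)
        img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
        img[40:56, 40:56] = img[8:24, 8:24]       # paste a copied region
        print(len(find_duplicates(img)), "duplicated block groups found")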