    Simulated Annealing for JPEG Quantization

    JPEG is one of the most widely used image formats, but in some ways it remains surprisingly unoptimized, perhaps because some natural optimizations would go outside the standard that defines JPEG. We show how to improve JPEG compression in a standard-compliant, backward-compatible manner by finding improved default quantization tables. We describe a simulated annealing technique that has allowed us to find several quantization tables that perform better than the industry standard, in terms of both compressed size and image fidelity. Specifically, we derive tables that reduce the FSIM error by over 10% while improving compression by over 20% at quality level 95 in our tests; we also provide similar results for other quality levels. While we acknowledge that our approach can, for some images, lead to visible artifacts under large magnification, we believe that use of these quantization tables, or additional tables that could be found using our methodology, would significantly reduce JPEG file sizes with improved overall image quality.
    Comment: Appendix not included in the arXiv version due to size restrictions. For the full paper, go to: http://www.eecs.harvard.edu/~michaelm/SimAnneal/PAPER/simulated-annealing-jpeg.pd
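
    As a rough illustration of the kind of search described above, the sketch below is a minimal, generic simulated-annealing loop in Python. The perturbation move, the cooling schedule, and the assumed `evaluate` cost function (imagined here as combining compressed size and FSIM error over some image set) are placeholders for illustration, not the paper's actual implementation.

```python
import math
import random

def perturb(table):
    """Nudge one entry of a flattened 8x8 quantization table (64 ints in 1..255)."""
    candidate = list(table)
    i = random.randrange(len(candidate))
    candidate[i] = min(255, max(1, candidate[i] + random.choice([-1, 1])))
    return candidate

def anneal(initial_table, evaluate, steps=10_000, t_start=1.0, t_end=1e-3):
    """Generic simulated annealing over quantization tables.

    `evaluate(table)` is a hypothetical cost function (lower is better), e.g.
    a weighted sum of compressed size and FSIM error on a training image set;
    the paper's real objective and schedule may differ.
    """
    current = list(initial_table)
    cost = best_cost = evaluate(current)
    best = list(current)
    for step in range(steps):
        # Exponential cooling from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (step / steps)
        candidate = perturb(current)
        c = evaluate(candidate)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if c < cost or random.random() < math.exp((cost - c) / t):
            current, cost = candidate, c
            if cost < best_cost:
                best, best_cost = list(current), cost
    return best
```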

    A mobile image enhancement technology for visually impaired

    In this thesis, an image enhancement application is developed for low-vision patients who use iPhones to view images and watch videos. The thesis makes two contributions. The first contribution is a new image enhancement algorithm that incorporates human vision features. The new algorithm is a modification of a wavelet transform based image enhancement algorithm developed by Dr. Jinshan Tang; unlike the original, it incorporates human visual features into the enhancement process, which makes it more effective. Experimental simulation results show that the proposed algorithm produces better visual results than the same algorithm without the visual features. The second contribution of this thesis is the development of a mobile image enhancement application; with the application I have developed installed on an iPhone, low-vision users can see clearer images.
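
    For context only, the sketch below shows a generic wavelet-domain detail boost in Python using PyWavelets. It is not the thesis's algorithm: the human-visual-feature weighting described above is not modeled, and the wavelet, decomposition level, and gain are arbitrary assumptions.

```python
import numpy as np
import pywt

def wavelet_enhance(gray, wavelet="db2", level=2, gain=1.8):
    """Amplify the detail (high-frequency) subbands of a grayscale image.

    A generic illustration of wavelet-based enhancement; the thesis further
    weights the boost using human visual features, which is omitted here.
    """
    coeffs = pywt.wavedec2(np.asarray(gray, dtype=float), wavelet, level=level)
    enhanced = [coeffs[0]]  # keep the approximation subband unchanged
    for (ch, cv, cd) in coeffs[1:]:
        enhanced.append((ch * gain, cv * gain, cd * gain))
    out = pywt.waverec2(enhanced, wavelet)
    return np.clip(out, 0, 255).astype(np.uint8)
```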

    Proper autofocus for better particle measurements


    Automated framework for robust content-based verification of print-scan degraded text documents

    Fraudulent documents frequently cause severe financial damage and security breaches in civil and government organizations. The rapid advances in technology and the widespread availability of personal computers have not reduced the use of printed documents. While digital documents can be verified by many robust and secure methods such as digital signatures and digital watermarks, verification of printed documents still relies on manual inspection of embedded physical security mechanisms. The objective of this thesis is to propose an efficient automated framework for robust content-based verification of printed documents. The principal issue is to achieve robustness with respect to the degradations and increased levels of noise that occur from multiple cycles of printing and scanning. It is shown that classic OCR systems fail under such conditions; moreover, OCR systems typically rely heavily on high-level linguistic structures to improve recognition rates, whereas inferring knowledge about the contents of the document image from a priori statistics is contrary to the nature of document verification. Instead, a system is proposed that utilizes specific knowledge of the document to perform highly accurate content verification based on a Print-Scan degradation model and character shape recognition. Such specific knowledge of the document is a reasonable choice for the verification domain, since the document contents must already be known in order to verify them. The system analyses digital multi-font PDF documents to generate a descriptive summary of the document, referred to as the "Document Description Map" (DDM). The DDM is later used for verifying the content of printed and scanned copies of the original documents. The system utilizes 2-D Discrete Cosine Transform based features and an adaptive hierarchical classifier trained with synthetic data generated by a Print-Scan degradation model. The system is tested on a variety of documents with varying degrees of Print-Scan channel corruption, produced by repeatedly printing and scanning the test documents. Results show that the approach achieves excellent accuracy and robustness despite the high level of noise.
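
    As an illustration of the kind of feature extraction mentioned above, the sketch below computes low-frequency 2-D DCT coefficients of a character image with SciPy. The resizing shortcut, patch size, and number of coefficients kept are assumptions for illustration; the thesis's actual feature layout, normalisation, and hierarchical classifier are not reproduced here.

```python
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    """2-D type-II DCT with orthonormal scaling."""
    return dct(dct(block, norm="ortho", axis=0), norm="ortho", axis=1)

def dct_features(char_img, size=32, keep=8):
    """Return the top-left `keep` x `keep` DCT coefficients of a character image.

    The image is crudely resampled to `size` x `size` by index selection
    (nearest neighbour) purely to keep the example short.
    """
    img = np.asarray(char_img, dtype=float)
    rows = np.linspace(0, img.shape[0] - 1, size).astype(int)
    cols = np.linspace(0, img.shape[1] - 1, size).astype(int)
    resized = img[np.ix_(rows, cols)]
    coeffs = dct2(resized)
    return coeffs[:keep, :keep].ravel()  # low-frequency feature vector
```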

    Study of machine learning techniques for image compression

    In the age of the Internet and cloud-based applications, image compression has become increasingly important. Moreover, image processing has recently sparked the interest of technology companies, as autonomous machines powered by artificial intelligence that use images as input are rapidly growing in number. Reducing the amount of information needed to represent an image is key to reducing the storage space, transmission bandwidth, and computation time required to process the image, which in turn saves resources, energy, and money. This study aims to investigate machine learning techniques (Fourier, wavelets, and PCA) for image compression. Several Fourier and wavelet methods are included, such as the well-known Cooley-Tukey algorithm, the discrete cosine transform, and the Mallat algorithm, among others. To comprehend each step of image compression, an object-oriented Matlab code has been developed in-house; doing so required extensive research into these machine learning techniques, not only in terms of theoretical understanding but also in the mathematics that supports them. The developed code is used to compare the performance of the different compression techniques studied. The findings of this study are consistent with the advances in image compression technologies in recent years, in which the JPEG compression method (Fourier) and later JPEG2000 (wavelets) have been dominant.
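
    As a small, generic illustration of one of the surveyed techniques, the sketch below performs PCA-style compression of a grayscale image via a truncated SVD in Python (NumPy). It is not the in-house MATLAB code described in the study, and the retained rank is an arbitrary assumption.

```python
import numpy as np

def rank_k_compress(gray, k=50):
    """Rank-k approximation of a grayscale image via truncated SVD.

    Keeping only the k largest singular values/vectors is the linear-algebra
    core of PCA-style compression; storage drops from m*n values to
    roughly k*(m + n + 1).
    """
    x = np.asarray(gray, dtype=float)
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    approx = (u[:, :k] * s[:k]) @ vt[:k, :]
    return np.clip(approx, 0, 255).astype(np.uint8)
```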

    Rotation Invariant on Harris Interest Points for Exposing Image Region Duplication Forgery

    Nowadays, image forgery has become common because only image-editing software and a digital camera are required to counterfeit an image. Various fraud detection systems have been developed in accordance with the requirements of numerous applications and to address different types of image forgery. However, image fraud detection is a complicated process, given that it is necessary to identify the image processing tools used to counterfeit an image. Here, we describe recent developments in image fraud detection. Conventional techniques for detecting duplication forgeries have difficulty in detecting post-processing falsification, such as grading and Joint Photographic Experts Group (JPEG) compression. This study proposes an algorithm that detects image falsification on the basis of Hessian features.
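
    As a rough illustration of keypoint-based duplication detection, the sketch below matches an image's feature descriptors against themselves with OpenCV and flags similar descriptors found at distant locations. ORB keypoints stand in for the Harris/Hessian interest points discussed above, the thresholds are arbitrary, and the rotation handling and verification stages of the proposed algorithm are not reproduced.

```python
import cv2
import numpy as np

def candidate_duplicated_regions(gray, min_distance=40, max_desc_dist=10):
    """Return pairs of keypoint locations that look like copy-move duplicates.

    `gray` is an 8-bit grayscale image. ORB is used only as a stand-in
    detector/descriptor for this illustration.
    """
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    if descriptors is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    # Match the descriptor set against itself; the best match is the keypoint
    # itself, so inspect the second-best match for each keypoint.
    matches = matcher.knnMatch(descriptors, descriptors, k=2)
    pairs = []
    for m in matches:
        if len(m) < 2:
            continue
        second = m[1]
        p1 = np.array(keypoints[second.queryIdx].pt)
        p2 = np.array(keypoints[second.trainIdx].pt)
        # Nearly identical descriptors at distant points suggest a cloned region.
        if second.distance < max_desc_dist and np.linalg.norm(p1 - p2) > min_distance:
            pairs.append((tuple(p1), tuple(p2)))
    return pairs
```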