
    Map online system using internet-based image catalogue

    Digital maps carry geodata, such as coordinates, that is essential to topographic and thematic maps; this information is particularly valuable in the military field. Because the maps embed this information, the image files are large, and larger files require more storage and take longer to load, which makes them poorly suited to an image-catalogue approach delivered over the Internet. With compression, the image size can be reduced while the image quality is preserved with little visible change. This report focuses on one image compression technique based on wavelet technology, which currently outperforms other image compression techniques. The compressed images are used in a system called Map Online, which follows an Internet-based image catalogue approach. The system lets users search for maps by meaningful keywords, buy maps online, and download the maps they have purchased. The system is expected to be used by Jabatan Ukur dan Pemetaan Malaysia (JUPEM) in support of the organization's vision
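
    A minimal sketch of the kind of wavelet compression the report describes, assuming a grayscale map image in a NumPy array; PyWavelets stands in for whatever codec the report actually evaluates, and the wavelet, decomposition level, and retention fraction are illustrative choices, not values from the report.

    import numpy as np
    import pywt

    def wavelet_compress(img, wavelet="bior4.4", level=3, keep=0.05):
        """Zero all but the largest `keep` fraction of wavelet coefficients."""
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        # Threshold chosen so that only the `keep` fraction of coefficients survives.
        thresh = np.quantile(np.abs(arr), 1.0 - keep)
        arr_kept = np.where(np.abs(arr) >= thresh, arr, 0.0)
        coeffs_kept = pywt.array_to_coeffs(arr_kept, slices, output_format="wavedec2")
        return pywt.waverec2(coeffs_kept, wavelet), arr_kept

    img = np.random.rand(512, 512)  # stand-in for a scanned topographic map
    recon, arr_kept = wavelet_compress(img)
    print(f"surviving coefficients: {np.count_nonzero(arr_kept) / arr_kept.size:.1%}")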

    Performance Analysis of Set Partitioning in Hierarchical Trees (SPIHT) Algorithm for a Family of Wavelets Used in Color Image Compression

    With the rapid growth in the amount of data (image, video, audio, speech, and text) available on the net, there is a huge demand for memory and bandwidth savings. This must be achieved while keeping the quality and fidelity of the data acceptable to the end user. The wavelet transform is an important and practical tool for data compression. Set partitioning in hierarchical trees (SPIHT) is a widely used compression algorithm for wavelet-transformed images. Among wavelet-transform and zero-tree quantization based image compression algorithms, SPIHT has become the benchmark state-of-the-art algorithm because it is simple to implement and yields good results. In this paper we present a comparative study of various wavelet families for image compression with the SPIHT algorithm. We have conducted experiments with the Daubechies, Coiflet, Symlet, biorthogonal, reverse biorthogonal, and discrete Meyer wavelet families. The resulting image quality is measured objectively, using the peak signal-to-noise ratio (PSNR), and subjectively, using perceived image quality (human visual perception, HVP for short). The resulting reduction in image size is quantified by the compression ratio (CR)
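
    A hedged sketch of the comparison loop the paper reports: fix a coefficient-retention budget, vary the wavelet family, and measure PSNR. Plain hard thresholding stands in for the full SPIHT coder, and the random test image is a placeholder, so the printed numbers only illustrate the experimental procedure, not the paper's results.

    import numpy as np
    import pywt

    def psnr(ref, test, peak=255.0):
        mse = np.mean((ref - test) ** 2)
        return 10 * np.log10(peak ** 2 / mse)

    def compress(img, wavelet, keep=0.02, level=2):
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        arr[np.abs(arr) < np.quantile(np.abs(arr), 1 - keep)] = 0.0
        return pywt.waverec2(pywt.array_to_coeffs(arr, slices, "wavedec2"), wavelet)

    img = np.random.randint(0, 256, (256, 256)).astype(float)  # substitute a real test image
    for w in ["db4", "coif3", "sym5", "bior4.4", "rbio4.4", "dmey"]:
        recon = compress(img, w)[: img.shape[0], : img.shape[1]]
        print(f"{w:8s} PSNR = {psnr(img, recon):.2f} dB")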

    Contextual biometric watermarking of fingerprint images

    This research presents contextual digital watermarking techniques that use face and demographic text data as multiple watermarks to protect the evidentiary integrity of fingerprint images. The proposed techniques embed the watermarks into selected regions of the fingerprint image in the MDCT and DWT domains. A general image watermarking algorithm is developed to investigate the application of the MDCT to the elimination of blocking artifacts; the MDCT improves the performance of the watermarking technique compared to the DCT. Experimental results show that the modifications to the fingerprint image are visually imperceptible and preserve the minutiae detail. The integrity of the fingerprint image is verified through the high matching score obtained from an AFIS system, and there is a high degree of correlation between the embedded and extracted watermarks. The degree of similarity is computed using pixel-based metrics and human visual system metrics. The technique is useful for personal identification and for establishing a digital chain of custody. The results also show that the proposed watermarking technique is resilient to common image modifications that occur during electronic fingerprint transmission
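
    A minimal sketch of additive DWT-domain embedding in the spirit of the technique described above, though far simpler: watermark bits modulate coefficients of one detail subband of the host fingerprint image, and extraction is non-blind (it needs the original). The Haar wavelet and the strength parameter alpha are illustrative assumptions, not values from the research.

    import numpy as np
    import pywt

    def embed(host, bits, wavelet="haar", alpha=8.0):
        cA, (cH, cV, cD) = pywt.dwt2(host.astype(float), wavelet)
        idx = np.unravel_index(np.arange(bits.size), cH.shape)
        cH[idx] += alpha * (2 * bits - 1)  # +alpha encodes a 1 bit, -alpha a 0 bit
        return pywt.idwt2((cA, (cH, cV, cD)), wavelet)

    def extract(marked, original, n_bits, wavelet="haar"):
        # Non-blind: the unmarked original is required to isolate the watermark.
        _, (cH_m, _, _) = pywt.dwt2(marked, wavelet)
        _, (cH_o, _, _) = pywt.dwt2(original.astype(float), wavelet)
        diff = (cH_m - cH_o)[np.unravel_index(np.arange(n_bits), cH_m.shape)]
        return (diff > 0).astype(int)

    host = np.random.randint(0, 256, (128, 128))   # stand-in for a fingerprint image
    bits = np.random.randint(0, 2, 64)             # stand-in for watermark payload
    marked = embed(host, bits)
    assert np.array_equal(extract(marked, host, bits.size), bits)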

    Study of machine learning techniques for image compression

    In the age of the Internet and cloud-based applications, image compression has become increasingly important. Moreover, image processing has recently attracted the interest of technology companies, as autonomous machines powered by artificial intelligence that take images as input are growing rapidly. Reducing the amount of information needed to represent an image is key to reducing the storage space, transmission bandwidth, and computation time required to process it, which in turn saves resources, energy, and money. This study investigates machine learning techniques (Fourier, wavelets, and PCA) for image compression. Several Fourier and wavelet methods are covered, such as the well-known Cooley-Tukey algorithm, the discrete cosine transform, and the Mallat algorithm, among others. To expose each step of image compression, an object-oriented Matlab code has been developed in-house; doing so required extensive research into machine learning techniques, not only their theory but also the mathematics that supports them. The developed code is used to compare the performance of the different compression techniques studied. The findings are consistent with the advances in image compression technology in recent years, in which the JPEG compression method (Fourier) and later JPEG2000 (wavelets) have dominated
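
    A small sketch of the PCA route the study examines, done via the SVD: keep the top-k singular vectors of the image matrix and compare storage against the original. The rank and the random stand-in image are illustrative, and the study's own code is object-oriented Matlab, while this sketch uses Python/NumPy.

    import numpy as np

    def pca_compress(img, k=30):
        """Rank-k approximation of the image matrix via truncated SVD."""
        U, s, Vt = np.linalg.svd(img, full_matrices=False)
        approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
        stored = k * (U.shape[0] + Vt.shape[1] + 1)  # entries of U_k, V_k and s_k
        return approx, img.size / stored             # rank-k image, compression ratio

    img = np.random.rand(256, 256)  # substitute a real grayscale image
    approx, cr = pca_compress(img)
    rmse = np.sqrt(np.mean((img - approx) ** 2))
    print(f"rank-30: CR = {cr:.1f}:1, RMSE = {rmse:.4f}")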

    Graph-Based Detection of Seams in 360-Degree Images

    In this paper, we propose an algorithm to detect a specific kind of distortion, referred to as seams, which commonly occurs when a 360-degree image is represented in the planar domain by projecting the sphere to a polyhedron, e.g., via the Cube Map (CM) projection, and then undergoes lossy compression. The proposed algorithm exploits a graph-based representation to account for the actual sampling density of the 360-degree signal in the native spherical domain. The CM image is considered as a signal lying on a graph defined on the spherical surface. The spectra of the processed and the original signals, computed by applying the Graph Fourier Transform, are compared to detect the seams. To test our method, a dataset of compressed CM 360-degree images, annotated by experts, has been created. The performance of the proposed algorithm is compared to that achieved by baseline metrics, as well as to the same approach based on spectral comparison but ignoring the spherical nature of the signal. The experimental results show that the proposed method has the best performance and can successfully detect up to approximately 90% of visible seams on our dataset
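
    A hedged sketch of the spectral-comparison step, on a toy graph: pixels are graph nodes, the eigenvectors of the combinatorial Laplacian form the Graph Fourier Transform basis, and the spectra of the original and processed signals are compared. The small uniform-weight grid here is a placeholder for the paper's sphere-aware graph, whose edge weights reflect the spherical sampling density.

    import numpy as np

    def grid_laplacian(h, w):
        """Combinatorial Laplacian L = D - A of a 4-neighbour grid graph."""
        n = h * w
        A = np.zeros((n, n))
        for r in range(h):
            for c in range(w):
                i = r * w + c
                if c + 1 < w:
                    A[i, i + 1] = A[i + 1, i] = 1.0
                if r + 1 < h:
                    A[i, i + w] = A[i + w, i] = 1.0
        return np.diag(A.sum(axis=1)) - A

    h, w = 8, 8
    _, U = np.linalg.eigh(grid_laplacian(h, w))  # columns of U = GFT basis

    orig = np.random.rand(h * w)
    seam = 0.3 * np.sign(np.arange(h * w) % w - (w - 1) / 2)  # step across the middle column
    spec_diff = np.abs(U.T @ (orig + seam) - U.T @ orig)      # spectra compared via the GFT
    print("high-frequency share of the difference:",
          spec_diff[h * w // 2:].sum() / spec_diff.sum())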

    Image statistical frameworks for digital image forensics

    The advances in digital cameras, scanners, printers, image editing tools, smartphones, and tablet personal computers, as well as high-speed networks, have made the digital image a conventional medium for visual information. Creation, duplication, distribution, or tampering of such a medium can be easily done, which calls for the ability to trace back the authenticity or history of the medium. Digital image forensics is an emerging research area that aims to resolve this problem and has grown in popularity over the past decade. On the other hand, anti-forensics has emerged over the past few years as a relatively new branch of research, aiming at revealing the weaknesses of forensic technology. These two sides of research push digital image forensic technologies to the next level. Three major contributions are presented in this dissertation. First, an effective multi-resolution image statistical framework for digital image forensics of a passive-blind nature is presented in the frequency domain. The image statistical framework is generated by applying the Markovian rake transform to the image luminance component. The Markovian rake transform is the application of a Markov process to difference arrays derived from the quantized block discrete cosine transform 2-D arrays with multiple block sizes. The efficacy and universality of the framework are then evaluated in two major applications of digital image forensics: 1) digital image tampering detection; 2) classification of computer graphics and photographic images. Second, a simple yet effective anti-forensic scheme is proposed, capable of obfuscating double JPEG compression artifacts, which may carry vital information for image forensics, for instance, digital image tampering detection. The shrink-and-zoom (SAZ) attack, the proposed scheme, is simply based on image resizing and bilinear interpolation. The effectiveness of SAZ has been evaluated on two promising double JPEG compression detection schemes, and the outcome reveals that the proposed scheme is effective, especially in cases where the first quality factor is lower than the second quality factor. Third, an advanced textural image statistical framework in the spatial domain is proposed, utilizing local binary pattern (LBP) schemes to model local image statistics on various kinds of residual images, including higher-order ones. The proposed framework can be implemented in either a single- or multi-resolution setting depending on the application of interest. The efficacy of the proposed framework is evaluated on two forensic applications: 1) steganalysis, with emphasis on HUGO (Highly Undetectable Steganography), an advanced steganographic scheme that embeds hidden data in a content-adaptive manner locally into image regions that are difficult to model statistically; 2) image recapture detection (IRD). The outcomes of the evaluations suggest that the proposed framework is effective, not only for detecting local changes, which is in line with the nature of HUGO, but also for detecting global differences (the nature of IRD)
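
    A minimal sketch of the spatial-domain pipeline from the third contribution: form a first-order residual image, then histogram basic 3x3 local binary patterns over it as features for a forensic classifier. The residual order and the plain LBP variant are simplifications of the dissertation's framework, and the random image is a placeholder.

    import numpy as np

    def lbp_histogram(img):
        """256-bin histogram of basic 3x3 LBP codes."""
        c = img[1:-1, 1:-1]  # each centre pixel is compared with its 8 neighbours
        neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                      img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                      img[2:, :-2], img[1:-1, :-2]]
        codes = np.zeros_like(c, dtype=np.int32)
        for bit, nb in enumerate(neighbours):
            codes |= (nb >= c).astype(np.int32) << bit  # one bit per neighbour comparison
        return np.bincount(codes.ravel(), minlength=256)

    img = np.random.randint(0, 256, (128, 128)).astype(np.int16)  # substitute a real image
    residual = img[:, 1:] - img[:, :-1]  # first-order horizontal residual
    features = lbp_histogram(residual)   # feature vector for steganalysis / IRD
    print(features[:8])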