
    Securing Audio Watermarking System using Discrete Fourier Transform for Copyright Protection

    With the recent growth of computer networks, and more specifically the World Wide Web, copyright protection of digital audio has become more and more important. Digital audio watermarking has drawn extensive attention for copyright protection of audio data. Digital audio watermarking is the process of embedding watermarks into an audio signal to show authenticity and ownership. Our technique is based on embedding a watermark into the audio signal and extracting the watermark sequence. We propose a new watermarking system using the Discrete Fourier Transform (DFT) for audio copyright protection. The watermarks are embedded into the most prominent peak of the magnitude spectrum of each non-overlapping frame. This watermarking system provides strong robustness against many kinds of attacks, such as noise addition, cropping, re-sampling, re-quantization, and MP3 compression, and achieves similarity values ranging from 13 dB to 20 dB. In addition, the proposed system achieves SNR (signal-to-noise ratio) values ranging from 20 dB to 28 dB. DOI: 10.17762/ijritcc2321-8169.15055
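    The peak-based embedding described above can be sketched as follows. This is an illustration only: the frame length, the scaling factor alpha, and the informed (original-signal-assisted) extraction are assumptions for the sketch, not the paper's exact parameters.

```python
import numpy as np

def embed_bits(signal, bits, frame_len=1024, alpha=0.2):
    """Embed one watermark bit per non-overlapping frame by scaling the
    magnitude of the most prominent spectral peak of that frame."""
    out = signal.astype(float).copy()
    for i, bit in enumerate(bits):
        frame = out[i * frame_len:(i + 1) * frame_len]
        spec = np.fft.rfft(frame)
        k = np.argmax(np.abs(spec[1:])) + 1          # most prominent peak, skipping DC
        spec[k] *= (1 + alpha) if bit else (1 - alpha)
        out[i * frame_len:(i + 1) * frame_len] = np.fft.irfft(spec, n=frame_len)
    return out

def extract_bits(marked, original, n_bits, frame_len=1024):
    """Informed extraction: locate each frame's peak in the original signal
    and compare its magnitude in the marked signal."""
    bits = []
    for i in range(n_bits):
        f0 = original[i * frame_len:(i + 1) * frame_len]
        f1 = marked[i * frame_len:(i + 1) * frame_len]
        s0, s1 = np.fft.rfft(f0), np.fft.rfft(f1)
        k = np.argmax(np.abs(s0[1:])) + 1
        bits.append(1 if np.abs(s1[k]) > np.abs(s0[k]) else 0)
    return bits
```

    A larger alpha improves robustness to attacks like MP3 compression at the cost of a lower SNR between the marked and original signal.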

    An Invisible Logo Watermarking Using Arnold Transform

    Digital watermarking is the process of hiding information in digital content. The method of embedding a smaller logo image into a host image is called logo watermarking. The system proposes an invisible and secure watermarking scheme. A key entered initially determines the embedding locations and thus classifies the host image into white and black textured regions. The logo image is then scrambled using the Arnold transform. The Discrete Wavelet Transform (DWT) is employed to embed the transformed logo into the white textured regions. Watermark extraction is done by entering the same key that was used during embedding. The system is secure, and the logo is imperceptible within the host image. Finally, for analysis, the PSNR value is used as a metric for determining the quality of the recovered image.
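    The Arnold transform used to scramble the logo is the standard cat map on a square image, (x, y) -> ((x + y) mod N, (x + 2y) mod N); applying its inverse the same number of times recovers the logo. A minimal sketch (the iteration count and image size here are arbitrary):

```python
import numpy as np

def arnold(img, iterations=1):
    """Scramble a square image with the Arnold cat map."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold transform needs a square image"
    out = img
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        nx, ny = (x + y) % n, (x + 2 * y) % n      # forward map, matrix [[1,1],[1,2]]
        scrambled = np.empty_like(out)
        scrambled[nx, ny] = out[x, y]
        out = scrambled
    return out

def arnold_inverse(img, iterations=1):
    """Undo the scrambling with the inverse map (2x - y, -x + y) mod n."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        nx, ny = (2 * x - y) % n, (-x + y) % n     # inverse matrix [[2,-1],[-1,1]]
        restored = np.empty_like(out)
        restored[nx, ny] = out[x, y]
        out = restored
    return out
```

    The map's determinant is 1, so it is a bijection on the pixel grid; the iteration count acts as part of the secret key alongside the key that selects the embedding regions.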

    Digital Forensics Based on Microscope Images for Authentication of Printed Documents

    Proving the authenticity of a document printed by a given printer is necessary to establish the validity of an information-technology product, especially for printing devices. The aim of this study is to authenticate printed documents based on the shape of the toner particles attached to the printed characters for each printer type and brand. The study uses a digital forensics approach based on digital microscope images of printed documents, analyzed with FIJI/ImageJ using a histogram approach combined with an analysis of the number of particles per character for each printer brand. The experimental results show significant differences between printer types, which can be used to determine whether a document issued by a particular institution is genuine or forged.
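    A much-simplified stand-in for the particle-count step (what ImageJ's Analyze Particles does) is to threshold the grayscale microscope image and count connected dark components. The threshold value and 4-connectivity below are assumptions for the sketch:

```python
import numpy as np

def count_particles(gray, threshold=128):
    """Count toner-like particles as 4-connected dark components
    in a grayscale image, via iterative flood fill."""
    binary = np.asarray(gray) < threshold            # dark pixels = toner
    visited = np.zeros_like(binary, dtype=bool)
    h, w = binary.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and not visited[i, j]:
                count += 1                           # new particle found
                stack = [(i, j)]
                visited[i, j] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
    return count
```

    Comparing such per-character particle counts across printer brands, together with the histogram analysis, is the basis of the classification described above.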

    A novel image authentication and rightful ownership detection framework based on DWT watermarking in a cloud environment

    Cloud computing has been highlighted by many organizations because of its benefit of being usable anywhere. Efficiency, easy access to information, quick deployment, and a large reduction in cost are some of the cloud's advantages. While cost reduction is one of the great benefits of the cloud, privacy protection of users' data is also a significant issue that cloud providers have to consider; it is a vital component of the cloud's critical infrastructure. Cloud users use this environment to carry out numerous online transactions across a wide range of sectors and to exchange information. In particular, misuse of users' data and private information is one of the important problems of using a cloud environment. An untrustworthy cloud environment is a good target for hackers to steal users' stored data through phishing and pharming techniques. Therefore, cloud vendors should provide an easy-to-use, secure, and efficient environment, and should prepare a way to access cloud services that promotes data privacy and ownership protection. The more data privacy and ownership protection a cloud environment offers, the more users will be attracted to entrust it with their important private data. In this study, a rightful ownership detection framework is proposed to mitigate the ownership protection problem in the cloud environment. The best methods for data privacy protection, such as image authentication, watermarking, and cryptographic methods, are explored for mitigating the ownership protection problem in the cloud environment. Finally, the efficiency and reliability of the proposed framework are evaluated and analyzed.

    Laser scanner jitter characterization, page content analysis for optimal rendering, and understanding image graininess

    Chapter 1 concerns the electrophotographic (EP) process, which is widely used in imaging systems such as laser printers and office copiers. In the EP process, laser scanner jitter is a common artifact that mainly appears along the scan direction, due to the condition of the polygon facets. Prior studies have not focused on the periodic characteristic of laser scanner jitter in terms of modeling and analysis. This chapter addresses the periodic characteristic of laser scanner jitter in a mathematical model. In the Fourier domain, we derive an analytic expression for laser scanner jitter in general, and extend the expression assuming a sinusoidal displacement. This leads to a simple closed-form expression in terms of Bessel functions of the first kind. We further examine the relationship between the continuous-space halftone image and the periodic laser scanner jitter. The simulation results show that our proposed mathematical model predicts the phenomenon of laser scanner jitter effectively when compared to the characterization using a test pattern consisting of a flat field with 25% dot coverage. However, there are some mismatches between the analytical spectrum and the spectrum of the processed scanned test target. We improve the experimental results by directly estimating the displacement instead of assuming a sinusoidal displacement, which gives a better prediction of the phenomenon of laser scanner jitter. In Chapter 2, we describe a segmentation-based object map correction algorithm, which can be integrated into a new imaging pipeline for laser electrophotographic (EP) printers. This new imaging pipeline incorporates the idea of object-oriented halftoning, which applies different halftone screens to different regions of the page to improve the overall print quality.
In particular, smooth areas are halftoned with a low-frequency screen to provide more stable printing, whereas detail areas are halftoned with a high-frequency screen, since this better reproduces the object detail. In this case, the object detail also serves to mask any print defects that arise from the use of a high-frequency screen. These regions are defined by the initial object map, which is translated from the page description language (PDL). However, the object-type information obtained from the PDL may be incorrect. Some smooth areas may be labeled as raster, causing them to be halftoned with a high-frequency screen, rather than being labeled as vector, which would result in them being rendered with a low-frequency screen. To correct the misclassification, we propose an object map correction algorithm that combines information from the incorrect object map with information obtained by segmentation of the continuous-tone RGB rasterized page image. Finally, the rendered image can be halftoned by the object-oriented halftoning approach, based on the corrected object map. Preliminary experimental results indicate the benefits of our algorithm, combined with the new imaging pipeline, in terms of correction of misclassification errors. In Chapter 3, we describe a study to understand image graininess. With the emergence of high-end digital printing technologies, it is of interest to analyze the nature and causes of image graininess in order to understand the factors that prevent high-end digital presses from achieving the same print quality as commercial offset presses. We want to understand how image graininess relates to the halftoning technology and the marking technology. This chapter provides three different approaches to understanding image graininess. First, we perform a Fourier-based analysis of regular and irregular periodic, clustered-dot halftone textures.
With high-end digital printing technology, irregular screens can be considered, since they can achieve a better approximation to the screen sets used for commercial offset presses. This is because the elements of the periodicity matrix of an irregular screen are rational numbers, rather than integers, as would be the case for a regular screen. From the analytical results, we show that irregular halftone textures generate new frequency components near the spectrum origin, and these frequency components are low enough to be visible to the human viewer; regular halftone textures do not have these frequency components. In addition, we provide a metric to measure the nonuniformity of a given halftone texture. The metric indicates that the nonuniformity of irregular halftone textures is higher than that of regular halftone textures. Furthermore, a method to visualize the nonuniformity of given halftone textures is described. The analysis shows that irregular halftone textures are grainier than regular halftone textures. Second, we analyze the regular and irregular periodic, clustered-dot halftone textures by calculating three spatial statistics. First, the disparity between lattice points generated by the periodicity matrix and the centroids of the dot clusters is considered. Next, the area of the dot clusters in regular and irregular halftone textures is considered. Third, the compactness of the dot clusters in the regular and irregular halftone textures is calculated. The disparity between the centroids of irregular dot clusters and the lattice points generated by the irregular screen is larger than the disparity between the centroids of regular dot clusters and the lattice points generated by the regular screen. Irregular halftone textures have higher variance in the histogram of dot-cluster area. In addition, the compactness measurement shows that irregular dot clusters are less compact than regular dot clusters.
However, a clustered-dot halftoning algorithm aims to produce dot clusters that are as compact as possible. Lastly, we examine the current marking technology by printing the same halftone pattern on different substrates: glossy and polyester media. The experimental results show that the current marking technology provides better print quality on glossy media than on polyester media. From these three approaches, we conclude that the current halftoning technology introduces image graininess in the spatial domain because of the non-integer elements in the periodicity matrix of the irregular screen and the finite addressability of the marking engine. In addition, the geometric characteristics of irregular dot clusters are more irregular than those of regular dot clusters. Finally, the marking technology yields inconsistent print quality across substrates.
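The first spatial statistic above, the disparity between screen-lattice points and dot-cluster centroids, can be sketched as follows. The lattice extent, example periodicity matrices, and centroid positions in the test are illustrative assumptions, not the dissertation's actual screens:

```python
import numpy as np

def centroid_disparity(periodicity, centroids, reps=5):
    """Mean distance from each dot-cluster centroid to the nearest point of the
    lattice generated by the halftone screen's periodicity matrix (columns are
    the lattice basis vectors). `reps` bounds the integer combinations tried."""
    i, j = np.mgrid[-reps:reps + 1, -reps:reps + 1]
    # Every lattice point i*p1 + j*p2 within the sampled range
    lattice = np.stack([i.ravel(), j.ravel()], axis=1) @ np.asarray(periodicity, float).T
    cents = np.asarray(centroids, float)
    # Distance from each centroid to every lattice point; keep the minimum
    d = np.linalg.norm(cents[:, None, :] - lattice[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

Because an irregular screen's dot centers can only land on the integer pixel grid while its periodicity matrix has rational entries, the centroids drift off the ideal lattice, so this disparity comes out larger for irregular screens than for regular ones, consistent with the analysis above.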

    Estimating toner usage with laser electrophotographic printers, and object map generation from raster input image

    Accurate estimation of toner usage is an area of ongoing importance for laser electrophotographic (EP) printers. In Part 1, we propose a new two-stage approach in which we first predict, on a pixel-by-pixel basis, the absorptance from printed and scanned pages. We then form a weighted sum of these pixel values to predict the overall toner usage on the printed page. The weights are chosen by least-squares regression against toner usage measured with a set of printed test pages. Our two-stage predictor significantly outperforms existing methods based on a simple pixel-counting strategy, in terms of both accuracy and robustness of the predictions. In Part 2, we describe a raster-input-based object map generation algorithm (OMGA) for laser electrophotographic (EP) printers. The object map is utilized in the object-oriented halftoning approach, where different halftone screens and color maps are applied to different types of objects on the page in order to improve the overall printing quality. The OMGA generates the object map directly from the raster input. It addresses cases in which the object map obtained from the page description language (PDL) is incorrect, or an initial object map is unavailable from the processing pipeline. A new imaging pipeline for the laser EP printer incorporating both the OMGA and the object-oriented halftoning approach is proposed. The OMGA is a segmentation-based classification approach: it first detects objects according to edge information, and then classifies the objects by analyzing feature values extracted from the contour and the interior of each object. The OMGA is designed to be hardware-friendly, and can be implemented within two passes through the input document.
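    The second stage, choosing weights by least-squares regression against measured toner usage, might look like the sketch below. The feature layout (one row of summed per-pixel absorptance predictions per test page) is an assumption for the sketch, not the paper's exact formulation:

```python
import numpy as np

def fit_weights(page_features, measured_toner):
    """Fit per-feature weights by least squares: one row of aggregated
    pixel-absorptance features per printed test page, one measured toner
    value per page."""
    X = np.asarray(page_features, float)
    y = np.asarray(measured_toner, float)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def predict_toner(page_features, w):
    """Predicted toner usage is the weighted sum of the page's features."""
    return np.asarray(page_features, float) @ w
```

    A plain pixel-counting predictor is the special case of a single feature (the count of non-white pixels) with one weight, which is why the weighted multi-feature regression can only do better on the training pages.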

    Verification of Authenticity of Stamps in Documents

    Classical ink stamps and seals used for authenticating document content have become relatively easy to forge by the scan-and-print technique, since the technology is available to the general public. For environments where a huge volume of documents is processed, an automatic system for verifying the authenticity of stamps is being developed within the scope of this master's thesis. The process of stamp authenticity verification must naturally be preceded by a phase of stamp detection and segmentation, a difficult task in Document Image Analysis (DIA) that so far has no convincing solution. In this master's thesis, a novel method for the detection and verification of stamps in color document images is proposed. It involves a full segmentation of the page to identify candidate solutions, extraction of features, and classification of the candidates by means of support vector machines (SVM). The evaluation has shown that the algorithm is capable of differentiating stamps from other color objects in the document, such as logos or colored text, and also genuine stamps from copied ones.

    Enriching unstructured media content about events to enable semi-automated summaries, compilations, and improved search by leveraging social networks

    (i) Mobile devices and social networks are omnipresent. Mobile devices such as smartphones, tablets, or digital cameras, together with social networks, enable people to create, share, and consume enormous amounts of media items like videos or photos, both on the road and at home. Such mobile devices, by pure definition, accompany their owners almost wherever they may go. In consequence, mobile devices are omnipresent at all sorts of events to capture noteworthy moments. Exemplary events can be keynote speeches at conferences, music concerts in stadiums, or even natural catastrophes like earthquakes that affect whole areas or countries. At such events, given a stable network connection, part of the event-related media items are published on social networks, either as the event happens or afterwards, once a stable network connection has been established again. (ii) Finding representative media items for an event is hard. Common media item search operations, for example searching for the official video clip of a certain hit record on an online video platform, can in the simplest case be achieved based on potentially shallow human-generated metadata, or based on more profound content analysis techniques like optical character recognition, automatic speech recognition, or acoustic fingerprinting. More advanced scenarios, however, such as retrieving all (or just the most representative) media items that were created at a given event with the objective of creating event summaries or media item compilations covering the event in question, are hard, if not impossible, to fulfill at large scale. The main research question of this thesis can be formulated as follows. (iii) Research question: "Can user-customizable media galleries that summarize given events be created solely based on textual and multimedia data from social networks?"
(iv) Contributions. In the context of this thesis, we have developed and evaluated a novel interactive application and related methods for media item enrichment that leverage social networks, utilize the Web of Data, apply techniques known from Content-based Image Retrieval (CBIR) and Content-based Video Retrieval (CBVR), and use fine-grained media item addressing schemes like Media Fragments URIs, to provide a scalable and near-real-time solution for the above-mentioned scenario of event summarization and media item compilation. (v) Methodology. For any event with given event title(s), (potentially vague) event location(s), and (arbitrarily fine-grained) event date(s), our approach can be divided into the following six steps. 1) Via the textual search APIs (Application Programming Interfaces) of different social networks, we retrieve a list of potentially event-relevant microposts that either contain media items directly, or that provide links to media items on external media item hosting platforms. 2) Using third-party Natural Language Processing (NLP) tools, we recognize and disambiguate named entities in microposts to predetermine their relevance. 3) We extract the binary media item data from social networks or media item hosting platforms and relate it to the originating microposts. 4) Using CBIR and CBVR techniques, we first remove exact-duplicate and near-duplicate media items and then cluster similar media items. 5) We rank the deduplicated and clustered list of media items and their related microposts according to well-defined ranking criteria. 6) In order to generate interactive and user-customizable media galleries that visually and aurally summarize the event in question, we compile the top-n ranked media items and microposts in aesthetically pleasing and functional ways.
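Step 4's near-duplicate removal could be sketched with a simple perceptual "average hash" plus greedy clustering. This is an illustration of the general CBIR idea, not the thesis's actual pipeline; the hash size and distance threshold are arbitrary assumptions:

```python
import numpy as np

def average_hash(gray, hash_size=8):
    """Perceptual 'average hash': block-mean downscale to hash_size x hash_size,
    then threshold each block at the global mean to get a bit vector."""
    img = np.asarray(gray, float)
    h, w = img.shape
    H, W = h - h % hash_size, w - w % hash_size      # crop to a multiple of hash_size
    small = img[:H, :W].reshape(hash_size, H // hash_size,
                                hash_size, W // hash_size).mean(axis=(1, 3))
    return (small > small.mean()).ravel()

def hamming(a, b):
    return int(np.count_nonzero(a != b))

def deduplicate(items, max_distance=5):
    """Greedy clustering: an item joins the first cluster whose representative
    hash is within max_distance bits; otherwise it starts a new cluster."""
    clusters = []
    for name, img in items:
        h = average_hash(img)
        for rep_hash, members in clusters:
            if hamming(rep_hash, h) <= max_distance:
                members.append(name)
                break
        else:
            clusters.append((h, [name]))
    return [members for _, members in clusters]
```

Because the hash thresholds at the image's own mean, global brightness shifts (a common difference between re-posts of the same photo) leave the bits unchanged, while unrelated images differ in roughly half of the bits.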