
    A Hybrid Digital Watermarking Approach Using Wavelets and LSB

    The present paper proposes a novel approach called Wavelet-based Least Significant Bit Watermarking (WLSBWM) for high authentication, security and copyright protection. An Alphabet Pattern (AP) approach is used to generate a shuffled image in the first stage, and Pell’s Cat Map (PCM) is used to provide additional security and strong protection from attacks. PCM is applied to each 5×5 sub-image. A wavelet concept is used to reduce the dimensionality of the image until it equals the size of the watermark image: the Discrete Cosine Transform is applied in the first stage, and an N-level Discrete Wavelet Transform (DWT) is then applied to reduce the image to the size of the watermark image. The watermark image is inserted in the LHn sub-band of the wavelet image using the LSB concept. Simulation results show that the proposed technique produces better PSNR and similarity measures. The experimental results indicate that the present approach is more reliable, secure and efficient. The robustness of the proposed scheme is evaluated against various image-processing attacks.
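
    As a minimal sketch of the final embedding step only, assuming the PyWavelets package: the watermark bits are written into the least significant bits of rounded coefficients in the coarsest LH sub-band. The AP shuffling, Pell’s Cat Map and DCT stages described above are omitted, and the function name and parameters are illustrative rather than the paper's.

        import numpy as np
        import pywt

        def embed_lsb_in_lh(host, watermark_bits, wavelet="haar", levels=2):
            # Decompose the host image; coeffs[1] holds the coarsest
            # detail sub-bands (LH, HL, HH).
            coeffs = pywt.wavedec2(host.astype(float), wavelet, level=levels)
            cH, cV, cD = coeffs[1]
            flat = np.rint(cH).astype(np.int64).ravel()
            n = min(len(watermark_bits), flat.size)
            # Overwrite the LSB of each coefficient with one watermark bit.
            bits = np.asarray(watermark_bits[:n], dtype=np.int64)
            flat[:n] = (flat[:n] & ~1) | bits
            coeffs[1] = (flat.reshape(cH.shape).astype(float), cV, cD)
            # Rounding the coefficients makes reconstruction approximate,
            # the usual PSNR trade-off of LSB-in-transform schemes.
            return pywt.waverec2(coeffs, wavelet)

    Extraction in this sketch simply repeats the decomposition and reads the LSBs back from the rounded coefficients.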

    A novel perceptually adaptive image watermarking scheme by selecting adaptive threshold in DHT domain

    This paper proposed a novel image watermarking technique that applies the characteristics of the human visual system in the Hadamard transform domain. Statistical information measures were used to select proper blocks for data embedding. The watermark was embedded by modifying the Discrete Hadamard Transform (DHT) coefficients of the selected blocks. The threshold and modification value were selected adaptively for each image block, which improved robustness and transparency. The proposed algorithm was able to withstand a variety of attacks and image-processing operations such as rotation, cropping, noise addition, resizing and lossy compression. The experimental results showed good performance of the proposed scheme in comparison with some recently reported watermarking techniques. Keywords: Digital image watermarking, Hadamard transform, Entropy, Lossy compression, Adaptive threshold
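
    As a rough illustration of the block-selection idea, assuming scipy and 8×8 blocks: blocks whose entropy exceeds an image-adaptive threshold (the median block entropy here, an assumption rather than the paper's rule) each receive one bit through a perturbation of a single Hadamard coefficient. The coefficient position and the strength delta are likewise illustrative.

        import numpy as np
        from scipy.linalg import hadamard

        def dht2(block):
            # 2-D Hadamard transform; with this scaling it is its own inverse.
            H = hadamard(block.shape[0])
            return H @ block @ H / block.shape[0]

        def block_entropy(block, bins=32):
            hist, _ = np.histogram(block, bins=bins)
            p = hist[hist > 0] / hist.sum()
            return -np.sum(p * np.log2(p))

        def embed(image, bits, delta=4.0):
            out = image.astype(float).copy()
            blocks = [(r, c) for r in range(0, image.shape[0] - 7, 8)
                             for c in range(0, image.shape[1] - 7, 8)]
            ents = [block_entropy(out[r:r + 8, c:c + 8]) for r, c in blocks]
            thresh = np.median(ents)  # adaptive selection rule (assumed)
            chosen = [b for b, e in zip(blocks, ents) if e > thresh]
            for (r, c), bit in zip(chosen, bits):
                B = dht2(out[r:r + 8, c:c + 8])
                B[3, 3] += delta if bit else -delta  # assumed mid-band slot
                out[r:r + 8, c:c + 8] = dht2(B)      # involutory transform
            return out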

    Discrete Wavelet Transforms

    The discrete wavelet transform (DWT) algorithms have a firm position in signal processing across several areas of research and industry. As the DWT provides both octave-scale frequency and spatial timing of the analyzed signal, it is constantly used to solve increasingly advanced problems. The present book, Discrete Wavelet Transforms: Algorithms and Applications, reviews recent progress in discrete wavelet transform algorithms and applications. The book covers a wide range of methods (e.g. lifting, shift invariance, multi-scale analysis) for constructing DWTs. The book chapters are organized into four major parts. Part I describes progress in hardware implementations of DWT algorithms; applications include multitone modulation for ADSL and equalization techniques, a scalable architecture for FPGA implementation, a lifting-based algorithm for VLSI implementation, a comparison between DWT- and FFT-based OFDM, and a modified SPIHT codec. Part II addresses image-processing algorithms such as a multiresolution approach for edge detection, low-bit-rate image compression, a low-complexity implementation of CQF wavelets, and compression of multi-component images. Part III focuses on watermarking DWT algorithms. Finally, Part IV describes shift-invariant DWTs, the DC lossless property, DWT-based analysis and estimation of colored noise, and an application of the wavelet Galerkin method. The chapters of the present book contain both tutorial and highly advanced material. The book is therefore intended as a reference text for graduate students and researchers seeking state-of-the-art knowledge on specific applications.
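
    For readers new to the topic, a minimal example of the two-dimensional DWT the book is concerned with, here using the PyWavelets package (an illustrative choice; the book itself is not tied to any library):

        import numpy as np
        import pywt

        image = np.random.rand(256, 256)             # stand-in for a real image
        # One analysis octave: approximation plus horizontal, vertical and
        # diagonal detail sub-bands.
        cA, (cH, cV, cD) = pywt.dwt2(image, "db2")
        rec = pywt.idwt2((cA, (cH, cV, cD)), "db2")  # synthesis filter bank
        print(np.allclose(rec, image))               # True: perfect reconstruction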

    A robust region-adaptive digital image watermarking system

    Digital image watermarking techniques have drawn the attention of researchers and practitioners as a means of protecting copyright in digital images. The technique involves a subset of information-hiding technologies, which work by embedding information into a host image without perceptually altering the appearance of the host image. Despite progress in digital image watermarking technology, the main objectives of the majority of research in this area remain improvements in the imperceptibility and robustness of the watermark to attacks. Watermark attacks are often deliberately applied to a watermarked image in order to remove or destroy any watermark signals in the host data; the purpose of such attacks is to disable the copyright protection offered by watermarking technology. Our research in the area of watermark attacks found a number of different types, which can be classified into categories including removal attacks, geometry attacks, cryptographic attacks and protocol attacks. Our research also found that both pixel-domain and transform-domain watermarking techniques share similar levels of sensitivity to these attacks. The experiment conducted to analyse the effects of different attacks on watermarked data led us to conclude that each attack affects the high- and low-frequency parts of the watermarked image spectrum differently. Furthermore, the findings also showed that the effects of an attack can be alleviated by using a watermark image with a frequency spectrum similar to that of the host image. The results of this experiment led us to a hypothesis that would be proven by applying a watermark embedding technique which takes all of the above phenomena into account. We call this technique 'region-adaptive watermarking'. Region-adaptive watermarking is a novel embedding technique where the watermark data is embedded in different regions of the host image. The embedding algorithms use discrete wavelet transforms and a combination of discrete wavelet transforms and singular value decomposition, respectively. This technique is derived from the earlier hypothesis that the robustness of a watermarking process can be improved by using watermark data whose frequency spectrum is not too dissimilar to that of the host data. To facilitate this, the technique utilises dual watermarking technologies and embeds parts of the watermark images into selected regions of the host image. Our experiment shows that our technique improves the robustness of the watermark data to image-processing and geometric attacks, thus validating the earlier hypothesis. In addition to improving the robustness of the watermark to attacks, we can also show a novel use for the region-adaptive watermarking technique as a means of detecting whether certain types of attack have occurred. This is a unique feature of our watermarking algorithm, which separates it from other state-of-the-art techniques. The watermark detection process uses coefficients derived from the region-adaptive watermarking algorithm in a linear classifier. The experiment conducted to validate this feature shows that, on average, 94.5% of all watermark attacks can be correctly detected and identified.
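
    A simplified sketch of the DWT-plus-SVD variant of the embedding step for a single region, assuming numpy and PyWavelets and that both arguments are numpy arrays; the region-selection logic, the dual-watermark arrangement and the linear-classifier detector are omitted, and the strength alpha is an assumed parameter rather than a value from the thesis.

        import numpy as np
        import pywt

        def embed_region(region, mark, alpha=0.05):
            # Additively embed the watermark into the singular values of the
            # approximation sub-band of this region.
            cA, details = pywt.dwt2(region.astype(float), "haar")
            U, S, Vt = np.linalg.svd(cA, full_matrices=False)
            m = min(S.size, mark.size)
            S[:m] += alpha * mark.ravel()[:m]
            return pywt.idwt2(((U * S) @ Vt, details), "haar")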

    Signal processing algorithms for enhanced image fusion performance and assessment

    The dissertation presents several signal processing algorithms for image fusion in noisy multimodal conditions. It introduces a novel image fusion method which performs well for image sets heavily corrupted by noise. As opposed to current image fusion schemes, the method requires no a priori knowledge of the noise component. The image is decomposed with Chebyshev polynomials (CP) used as basis functions to perform fusion at the feature level. The properties of CP, namely fast convergence and smooth approximation, render it ideal for heuristic and indiscriminate denoising fusion tasks. Quantitative evaluation using objective fusion assessment methods shows favourable performance of the proposed scheme compared to previous efforts on image fusion, notably on heavily corrupted images. The approach is further improved by combining the advantages of CP with a state-of-the-art fusion technique, independent component analysis (ICA), for joint fusion processing based on region saliency. Whilst CP fusion is robust under severe noise conditions, it is prone to eliminating high-frequency information from the images involved, thereby limiting image sharpness. Fusion using ICA, on the other hand, performs well in transferring edges and other salient features of the input images into the composite output. The combination of both methods, coupled with several mathematical morphological operations in an algorithm fusion framework, is considered a viable solution. Again, according to the quantitative metrics, the results of our proposed approach are very encouraging as far as joint fusion and denoising are concerned. Another focus of this dissertation is a novel metric for image fusion evaluation that is based on texture. The conservation of background textural details is considered important in many fusion applications, as they help define the image depth and structure, which may prove crucial in many surveillance and remote sensing applications. Our work aims to evaluate the performance of image fusion algorithms based on their ability to retain textural details through the fusion process. This is done by utilising the gray-level co-occurrence matrix (GLCM) model to extract second-order statistical features for the derivation of an image textural measure, which is then used to replace the edge-based calculations in an objective fusion metric. Performance evaluation on established fusion methods verifies that the proposed metric is viable, especially in multimodal scenarios.
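
    As a hedged sketch of the kind of second-order GLCM statistics such a texture measure could draw on, using scikit-image; the exact feature combination and the way it replaces the edge terms in the fusion metric are not specified in the abstract, so the details below are illustrative only.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def texture_features(img, levels=64):
            # Quantise to `levels` gray levels, then build a normalised,
            # symmetric co-occurrence matrix for two directions.
            q = np.uint8(img.astype(float) / (img.max() + 1e-9) * (levels - 1))
            glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                                levels=levels, symmetric=True, normed=True)
            # Second-order statistics, averaged over the two directions.
            return (graycoprops(glcm, "contrast").mean(),
                    graycoprops(glcm, "homogeneity").mean())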

    AXMEDIS 2005

    The AXMEDIS conference aims to bring together a variety of participants and to promote discussions and interactions among researchers, practitioners, developers and users of tools, technology-transfer experts, and project managers. The conference focuses on the challenges of the cross-media domain (which include production, protection, management, representation, formats, aggregation, workflow, distribution, and business and transaction models), and on the integration of content management systems and distribution chains, with particular emphasis on cost reduction and effective solutions for complex cross-domain problems.

    Cyber Law and Espionage Law as Communicating Vessels

    Professor Lubin's contribution is Cyber Law and Espionage Law as Communicating Vessels, pp. 203-225. Existing legal literature would have us assume that espionage operations and “below-the-threshold” cyber operations are doctrinally distinct. Whereas one is subject to the scant, amorphous, and under-developed legal framework of espionage law, the other is subject to an emerging, ever-evolving body of legal rules, known cumulatively as cyber law. This dichotomy, however, is erroneous and misleading. In practice, espionage and cyber law function as communicating vessels, and so are better conceived as two elements of a complex system, Information Warfare (IW). This paper therefore first draws attention to the similarities between the practices – the fact that the actors, technologies, and targets are interchangeable, as are the knee-jerk legal reactions of the international community. In light of the convergence between peacetime Low-Intensity Cyber Operations (LICOs) and peacetime Espionage Operations (EOs), the two should be subjected to a single regulatory framework, one which recognizes the role intelligence plays in our public world order and which adopts a contextual and consequential method of inquiry. The paper proceeds in the following order: Part 2 provides a descriptive account of the unique symbiotic relationship between espionage and cyber law, and further explains the reasons for this dynamic. Part 3 places the discussion surrounding this relationship within the broader discourse on IW, making the claim that the convergence between EOs and LICOs, as described in Part 2, could further be explained by an even larger convergence across all the various elements of the informational environment. Parts 2 and 3 then serve as the backdrop for Part 4, which details the attempt of the drafters of the Tallinn Manual 2.0 to compartmentalize espionage law and cyber law, and the deficits of their approach. The paper concludes by proposing an alternative holistic understanding of espionage law, grounded in general principles of law, which is more practically transferable to the cyber realm.

    Multimedia

    The now-ubiquitous and effortless digital data capture and processing capabilities offered by the majority of devices have led to an unprecedented penetration of multimedia content into our everyday life. To make the most of this phenomenon, the rapidly increasing volume and usage of digitised content require constant re-evaluation and adaptation of multimedia methodologies, in order to meet the relentless change of requirements from both the user and system perspectives. Advances in Multimedia provides readers with an overview of the ever-growing field of multimedia by bringing together various research studies and surveys from different subfields that point out such important aspects. Some of the main topics that this book deals with include: multimedia management in peer-to-peer structures and wireless networks, security characteristics in multimedia, semantic-gap bridging for multimedia content, and novel multimedia applications.

    Content Control: The Motion Picture Association of America's Patrolling of Internet Piracy in America, 1996-2008

    This historical and political-economic investigation aims to illustrate the ways in which the Motion Picture Association of America radically revised its methods of patrolling and fighting film piracy from 1996 to 2008. Overall, entertainment companies discovered the World Wide Web to be a powerful distribution outlet for cultural works, but were suspicious that the Internet was a Wild West frontier requiring regulation. The entertainment industry's guiding belief in regulation and strong protection was prompted by the conviction that once the copyright industries lose control, companies quickly submerge like floundering ships. Guided by fears regarding film piracy, the MPAA instituted a sophisticated and seemingly impenetrable "trusted system" to secure its cultural products online by crafting relationships and interlinking the technological, legal, institutional, and rhetorical in order to carefully direct consumer activity according to particular agendas. The system created a scenario in which legislators and courts of law consented to play a supportive role to privately organized arrangements professing to serve the public interest, though the arrangements were not designed for those ends. Additionally, as cultural products became digitized, consumers experienced a paradigm shift that challenged the concept of property altogether. In the digital world the Internet gives a consumer access to, rather than ownership of, cultural products in cyberspace. The technology granting consumers, on impulse, access to enormous amounts of music and films has been called, among many things, the "celestial jukebox." Regardless of what the technology is called, behind the eloquent veneer lies a systematic corrosion of consumer rights that, in the end, results in an unfair exchange between content producers and consumers. What is the relationship of the MPAA to current piracy practices in America? How will Hollywood's enormous economic investment in content control affect future film distribution, exhibition, and consumer reception? Through historical analysis of the MPAA's campaign against film piracy, along with interviews with key media industry personnel and the pirate underground, this contemporary illustration depicts how the MPAA secures its content for Internet distribution, and defines and criticizes the legal and technological controls that collide with consumer freedoms.