16 research outputs found

    Contextual biometric watermarking of fingerprint images

    This research presents contextual digital watermarking techniques that use face and demographic text data as multiple watermarks for protecting the evidentiary integrity of fingerprint images. The proposed techniques embed the watermarks into selected regions of the fingerprint image in the MDCT and DWT domains. A general image watermarking algorithm is developed to investigate the application of the MDCT to the elimination of blocking artifacts; the MDCT improves the performance of the watermarking technique compared to the DCT. Experimental results show that the modifications to the fingerprint image are visually imperceptible and preserve the minutiae detail. The integrity of the fingerprint image is verified through the high matching scores obtained from an AFIS system, and there is a high degree of correlation between the embedded and extracted watermarks; the degree of similarity is computed using pixel-based metrics and human visual system metrics. This is useful for personal identification and for establishing a digital chain of custody. The results also show that the proposed watermarking technique is resilient to common image modifications that occur during electronic fingerprint transmission.
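    The general idea of transform-domain multi-watermark embedding can be sketched with a much-simplified stand-in: a one-level 1-D Haar DWT with quantization-index-modulation (QIM) of the detail coefficients. The thesis itself works on 2-D fingerprint regions in the MDCT/DWT domains; the Haar transform, the QIM rule, and the `step` parameter below are illustrative assumptions, not the thesis's algorithm.

```python
def haar_dwt(signal):
    """One-level Haar wavelet decomposition: (approximation, detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of the one-level Haar decomposition."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

def embed_bits(signal, bits, step=4.0):
    """QIM embedding: snap each detail coefficient to a multiple of `step`
    (bit 0) or to a multiple of `step` offset by step/2 (bit 1)."""
    approx, detail = haar_dwt(signal)
    marked = list(detail)
    for i, b in enumerate(bits):
        q = round(detail[i] / step) * step
        marked[i] = q + (step / 2 if b else 0.0)
    return haar_idwt(approx, marked)

def extract_bits(signal, n, step=4.0):
    """Blind extraction: parity of the nearest multiple of step/2."""
    _, detail = haar_dwt(signal)
    return [round(d / (step / 2)) % 2 for d in detail[:n]]
```

    Because extraction needs only the marked signal and the quantization step, the scheme is blind, mirroring the chain-of-custody use case where the original fingerprint may be unavailable at verification time.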

    Beyond the pixels: learning and utilising video compression features for localisation of digital tampering.

    Video compression is pervasive in digital society. With the rising usage of deep convolutional neural networks (CNNs) in computer vision, video analysis and video tampering detection, it is important to investigate how patterns invisible to human eyes may be influencing modern computer vision techniques and how they can be used advantageously. This work thoroughly explores how video compression influences the accuracy of CNNs and shows that optimal performance is achieved when compression levels in the training set closely match those of the test set. A novel method is then developed, using CNNs, to derive compression features directly from the pixels of video frames. It is then shown that these features can be readily used to detect inauthentic video content with good accuracy across multiple different video tampering techniques. Moreover, the ability to explain these features allows predictions to be made about their effectiveness against future tampering methods. The problem is motivated with a novel investigation into recent video manipulation methods, which shows that there is a consistent drive to produce convincing, photorealistic, manipulated or synthetic video. Humans, blind to the presence of video tampering, are also blind to the type of tampering. New detection techniques are required and, in order to compensate for human limitations, they should be broadly applicable to multiple tampering types. This thesis details the steps necessary to develop and evaluate such techniques.
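    A toy illustration of why compression leaves machine-detectable traces in pixel data: once a signal has been quantized at some strength, requantizing it at that strength changes almost nothing, while uncompressed material still shifts. The thesis learns such features with CNNs; the scalar quantizer and residue feature below are hypothetical stand-ins for a real codec.

```python
def quantize(block, q):
    """Coarse scalar quantization standing in for codec compression at level q."""
    return [round(v / q) * q for v in block]

def requant_residue(block, q):
    """Energy left after requantizing at level q; near zero when the block
    already passed through compression at level q."""
    return sum(abs(v - r) for v, r in zip(block, quantize(block, q)))

raw = [3, 7, 12, 18, 25, 31, 40, 44]   # pristine samples
compressed = quantize(raw, 8)          # samples that passed a strong codec
```

    A detector (learned or hand-crafted) can use such residues as a compression-level fingerprint; a mismatch between regions of a frame then hints at splicing.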

    A Programmable Display-Layer Architecture for Virtual-Reality Applications

    Two important technical objectives of virtual-reality systems are to provide compelling visuals and effective 3D user interaction. In this respect, modern virtual-reality system architectures suffer from a number of shortcomings. The reduction of end-to-end latency, crosstalk and judder are especially difficult challenges, each of which negatively affects visual quality or user interaction. In order to provide higher quality visuals, complex scenes consisting of large models are often used. Rendering such a complex scene is a time-consuming process resulting in high end-to-end latency, thereby hampering user interaction. Classic virtual-reality architectures cannot adequately address these challenges due to their inherent design principles. In particular, the tight coupling between input devices, the rendering loop and the display system inhibits these systems from addressing all the aforementioned challenges simultaneously. In this thesis, a virtual-reality architecture design is introduced that is based on the addition of a new logical layer: the Programmable Display Layer (PDL). The governing idea is that an extra layer is inserted between the rendering system and the display. In this way, the display can be updated at a fast rate and in a custom manner independent of the other components in the architecture, including the rendering system. To generate intermediate display updates at a fast rate, the PDL performs per-pixel depth-image warping by utilizing the application data. Image warping is the process of computing a new image by transforming individual depth-pixels from a closely matching previous image to their updated locations. The PDL architecture can be used for a range of algorithms and to solve problems that are not easily solved using classic architectures. In particular, techniques to reduce crosstalk, judder and latency are examined using algorithms implemented on top of the PDL.
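    The per-pixel depth-image warping described above can be sketched in miniature. The snippet below forward-warps a single scanline under an assumed purely horizontal camera shift, so each depth-pixel moves by `baseline * focal / depth`, with a z-buffer resolving collisions; the real PDL operates on full 2-D depth images with general view transforms.

```python
def warp_row(colors, depths, baseline, focal, width):
    """Forward-warp one scanline: splat each depth-pixel to its new column,
    resolving overlaps with a z-buffer (nearer pixel wins). Holes (None)
    appear where no source pixel lands, a known artifact of image warping."""
    out = [None] * width
    zbuf = [float("inf")] * width
    for x, (c, z) in enumerate(zip(colors, depths)):
        disparity = baseline * focal / z   # nearer pixels move further
        nx = int(round(x + disparity))
        if 0 <= nx < width and z < zbuf[nx]:
            out[nx] = c
            zbuf[nx] = z
    return out
```

    Because this warp is far cheaper than re-rendering the scene, it can run at display rate between full renders, which is exactly how the PDL decouples display updates from the slow rendering loop.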
Concerning user interaction techniques, several six-degrees-of-freedom input methods exist, of which optical tracking is a popular option. However, optical tracking methods also introduce several constraints that depend on the camera setup, such as line-of-sight requirements, the volume of the interaction space and the achieved tracking accuracy. These constraints generally cause a decline in the effectiveness of user interaction. To investigate the effectiveness of optical tracking methods, an optical tracker simulation framework has been developed, including a novel optical tracker to test this framework. In this way, different optical tracking algorithms can be simulated and quantitatively evaluated under a wide range of conditions. A common approach in virtual reality is to implement an algorithm and then to evaluate its efficacy by either subjective, qualitative metrics or quantitative user experiments, after which an updated version of the algorithm may be implemented and the cycle repeated. A different approach is followed here. Throughout this thesis, an attempt is made to automatically detect and quantify errors using completely objective and automated quantitative methods and to subsequently attempt to resolve these errors dynamically.

    Watermarking via zero assigned filter banks

    A watermarking scheme for audio and image files is proposed based on wavelet decomposition via zero-assigned filter banks. Zero-assigned filter banks are perfect-reconstruction, conjugate quadrature mirror filter banks with assigned zeros in the low-pass and high-pass filters. They correspond to a generalization of the filter banks that yield Daubechies wavelets. The watermarking method consists of partitioning a given time or space signal into frames of fixed size, wavelet-decomposing each frame via one of two filter banks with different assigned zeros, compressing a suitable set of coefficients in the wavelet decomposition, and reconstructing the signal from the compressed coefficients of the frames. In effect, this method encodes the bit '0' or '1' in each frame depending on the filter bank that is used in the wavelet decomposition of that frame. The method is shown to be perceptually transparent and robust against channel noise as well as against various attacks to remove the watermark such as denoising, estimation, and compression. Moreover, the original signal is not needed for detection and the bandwidth requirement of the multiple authentication keys used in this method is very modest. (M.S. thesis by Zeynep Yücel.)
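    The one-bit-per-frame idea can be illustrated with a deliberately reduced analogue: instead of two zero-assigned filter banks, two orthonormal bases (parameterised by an angle) play the role of the banks, "compression" zeroes the detail coefficient, and blind detection tests which basis's compress-and-reconstruct cycle leaves the frame unchanged. The bases, angles and two-sample frames below are illustrative assumptions, not the thesis's construction.

```python
import math

BANKS = [math.pi / 4, math.pi / 3]  # stand-ins for the two filter banks

def analyse(frame, theta):
    """Rotate a 2-sample frame into the basis at angle theta."""
    x0, x1 = frame
    a = math.cos(theta) * x0 + math.sin(theta) * x1   # approximation
    d = -math.sin(theta) * x0 + math.cos(theta) * x1  # detail
    return a, d

def synthesise(a, d, theta):
    """Inverse rotation: perfect reconstruction of the frame."""
    return (math.cos(theta) * a - math.sin(theta) * d,
            math.sin(theta) * a + math.cos(theta) * d)

def mark_frame(frame, theta):
    """Decompose, zero the detail coefficient ('compression'), reconstruct."""
    a, _ = analyse(frame, theta)
    return synthesise(a, 0.0, theta)

def embed(frames, bits):
    return [mark_frame(f, BANKS[b]) for f, b in zip(frames, bits)]

def detect(frames):
    """Blind detection: the bank whose compress-reconstruct cycle leaves
    the frame (almost) unchanged is the one that was used."""
    bits = []
    for f in frames:
        errs = [sum((x - y) ** 2 for x, y in zip(f, mark_frame(f, t)))
                for t in BANKS]
        bits.append(0 if errs[0] < errs[1] else 1)
    return bits
```

    The key property carried over from the thesis is that detection needs neither the original signal nor side information beyond the pair of banks, which is why the authentication-key bandwidth stays modest.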

    An Analysis of VP8, a new video codec for the web

    Video is an increasingly ubiquitous part of our lives. Fast and efficient video codecs are necessary to satisfy the increasing demand for video on the web and mobile devices. However, open standards and patent grants are paramount to the adoption of video codecs across different platforms and browsers. Google On2 released VP8 in May 2010 to compete with H.264, the current standard of video codecs, complete with source code, specification and a perpetual patent grant. As the amount of video being created every day is growing rapidly, the decision of which codec to encode this video with is paramount; if a low-quality codec or a restrictively licensed codec is used, the video recorded might be of little to no use. We sought to study VP8 and its quality versus its resource consumption compared to H.264, the most popular current video codec, so that readers may make an informed decision for themselves or for their organizations about whether to use H.264 or VP8, or something else entirely. We examined VP8 in detail, compared its theoretical complexity to H.264 and measured the efficiency of its current implementation. VP8 shares many facets of its design with H.264 and other Discrete Cosine Transform (DCT) based video codecs. However, VP8 is both simpler and less feature-rich than H.264, which may allow for rapid hardware and software implementations. As it was designed for the Internet and newer mobile devices, it contains fewer legacy features, such as interlacing, than H.264 supports. To perform quality measurements, the open source VP8 implementation libvpx was used. This is the reference implementation. For H.264, the open source H.264 encoder x264 was used. This encoder has very high performance, and is often rated at the top of its field in efficiency. The JM reference encoder was used to establish a baseline quality for H.264. Our findings indicate that VP8 performs very well at low bitrates, at resolutions at and below CIF. 
VP8 may be able to successfully displace H.264 Baseline in the mobile streaming video domain. It offers higher quality at a lower bitrate for low-resolution images due to its high-performing entropy coder and non-contiguous macroblock segmentation. At higher resolutions, VP8 still outperforms H.264 Baseline, but H.264 High profile leads. At HD resolution (720p and above), H.264 is significantly better than VP8 due to its superior motion estimation and adaptive coding. There is little significant difference in intra-coding performance between H.264 and VP8. VP8's in-loop deblocking filter outperforms H.264's version. H.264's inter-coding, with full support for B-frames and weighted prediction, outperforms VP8's alternate reference scheme, although this may improve in the future. On average, VP8's feature set is less complex than H.264's equivalents, which, along with its open source implementation, may spur development in the future. These findings indicate that VP8 has strong fundamentals when compared with H.264, but that it lacks optimization and maturity. It will likely improve as engineers optimize VP8's reference implementation, or when a competing implementation is developed. We recommend several areas that the VP8 developers should focus on in the future.
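    Codec comparisons like the one above are conventionally scored with PSNR (peak signal-to-noise ratio) at matched bitrates. A minimal reference implementation, assuming 8-bit samples flattened to lists:

```python
import math

def psnr(original, decoded, peak=255):
    """Peak signal-to-noise ratio in dB between two equal-length sample lists.
    Higher is better; identical signals yield infinity."""
    mse = sum((a - b) ** 2 for a, b in zip(original, decoded)) / len(original)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(peak ** 2 / mse)
```

    Plotting PSNR against bitrate for each encoder produces the rate-distortion curves on which conclusions such as "VP8 leads below CIF, H.264 High leads at HD" rest.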

    Image synthesis based on a model of human vision

    Modern computer graphics systems are able to construct renderings of such high quality that viewers are deceived into regarding the images as coming from a photographic source. Large amounts of computing resources are expended in this rendering process, using complex mathematical models of lighting and shading. However, psychophysical experiments have revealed that viewers only regard certain informative regions within a presented image. Furthermore, it has been shown that these visually important regions contain low-level visual feature differences that attract the attention of the viewer. This thesis will present a new approach to image synthesis that exploits these experimental findings by modulating the spatial quality of image regions by their visual importance. Efficiency gains are therefore reaped, without sacrificing much of the perceived quality of the image. Two tasks must be undertaken to achieve this goal. Firstly, the design of an appropriate region-based model of visual importance, and secondly, the modification of progressive rendering techniques to effect an importance-based rendering approach. A rule-based fuzzy logic model is presented that computes, using spatial feature differences, the relative visual importance of regions in an image. This model improves upon previous work by incorporating threshold effects induced by global feature difference distributions and by using texture concentration measures. A modified approach to progressive ray-tracing is also presented. This new approach uses the visual importance model to guide the progressive refinement of an image. In addition, this concept of visual importance has been incorporated into supersampling, texture mapping and computer animation techniques. Experimental results are presented, illustrating the efficiency gains reaped from using this method of progressive rendering. 
This visual importance-based rendering approach is expected to have applications in the entertainment industry, where image fidelity may be sacrificed for efficiency purposes as long as the overall visual impression of the scene is maintained. Different aspects of the approach should find many other applications in image compression, image retrieval, progressive data transmission and active robotic vision.
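    The core mechanism of importance-based progressive rendering, spending the ray budget where the importance model says viewers will look, can be sketched as a proportional allocator. The region weights and budget below are hypothetical; the thesis derives its weights from a fuzzy-logic model of spatial feature differences.

```python
def allocate_samples(importances, budget):
    """Distribute a fixed ray/sample budget across image regions in
    proportion to their visual importance, spending rounding leftovers
    on the most important regions first."""
    total = sum(importances)
    alloc = [int(budget * w / total) for w in importances]
    leftover = budget - sum(alloc)
    order = sorted(range(len(importances)), key=lambda i: -importances[i])
    for i in order[:leftover]:
        alloc[i] += 1
    return alloc
```

    In a progressive ray tracer this allocation would be recomputed each refinement pass, so regions the model deems unimportant converge at lower sampling density without visibly degrading the overall impression.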

    Invisible watermarking of digital signals

    The aim of this thesis is to propose new techniques for robust invisible watermarking of digital signals. First, the state of the art and existing available software solutions are discussed. Then, several algorithms for invisible watermarking are designed, each based on a different principle. To enable benchmarking, a suitable dataset of digital signals is prepared, together with a testing benchmark that supports multiple known attacks. Each proposed solution is then benchmarked, as is the selected existing watermarking software, and the measured results and performance are compared and discussed.

    A new video quality metric for compressed video.

    Video compression enables multimedia applications such as mobile video messaging and streaming, video conferencing and, more recently, online social video interactions. Since most multimedia applications are meant for the human observer, measuring perceived video quality during the design and testing of these applications is important. The performance of existing perceptual video quality measurement techniques is limited due to poor correlation with subjective quality and implementation complexity. Therefore, this thesis presents new techniques for measuring the perceived quality of compressed multimedia video using computationally simple and efficient algorithms. A new full-reference perceptual video quality metric called the MOSp metric is developed for measuring the subjective quality of multimedia video sequences compressed using block-based video coding algorithms. The metric predicts the subjective quality of compressed video using the mean squared error between the original and compressed sequences, and the video content. Factors which influence the visibility of compression-induced distortion, such as spatial texture masking, temporal masking and cognition, are considered for quantifying video content. The MOSp metric is simple to implement and can be integrated into block-based video coding algorithms for real-time quality estimation. Performance results presented for a variety of multimedia content compressed to a large range of bitrates show that the metric has high correlation with subjective quality and performs better than popular video quality metrics. As an application of the MOSp metric to perceptual video coding, a new MOSp-based mode selection algorithm for an H.264/AVC video encoder is developed. 
Results show that, by integrating the MOSp metric into the mode selection process, it is possible to make coding decisions based on estimated visual quality rather than mathematical error measures and to achieve visual quality gains in content that is identified as visually important by the MOSp metric. The novel algorithms developed in this research are particularly useful for integration into block-based video encoders such as the H.264/AVC standard, for making real-time visual quality estimations and coding decisions based on estimated visual quality rather than the currently used mathematical error measures.
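    The general idea behind content-aware metrics of this kind, plain MSE attenuated where local texture masks distortion, can be sketched as follows. The `1 / (1 + variance)` weighting and the block size are illustrative assumptions, not the MOSp model itself.

```python
def masked_score(original, compressed, block=4):
    """Block-wise MSE weighted down in textured blocks, where spatial
    masking makes compression error less visible (illustrative weighting)."""
    total, n = 0.0, 0
    for i in range(0, len(original) - block + 1, block):
        o = original[i:i + block]
        c = compressed[i:i + block]
        mse = sum((a - b) ** 2 for a, b in zip(o, c)) / block
        mean = sum(o) / block
        var = sum((a - mean) ** 2 for a in o) / block  # texture measure
        total += mse / (1.0 + var)  # textured blocks mask the error
        n += 1
    return total / n  # lower means better predicted perceived quality
```

    Because the weighting uses only quantities already available in a block-based encoder (block MSE and local statistics), such a score can be computed per macroblock during mode selection, which is what makes real-time perceptual coding decisions feasible.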