128 research outputs found

    Spread spectrum-based video watermarking algorithms for copyright protection

    Digital technologies have expanded at an unprecedented pace in recent years. Consumers can now benefit from hardware and software that was considered state of the art only a few years ago. The advantages offered by digital technology are major, but the same technology opens the door to unlimited piracy. Copying an analogue VCR tape was certainly possible and relatively easy, in spite of various forms of protection, but because of the analogue medium each successive copy suffered an inherent loss in quality. This was a natural limit on the multiple copying of video material. With digital technology this barrier disappears: as many copies as desired can be made, without any loss in quality whatsoever. Digital watermarking is one of the best available tools for fighting this threat. The aim of the present work was to develop a digital watermarking system for video broadcast monitoring, compliant with the recommendations drawn up by the EBU. Since the watermark can be inserted in either the spatial domain or a transform domain, this aspect was investigated and led to the conclusion that the wavelet transform is one of the best solutions available. Because watermarking is not an easy task, especially considering robustness under various attacks, several techniques were employed to increase the capacity and robustness of the system: spread-spectrum and modulation techniques to cast the watermark, powerful error correction to protect the mark, and human visual models to insert a robust yet invisible mark. The combination of these methods led to a major improvement, but the system was still not robust to several important geometrical attacks. To reach this last milestone, the system uses two distinct watermarks: a spatial-domain reference watermark and the main watermark embedded in the wavelet domain.
By using this reference watermark and techniques specific to image registration, the system is able to determine the parameters of the attack and revert it. Once the attack is reverted, the main watermark is recovered. The final result is a high-capacity, blind, DWT-based video watermarking system, robust to a wide range of attacks.
BBC Research & Development
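The core spread-spectrum idea described above — casting a bit over many wavelet coefficients with a keyed pseudo-noise sequence and recovering it blindly by correlation — can be sketched in a few lines. This is a minimal numpy illustration, not the thesis implementation: the Haar transform, the embedding strength `alpha`, and the seed-based key are all simplifying assumptions.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a = (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] - img[1::2, 0::2] + img[0::2, 1::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] + img[1::2, 0::2] - img[0::2, 1::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[1::2, 0::2] - img[0::2, 1::2] + img[1::2, 1::2]) / 4
    return a, h, v, d

def embed(coeffs, bit, key, alpha=5.0):
    """Spread one watermark bit over all coefficients with a keyed PN sequence."""
    rng = np.random.default_rng(key)
    pn = rng.choice([-1.0, 1.0], size=coeffs.shape)
    return coeffs + alpha * (1 if bit else -1) * pn

def detect(coeffs, key):
    """Blind detection: sign of the correlation with the same PN sequence."""
    rng = np.random.default_rng(key)
    pn = rng.choice([-1.0, 1.0], size=coeffs.shape)
    return int(np.sum(coeffs * pn) > 0)

rng = np.random.default_rng(0)
frame = rng.uniform(0, 255, (64, 64))   # stand-in for a video frame
_, h, _, _ = haar_dwt2(frame)
marked = embed(h, bit=1, key=42)
print(detect(marked, key=42))           # recovers the embedded bit
```

Detection needs only the key, not the original frame, which is what makes the scheme blind; the host coefficients act as noise that the PN correlation averages out.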

    Intelligent numerical software for MIMD computers

    For most scientific and engineering problems simulated on computers, solving problems of computational mathematics with approximately given initial data constitutes an intermediate or a final stage. The basic problems of computational mathematics include the analysis and solution of linear algebraic systems, the evaluation of eigenvalues and eigenvectors of matrices, the solution of systems of non-linear equations, and the numerical integration of initial-value problems for systems of ordinary differential equations.
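Two of the basic problems listed above — solving a linear algebraic system and evaluating eigenvalues — have direct library counterparts; the sketch below uses numpy with an illustrative matrix (the values are assumptions, not data from the work), and reports the condition number as the standard sensitivity measure for approximately given input data.

```python
import numpy as np

# A small well-conditioned symmetric system A x = b (illustrative values).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = np.linalg.solve(A, b)        # direct solution of the linear system
cond = np.linalg.cond(A)         # sensitivity to perturbations in the data
eigvals = np.linalg.eigvalsh(A)  # eigenvalues of the symmetric matrix

print(x, cond, eigvals)
```

The condition number bounds how much a relative error in the approximately given data can be amplified in the computed solution, which is exactly why it matters for the problem class described here.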

    Rotation Invariant on Harris Interest Points for Exposing Image Region Duplication Forgery

    Nowadays, image forgery has become common because only image-editing software and a digital camera are required to counterfeit an image. Various fraud-detection systems have been developed in accordance with the requirements of numerous applications and to address different types of image forgery. However, image fraud detection is a complicated process, given that it is necessary to identify the image-processing tools used to counterfeit an image. Here, we describe recent developments in image fraud detection. Conventional techniques for detecting duplication forgeries have difficulty detecting post-processing falsification, such as grading and JPEG compression. This study proposes an algorithm that detects image falsification on the basis of Hessian features.
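The conventional duplication-forgery baseline that the abstract contrasts against — exhaustive block matching over the image — can be sketched as follows. This is a generic textbook sketch in numpy, not the proposed Hessian-feature algorithm; the block size and the coarse quantization step are assumptions.

```python
import numpy as np
from collections import defaultdict

def detect_duplicates(img, block=8):
    """Block-matching sketch for copy-move forgery: hash each block's
    quantized content and report collisions at non-overlapping positions."""
    h, w = img.shape
    seen = defaultdict(list)
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            patch = img[y:y + block, x:x + block]
            key = (patch // 16).tobytes()      # coarse quantization
            seen[key].append((y, x))
    pairs = []
    for positions in seen.values():
        for i in range(len(positions)):
            for j in range(i + 1, len(positions)):
                (y1, x1), (y2, x2) = positions[i], positions[j]
                if abs(y1 - y2) >= block or abs(x1 - x2) >= block:
                    pairs.append(((y1, x1), (y2, x2)))
    return pairs

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (32, 32))
img[20:28, 20:28] = img[2:10, 2:10]   # simulate a copied region
pairs = detect_duplicates(img)
print(pairs)                          # reports the duplicated block pair
```

The weakness the abstract points at is visible in the `// 16` hash: once the copy is re-graded or JPEG-compressed, exact (even quantized) block equality breaks down, which motivates keypoint-based detectors.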

    Offline signature verification with user-based and global classifiers of local features

    Signature verification deals with the problem of identifying forged signatures of a user from his/her genuine signatures. The difficulty lies in identifying allowed variations in a user’s signatures, in the presence of high intra-class and low inter-class variability (a forgery may be more similar to a user’s genuine signature than his/her other genuine signatures are). The problem can be seen as nonrigid object matching where the classes are very similar. In the field of biometrics, the signature is considered a behavioral biometric, and the problem poses further difficulties compared to other modalities (e.g. fingerprints) due to the added issue of skilled forgeries. A novel offline (image-based) signature verification system is proposed in this thesis. In order to capture the signature’s stable parts and alleviate the difficulty of global matching, local features (histogram of oriented gradients, local binary patterns) are used, based on gradient information and neighboring information inside local regions. The discriminative power of the extracted features is analyzed using support vector machine (SVM) classifiers, and their fusion gave better results than the state of the art. Scale invariant feature transform (SIFT) matching is also used as a complementary approach. Two different approaches for classifier training are investigated, namely global and user-dependent SVMs. User-dependent SVMs, trained separately for each user, learn to differentiate a user’s (genuine) reference signatures from other signatures. On the other hand, a single global SVM, trained with difference vectors between the features of query and reference signatures of all users in the training set, learns how to weight the importance of different types of dissimilarities. The fusion of all classifiers achieves a 6.97% equal error rate in skilled forgery tests using the public GPDS-160 signature database.
Former versions of the system have won several signature verification competitions: first place in 4NSigComp2010 and 4NSigComp2012 (the task without disguised signatures); first place in the 4NSigComp2011 Chinese signatures category; and first place in SigWiComp2013 for all categories. The obtained results are better than those reported in the literature. One of the major benefits of the proposed method is that user enrollment does not require skilled forgeries of the enrolling user, which is essential for real-life applications.
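The local-feature pipeline above — a histogram-of-oriented-gradients descriptor per region, then a dissimilarity (difference-vector) score that a global classifier would weight — can be illustrated with a minimal numpy sketch. This is not the thesis implementation: the cell grid, bin count, and plain Euclidean dissimilarity are simplifying assumptions standing in for the HOG/LBP features and SVM scoring.

```python
import numpy as np

def hog_descriptor(img, cells=4, bins=8):
    """Minimal HOG-style descriptor: unsigned gradient-orientation
    histograms accumulated over a cells x cells grid, L2-normalized."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation
    h, w = img.shape
    desc = []
    for cy in range(cells):
        for cx in range(cells):
            sl = (slice(cy * h // cells, (cy + 1) * h // cells),
                  slice(cx * w // cells, (cx + 1) * w // cells))
            hist, _ = np.histogram(ang[sl], bins=bins,
                                   range=(0, np.pi), weights=mag[sl])
            desc.append(hist)
    desc = np.concatenate(desc)
    return desc / (np.linalg.norm(desc) + 1e-9)

def dissimilarity(a, b):
    """Magnitude of the difference vector between two descriptors —
    the kind of quantity a global classifier learns to weight."""
    return np.linalg.norm(hog_descriptor(a) - hog_descriptor(b))

rng = np.random.default_rng(0)
genuine = rng.uniform(0, 1, (64, 64))   # stand-ins for signature images
query = rng.uniform(0, 1, (64, 64))
print(dissimilarity(genuine, genuine), dissimilarity(genuine, query))
```

In the user-dependent setting, each user's SVM would be trained on such descriptors of their reference signatures; in the global setting, one SVM is trained on the difference vectors across all users.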

    Digital Filters and Signal Processing

    Digital filters, together with signal processing, are employed in new technologies and information systems, and are implemented in many different areas and applications. Digital filters and signal-processing methods can be deployed at little cost and adapted to different cases with great flexibility and reliability. This book presents advanced developments in digital filters and signal-processing methods, covering a range of case studies. The chapters present the essence of the subject, together with the principal approaches to the most recent mathematical models being employed worldwide.
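As a concrete instance of the digital filters the book covers, here is a standard windowed-sinc low-pass FIR design in numpy. It is a generic textbook construction, not taken from any chapter of the book; the tap count, Hamming window, and cutoff are illustrative choices.

```python
import numpy as np

def lowpass_fir(cutoff, numtaps=51):
    """Windowed-sinc low-pass FIR design.
    `cutoff` is a fraction of the Nyquist frequency (0 < cutoff < 1)."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    h = cutoff * np.sinc(cutoff * n)   # ideal low-pass impulse response
    h *= np.hamming(numtaps)           # window to reduce ripple
    return h / h.sum()                 # normalize to unity gain at DC

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)
y = np.convolve(x, lowpass_fir(cutoff=0.1), mode="same")  # keep < 50 Hz
```

With `mode="same"` and a symmetric (linear-phase) filter, the output stays aligned with the input: the 5 Hz component passes nearly unchanged while the 200 Hz component is strongly attenuated.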

    A NEW TECHNIQUE IN MOBILE ROBOT SIMULTANEOUS LOCALIZATION AND MAPPING

    In field or indoor environments it is usually not possible to provide service robots with detailed a priori environment and task models. In such environments, robots need to create a dimensionally accurate geometric model by moving around and scanning their surroundings with their sensors, while minimizing the complexity of the required sensing hardware. In this work, an iterative algorithm is proposed to plan the visual exploration strategy of service robots, enabling them to efficiently build a graph model of their environment without the need for costly sensors. In this algorithm, the information content present in sub-regions of a 2-D panoramic image of the environment is determined from the robot's current location using a single camera fixed on the mobile robot. Using a metric based on Shannon's information theory, the algorithm determines, from the 2-D image, potential locations of nodes from which to further image the environment. Using a feature-tracking process, the algorithm helps navigate the robot to each new node, where the imaging process is repeated. A Mellin transform and tracking process is used to guide the robot back to a previous node. This cycle of imaging, evaluation, branching and retracing continues until the robot has mapped the environment to a pre-specified level of detail. The effectiveness of this algorithm is verified experimentally through the exploration of an indoor environment by a single mobile robot agent using a limited sensor suite. KEYWORDS: Service robots, visual mapping, self-localization, information theory, Mellin transform.
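The Shannon-information metric used to rank candidate exploration directions can be sketched directly: score each sub-region of the panoramic image by its grey-level entropy and explore the most informative one first. This numpy sketch is an illustration of the metric only, not the paper's full node-selection algorithm; the bin count and the vertical-strip partition are assumptions.

```python
import numpy as np

def entropy(region, bins=16):
    """Shannon entropy (bits) of a grey-level region —
    the information content used to rank sub-regions."""
    hist, _ = np.histogram(region, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def rank_subregions(panorama, n=8):
    """Split a panoramic strip into n vertical sub-regions and return
    their indices ordered from most to least informative."""
    cols = np.array_split(panorama, n, axis=1)
    scores = [entropy(c) for c in cols]
    return list(np.argsort(scores)[::-1])

rng = np.random.default_rng(0)
pano = np.full((48, 256), 128.0)                  # mostly featureless wall
pano[:, 96:128] = rng.uniform(0, 256, (48, 32))   # one textured opening
print(rank_subregions(pano)[0])                   # the textured strip wins
```

A featureless wall carries zero entropy and is never worth imaging again, while a textured region (a doorway, furniture) scores high and becomes a candidate for the next graph node.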

    Automatic Alignment of 3D Multi-Sensor Point Clouds

    Automatic 3D point cloud alignment is a major research topic in photogrammetry, computer vision and computer graphics. In this research, two keypoint feature matching approaches have been developed and proposed for the automatic alignment of 3D point clouds that have been acquired from different sensor platforms and are in different 3D conformal coordinate systems. The first proposed approach is based on 3D keypoint feature matching. First, surface curvature information is utilized for scale-invariant 3D keypoint extraction. Adaptive non-maxima suppression (ANMS) is then applied to retain the most distinct and well-distributed set of keypoints. Afterwards, every keypoint is characterized by a scale-, rotation- and translation-invariant 3D surface descriptor, called the radial geodesic distance-slope histogram. Similar keypoint descriptors on the source and target datasets are then matched using bipartite graph matching, followed by a modified RANSAC for outlier removal. The second proposed method is based on 2D keypoint matching performed on height map images of the 3D point clouds. Height map images are generated by projecting the 3D point clouds onto a planimetric plane. Afterwards, a multi-scale wavelet 2D keypoint detector with ANMS is proposed to extract keypoints on the height maps. Then, a scale-, rotation- and translation-invariant 2D descriptor, referred to as the Gabor, Log-Polar-Rapid Transform descriptor, is computed for all keypoints. Finally, source and target height map keypoint correspondences are determined using bi-directional nearest neighbour matching, together with the modified RANSAC for outlier removal. Each method is assessed on multi-sensor, urban and non-urban 3D point cloud datasets. Results show that, unlike the 3D-based method, the height map-based approach is able to align source and target datasets with differences in point density, point distribution and missing point data.
Findings also show that the 3D-based method obtained lower transformation errors and a greater number of correspondences when the source and target have similar point characteristics. The 3D-based approach attained absolute mean alignment differences in the range of 0.23 m to 2.81 m, whereas the height map approach ranged from 0.17 m to 1.21 m. These differences meet the proximity requirements of the data characteristics and allow the further application of fine co-registration approaches.
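Both pipelines above end by fitting a rigid transformation to matched keypoint pairs inside a RANSAC loop. The least-squares rotation-plus-translation fit for one such trial is the classical Kabsch/Procrustes solution, sketched below in numpy with synthetic noiseless correspondences (the data and the 30° test rotation are assumptions for illustration, not the thesis datasets).

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping matched
    source points onto target points (Kabsch/Procrustes solution)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)       # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

rng = np.random.default_rng(0)
src = rng.uniform(-1, 1, (50, 3))       # synthetic matched keypoints
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_transform(src, dst)
print(np.allclose(R, R_true))           # True
```

Inside RANSAC, this fit is repeated on minimal samples of the keypoint correspondences, and the transformation with the largest inlier set survives — which is how both the modified RANSAC stages described above discard outlier matches.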