
    Copyright Protection of 3D Digitized Sculptures by Use of Haptic Device for Adding Local-Imperceptible Bumps

    This research aims to improve approaches for protecting digitized 3D models of cultural heritage objects, such as the approach presented in the authors' previous research on this topic. The technique can be used to protect works of art such as 3D models of sculptures, pottery, and 3D digital characters for animated film and gaming; it can also be used to preserve architectural heritage. In the research presented here, protection was added to the scanned 3D model of the original sculpture using a digital sculpting technique with a haptic device. The original 3D model and the model with added protection were then printed on a 3D printer, and the printed models were scanned. To measure the thickness of the added protection, the original 3D model was compared with the model with added protection. The two scanned models of the printed sculptures were also compared to determine the amount of added material. The thickness of the added protection is up to 2 mm, whereas the largest difference detected between a matching scan of the original sculpture (or protected 3D model) and a scan of its printed version (or a scan of the protected printed version) is about 1 mm.
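    The comparison step described above, measuring the thickness of added material between two scanned models, can be sketched as a nearest-neighbour deviation between vertex sets. The toy mesh and the 2 mm bump below are hypothetical illustrations, not the authors' actual scan data.

```python
import numpy as np

def max_deviation(original_pts, modified_pts):
    """For each modified vertex, the distance to the nearest original vertex;
    the maximum approximates the thickness of added material."""
    # Pairwise distances (fine for small meshes; use a KD-tree for large scans).
    d = np.linalg.norm(modified_pts[:, None, :] - original_pts[None, :, :], axis=2)
    return d.min(axis=1).max()

# Toy example: a flat patch with one vertex raised by 2 mm (a local "bump").
orig = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], float)
prot = orig.copy()
prot[3, 2] += 2.0  # hypothetical protection bump, 2 mm high
print(max_deviation(orig, prot))  # → 2.0
```

    In practice such comparisons are done with dense scan data and a point-to-surface metric, but the principle of reporting the maximum deviation is the same.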

    WAVELET BASED DATA HIDING OF DEM IN THE CONTEXT OF REALTIME 3D VISUALIZATION (Visualisation 3D Temps-Réel à Distance de MNT par Insertion de Données Cachées Basée Ondelettes)

    The use of aerial photographs, satellite images, scanned maps, and digital elevation models requires strategies for storing and visualizing these data. To obtain a three-dimensional visualization, the images, called textures, must be draped onto the terrain geometry, called the Digital Elevation Model (DEM). In practice, all of this information is stored in three separate files: the DEM, the texture, and the position/projection of the data in a geo-referenced system. In this paper we propose to store all of this information in a single file for the purpose of synchronization. To this end, we have developed a wavelet-based embedding method for hiding the DEM data in a color image. The texture images containing the hidden DEM data can then be sent from the server to a client to perform 3D visualization of terrains. The embedding method can be integrated with the JPEG2000 coder to accommodate compression and multi-resolution visualization.
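    A minimal sketch of wavelet-based data hiding of this kind: a one-level Haar transform per image row, with the payload bits embedded in the detail coefficients by quantization-index modulation (QIM). The step size, image, and payload below are illustrative assumptions; the paper's actual method works in 2D inside the JPEG2000 coder.

```python
import numpy as np

STEP = 16.0  # quantizer step (assumed): larger survives compression better but is more visible

def embed(img, bits):
    """Hide bits in the Haar detail coefficients of each image row via QIM."""
    img = img.astype(float)
    s = (img[:, ::2] + img[:, 1::2]) / 2.0   # approximation (low-pass) band
    d = (img[:, ::2] - img[:, 1::2]) / 2.0   # detail (high-pass) band
    flat = d.ravel()
    for i, b in enumerate(bits):             # move each coefficient onto the
        flat[i] = STEP * np.floor(flat[i] / STEP) + STEP / 4 + b * STEP / 2
    d = flat.reshape(d.shape)                # lattice that encodes its bit
    out = np.empty_like(img)
    out[:, ::2] = s + d                      # inverse Haar reconstruction
    out[:, 1::2] = s - d
    return out

def extract(img, n):
    """Recover n hidden bits from a stego image."""
    d = (img[:, ::2] - img[:, 1::2]) / 2.0
    frac = np.mod(d.ravel()[:n], STEP)       # offset within the lattice cell
    return [int(f > STEP / 2) for f in frac]

dem_bits = [1, 0, 1, 1, 0]                   # e.g. bits of a DEM height value
texture = np.arange(32, dtype=float).reshape(4, 8)
stego = embed(texture, dem_bits)
print(extract(stego, 5))  # → [1, 0, 1, 1, 0]
```

    QIM is a common choice here because the decoder needs neither the original image nor the original coefficients, which suits client-side extraction during streaming.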

    A review and open issues of diverse text watermarking techniques in spatial domain

    Information hiding has become a useful technique and is attracting increasing attention with the rapid growth of internet use; it is applied to send secret information using a variety of techniques. Watermarking is one of the most important information-hiding techniques: secret data are hidden in a carrier medium to provide privacy and integrity of information, so that no one other than the sender and the intended receiver can recognize or detect it. Many carrier formats can be used in watermarking, such as image, video, audio, and text. Text is the most popular carrier due to its prevalence on the internet. There are many text-watermarking techniques, each with its own strengths and weaknesses. In this study, we review text watermarking in the spatial domain by collecting, synthesizing, and analyzing the challenges of studies in this area published from 2013 to 2018. The aims of this paper are to provide an overview of text watermarking, to compare the reviewed studies with respect to Arabic text characteristics, payload capacity, imperceptibility, authentication, and embedding technique, and to identify important open research issues for future work toward a robust method.
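    As a concrete illustration of the kind of spatial-domain text-watermarking technique this review covers, here is a minimal open-space scheme that encodes each bit in the width of an inter-word gap. It is a generic, hypothetical example; surveyed methods add capacity, robustness, and language-specific (e.g. Arabic character) features.

```python
import re

def embed(text, bits):
    """Encode bit i in the gap after word i: 0 -> one space, 1 -> two spaces."""
    words = text.split()
    assert len(bits) <= len(words) - 1, "not enough inter-word gaps"
    out = [words[0]]
    for i, w in enumerate(words[1:]):
        b = bits[i] if i < len(bits) else 0
        out.append(" " * (1 + b) + w)
    return "".join(out)

def extract(stego, n):
    """Read back n bits from the gap widths."""
    gaps = re.findall(r" +", stego)
    return [int(len(g) > 1) for g in gaps[:n]]

cover = "the quick brown fox jumps over the lazy dog"
stego = embed(cover, [1, 0, 1, 1])
print(extract(stego, 4))  # → [1, 0, 1, 1]
```

    The scheme is imperceptible to a casual reader but fragile: any reformatting that normalizes whitespace destroys the watermark, which is exactly the robustness trade-off the surveyed papers try to improve.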

    Robust digital watermarking techniques for multimedia protection

    The growing problem of the unauthorized reproduction of digital multimedia data such as movies, television broadcasts, and similar digital products has triggered worldwide efforts to identify and protect multimedia contents. Digital watermarking technology provides law enforcement officials with a forensic tool for tracing and catching pirates. Watermarking refers to the process of adding a structure called a watermark to an original data object, which includes digital images, video, audio, maps, text messages, and 3D graphics. Such a watermark can be used for several purposes including copyright protection, fingerprinting, copy protection, broadcast monitoring, data authentication, indexing, and medical safety. The proposed thesis addresses the problem of multimedia protection and consists of three parts. In the first part, we propose new image watermarking algorithms that are robust against a wide range of intentional and geometric attacks, flexible in data embedding, and computationally fast. The core idea behind our proposed watermarking schemes is to use transforms that have different properties which can effectively match various aspects of the signal's frequencies. We embed the watermark many times in all the frequencies to provide better robustness against attacks and increase the difficulty of destroying the watermark. The second part of the thesis is devoted to a joint exploitation of the geometry and topology of 3D objects and its subsequent application to 3D watermarking. The key idea consists of capturing the geometric structure of a 3D mesh in the spectral domain by computing the eigen-decomposition of the mesh Laplacian matrix. We also use the fact that the global shape features of a 3D model may be reconstructed using small low-frequency spectral coefficients. The eigen-analysis of the mesh Laplacian matrix is, however, prohibitively expensive. 
To lift this limitation, we first partition the 3D mesh into smaller 3D sub-meshes, and then repeat the watermark embedding process as many times as possible in the spectral coefficients of the compressed 3D sub-meshes. The visual error of the watermarked 3D model is evaluated by computing a nonlinear visual error metric between the original 3D model and the watermarked model obtained by our proposed algorithm. The third part of the thesis is devoted to video watermarking. We propose robust, hybrid scene-based MPEG video watermarking techniques based on a high-order tensor singular value decomposition of the video image sequences. The key idea behind our approaches is to use scene change analysis to embed the watermark repeatedly in a fixed number of the intra-frames. These intra-frames are represented as 3D tensors with two dimensions in space and one in time. We embed the watermark information in the singular values of these high-order tensors, which have good stability and represent the video properties. Illustrations of numerical experiments with synthetic and real data are provided to demonstrate the potential and the much-improved performance of the proposed algorithms in multimedia watermarking.
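    The spectral embedding idea from the second part (eigen-decomposition of the mesh Laplacian, watermark carried by low-frequency coefficients) can be sketched as follows. The tetrahedron mesh, the strength `alpha`, and the informed detector (which needs the original mesh) are illustrative assumptions, not the thesis's exact scheme.

```python
import numpy as np

def laplacian(n, edges):
    """Combinatorial mesh Laplacian L = D - A over the vertex adjacency graph."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] = L[j, i] = -1.0
    np.fill_diagonal(L, -L.sum(axis=1))
    return L

def embed(verts, edges, bits, alpha=0.05):
    """Scale low-frequency spectral coefficients up (bit 1) or down (bit 0)."""
    _, U = np.linalg.eigh(laplacian(len(verts), edges))  # spectral basis
    C = U.T @ verts                     # spectral coefficients, one row per mode
    for k, b in enumerate(bits):        # skip mode 0: it only translates the mesh
        C[k + 1] *= (1 + alpha) if b else (1 - alpha)
    return U @ C                        # back to vertex coordinates

def extract(wm_verts, orig_verts, edges, n):
    """Informed detector: compare coefficient magnitudes with the original's."""
    _, U = np.linalg.eigh(laplacian(len(orig_verts), edges))
    Cw, Co = U.T @ wm_verts, U.T @ orig_verts
    return [int(np.linalg.norm(Cw[k + 1]) > np.linalg.norm(Co[k + 1]))
            for k in range(n)]

# Toy mesh: a tetrahedron (4 vertices, 6 edges).
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
wm = embed(verts, edges, [1, 0, 1])
print(extract(wm, verts, edges, 3))  # → [1, 0, 1]
```

    Because `eigh` costs O(n³), this is only feasible for small meshes, which is precisely why the thesis partitions large models into sub-meshes before embedding.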

    Information embedding and retrieval in 3D printed objects

    Deep learning and convolutional neural networks have become the main tools of computer vision. These techniques are good at using supervised learning to learn complex representations from data; in particular, under limited settings, image recognition models now perform better than the human baseline. However, computer vision aims to build machines that can see, which requires models to extract more valuable information from images and videos than recognition alone. It is generally much more challenging to transfer these deep learning models from recognition to other problems in computer vision. This thesis presents end-to-end deep learning architectures for a new computer vision task: watermark retrieval from 3D printed objects. As this is a new area, there is no state of the art on challenging benchmarks. Hence, we first define the problems and introduce a traditional approach, the Local Binary Pattern method, to set a baseline for further study. Our neural networks are simple but effective, outperforming the traditional approach, and they generalize well. However, because the research field is new, we face not only various unpredictable parameters but also limited and low-quality training data. To address this, we make two observations: (i) we do not need to learn everything from scratch, since much is already known about image segmentation; and (ii) we cannot learn everything from data, so our models should be aware of the key features they should learn. This thesis explores these ideas and goes further. We show how to use end-to-end deep learning models to learn to retrieve watermark bumps and tackle covariates from a few training images. Second, we introduce ideas from synthetic image data and domain randomization to augment the training data and to understand the various covariates that may affect retrieval of real-world 3D watermark bumps. We also show how illumination in synthetic image data affects, and can even improve, retrieval accuracy in real-world recognition applications.
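    The Local Binary Pattern baseline mentioned above can be sketched in a few lines. This minimal 8-neighbour LBP and the toy image are a generic illustration, not the thesis's exact configuration.

```python
import numpy as np

def lbp8(img):
    """8-neighbour Local Binary Pattern code for each interior pixel."""
    c = img[1:-1, 1:-1]                        # centre pixels
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise neighbours
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << np.uint8(bit)
    return code

# A perfectly flat patch has every neighbour >= centre: all 8 bits set (255).
flat = np.full((4, 4), 7.0)
print(np.unique(lbp8(flat)))  # → [255]
```

    A histogram of LBP codes over image patches gives a texture descriptor, which such a baseline can feed to a simple classifier to separate bump from no-bump regions.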

    The fair dealing doctrine in respect of digital books

    Copyright is essentially the right of the rightsholder of an original work to prohibit others from making or distributing unauthorised copies of his or her work. More specifically for this dissertation, when an end user deals with digital content, one of the aims of copyright becomes the balancing of the conflicting interests in ‘exclusivity’ on the one hand, and in ‘access to information’ on the other. Exclusivity is achieved by the rightsholders through technological protection measures to protect their commercial interests. Access to information is achieved where works are available to the general public without payment and technological protection measures and where the digital content is not directly marketed for commercial gain. Exclusivity and access to information are two conflicting cultures surrounding copyright in the digital era. It is submitted that unless we find a socio-economic-legal way for the dynamic coexistence of these two conflicting cultures by means of fair dealing, the culture of exclusivity will eventually dominate fair access to information. The transient nature of digital content means that rightsholders have little or no control over their works once the end user has obtained a legal digital copy of the work. The right ‘to prohibit’ end users from copying and distributing unauthorised copies is, therefore, largely meaningless unless a legal or other solution can be found to discourage end users from the unauthorised reproduction and distribution of unauthorised copies of the work. Currently, technological protection measures are used to manage such digital rights because legal permissions within the doctrine of fair dealing for works in printed (analogue) format are inadequate. It is, however, submitted that a legal solution to discourage end users from copying and distributing unauthorised copies rests on two pillars. 
Firstly, the solution must be embedded in state-of-the-art digital rights management systems, and secondly, the business model used by publishers, and academic publishers in particular, should change fundamentally from a business-to-consumer model to a business-to-business model. Empirical evidence shows that the printing of e-content will continue to be relevant far into the future. Therefore, the management of fair dealing to allow for the printing of digital content will become increasingly important at educational institutions that use e-books as prescribed course material. It is submitted that although the origination costs of print editions and e-books correspond, the relatively high retail price of e-books appears to be based on the fact that academic publishers of digital content do not have the legal or digital rights management tools to manage the challenges arising from the fair dealing doctrine. The observation that academic publishers are reluctant to grant collecting societies mandates to manage the distribution of digital content, and/or the right to manage the authorised reproduction (printing) of digital content, supports this hypothesis. Ultimately, with the technologies at our disposal, the fair use of content in digital and print format can be achieved, because it should simply be cheaper to comply with copyright laws than to make unauthorised digital or printed copies of the content that our society desperately needs to make South Africa a winning nation.

    Recent Advances in Signal Processing

    Signal processing is a critical task in the majority of new technological inventions and challenges, across a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five areas depending on the application at hand, ordered to address image processing, speech processing, communication systems, time-series analysis, and educational packages, respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.

    Fair use and file sharing in research and education

    This work was inspired by the much-debated current problems around the use of digital file sharing technologies and their promotion of copyright infringement, leading to the alleged destruction of the entertainment industries. Different legal systems have applied different analyses to such problems, and there is no clear and coherent answer to the question of whether file sharing, especially in the form of peer-to-peer (P2P), is legal. The particular focus of this thesis flows from the realisation that litigation around file sharing has uniformly explored it from the perspective of users downloading entertainment materials such as music and videos. Comparatively little attention has been paid to whether research and educational users have, or should have, rights to use the same digital file sharing technologies to access copyright materials important to their work. If digital file sharing is declared illegal by the courts at the behest of the entertainment industries, then what will happen to research and educational users of these networks? To explore this key problem, this thesis focuses on how the fair use doctrine, the most important exception and limitation to copyright, has transferred from the traditional copyright environment into the context of digital file sharing. By undertaking a study of relevant legislation and cases, such as the well-known Napster, Grokster and MP3.com cases, the "who" issue, namely who is entitled to benefit from a fair use defence, is highlighted. Having established that fair use as a defence operates ineffectively in the digital file sharing environment, the thesis then looks at existing alternative or "fared" use models, and particularly the disadvantages of a "fared" use system in serving research and educational file sharing. 
Finally, the thesis turns to what is termed the "voluntary model": a model in which copyright owners make their works available to academic users for free, via an institutional repository, with the authors gaining non-pecuniary benefits while the commercial publisher is cut out as a "middleman". Although future work would be required to develop the details of this approach, the thesis asserts that this is a promising way towards ensuring access to copyright works in research and education, thus benefiting society, whilst at the same time establishing fair compensation to authors for their efforts.

    MediaSync: Handbook on Multimedia Synchronization

    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, like hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience, or QoE), and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is getting renewed attention to overcome remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance, and the multiple disciplines it involves, a reference book on mediasync has become necessary, and this book fills that gap. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space, from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area, and also to approach the challenges of ensuring the best mediated experiences, by providing adequate synchronization between the media elements that constitute these experiences.