
    Digital Copyright Protection: Focus on Some Relevant Solutions

    Copyright protection of digital content is considered a relevant problem of the current Internet, since content digitalization and high-performance interconnection networks have greatly increased the possibilities of reproducing and distributing digital content. Digital Rights Management (DRM) systems try to prevent the inappropriate or illegal use of copyrighted digital content. They are promoted by the major global media players, but they are also perceived as proprietary solutions that give rise to classic problems of privacy and fair use. On the other hand, watermarking protocols have become a possible solution to the problem of copyright protection. They have evolved during the last decade, and interesting proposals have been designed. This paper first presents current trends concerning the most significant solutions to the problem of copyright protection based on DRM systems and then focuses on the most promising approaches in the field of watermarking protocols. In this regard, the examined protocols are discussed in order to identify which of them best represents the right trade-off between opposing goals, such as security and ease of use, so as to prove that it is possible to implement open solutions compatible with the current web context without resorting to proprietary architectures or impairing the protection of copyrighted digital content.

    Secure Watermarking for Multimedia Content Protection: A Review of its Benefits and Open Issues

    Distribution channels such as digital music downloads, video-on-demand, and multimedia social networks pose new challenges to the design of content protection measures aimed at preventing copyright violations. Digital watermarking has been proposed as a possible building block of such protection systems, providing a means to embed a unique code, as a fingerprint, into each copy of the distributed content. However, the application of watermarking for multimedia content protection in realistic scenarios poses several security issues. Secure signal processing, by which name we indicate a set of techniques able to process sensitive signals that have been obfuscated either by encryption or by other privacy-preserving primitives, may offer valuable solutions to the aforementioned issues. More specifically, the adoption of efficient methods for watermark embedding or detection on data that have been secured in some way, which we name in short secure watermarking, provides an elegant way to solve the security concerns of fingerprinting applications. The aim of this contribution is to illustrate recent results regarding secure watermarking to the signal processing community, highlighting both its benefits and still open issues. Some of the most interesting challenges in this area, as well as new research directions, are also discussed.
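    To make the fingerprinting idea above concrete, the following is a minimal additive spread-spectrum sketch: each buyer's copy is perturbed with a unique pseudorandom pattern, and a leaked copy is attributed by correlation. All names, the seeding scheme and the embedding strength are illustrative assumptions; real systems work in transform domains, add perceptual shaping, and, in the secure-watermarking setting, run these steps on encrypted data.

```python
import random

def embed_fingerprint(samples, buyer_id, strength=2, seed_base=1000):
    # Additive spread-spectrum embedding (illustrative): perturb the
    # signal with a buyer-specific pseudorandom +/-1 pattern.
    rng = random.Random(seed_base + buyer_id)
    pattern = [rng.choice((-1, 1)) for _ in samples]
    return [s + strength * p for s, p in zip(samples, pattern)]

def detect_fingerprint(marked, original, buyer_id, strength=2, seed_base=1000):
    # Correlate the residual (marked - original) with a buyer's pattern;
    # the correlation is close to `strength` only for the right buyer.
    rng = random.Random(seed_base + buyer_id)
    pattern = [rng.choice((-1, 1)) for _ in original]
    residual = [m - o for m, o in zip(marked, original)]
    return sum(r * p for r, p in zip(residual, pattern)) / len(original)

signal = [10.0] * 256                          # toy "content" signal
leaked = embed_fingerprint(signal, buyer_id=7)
print(detect_fingerprint(leaked, signal, 7))   # close to 2 (match)
print(detect_fingerprint(leaked, signal, 8))   # close to 0 (no match)
```

    A real fingerprinting deployment would also need collusion-resistant codes, since several buyers can compare their copies to locate and remove the marks.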

    A Digital Rights Management System based on Cloud

    In the current Internet, digital entertainment contents, such as video or audio files, are easily accessible thanks to new multimedia technologies and broadband network connections. This causes considerable economic loss to global media players, since digital contents, once legitimately obtained, can be illegitimately shared through file sharing services on the Internet. Digital Rights Management (DRM) systems have been proposed to support the protection of copyrighted digital contents. Even though such systems have been widely adopted and promoted by global media players, they are based on proprietary mechanisms that usually work only in closed, monolithic environments. In this regard, systems based on watermarking technologies appear better suited to protect copyrighted digital content. This paper describes the implementation scheme of a DRM system able to ensure the copyright protection of digital content according to an innovative buyer-friendly watermarking protocol. The DRM system has been implemented by exploiting a cloud environment in order to improve the overall performance of the system. In particular, the cloud acts as an infrastructure service provider, since the content provider involved in the watermarking protocol uses the cloud to speed up the watermark embedding process and to save the storage and bandwidth costs needed to store and deliver protected contents.

    Data Hiding and Its Applications

    Data hiding techniques have been widely used to provide copyright protection, data integrity, covert communication, non-repudiation, and authentication, among other applications. In the context of the increased dissemination and distribution of multimedia content over the Internet, data hiding methods, such as digital watermarking and steganography, are becoming increasingly relevant in providing multimedia security. The goal of this book is to focus on the improvement of data hiding algorithms and their different applications (both traditional and emerging), bringing together researchers and practitioners from different research fields, including data hiding, signal processing, cryptography, and information theory, among others.
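    As a minimal illustration of one such data hiding method, least-significant-bit (LSB) steganography hides message bits in the lowest bit of each cover sample. This toy sketch is not taken from the book and is not a secure or robust scheme:

```python
def hide_bits(pixels, message_bits):
    # LSB steganography (toy): overwrite each pixel's least significant
    # bit with one message bit; each pixel changes by at most 1 level.
    assert len(message_bits) <= len(pixels), "cover too small"
    stego = list(pixels)
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & ~1) | bit
    return stego

def extract_bits(pixels, count):
    # Recover the hidden bits by reading back the LSBs.
    return [p & 1 for p in pixels[:count]]

cover = [200, 13, 94, 177, 56, 240, 81, 9]   # toy 8-pixel grayscale row
stego = hide_bits(cover, [1, 0, 1, 1])
print(extract_bits(stego, 4))                # → [1, 0, 1, 1]
```

    Watermarking differs in intent: an embedded watermark must survive compression and attacks, whereas plain LSB embedding is fragile by design.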

    A survey on security, privacy and anonymity in legal distribution of copyrighted multimedia content over peer-to-peer networks


    Review on Data Hiding Schemes into Multimedia Data

    In the Internet era, multimedia data security is a crucial issue, because there are many cases of illegal reproduction and redistribution through the Internet. Data hiding and encryption algorithms can be used for the security and protection of multimedia data. Video encryption is a new area of research. Data hiding in encrypted videos is important to achieve the goals of content annotation, copyright protection, access control, and/or tampering detection. This survey summarizes the latest research results on video encryption, with a special focus on applicability and on the most widely deployed video format, H.264/Advanced Video Coding (AVC).

    Framework for privacy-aware content distribution in peer-to- peer networks with copyright protection

    The use of peer-to-peer (P2P) networks for multimedia distribution has spread out globally in recent years. This mass popularity is primarily driven by the efficient distribution of content, also giving rise to piracy and copyright infringement as well as privacy concerns. An end user (buyer) of a P2P content distribution system does not want to reveal his/her identity during a transaction with a content owner (merchant), whereas the merchant does not want the buyer to further redistribute the content illegally. Therefore, there is a strong need for content distribution mechanisms over P2P networks that do not pose security and privacy threats to copyright holders and end users, respectively. However, the current systems being developed to provide copyright and privacy protection to merchants and end users employ cryptographic mechanisms, which incur high computational and communication costs, making these systems impractical for the distribution of big files, such as music albums or movies.

    A framework for cascading payment and content exchange within P2P systems

    Advances in computing technology and the proliferation of broadband in the home have opened up the Internet to wider use. People like the idea of easy access to information at their fingertips, via their personal networked devices. This has been established by the increased popularity of Peer-to-Peer (P2P) file-sharing networks. P2P is a viable and cost-effective model for content distribution. Content producers require modest resources by today's standards to act as distributors of their content, and P2P technology can assist in further reducing this cost, thus enabling the development of new business models for content distribution that realise market and user needs. However, many other consequences and challenges are introduced; most notably, the issues of copyright violation, free-riding, the lack of participation incentives, and the difficulties associated with the provision of payment services within a decentralised, heterogeneous and ad hoc environment. Further issues directly relevant to content exchange also arise, such as transaction atomicity, non-repudiation and data persistence. We have developed a framework to address these challenges. The novel Cascading Payment Content Exchange (CasPaCE) framework was designed and developed to incorporate the use of cascading payments to overcome the problem of copyright violation and prevent free-riding in P2P file-sharing networks. By incorporating unique identification, copyright mobility and fair compensation for both producers and distributors in the content distribution value chain, the cascading payments model empowers content producers and enables the creation of new business models. The system allows users to manage their content distribution as well as their purchasing activities by mobilising payments and automatically gathering royalties on behalf of the producer.
    The methodology used to conduct this research involved the use of advances in service-oriented architecture development as well as object-oriented analysis and design techniques. These assisted in the development of an open and flexible framework which facilitates equitable digital content exchange without detracting from the advantages of the P2P domain. A prototype of the CasPaCE framework (developed in Java) demonstrates how peer devices can be connected to form a content exchange environment where both producers and distributors benefit from participating in the system. This prototype was successfully evaluated within the bounds of an E-learning Content Exchange (EIConE) case study, which allows students within a large UK university to exchange digital content for compensation, enabling better use of redundant resources in the university.
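    The cascading-payments settlement described above can be sketched roughly as follows. The split rules and rates are purely hypothetical illustrations (the abstract does not specify them), but they show the core idea: the producer collects a royalty on every resale, intermediate distributors earn a commission, and the selling peer keeps the remainder.

```python
def settle_sale(price, chain, royalty_rate=0.5, commission_rate=0.1):
    # chain: [producer, distributor_1, ..., selling_peer].
    # Rates are illustrative assumptions, not the CasPaCE framework's
    # actual settlement rules.
    producer, *distributors = chain
    if not distributors:                  # direct sale by the producer
        return {producer: price}
    payouts = {producer: price * royalty_rate}   # royalty cascades back
    remainder = price - payouts[producer]
    for peer in distributors[:-1]:        # earlier peers in the chain
        payouts[peer] = price * commission_rate
        remainder -= payouts[peer]
    payouts[distributors[-1]] = remainder  # seller keeps what is left
    return payouts

print(settle_sale(10.0, ["producer", "peerA", "peerB"]))
# → {'producer': 5.0, 'peerA': 1.0, 'peerB': 4.0}
```

    The point of the cascade is that royalties are gathered automatically on every hop of the distribution chain, so redistribution by peers compensates the producer instead of violating copyright.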

    Towards a human-centric data economy

    Spurred by the widespread adoption of artificial intelligence and machine learning, “data” is becoming a key production factor, comparable in importance to capital, land, or labour in an increasingly digital economy. In spite of an ever-growing demand for third-party data in the B2B market, firms are generally reluctant to share their information. This is due to the unique characteristics of “data” as an economic good (a freely replicable, non-depletable asset holding a highly combinatorial and context-specific value), which move digital companies to hoard and protect their “valuable” data assets, and to integrate across the whole value chain seeking to monopolise the provision of innovative services built upon them. As a result, most of those valuable assets still remain unexploited in corporate silos nowadays. This situation is shaping the so-called data economy around a number of champions, and it is hampering the benefits of a global data exchange on a large scale. Some analysts have estimated the potential value of the data economy at US$2.5 trillion globally by 2025. Not surprisingly, unlocking the value of data has become a central policy of the European Union, which also estimated the size of the data economy at €827 billion for the EU27 in the same period. Within the scope of the European Data Strategy, the European Commission is also steering relevant initiatives aimed at identifying relevant cross-industry use cases involving different verticals, and at enabling sovereign data exchanges to realise them. Among individuals, the massive collection and exploitation of personal data by digital firms in exchange for services, often with little or no consent, has raised a general concern about privacy and data protection.
    Apart from spurring recent legislative developments in this direction, this concern has raised some voices warning against the unsustainability of the existing digital economics (few digital champions, potential negative impact on employment, growing inequality), some of which propose that people be paid for their data in a sort of worldwide data labour market as a potential solution to this dilemma [114, 115, 155]. From a technical perspective, we are far from having the required technology and algorithms that would enable such a human-centric data economy. Even its scope is still blurry, and the question of the value of data is, to say the least, controversial. Research works from different disciplines have studied the data value chain, different approaches to the value of data, how to price data assets, and novel data marketplace designs. At the same time, complex legal and ethical issues with respect to the data economy have arisen around privacy, data protection, and ethical AI practices. In this dissertation, we start by exploring the data value chain and how entities trade data assets over the Internet. We carry out what is, to the best of our understanding, the most thorough survey of commercial data marketplaces. In this work, we have catalogued and characterised ten different business models, including those of personal information management systems, companies born in the wake of recent data protection regulations and aiming at empowering end users to take control of their data. We have also identified the challenges faced by different types of entities, and what kind of solutions and technology they are using to provide their services. Then we present a first-of-its-kind measurement study that sheds light on the prices of data in the market using a novel methodology. We study how ten commercial data marketplaces categorise and classify data assets, and which categories of data command higher prices.
    We also develop classifiers for comparing data products across different marketplaces, and we study the characteristics of the most valuable data assets and the features that specific vendors use to set the price of their data products. Based on this information, and adding data products offered by 33 other data providers, we develop a regression analysis for revealing features that correlate with the prices of data products. As a result, we also implement the basic building blocks of a novel data pricing tool capable of providing a hint of the market price of a new data product using just its metadata as input. This tool would provide more transparency on the prices of data products in the market, which will help in pricing data assets and in avoiding the inherent price fluctuation of nascent markets. Next we turn to topics related to data marketplace design. In particular, we study how buyers can select and purchase suitable data for their tasks without requiring a priori access to such data in order to make a purchase decision, and how marketplaces can distribute the payoff for a data transaction combining data of different sources among the corresponding providers, be they individuals or firms. The difficulty of both problems is further exacerbated in a human-centric data economy, where buyers have to choose among data of thousands of individuals, and where marketplaces have to distribute payoffs to thousands of people contributing personal data to a specific transaction. Regarding the selection process, we compare different purchase strategies depending on the level of information available to data buyers at the time of making decisions. A first methodological contribution of our work is proposing a data evaluation stage prior to datasets being selected and purchased by buyers in a marketplace.
    We show that buyers can significantly improve the performance of the purchasing process just by being provided with a measurement of the performance of their models when trained by the marketplace with individual eligible datasets. We design purchase strategies that exploit this functionality, calling the resulting algorithm Try Before You Buy, and our work demonstrates over synthetic and real datasets that it can lead to near-optimal data purchasing with only O(N) executions instead of the exponential O(2^N) execution time needed to calculate the optimal purchase. With regard to the payoff distribution problem, we focus on computing the relative value of spatio-temporal datasets combined in marketplaces for predicting transportation demand and travel time in metropolitan areas. Using large datasets of taxi rides from Chicago, Porto and New York, we show that the value of data is different for each individual and cannot be approximated by its volume. Our results reveal that even more complex approaches, such as those based on the “leave-one-out” value, are inaccurate. Instead, more complex and acknowledged notions of value from economics and game theory, such as the Shapley value, need to be employed if one wishes to capture the complex effects of mixing different datasets on the accuracy of forecasting algorithms. However, the Shapley value entails serious computational challenges: its exact calculation requires repeatedly training and evaluating every combination of data sources, and hence O(N!) or O(2^N) computational time, which is unfeasible for complex models or thousands of individuals. Moreover, our work paves the way to new methods of measuring the value of spatio-temporal data. We identify heuristics, such as entropy or similarity to the average, that show a significant correlation with the Shapley value and can therefore be used to overcome the significant computational challenges posed by Shapley approximation algorithms in this specific context.
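    To make the computational burden concrete, here is a sketch of the exact Shapley computation over data sources; the utility function is a toy stand-in for "accuracy of a model trained on this subset of datasets", which in the real setting requires a full training run per evaluation.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, utility):
    # Exact Shapley value: average each player's marginal contribution
    # over all coalitions -- O(2^N) utility evaluations, which is why
    # cheaper heuristics (entropy, similarity to the average) matter.
    n = len(players)
    values = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_p = utility(frozenset(coalition) | {p})
                without_p = utility(frozenset(coalition))
                total += weight * (with_p - without_p)
        values[p] = total
    return values

# Toy utility: pretend model accuracy for each subset of datasets.
acc = {frozenset(): 0.0, frozenset("a"): 0.6,
       frozenset("b"): 0.3, frozenset("ab"): 0.9}
print(shapley_values(["a", "b"], lambda s: acc[frozenset(s)]))
# additive toy game: each value matches the standalone accuracy
```

    With thousands of individual contributors, every one of the 2^N coalition utilities would itself be a model training run, which is exactly why the dissertation turns to correlated heuristics instead.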
    We conclude with a number of open issues and propose further research directions that leverage the contributions and findings of this dissertation. These include monitoring data transactions to better measure data markets, and complementing market data with actual transaction prices to build a more accurate data pricing tool. A human-centric data economy would also require that the contributions of thousands of individuals to machine learning tasks be calculated daily. For that to be feasible, we need to further optimise the efficiency of the data purchasing and payoff calculation processes in data marketplaces. In that direction, we also point to some alternatives to repeatedly training and evaluating a model, in order to select data based on Try Before You Buy and to approximate the Shapley value. Finally, we discuss the challenges and potential technologies that help with building a federation of standardised data marketplaces. The data economy will develop fast in the upcoming years, and researchers from different disciplines will work together to unlock the value of data and make the most out of it. Maybe the proposal of getting paid for our data and our contribution to the data economy finally takes off, or maybe other proposals, such as the robot tax, end up being used to balance the power between individuals and tech firms in the digital economy. Still, we hope our work sheds light on the value of data, and contributes to making the price of data more transparent and, eventually, to moving towards a human-centric data economy. This work has been supported by IMDEA Networks Institute. Doctoral Programme in Telematics Engineering, Universidad Carlos III de Madrid. Thesis committee: President, Georgios Smaragdakis; Secretary, Ángel Cuevas Rumín; Member, Pablo Rodríguez Rodrígue