Design and Analysis of Fair Content Tracing Protocols
The work in this thesis examines protocols designed to
address the issues of tracing illegal distribution of digital content in a fair manner.
In digital content distribution, a client requests
content from a distributor, and the distributor sends content to the client.
The main concern is misuse of content by the client,
such as illegal distribution.
As a result, digital watermarking schemes that enable the distributor
to trace copies of content and identify the perpetrator were proposed.
However, such schemes do not provide a mechanism for the distributor
to prove to a third party that a client illegally distributed copies of content.
Furthermore, the distributor may falsely accuse a client,
since the distributor has total control of the tracing mechanisms.
Fair content tracing (FaCT)
protocols were thus proposed to allow tracing of content in a way
that discriminates against neither the distributor nor the client.
Many FaCT protocols have been proposed, mostly without an appropriate
design framework, and so there is no obvious and systematic way to evaluate them.
Therefore, we propose a framework that provides a definition
of security and which enables classification of FaCT protocols so
that they can be analysed in a systematic manner.
We define, based on our framework, four main categories of FaCT
protocols and propose new approaches to designing them.
The first category is protocols without trusted third parties.
As the name suggests, these protocols do not rely on a
central trusted party for fair tracing of content.
It is difficult to design such a protocol without drawing on
extra measures that increase communication and computation costs.
We show this is the case by demonstrating flaws in two recent proposals.
We also illustrate a possible repair based on relaxing
the assumption of trust in the distributor.
The second category is protocols with online trusted third parties,
where a central online trusted party is deployed.
This means a trusted party must always be available during
content distribution between the distributor and the client.
While the availability of a trusted third party may simplify
the design of such protocols, efficiency may suffer due to the
need to communicate with this third party.
The third category is protocols with offline trusted third parties,
where a central offline trusted party is deployed.
The difference between the offline and the online trusted party is
that the offline trusted party need not be available during content distribution.
It only needs to be available during the initial setup and
when there is a dispute between the distributor and the client.
This reduces the communication requirements compared to using an online trusted party.
Using a symmetric cryptographic primitive known as
Chameleon encryption, we propose a new approach to
designing such protocols.
The fourth category is protocols with trusted hardware.
Previous protocols proposed in this category have abstracted away from
a practical choice of the underlying trusted hardware.
We propose new protocols based on a Trusted Platform Module (TPM).
Finally, we examine the inclusion of payment in a FaCT protocol,
and how adding payment motivates the requirement for
fair exchange when buying and selling digital content.
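The Chameleon-style idea behind the offline-TTP category can be illustrated with a toy sketch: each client decrypts with a slightly perturbed keystream, so every decrypted copy carries a client-specific fingerprint that the distributor can later match against a leaked copy. All names, the XOR cipher, and the LSB-flipping fingerprint below are illustrative assumptions, not the thesis's actual construction.

```python
# Toy sketch of a Chameleon-style fingerprinting cipher (illustrative only;
# not the actual construction from the thesis).
import hashlib

def keystream(master_key: bytes, length: int) -> bytes:
    # Derive a deterministic keystream from the master key (toy PRG).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(master_key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(master_key: bytes, content: bytes) -> bytes:
    ks = keystream(master_key, len(content))
    return bytes(c ^ k for c, k in zip(content, ks))

def client_keystream(master_key: bytes, client_id: int, length: int, marks: int = 8) -> bytes:
    # Flip a few client-specific low-order bits in the keystream; the flip
    # positions depend on the client id and form that client's fingerprint.
    ks = bytearray(keystream(master_key, length))
    for i in range(marks):
        pos_src = hashlib.sha256(f"{client_id}:{i}".encode()).digest()
        pos = int.from_bytes(pos_src[:4], "big") % length
        ks[pos] ^= 0x01  # perturb the least significant bit only
    return bytes(ks)

def decrypt_as_client(master_key: bytes, client_id: int, ciphertext: bytes) -> bytes:
    ks = client_keystream(master_key, client_id, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, ks))

def trace(master_key: bytes, content: bytes, leaked: bytes, client_ids):
    # The distributor recomputes each client's expected copy and matches the leak.
    ciphertext = encrypt(master_key, content)
    for cid in client_ids:
        if decrypt_as_client(master_key, cid, ciphertext) == leaked:
            return cid
    return None
```

Note that this toy version still leaves tracing entirely in the distributor's hands; the point of a FaCT protocol is to add evidence a third party can verify.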
Lime: Data Lineage in the Malicious Environment
Intentional or unintentional leakage of confidential data is undoubtedly one
of the most severe security threats that organizations face in the digital era.
The threat now extends to our personal lives: a plethora of personal
information is available to social networks and smartphone providers and is
indirectly transferred to untrustworthy third-party and fourth-party
applications.
In this work, we present LIME, a generic data lineage framework for data flow
across multiple entities that take two characteristic, principal roles (i.e.,
owner and consumer). We define the exact security guarantees required by such a
data lineage mechanism toward identification of a guilty entity, and identify
the simplifying non-repudiation and honesty assumptions. We then develop and
analyze a novel accountable data transfer protocol between two entities within
a malicious environment by building upon oblivious transfer, robust
watermarking, and signature primitives. Finally, we perform an experimental
evaluation to demonstrate the practicality of our protocol.
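The accountability goal can be sketched in a few lines: the owner derives a consumer-specific mark for each transfer, so a leaked copy can later be attributed to the consumer who received it. The real LIME protocol relies on oblivious transfer, robust watermarking, and digital signatures; the appended HMAC "mark" below is only an illustrative stand-in with assumed names.

```python
# Highly simplified attribution sketch (illustrative stand-in for LIME's
# oblivious transfer + robust watermarking + signature machinery).
import hmac
import hashlib

def mark_copy(owner_key: bytes, consumer: str, document: bytes) -> bytes:
    # Append a consumer-specific tag (stand-in for an embedded watermark).
    tag = hmac.new(owner_key, consumer.encode() + document, hashlib.sha256).digest()
    return document + tag

def attribute_leak(owner_key: bytes, leaked: bytes, consumers):
    # Recompute each consumer's expected tag and compare against the leak.
    document, tag = leaked[:-32], leaked[-32:]
    for consumer in consumers:
        expected = hmac.new(owner_key, consumer.encode() + document, hashlib.sha256).digest()
        if hmac.compare_digest(tag, expected):
            return consumer
    return None
```

Unlike a robust watermark, an appended tag is trivially strippable; it only conveys the attribution logic, not the adversarial robustness the paper targets.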
Framework for privacy-aware content distribution in peer-to-peer networks with copyright protection
The use of peer-to-peer (P2P) networks for multimedia distribution has spread out globally in recent years. This mass popularity is primarily driven by the efficient distribution of content, also giving rise to piracy and copyright infringement as well as privacy concerns. An end user (buyer) of a P2P content distribution system does not want to reveal his/her identity during a transaction with a content owner (merchant), whereas the merchant does not want the buyer to further redistribute the content illegally. Therefore, there is a strong need for content distribution mechanisms over P2P networks that do not pose security and privacy threats to copyright holders and end users, respectively. However, the current systems being developed to provide copyright and privacy protection to merchants and end users employ cryptographic mechanisms, which incur high computational and communication costs, making these systems impractical for the distribution of big files, such as music albums or movies.
On mitigating distributed denial of service attacks
Denial of service (DoS) attacks and distributed denial of service (DDoS) attacks are among the most ferocious threats on the Internet, with tremendous economic and social impact on our daily lives, which increasingly depend on the wellbeing of the Internet. How to mitigate these attacks effectively and efficiently has become an active research area. The critical issues here include 1) IP spoofing, i.e., forged source IP addresses are routinely employed to conceal the identities of the attack sources and deter the efforts of detection, defense, and tracing; and 2) the distributed nature of the attacks, that is, hundreds or thousands of compromised hosts are orchestrated to attack the victim synchronously. Other related issues are scalability, the lack of incentives to deploy a new scheme, and effectiveness under partial deployment.
This dissertation investigates and proposes effective schemes to mitigate DDoS attacks. It comprises three parts. The first part introduces the classification of DDoS attacks and the evaluation of previous schemes. The second part presents the proposed IP traceback scheme, namely, autonomous system-based edge marking (ASEM). ASEM enhances probabilistic packet marking (PPM) in several aspects: (1) ASEM can address large-scale DDoS attacks efficiently; (2) ASEM can handle spoofed marking from the attacker and spurious marking incurred by subverted routers, which is a unique and critical feature; (3) ASEM can significantly reduce the number of marked packets required for path reconstruction and suppress false positives as well. The third part presents the proposed DDoS defense mechanisms, including the four-color-theorem-based path marking and a comprehensive framework for DDoS defense. The salient features of the framework are that (1) it is designed to tackle a wide spectrum of DDoS attacks rather than a single specified one, and (2) it can differentiate malicious traffic from normal traffic. The receiver-centered design avoids several related issues, such as scalability and the lack of incentives to deploy a new scheme. Finally, conclusions are drawn and future work is discussed.
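The PPM idea that ASEM builds on can be sketched with the classic node-sampling variant: each router overwrites a single mark field with probability p, so routers closer to the victim survive as the mark more often, and the victim can rank routers by mark frequency to recover the path order. This is a simulation of textbook node-sampling PPM under assumed parameters, not ASEM itself.

```python
# Toy simulation of node-sampling probabilistic packet marking (PPM);
# illustrates the idea ASEM enhances, not ASEM's own marking scheme.
import random
from collections import Counter

def forward(path, p=0.5, rng=random):
    # One packet traverses routers from attacker side to victim side;
    # each router overwrites the single mark field with probability p.
    mark = None
    for router in path:
        if rng.random() < p:
            mark = router
    return mark

def reconstruct(path, packets=20000, p=0.5, rng=random):
    # The victim counts surviving marks. With p >= 0.5, a router at
    # distance d from the victim survives with probability p * (1-p)**(d-1),
    # which strictly decreases with distance, so sorting by frequency
    # (nearest first) and reversing recovers the attacker-to-victim order.
    counts = Counter()
    for _ in range(packets):
        m = forward(path, p, rng)
        if m is not None:
            counts[m] += 1
    return [router for router, _ in counts.most_common()][::-1]
```

The weaknesses ASEM targets are visible even here: the victim needs many packets, and nothing stops an attacker from pre-loading spoofed marks.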
A Novel Method for Curating Quanti-Qualitative Content
This paper proposes a Researcher-in-the-Loop (RITL) guided content curation
approach for quanti-qualitative research methods that uses a version control
system based on consensus. The paper introduces a workflow for
quanti-qualitative research processes that produces and consumes content
versions through collaborative phases validated through consensus protocols
performed by research teams. We argue that content versioning is a critical
component that supports the research process's reproducibility, traceability,
and rationale. We propose a curation framework that provides methods,
protocols, and tools for supporting the RITL approach to managing the content
produced by quanti-qualitative methods. The paper reports a validation
experiment based on a use case studying the dissemination of political
statements in graffiti.
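The consensus-gated versioning at the core of the workflow can be sketched as follows: a content version produced in a collaborative phase enters the curated history only if enough team members approve it. The class name, the strict-majority threshold, and the approval interface are illustrative assumptions, not the paper's protocol.

```python
# Toy sketch of consensus-gated content versioning (illustrative assumptions,
# not the RITL paper's actual consensus protocol).

class CuratedHistory:
    def __init__(self, team):
        self.team = set(team)
        self.versions = []  # accepted content versions, in order

    def propose(self, content, approvals):
        # Count only approvals from actual team members, and accept the
        # version only with a strict majority of the team.
        valid = self.team & set(approvals)
        if len(valid) * 2 > len(self.team):
            self.versions.append(content)
            return True
        return False
```

Recording who approved each accepted version alongside the content would also serve the traceability and rationale goals the paper emphasizes.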
On environments as systemic exoskeletons: Crosscutting optimizers and antifragility enablers
Classic approaches to General Systems Theory often adopt an individual
perspective and a limited number of systemic classes. As a result, those
classes include a large number and variety of systems that end up treated as
equivalent to each other. This paper introduces a different approach. First,
systems belonging to the same class are further differentiated according to five major
general characteristics. This introduces a "horizontal dimension" to system
classification. A second component of our approach considers systems as nested
compositional hierarchies of other sub-systems. The resulting "vertical
dimension" further specializes the systemic classes and makes it easier to
assess similarities and differences regarding properties such as resilience,
performance, and quality-of-experience. Our approach is exemplified by
considering a telemonitoring system designed in the framework of Flemish
project "Little Sister". We show how our approach makes it possible to design
intelligent environments able to closely follow a system's horizontal and
vertical organization and to artificially augment its features by serving as
crosscutting optimizers and as enablers of antifragile behaviors.
Comment: Accepted for publication in the Journal of Reliable Intelligent
Environments. Extends conference papers [10,12,15]. The final publication is
available at Springer via http://dx.doi.org/10.1007/s40860-015-0006-