Oblivion: Mitigating Privacy Leaks by Controlling the Discoverability of Online Information
Search engines are the prevalent tools used to collect information about
individuals on the Internet. Search results typically comprise a variety of
sources that contain personal information -- either intentionally released by
the person herself, or unintentionally leaked or published by third parties,
often with detrimental effects on the individual's privacy. To grant
individuals the ability to regain control over their disseminated personal
information, the European Court of Justice recently ruled that EU citizens have
a right to be forgotten, in the sense that indexing systems must offer them
technical means to request removal of links from search results that point to
sources violating their data protection rights. As of now, these technical
means consist of a web form that requires a user to manually identify all
relevant links upfront and to insert them into the web form, followed by a
manual evaluation by employees of the indexing system to assess if the request
is eligible and lawful.
We propose Oblivion, a universal framework supporting the automation of the
right to be forgotten in a scalable, provable and privacy-preserving manner.
First, Oblivion enables a user to automatically find and tag her disseminated
personal information using natural language processing and image recognition
techniques and file a request in a privacy-preserving manner. Second, Oblivion
provides indexing systems with an automated and provable eligibility mechanism,
asserting that the author of a request is indeed affected by an online
resource. The automated ligibility proof ensures censorship-resistance so that
only legitimately affected individuals can request the removal of corresponding
links from search results. We have conducted comprehensive evaluations, showing
that Oblivion is capable of handling 278 removal requests per second, and is
hence suitable for large-scale deployment.
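As a concrete illustration of the intended pipeline, the sketch below reduces the paper's NLP and image-recognition tagging to a trivial name match and stubs the eligibility proof as a keyed MAC; the names and API shape are hypothetical, not Oblivion's actual implementation.

```python
# Minimal sketch of an Oblivion-style request pipeline (hypothetical API,
# not the paper's implementation). The NLP/image-recognition tagging is
# reduced to a name match; the eligibility proof is stubbed as a keyed MAC.
import hashlib
import hmac

USER_NAME = "Alice Example"          # identity attribute to search for
USER_KEY = b"alice-secret-key"       # stands in for the user's signing key

def find_personal_links(pages):
    """Return URLs whose text mentions the user (stand-in for NLP tagging)."""
    return [url for url, text in pages.items() if USER_NAME in text]

def eligibility_proof(url):
    """Stub for the automated eligibility proof: binds the request to the
    user's verified identity, so only affected individuals can file it."""
    return hmac.new(USER_KEY, url.encode(), hashlib.sha256).hexdigest()

def file_request(pages):
    """Find personal links and attach a proof to each removal request."""
    return [{"url": u, "proof": eligibility_proof(u)}
            for u in find_personal_links(pages)]

pages = {
    "https://example.org/a": "Article mentioning Alice Example by name.",
    "https://example.org/b": "Unrelated page.",
}
print(file_request(pages))  # one request, for /a only
```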
Comparison of shipbuilding and construction industries from the product structure standpoint
The use of building information modelling (BIM) in construction compares to the use of product lifecycle management (PLM) in manufacturing. Previous research has shown that it is possible to improve BIM with features and best practices from the PLM approach. This article provides a comparison from the standpoint of the bill of materials (BOM) and product structures. It compares the beginning-of-life phase of products in the construction and shipbuilding industries. The research then tries to understand the use, form and evolution of product structure and BOM concepts in shipbuilding, with the aim of identifying equivalent notions in construction. The findings demonstrate that similar concepts for structuring information exist in construction; however, the relationship between them is unclear. Further research is therefore required to detail the links identified by the authors and to develop an equivalent central structuring backbone as found in PLM platforms.
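For readers unfamiliar with the BOM concept the comparison rests on, here is a minimal toy product structure with a quantity roll-up; the items and figures are invented for illustration.

```python
# Toy bill-of-materials structure (illustrative only): each assembly maps
# child part -> quantity, a stand-in for the structuring backbone the
# article compares across shipbuilding and construction.
BOM = {
    "hull_block": {"steel_plate": 40, "stiffener": 120},
    "ship":       {"hull_block": 12, "engine": 1},
}

def flatten(item, qty=1, totals=None):
    """Roll the multi-level BOM up into total leaf-part quantities."""
    totals = {} if totals is None else totals
    for child, n in BOM.get(item, {}).items():
        if child in BOM:
            flatten(child, qty * n, totals)
        else:
            totals[child] = totals.get(child, 0) + qty * n
    return totals

print(flatten("ship"))  # {'steel_plate': 480, 'stiffener': 1440, 'engine': 1}
```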
Quantum-noise randomized data encryption for WDM fiber-optic networks
We demonstrate high-rate randomized data-encryption through optical fibers
using the inherent quantum-measurement noise of coherent states of light.
Specifically, we demonstrate 650 Mbps data encryption through a 10 Gbps
data-bearing, in-line amplified 200 km-long line. In our protocol, legitimate
users (who share a short secret key) communicate using an M-ary signal set while
an attacker (who does not share the secret key) is forced to contend with the
fundamental and irreducible quantum-measurement noise of coherent states.
Implementations of our protocol using both polarization-encoded signal sets as
well as polarization-insensitive phase-keyed signal sets are experimentally and
theoretically evaluated. In contrast to the performance criteria for the
cryptographic objective of key generation (quantum key-generation), one
possible set of performance criteria for the cryptographic objective of data
encryption is established and carefully considered.
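A toy numerical model of the idea (illustrative parameters, not the experiment's): a shared key selects one of M phase bases per bit, neighbouring phases on the circle carry opposite bits, and the quantum-measurement noise is modelled as Gaussian phase noise. The key-holding receiver faces a binary, pi-separated decision, while an attacker must resolve phases only ~pi/M apart.

```python
# Toy model of M-ary quantum-noise randomized encryption. All parameters
# are illustrative; coherent-state measurement noise is modelled as
# Gaussian phase noise of standard deviation sigma.
import numpy as np

rng = np.random.default_rng(0)
M, sigma, n_bits = 128, 0.15, 10_000     # bases, phase-noise std (rad), bits

bits  = rng.integers(0, 2, n_bits)
bases = rng.integers(0, M, n_bits)           # running key from the shared key
# 2M phases pi/M apart; neighbouring slots encode opposite bit values
slot  = bases + M * (bits ^ (bases % 2))
noisy = slot * np.pi / M + rng.normal(0, sigma, n_bits)

# Receiver with the key: binary decision between two antipodal phases
rel = np.mod(noisy - bases * np.pi / M, 2 * np.pi)
bob = ((rel > np.pi / 2) & (rel < 3 * np.pi / 2)).astype(int) ^ (bases % 2)

# Attacker without the key: must first pin down one of the 2M noisy slots
k_hat = np.rint(noisy * M / np.pi).astype(int) % (2 * M)
eve = np.where(k_hat < M, k_hat % 2, 1 - (k_hat % 2))

print("receiver bit-error rate:", np.mean(bob != bits))   # ~ 0
print("attacker bit-error rate:", np.mean(eve != bits))   # ~ 0.5
```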
Generation of eigenstates using the phase-estimation algorithm
The phase estimation algorithm is so named because it allows the estimation
of the eigenvalues associated with an operator. However, it has been proposed
that the algorithm can also be used to generate eigenstates. Here we extend
this proposal for small quantum systems, identifying the conditions under which
the phase estimation algorithm can successfully generate eigenstates. We then
propose an implementation scheme based on an ion trap quantum computer. This
scheme allows us to illustrate two simple examples, one in which the algorithm
effectively generates eigenstates, and one in which it does not.
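The mechanism can be illustrated with a small numerical sketch (not the paper's ion-trap scheme): when the eigenphases of U are exactly representable on the t-bit phase register, each register outcome heralds a single eigenstate; otherwise the collapse is only approximate.

```python
# Toy illustration: measuring the phase register of QPE projects the system
# register onto an eigenstate when the eigenphases are exactly t-bit values.
import numpy as np

t = 3; T = 2 ** t                        # phase-register size
phases = np.array([1 / 8, 5 / 8])        # eigenphases of U, exactly t-bit
c = np.array([0.6, 0.8])                 # input amplitudes over eigenstates

x = np.arange(T)
def amp(phi, y):
    """QPE register amplitude for outcome y given eigenphase phi."""
    return np.exp(2j * np.pi * x * (phi - y / T)).sum() / T

joint = np.array([[c[k] * amp(phases[k], y) for k in range(len(c))]
                  for y in range(T)])    # joint amplitude over (y, eigenstate)
p_y = (np.abs(joint) ** 2).sum(axis=1)
for y in np.flatnonzero(p_y > 1e-12):
    post = np.abs(joint[y]) ** 2 / p_y[y]
    print(f"outcome y={y}: prob {p_y[y]:.2f}, eigenstate mixture {post.round(2)}")
# Exact phases: each outcome heralds one eigenstate (mixture is 0/1), so
# measuring the register leaves the system in that eigenstate.
```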
Security by Spatial Reference: Using Relative Positioning to Authenticate Devices for Spontaneous Interaction
Spontaneous interaction is a desirable characteristic associated with mobile and ubiquitous computing. The aim is to enable users to connect their personal devices with devices encountered in their environment in order to take advantage of interaction opportunities in accordance with their situation. However, it is difficult to secure spontaneous interaction, as this requires authentication of the encountered device in the absence of any prior knowledge of that device. In this paper we present a method for establishing and securing spontaneous interactions on the basis of spatial references, which capture the spatial relationship of the involved devices. Spatial references are obtained by accurate sensing of relative device positions, presented to the user for initiation of interactions, and used in a peer authentication protocol that exploits a novel mechanism for message transfer over ultrasound to ensure the spatial authenticity of the sender.
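A back-of-the-envelope sketch of the ranging step (invented numbers, not the paper's hardware): ultrasound time-of-flight measured from an RF trigger gives the range, and two sensors a known baseline apart add a bearing.

```python
# Illustrative spatial referencing: range from ultrasound time-of-flight,
# bearing from two ranges over a known sensor baseline (cosine rule).
import math

SPEED_OF_SOUND = 343.0   # m/s in air at ~20 C
BASELINE = 0.10          # distance between the two ultrasound sensors (m)

def peer_range(tof_s):
    """Range to the peer, timing from an RF trigger to ultrasound arrival."""
    return SPEED_OF_SOUND * tof_s

d1, d2 = peer_range(0.00350), peer_range(0.00361)   # invented readings
theta = math.degrees(math.acos((d1 ** 2 + BASELINE ** 2 - d2 ** 2)
                               / (2 * d1 * BASELINE)))
print(f"peer at about {d1:.2f} m, bearing {theta:.0f} deg from the baseline")
```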
Unconditionally verifiable blind computation
Blind Quantum Computing (BQC) allows a client to have a server carry out a
quantum computation for them such that the client's input, output and
computation remain private. A desirable property for any BQC protocol is
verification, whereby the client can verify with high probability whether the
server has followed the instructions of the protocol, or if there has been some
deviation resulting in a corrupted output state. A verifiable BQC protocol can
be viewed as an interactive proof system leading to consequences for complexity
theory. The authors, together with Broadbent, previously proposed a universal
and unconditionally secure BQC scheme where the client only needs to be able to
prepare single qubits in separable states randomly chosen from a finite set and
send them to the server, who has the balance of the required quantum
computational resources. In this paper we extend that protocol with new
functionality allowing blind computational basis measurements, which we use to
construct a new verifiable BQC protocol based on a new class of resource
states. We rigorously prove that the probability of failing to detect an
incorrect output is exponentially small in a security parameter, while resource
overhead remains polynomial in this parameter. The new resource state allows
entangling gates to be performed between arbitrary pairs of logical qubits with
only constant overhead. This is a significant improvement on the original
scheme, which required that any computation first be put into a
nearest-neighbour form, incurring linear overhead in the number of
qubits. Such an improvement has important consequences for efficiency and
fault-tolerance thresholds.
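The flavour of the verification guarantee can be shown with a one-line calculation (illustrative, not the paper's exact construction): if each hidden trap independently catches a deviating server with probability at least delta, the chance of accepting a corrupted output decays exponentially in the number of traps, while the resource cost grows only polynomially.

```python
# Illustrative trap-based verification bound (assumed per-trap detection
# probability delta; not the paper's exact construction or constants).
delta = 0.25
for d in (1, 5, 10, 20, 40):
    print(f"traps={d:3d}  Pr[accept corrupted output] <= {(1 - delta) ** d:.2e}")
```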
A Practical Cryptanalysis of the Algebraic Eraser
Anshel, Anshel, Goldfeld and Lemieux introduced the Colored Burau Key
Agreement Protocol (CBKAP) as the concrete instantiation of their Algebraic
Eraser scheme. This scheme, based on techniques from permutation groups, matrix
groups and braid groups, is designed for lightweight environments such as RFID
tags and other IoT applications. It is proposed as an underlying technology for
ISO/IEC 29167-20. SecureRF, the company owning the trademark Algebraic Eraser,
has presented the scheme to the IRTF with a view towards standardisation.
We present a novel cryptanalysis of this scheme. For parameter sizes
corresponding to claimed 128-bit security, our implementation recovers the
shared key using less than 8 CPU hours and less than 64 MB of memory.
Spritz: a spongy RC4-like stream cipher and hash function
This paper reconsiders the design of the stream cipher RC4, and
proposes an improved variant, which we call "Spritz" (since the output
comes in fine drops rather than big blocks).
Our work leverages the considerable cryptanalytic work done
on the original RC4 and its proposed variants. It also uses
simulations extensively to search for biases and to guide the
selection of intermediate expressions.
We estimate that Spritz can produce output with about 24 cycles/byte
of computation. Furthermore, our statistical tests suggest that about 2^81 bytes of output are needed before one can reasonably distinguish Spritz output from random output; this is a marked improvement over RC4. [Footnote:
However, see Appendix F for references
to more recent work that suggest that our estimates of
the work required to break Spritz may be optimistic.]
In addition, we formulate Spritz as a "sponge (or sponge-like)
function" (see Bertoni et al.), which can "Absorb" new
data at any time, and from which one can "Squeeze" pseudorandom
output sequences of arbitrary length. Spritz can thus be easily
adapted for use as a cryptographic hash function, an encryption
algorithm, or a message-authentication code generator. (However, in
hash-function mode, Spritz is rather slow.)
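Since the paper specifies Spritz in a page of pseudocode, a direct transcription is easy to sketch. The Python below follows that pseudocode (hash mode shown); it has not been checked against the published test vectors and is not constant-time.

```python
# Spritz transcribed from the pseudocode in the Rivest-Schuldt paper (all
# arithmetic mod N = 256); a readable sketch, not validated against the
# paper's test vectors.
import math

N = 256

class Spritz:
    def __init__(self):
        self.i = self.j = self.k = self.z = self.a = 0
        self.w = 1
        self.S = list(range(N))

    def absorb(self, data):                  # absorb input a nibble at a time
        for b in data:
            self._absorb_nibble(b & 0x0F)    # low nibble first
            self._absorb_nibble(b >> 4)      # then high nibble

    def absorb_stop(self):                   # domain separator between inputs
        if self.a == N // 2:
            self._shuffle()
        self.a += 1

    def _absorb_nibble(self, x):
        if self.a == N // 2:
            self._shuffle()
        S = self.S
        S[self.a], S[N // 2 + x] = S[N // 2 + x], S[self.a]
        self.a += 1

    def _shuffle(self):                      # switch the state to squeeze mode
        self._whip(); self._crush(); self._whip(); self._crush(); self._whip()
        self.a = 0

    def _whip(self):
        for _ in range(2 * N):
            self._update()
        self.w = (self.w + 1) % N            # advance w to the next unit mod N
        while math.gcd(self.w, N) != 1:
            self.w = (self.w + 1) % N

    def _crush(self):                        # lossy map: sort antipodal pairs
        S = self.S
        for v in range(N // 2):
            if S[v] > S[N - 1 - v]:
                S[v], S[N - 1 - v] = S[N - 1 - v], S[v]

    def squeeze(self, r):                    # r bytes of pseudorandom output
        if self.a > 0:
            self._shuffle()
        return bytes(self._drip() for _ in range(r))

    def _drip(self):
        if self.a > 0:
            self._shuffle()
        self._update()
        return self._output()

    def _update(self):
        S = self.S
        self.i = (self.i + self.w) % N
        self.j = (self.k + S[(self.j + S[self.i]) % N]) % N
        self.k = (self.i + self.k + S[self.j]) % N
        S[self.i], S[self.j] = S[self.j], S[self.i]

    def _output(self):
        S = self.S
        self.z = S[(self.j + S[(self.i + S[(self.z + self.k) % N]) % N]) % N]
        return self.z

# Hash mode: absorb the message, a stop symbol, and the digest length r
def spritz_hash(msg, r=32):
    s = Spritz()
    s.absorb(msg); s.absorb_stop(); s.absorb(bytes([r]))
    return s.squeeze(r)

print(spritz_hash(b"ABC", 8).hex())
```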
Complexity transitions in global algorithms for sparse linear systems over finite fields
We study the computational complexity of a very basic problem, namely that of
finding solutions to a very large set of random linear equations over a finite
Galois field GF(q). Using tools from statistical mechanics we are able to
identify phase transitions in the structure of the solution space and to
connect them to changes in performance of a global algorithm, namely Gaussian
elimination. Crossing phase boundaries produces a dramatic increase in the
memory and CPU requirements of the algorithm, which in turn causes the running
time to saturate its upper bounds. We illustrate the results
on the specific problem of integer factorization, which is of central interest
for deciphering messages encrypted with the RSA cryptosystem.
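For concreteness, here is a minimal Gaussian elimination over GF(q) on a random sparse system (illustrative only; the paper's analysis concerns far larger sparse instances), with a crude count of the fill-in that drives the memory blow-up near the phase boundaries.

```python
# Minimal Gaussian elimination over GF(q), q prime, on a random sparse
# system; the nonzero count of the reduced matrix shows the fill-in.
import numpy as np

rng = np.random.default_rng(2)
q, n, density = 11, 8, 0.3
A = rng.integers(0, q, (n, n)) * (rng.random((n, n)) < density)
b = rng.integers(0, q, n)
M = np.concatenate([A, b[:, None]], axis=1) % q   # augmented matrix

row = 0
for col in range(n):
    piv = next((r for r in range(row, n) if M[r, col]), None)
    if piv is None:
        continue                                   # no pivot in this column
    M[[row, piv]] = M[[piv, row]]                  # swap pivot row up
    M[row] = (M[row] * pow(int(M[row, col]), -1, q)) % q   # scale pivot to 1
    for r in range(n):
        if r != row and M[r, col]:
            M[r] = (M[r] - M[r, col] * M[row]) % q         # eliminate column
    row += 1

print("rank:", row, "  nonzeros after reduction:", np.count_nonzero(M[:, :n]))
```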
Learn your opponent's strategy (in polynomial time)!
Agents that interact in a distributed environment might increase their utility by behaving optimally given the strategies of the other agents. To do so, agents need to learn about those with whom they share the same world. This paper examines interactions among agents from a game-theoretic perspective. In this context, learning has been assumed as a means to reach equilibrium. We analyze the complexity of this learning process. We start with a restricted two-agent model, in which agents are represented by finite automata, and one of the agents plays a fixed strategy. We show that even with these restrictions, the learning process may be exponential in time. We then suggest a criterion of simplicity that induces a class of automata that are learnable in polynomial time.
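A toy instance of the idea (not the paper's construction): if the opponent's automaton is assumed to depend only on our last move, its response table can be filled in from observed play in time polynomial in the number of actions.

```python
# Toy strategy learner under a strong simplicity assumption: the opponent's
# fixed automaton reacts only to our previous move, so its full response
# table is recoverable by probing each context once.
def opponent(last_own_move):                # hidden fixed strategy: tit-for-tat
    return last_own_move if last_own_move else "C"

table, last = {}, None
for my_move in ["C", "D", "C", "D", "C"]:   # probe each one-move context
    reply = opponent(last)
    if last is not None:
        table[last] = reply                 # observed context -> response
    last = my_move

print(table)                                # {'C': 'C', 'D': 'D'} = tit-for-tat
```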