
    Oblivion: Mitigating Privacy Leaks by Controlling the Discoverability of Online Information

    Search engines are the prevalent tools for collecting information about individuals on the Internet. Search results typically draw on a variety of sources that contain personal information, either intentionally released by the person herself or unintentionally leaked or published by third parties, often with detrimental effects on the individual's privacy. To grant individuals the ability to regain control over their disseminated personal information, the European Court of Justice recently ruled that EU citizens have a right to be forgotten, in the sense that indexing systems must offer them technical means to request removal of links from search results that point to sources violating their data protection rights. As of now, these technical means consist of a web form that requires a user to manually identify all relevant links upfront and insert them into the web form, followed by a manual evaluation by employees of the indexing system to assess whether the request is eligible and lawful. We propose Oblivion, a universal framework to support the automation of the right to be forgotten in a scalable, provable and privacy-preserving manner. First, Oblivion enables a user to automatically find and tag her disseminated personal information using natural language processing and image recognition techniques, and to file a request in a privacy-preserving manner. Second, Oblivion provides indexing systems with an automated and provable eligibility mechanism, asserting that the author of a request is indeed affected by an online resource. The automated eligibility proof ensures censorship-resistance, so that only legitimately affected individuals can request the removal of corresponding links from search results. We have conducted comprehensive evaluations, showing that Oblivion is capable of handling 278 removal requests per second and is hence suitable for large-scale deployment.
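
    As a rough illustration of the find-and-tag step, consider the minimal Python sketch below. The paper names natural language processing and image recognition as the underlying techniques; this sketch substitutes plain pattern matching, and the function name, example URLs, and data are all invented for illustration.

        import re

        def find_affected_links(pages, attributes):
            """pages maps URL -> extracted page text; attributes are the
            user's verified personal identifiers. Returns the URLs to
            include in a removal request. (Toy stand-in: the framework
            described above uses NLP and image recognition instead of
            plain substring matching.)"""
            patterns = [re.compile(re.escape(a), re.IGNORECASE) for a in attributes]
            return [url for url, text in pages.items()
                    if any(p.search(text) for p in patterns)]

        pages = {
            "https://example.org/a": "A report about Alice Example and her address...",
            "https://example.org/b": "An unrelated article.",
        }
        print(find_affected_links(pages, ["Alice Example"]))   # ['https://example.org/a']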

    List Processing in Real Time on a Serial Computer

    Key words and phrases: real-time, compacting, garbage collection, list processing, virtual memory, file or database management, storage management, storage allocation, LISP, CDR-coding, reference counting. CR categories: 3.50, 3.60, 3.73, 3.80, 4.13, 4.32, 4.33, 4.35, 4.49. This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0522. A real-time list processing system is one in which the time required by each elementary list operation (CONS, CAR, CDR, RPLACA, RPLACD, EQ, and ATOM in LISP) is bounded by a (small) constant. Classical list processing systems such as LISP do not have this property because a call to CONS may invoke the garbage collector, which requires time proportional to the number of accessible cells to finish. The space requirement of a classical LISP system with N accessible cells under equilibrium conditions is (1.5+μ)N or (1+μ)N, depending upon whether a stack is required for the garbage collector, where μ>0 is typically less than 2. A list processing system is presented which: 1) is real-time, i.e. T(CONS) is bounded by a constant independent of the number of cells in use; 2) requires space (2+2μ)N, i.e. not more than twice that of a classical system; 3) runs on a serial computer without a time-sharing clock; 4) handles directed cycles in the data structures; 5) is fast, i.e. the average time for each operation is about the same as with normal garbage collection; 6) compacts, i.e. minimizes the working set; 7) keeps the free pool in one contiguous block, so objects of nonuniform size pose no problem; 8) uses one-phase incremental collection, with no separate mark, sweep, and relocate phases; 9) requires no garbage collector stack; 10) requires no "mark bits" per se; and 11) is simple, suitable for microcoded implementation. Extensions of the system to handle a user program stack, compact list representation ("CDR-coding"), arrays of non-uniform size, and hash linking are discussed. CDR-coding is shown to reduce memory requirements for N LISP cells to ≈(1+μ)N. Our system is also compared with another approach to the real-time storage management problem, reference counting; reference counting is shown to be neither competitive with our system when speed of allocation is critical, nor compatible, in the sense that a system with both forms of garbage collection is worse than our pure one. MIT Artificial Intelligence Laboratory; Department of Defense Advanced Research Projects Agency.
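
    The core idea behind property 1, bounded-time CONS, can be shown in a toy sketch: every allocation does at most a fixed quota of copying-collector work. The Python below is an illustrative miniature, not the paper's system; all names are invented, the real collector also places a "read barrier" on CAR/CDR, and it sizes the quota against the heap so scavenging always finishes before a semispace flip is needed (the explicit safeguard loop here is a toy substitute for that sizing argument).

        K = 2                                   # scavenge quota per allocation
        HEAP = 64                               # cells per semispace

        class Cell:
            def __init__(self, car, cdr):
                self.car, self.cdr, self.forward = car, cdr, None

        tospace, fromspace, scan, roots = [], [], 0, []

        def copy(cell):
            """Evacuate a fromspace cell to tospace, leaving a forwarding
            pointer. (A real collector compares addresses; the list
            membership test is a toy stand-in.)"""
            if not isinstance(cell, Cell) or cell not in fromspace:
                return cell                     # atoms and tospace cells stay put
            if cell.forward is None:
                cell.forward = Cell(cell.car, cell.cdr)
                tospace.append(cell.forward)
            return cell.forward

        def scavenge(quota):
            """Advance the scan pointer, fixing up at most `quota` cells."""
            global scan
            while quota > 0 and scan < len(tospace):
                c = tospace[scan]
                c.car, c.cdr = copy(c.car), copy(c.cdr)
                scan += 1
                quota -= 1

        def flip():
            """Swap semispaces and re-evacuate the roots."""
            global tospace, fromspace, scan
            while scan < len(tospace):          # toy safeguard; see lead-in
                scavenge(K)
            fromspace, tospace, scan = tospace, [], 0
            for i, r in enumerate(roots):
                roots[i] = copy(r)

        def cons(car, cdr):
            if len(tospace) >= HEAP:
                flip()
            scavenge(K)                         # the bounded incremental work
            cell = Cell(copy(car), copy(cdr))
            tospace.append(cell)
            return cell

        roots.append(cons(1, cons(2, None)))
        for _ in range(10 * HEAP):              # many flips, yet O(1) work per cons
            cons(0, None)
        print(roots[0].car)                     # -> 1: the live list survived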

    Generation of eigenstates using the phase-estimation algorithm

    The phase estimation algorithm is so named because it allows the estimation of the eigenvalues associated with an operator. However, it has been proposed that the algorithm can also be used to generate eigenstates. Here we extend this proposal for small quantum systems, identifying the conditions under which the phase estimation algorithm can successfully generate eigenstates. We then propose an implementation scheme based on an ion trap quantum computer. This scheme allows us to illustrate two simple examples, one in which the algorithm effectively generates eigenstates, and one in which it does not. Comment: 5 pages, 3 figures, RevTeX4. Introduction expanded, typos corrected.
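
    The mechanism is easy to see in a small statevector simulation. The NumPy sketch below is a generic rendering of textbook phase estimation, not the paper's ion-trap scheme, and it deliberately picks an eigenphase that the 4-qubit phase register represents exactly, so measuring the phase register cleanly projects the system register onto an eigenstate.

        import numpy as np

        t = 4                                   # phase-register qubits
        N = 2 ** t
        theta = 3 / 16                          # chosen so N*theta is an integer
        # A single-qubit unitary with eigenvectors |+>, |-> and phases 0, 2*pi*theta:
        plus  = np.array([1,  1]) / np.sqrt(2)
        minus = np.array([1, -1]) / np.sqrt(2)
        U = np.outer(plus, plus) + np.exp(2j * np.pi * theta) * np.outer(minus, minus)

        psi = np.array([1, 0], dtype=complex)   # input |0> = (|+> + |->)/sqrt(2)

        # After the Hadamards and controlled-U^(2^k) gates, the joint state
        # is (1/sqrt(N)) * sum_x |x> (U^x |psi>):
        joint = np.stack([np.linalg.matrix_power(U, x) @ psi for x in range(N)]) / np.sqrt(N)

        # Inverse QFT on the phase register:
        j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
        Finv = np.exp(-2j * np.pi * j * k / N) / np.sqrt(N)
        joint = Finv @ joint

        # Measuring the phase register collapses the system register:
        probs = np.sum(np.abs(joint) ** 2, axis=1)
        x = np.random.choice(N, p=probs / probs.sum())
        eigstate = joint[x] / np.linalg.norm(joint[x])
        print(f"phase estimate {x}/{N}, system register {np.round(eigstate, 3)}")
        # Half the time: 0/16 with |+>; otherwise 3/16 with |->.

    When the eigenphase is not exactly representable, the projection is only approximate, which is one way the algorithm can fail to generate clean eigenstates.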

    Quantum-noise--randomized data-encryption for WDM fiber-optic networks

    We demonstrate high-rate randomized data encryption through optical fibers using the inherent quantum-measurement noise of coherent states of light. Specifically, we demonstrate 650 Mbps data encryption through a 10 Gbps data-bearing, in-line amplified 200 km-long line. In our protocol, legitimate users (who share a short secret key) communicate using an M-ary signal set, while an attacker (who does not share the secret key) is forced to contend with the fundamental and irreducible quantum-measurement noise of coherent states. Implementations of our protocol using both polarization-encoded signal sets and polarization-insensitive phase-keyed signal sets are experimentally and theoretically evaluated. In contrast to the performance criteria for the cryptographic objective of key generation (quantum key generation), one possible set of performance criteria for the cryptographic objective of data encryption is established and carefully considered. Comment: Version 2: Some errors have been corrected and arguments refined. To appear in Physical Review A. Version 3: Minor corrections to version 2.
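
    A toy numerical model conveys why the shared key matters. The sketch below is not the paper's experiment: it replaces the quantum-measurement noise of coherent states with classical Gaussian phase noise of invented magnitude, and uses a phase-keyed M-ary signal set loosely modeled on the polarization-insensitive variant; all parameter values are assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        M, sigma, n = 256, 0.05, 100_000
        # M bases -> 2M possible phase states; sigma (radians) is a toy
        # stand-in for the irreducible measurement noise.

        bits  = rng.integers(0, 2, n)
        bases = rng.integers(0, M, n)           # both parties derive these from the key
        phase = bases * np.pi / M + bits * np.pi        # antipodal keying per basis
        noisy = (phase + rng.normal(0, sigma, n)) % (2 * np.pi)

        # Legitimate receiver: knows the basis, so faces a binary antipodal decision.
        rel = (noisy - bases * np.pi / M) % (2 * np.pi)
        bob = ((rel > np.pi / 2) & (rel < 3 * np.pi / 2)).astype(int)

        # Attacker: must resolve which of the 2M closely spaced phases was sent.
        eve  = np.rint(noisy / (np.pi / M)).astype(int) % (2 * M)
        sent = bases + bits * M

        print("legitimate bit-error rate:", np.mean(bob != bits))
        print("attacker state-error rate:", np.mean(eve != sent))
        # Neighboring states sit pi/M ~ 0.012 rad apart, far below sigma,
        # so the attacker errs constantly while the legitimate user does not.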

    Public Evidence from Secret Ballots

    Elections seem simple: aren't they just counting? But they involve a unique, challenging combination of security and privacy requirements. The stakes are high; the context is adversarial; the electorate needs to be convinced that the results are correct; and the secrecy of the ballot must be ensured. They also have practical constraints: time is of the essence, and voting systems need to be affordable, maintainable, and usable by voters, election officials, and poll workers. It is thus not surprising that voting is a rich research area spanning theory, applied cryptography, practical systems analysis, usable security, and statistics. Election integrity involves two key concepts: convincing evidence that outcomes are correct, and privacy, which amounts to convincing assurance that there is no evidence about how any given person voted. These are obviously in tension. We examine how current systems walk this tightrope. Comment: To appear in E-Vote-ID '17.

    Quantum Physics and Computers

    Recent theoretical results confirm that quantum theory provides the possibility of new ways of performing efficient calculations. The most striking example is the factoring problem. It has recently been shown that computers exploiting quantum features could factor large composite integers, a task believed to be out of reach of classical computers as soon as the number of digits in the number to factor exceeds a certain limit. The additional power of quantum computers comes from the possibility of employing a superposition of states, of following many distinct computation paths, and of producing a final output that depends on the interference of all of them. This "quantum parallelism" outstrips by far any parallelism that can be thought of in classical computation and is responsible for the "exponential" speed-up of computation. This is a non-technical (or at least not too technical) introduction to the field of quantum computation. It does not cover very recent topics such as error correction. Comment: 27 pages, LaTeX, 8 PostScript figures embedded. A bug in one of the PostScript files has been fixed. Reprints available from the author. The files are also available from http://eve.physics.ox.ac.uk/Articles/QC.Articles.htm

    Probabilistic Algorithmic Knowledge

    The framework of algorithmic knowledge assumes that agents use deterministic knowledge algorithms to compute the facts they explicitly know. We extend the framework to allow for randomized knowledge algorithms. We then characterize the information provided by a randomized knowledge algorithm when its answers have some probability of being incorrect. We formalize this information in terms of evidence: a randomized knowledge algorithm returning "Yes" to a query about a fact \phi provides evidence for \phi being true. Finally, we discuss the extent to which this evidence can be used as a basis for decisions. Comment: 26 pages. A preliminary version appeared in Proc. 9th Conference on Theoretical Aspects of Rationality and Knowledge (TARK'03).
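
    A small numeric sketch of the evidence idea follows. The exact definitions in the paper may differ; the likelihood-based weight below is one standard formalization, and the numbers are invented.

        def evidence_weight(p_yes_if_phi, p_yes_if_not_phi):
            """Weight that a 'Yes' answer lends to phi, built from the two
            likelihoods of the randomized algorithm answering 'Yes'."""
            return p_yes_if_phi / (p_yes_if_phi + p_yes_if_not_phi)

        def posterior(prior, p_yes_if_phi, p_yes_if_not_phi):
            """Bayesian update on phi after observing a 'Yes' answer."""
            num = prior * p_yes_if_phi
            return num / (num + (1 - prior) * p_yes_if_not_phi)

        # An algorithm that answers 'Yes' 99% of the time when phi holds
        # but also 10% of the time when it does not:
        print(evidence_weight(0.99, 0.10))      # ~0.908: 'Yes' favors phi...
        print(posterior(0.50, 0.99, 0.10))      # ...and matches the posterior
                                                # reached from an even prior.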

    Complexity transitions in global algorithms for sparse linear systems over finite fields

    We study the computational complexity of a very basic problem, namely that of finding solutions to a very large set of random linear equations over a finite Galois field of order q. Using tools from statistical mechanics, we are able to identify phase transitions in the structure of the solution space and to connect them to changes in the performance of a global algorithm, namely Gaussian elimination. Crossing phase boundaries produces a dramatic increase in the memory and CPU requirements of the algorithm. In turn, this causes the running time to saturate its upper bounds. We illustrate the results on the specific problem of integer factorization, which is of central interest for deciphering messages encrypted with the RSA cryptosystem. Comment: 23 pages, 8 figures
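
    To make the connection to Gaussian elimination concrete, here is a minimal Python sketch over GF(2), the q = 2 case (the paper treats general q; the sizes and densities below are invented). It shows the fill-in that drives the blow-up: rows of a sparse system become dense as elimination proceeds.

        import numpy as np

        rng = np.random.default_rng(0)

        def echelon_gf2(A):
            """Forward Gaussian elimination over GF(2) (row addition = XOR)."""
            A = A.copy()
            rows, cols = A.shape
            r = 0
            for c in range(cols):
                pivot = next((i for i in range(r, rows) if A[i, c]), None)
                if pivot is None:
                    continue
                A[[r, pivot]] = A[[pivot, r]]   # swap the pivot row into place
                for i in range(r + 1, rows):
                    if A[i, c]:
                        A[i] ^= A[r]            # eliminate below the pivot, mod 2
                r += 1
            return A

        n = 200
        A = (rng.random((n, n)) < 3 / n).astype(np.uint8)   # ~3 nonzeros per row
        R = echelon_gf2(A)
        print("fraction of nonzeros before:", A.mean())
        print("fraction of nonzeros after :", R.mean())
        # Elimination destroys sparsity; this fill-in is what saturates
        # memory and CPU bounds in the regimes the paper studies.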

    The invertibility of the XOR of rotations of a binary word

    We prove the following result regarding operations on a binary word whose length is a power of two: computing the exclusive-or of a number of rotated versions of the word is an invertible (one-to-one) operation if and only if the number of versions combined is odd. (This result is not new; there is at least one earlier proof, due to Thomsen [Cryptographic hash functions, PhD thesis, Technical University of Denmark, 28 November 2008]. Our proof may be new.)
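
    The statement is easy to check exhaustively for a small power-of-two word length. The Python sketch below verifies it for n = 8 by brute force over all 8-bit words and all sets of distinct rotation amounts.

        from itertools import combinations

        def rotl(x, r, n):
            """Rotate the n-bit word x left by r positions."""
            r %= n
            return ((x << r) | (x >> (n - r))) & ((1 << n) - 1)

        def xor_rotations(x, offsets, n):
            out = 0
            for r in offsets:
                out ^= rotl(x, r, n)
            return out

        n = 8                                   # word length, a power of two
        for k in range(1, n + 1):
            for offsets in combinations(range(n), k):
                image = {xor_rotations(x, offsets, n) for x in range(1 << n)}
                invertible = len(image) == (1 << n)
                assert invertible == (k % 2 == 1)
        print("n = 8: invertible exactly when the number of rotations is odd")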