    Proof-Carrying Data (PCD)

    Thesis (M.Eng.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Includes bibliographical references (p. 87-95). By Alessandro Chiesa.
    The security of systems can often be expressed as ensuring that some property is maintained at every step of a distributed computation conducted by untrusted parties. Special cases include integrity of programs running on untrusted platforms, various forms of confidentiality and side-channel resilience, and domain-specific invariants. We propose a new approach, proof-carrying data (PCD), which sidesteps the threat of faults and leakage by reasoning about properties of a computation's output data, regardless of the process that produced it. In PCD, the system designer prescribes the desired properties of a computation's outputs. Corresponding proofs are attached to every message flowing through the system, and are mutually verified by the system's components. Each such proof attests that the message's data and all of its history comply with the prescribed properties. We construct a general protocol compiler that generates, propagates, and verifies such proofs of compliance, while preserving the dynamics and efficiency of the original computation. Our main technical tool is the cryptographic construction of short non-interactive arguments (computationally-sound proofs) for statements whose truth depends on "hearsay evidence": previous arguments about other statements. To this end, we attain a particularly strong proof-of-knowledge property. We realize the above, under standard cryptographic assumptions, in a model where the prover has black-box access to some simple functionality - essentially, a signature card.
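    The PCD discipline described above is concrete enough to sketch in code. The following is a minimal toy sketch, assuming a shared-key MAC as a stand-in for both the succinct non-interactive arguments and the signature-card functionality; it illustrates only the message flow (verify incoming proofs, compute, check compliance, attach a fresh proof), not the cryptography.

```python
# Toy sketch of the proof-carrying data (PCD) message discipline.
# Real PCD uses succinct non-interactive arguments; here an HMAC issued
# under a trusted key (standing in for the "signature card" functionality)
# plays the role of a proof, so only the data flow is meant to be faithful.
import hmac, hashlib, json
from dataclasses import dataclass

KEY = b"signature-card"  # hypothetical trusted functionality's secret

@dataclass
class Message:
    data: object
    proof: bytes  # attests that data *and all of its history* are compliant

def _mac(data) -> bytes:
    return hmac.new(KEY, json.dumps(data).encode(), hashlib.sha256).digest()

def verify(m: Message) -> bool:
    return hmac.compare_digest(m.proof, _mac(m.data))

def compliance(out, inputs) -> bool:
    # Designer-prescribed predicate; toy invariant: values never decrease.
    return all(out >= x for x in inputs)

def pcd_node(inputs: list, f) -> Message:
    # Mutually verify hearsay evidence before trusting any input message.
    assert all(verify(m) for m in inputs)
    out = f([m.data for m in inputs])
    # Only compliant outputs ever receive a fresh proof.
    assert compliance(out, [m.data for m in inputs])
    return Message(out, _mac(out))

src = Message(0, _mac(0))                     # honest source
m1 = pcd_node([src], lambda xs: max(xs) + 1)  # each hop re-proves compliance
m2 = pcd_node([m1], lambda xs: max(xs) + 1)
assert verify(m2) and m2.data == 2
```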

    Algorithms for Permutation Groups and Cayley Networks

    110 pages.
    Bases, subgroup towers and strong generating sets (SGSs) have played a key role in the development of algorithms for permutation groups. We analyze the computational complexity of several problems involving bases and SGSs, and we use subgroup towers and SGSs to construct dense networks with practical routing schemes. Given generators for G ≤ Sym(n), we prove that the problem of computing a minimum base for G is NP-hard; in fact, it is NP-hard even for cyclic groups and elementary abelian groups. However, for abelian groups with orbits of size less than 8, we present a polynomial-time algorithm for computing minimum bases. For arbitrary permutation groups, we investigate a greedy algorithm for approximating minimum bases, and we prove that if G ≤ Sym(n) has a minimum base of size k, then the greedy algorithm produces a base of size O(k log log n).
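    The greedy heuristic is simple enough to sketch. The following uses sympy's permutation-group routines; the function and the example group are illustrative choices, not code from the thesis. At each step it fixes a point lying in a largest orbit of the current pointwise stabilizer, which is the strategy behind the O(k log log n) guarantee.

```python
# Greedy base selection for a permutation group: repeatedly fix a point in
# a largest orbit of the current pointwise stabilizer until it is trivial.
# Practical only for small degrees; sympy recomputes stabilizers naively.
from sympy.combinatorics import Permutation, PermutationGroup

def greedy_base(G: PermutationGroup) -> list:
    base, H = [], G
    while not H.is_trivial:
        beta = min(max(H.orbits(), key=len))  # a point in a largest orbit
        base.append(beta)
        H = H.stabilizer(beta)                # pointwise stabilizer of beta
    return base

# Example: the dihedral group on 8 points (an 8-cycle plus a reflection).
G = PermutationGroup(Permutation(0, 1, 2, 3, 4, 5, 6, 7),
                     Permutation(0, 7)(1, 6)(2, 5)(3, 4))
print(greedy_base(G))  # two points pin down every element of the group
```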

    The Computational Power of Non-interacting Particles

    Shortened abstract: In this thesis, I study two restricted models of quantum computing related to free identical particles. Free fermions correspond to a set of two-qubit gates known as matchgates. Matchgates are classically simulable when acting on nearest neighbors on a path, but universal for quantum computing when acting on distant qubits or when SWAP gates are available. I generalize these results in two ways. First, I show that SWAP is only one in a large family of gates that uplift matchgates to quantum universality. In fact, I show that the set of all matchgates plus any non-matchgate parity-preserving two-qubit gate is universal, and I interpret this fact in terms of local invariants of two-qubit gates. Second, I investigate the power of matchgates in arbitrary connectivity graphs, showing that they are universal on any connected graph other than a path or a cycle, and classically simulable on a cycle. I also prove the same dichotomy for the XY interaction. Free bosons give rise to a model known as BosonSampling. BosonSampling consists of (i) preparing a Fock state of n photons, (ii) interfering these photons in an m-mode linear interferometer, and (iii) measuring the output in the Fock basis. Sampling approximately from the resulting distribution should be classically hard, under reasonable complexity assumptions. Here I show that exact BosonSampling remains hard even if the linear-optical circuit has constant depth. I also report several experiments in which three-photon interference was observed in integrated interferometers of various sizes, providing some of the first implementations of BosonSampling in this regime. The experiments also focus on the bosonic bunching behavior and on validation of BosonSampling devices. This thesis contains descriptions of the numerical analyses done on the experimental data, omitted from the corresponding publications.
    Comment: PhD thesis, defended at Universidade Federal Fluminense in March 2014. Final version, 208 pages. New results in Chapter 5 correspond to arXiv:1106.1863, arXiv:1207.2126, and arXiv:1308.1463. New results in Chapter 6 correspond to arXiv:1212.2783, arXiv:1305.3188, arXiv:1311.1622, and arXiv:1412.678.
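    To make step (iii) concrete, here is a small worked illustration (my construction, not the thesis's code): for a collision-free outcome S, with one photon in each of the first n input modes of an m-mode interferometer U, the outcome probability is |Per(U_{S,[n]})|^2. The permanent is computed below with Ryser's formula; its #P-hardness is what underlies the conjectured classical hardness of sampling.

```python
# BosonSampling probabilities for collision-free outcomes via permanents.
from itertools import combinations
import numpy as np

def permanent(A: np.ndarray) -> complex:
    """Ryser's O(2^n * n) inclusion-exclusion formula for the permanent."""
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            total += (-1) ** r * np.prod(A[:, list(cols)].sum(axis=1))
    return (-1) ** n * total

def outcome_probability(U: np.ndarray, out_modes: tuple, n: int) -> float:
    # Submatrix: rows = occupied output modes, columns = first n input modes.
    sub = U[np.ix_(list(out_modes), list(range(n)))]
    return abs(permanent(sub)) ** 2

# Toy check: a random 4-mode interferometer (unitary via QR) with n = 2.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
probs = {S: outcome_probability(Q, S, 2) for S in combinations(range(4), 2)}
print(sum(probs.values()))  # < 1; the rest is weight on collision outcomes
```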

    Computational learning theory : new models and algorithms

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1989. Includes bibliographical references (leaves 116-120). By Robert Hal Sloan.

    Radical Artificial Intelligence: A Postmodern Approach

    The dynamic response of end-clamped monolithic beams and sandwich beams has been measured by loading the beams at mid-span using metal foam projectiles. The AISI 304 stainless-steel sandwich beams comprise two identical face sheets and either prismatic Y-frame or corrugated cores. The resistance to shock loading is quantified by the permanent transverse deflection at mid-span of the beams as a function of projectile momentum. The prismatic cores are aligned either longitudinally along the beam length or transversely. It is found that the sandwich beams with a longitudinal core orientation have a higher shock resistance than the monolithic beams of equal mass. In contrast, the performance of the sandwich beams with a transverse core orientation is very similar to that of the monolithic beams. Three-dimensional finite element (FE) simulations are in good agreement with the measured responses. The FE calculations indicate that strain concentrations in the sandwich beams occur at joints within the cores and between the core and face sheets; the level of maximum strain is similar for the Y-frame and corrugated-core beams for a given value of projectile momentum. The experimental and FE results taken together reveal that Y-frame and corrugated-core sandwich beams of equal mass have similar dynamic performance in terms of rear-face deflection, degree of core compression, and level of strain within the beam.

    Combined optimization algorithms applied to pattern classification

    Accurate classification by minimizing the error on test samples is the main goal in pattern classification. Combinatorial optimization is a well-known method for solving minimization problems; however, only a few examples of classifiers are described in the literature where combinatorial optimization is used in pattern classification. Recently, there has been a growing interest in combining classifiers and improving the consensus of results for greater accuracy. In the light of the "No Free Lunch Theorems", we analyse the combination of simulated annealing, a powerful combinatorial optimization method that produces high-quality results, with the classical perceptron algorithm. This combination is called the LSA machine. Our analysis aims at finding paradigms for problem-dependent parameter settings that ensure high classification rates. Our computational experiments on a large number of benchmark problems lead to results that either outperform or are at least competitive with results published in the literature. Apart from parameter settings, our analysis focuses on a difficult problem in computation theory, namely the network complexity problem. The depth vs. size problem of neural networks is one of the hardest problems in theoretical computing, with very little progress over the past decades. In order to investigate this problem, we introduce a new recursive learning method for training hidden layers in constant-depth circuits. Our findings make contributions to a) the field of machine learning, as the proposed method is applicable to training feedforward neural networks, and b) the field of circuit complexity, by proposing an upper bound on the number of hidden units sufficient to achieve a high classification rate. One of the major findings of our research is that the size of the network can be bounded in terms of the input size of the problem, with approximately 8 + √(2^n/n) threshold gates being sufficient for a small error rate, where n := log |S_L| and S_L is the training set.
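    A minimal sketch of the LSA-machine idea may help, under illustrative assumptions: the temperature schedule, step size, and neighborhood below are my own choices, since the thesis's point is precisely that such parameters are problem-dependent. Simulated annealing searches perceptron weight space, with the count of training misclassifications as the energy.

```python
# Simulated annealing over perceptron weights (LSA-machine style sketch).
import numpy as np

def errors(w, X, y):
    """Energy: number of misclassified samples (bias stored in w[-1])."""
    return int(np.sum(np.sign(X @ w[:-1] + w[-1]) != y))

def lsa_perceptron(X, y, T0=2.0, cooling=0.995, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1] + 1)
    best, best_err, T = w.copy(), errors(w, X, y), T0
    for _ in range(steps):
        cand = w + rng.normal(scale=0.5, size=w.shape)  # random neighbor
        d = errors(cand, X, y) - errors(w, X, y)
        # Metropolis rule: accept improvements, and occasionally accept
        # uphill moves so the search can escape poor linear separators.
        if d <= 0 or rng.random() < np.exp(-d / T):
            w = cand
            if errors(w, X, y) < best_err:
                best, best_err = w.copy(), errors(w, X, y)
        T *= cooling                                    # geometric schedule
    return best, best_err

# Toy usage: two Gaussian blobs with labels in {-1, +1}.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
w, e = lsa_perceptron(X, y)
print(f"training errors: {e}/100")
```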

    Space station automation of common module power management and distribution, volume 2

    The new Space Station Module Power Management and Distribution System (SSM/PMAD) testbed automation system is described. The subjects discussed include the testbed 120 volt dc star bus configuration and operation, the SSM/PMAD automation system architecture, the English representation of the fault recovery and management expert system (FRAMES) rules, the SSM/PMAD user interface, and the future direction of SSM/PMAD. Several appendices are presented, including the following: the SSM/PMAD interface user manual version 1.0, the SSM/PMAD lowest level processor (LLP) reference, the SSM/PMAD technical reference version 1.0, the SSM/PMAD LLP visual control logic representations (VCLRs), the SSM/PMAD LLP/FRAMES interface control document (ICD), and the SSM/PMAD LLP switchgear interface controller (SIC) ICD.
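    Since the appendices specify the FRAMES rules in an English representation, the following hypothetical sketch shows one way such rules could pair their English text with an executable condition and action. All names, thresholds, and commands here are invented for illustration; the actual rules are defined in the report's appendices.

```python
# Hypothetical FRAMES-style rules: English text paired with executable logic.
from dataclasses import dataclass

@dataclass
class Telemetry:
    bus_voltage: float   # volts; the star bus is nominally 120 V dc
    load_current: float  # amps
    breaker_closed: bool

RULES = [  # (English representation, condition, command for the LLPs)
    ("IF load current exceeds its trip threshold THEN open the switchgear",
     lambda t: t.breaker_closed and t.load_current > 30.0,
     "OPEN_SWITCHGEAR"),
    ("IF bus voltage sags below 110 V THEN shed the lowest-priority load",
     lambda t: t.bus_voltage < 110.0,
     "SHED_LOAD"),
]

def frames_step(t: Telemetry) -> list:
    """Fire every applicable rule; return the resulting commands."""
    return [cmd for _text, cond, cmd in RULES if cond(t)]

print(frames_step(Telemetry(bus_voltage=108.0, load_current=35.0,
                            breaker_closed=True)))
# -> ['OPEN_SWITCHGEAR', 'SHED_LOAD']
```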