
    Limits to Non-Malleability

    There have been many successes in constructing explicit non-malleable codes for various classes of tampering functions in recent years, and strong existential results are also known. In this work we ask the following question: when can we rule out the existence of a non-malleable code for a tampering class $\mathcal{F}$? First, we start with some classes where positive results are well known, and show that when these classes are extended in a natural way, non-malleable codes are no longer possible. Specifically, we show that no non-malleable codes exist for any of the following tampering classes:
    - functions that change $d/2$ symbols, where $d$ is the distance of the code;
    - functions where each input symbol affects only a single output symbol;
    - functions where each of the $n$ output bits is a function of $n - \log n$ input bits.
    Furthermore, we rule out constructions of non-malleable codes for certain classes $\mathcal{F}$ via reductions to the assumption that a distributional problem is hard for $\mathcal{F}$, whenever the reduction makes black-box use of the tampering functions in the proof. In particular, this yields concrete obstacles to the construction of efficient codes for $\mathsf{NC}$, even assuming average-case variants of $\mathsf{P} \neq \mathsf{NC}$.
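    As a rough illustration of the model (not the paper's formalism; encode, decode, and the helper names below are hypothetical), the sketch shows the standard tampering experiment and one simple member of the ruled-out class in which each input symbol affects only a single output symbol.

```python
# A minimal sketch of the tampering experiment behind non-malleable codes.
# encode/decode stand in for an arbitrary coding scheme; BOTTOM is the
# conventional "tampering detected" output of the decoder.

BOTTOM = None

def tamper_experiment(message, encode, decode, tamper):
    """Tamper_f(m): encode m, apply the tampering function f, decode the result.
    Non-malleability asks that the output be m itself, BOTTOM, or a value whose
    distribution does not depend on m."""
    codeword = encode(message)
    tampered = tamper(codeword)
    return decode(tampered)

# A simple member of the class "each input symbol affects only a single output
# symbol": permute the symbols and apply an independent map to each position.
def local_tamper(codeword, pi, symbol_maps):
    return [symbol_maps[i](codeword[pi[i]]) for i in range(len(codeword))]
```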

    Approximate resilience, monotonicity, and the complexity of agnostic learning

    A function $f$ is $d$-resilient if all its Fourier coefficients of degree at most $d$ are zero, i.e., $f$ is uncorrelated with all low-degree parities. We study the notion of approximate resilience of Boolean functions, where we say that $f$ is $\alpha$-approximately $d$-resilient if $f$ is $\alpha$-close to a $[-1,1]$-valued $d$-resilient function in $\ell_1$ distance. We show that approximate resilience essentially characterizes the complexity of agnostic learning of a concept class $C$ over the uniform distribution. Roughly speaking, if all functions in a class $C$ are far from being $d$-resilient then $C$ can be learned agnostically in time $n^{O(d)}$ and, conversely, if $C$ contains a function close to being $d$-resilient then agnostic learning of $C$ in the statistical query (SQ) framework of Kearns has complexity of at least $n^{\Omega(d)}$. This characterization is based on the duality between $\ell_1$ approximation by degree-$d$ polynomials and approximate $d$-resilience that we establish. In particular, it implies that $\ell_1$ approximation by low-degree polynomials, known to be sufficient for agnostic learning over product distributions, is in fact necessary. Focusing on monotone Boolean functions, we exhibit the existence of near-optimal $\alpha$-approximately $\widetilde{\Omega}(\alpha\sqrt{n})$-resilient monotone functions for all $\alpha > 0$. Prior to our work, it was conceivable even that every monotone function is $\Omega(1)$-far from any $1$-resilient function. Furthermore, we construct simple, explicit monotone functions based on $\mathsf{Tribes}$ and $\mathsf{CycleRun}$ that are close to highly resilient functions. Our constructions are based on a fairly general resilience analysis and amplification. These structural results, together with the characterization, imply nearly optimal lower bounds for agnostic learning of monotone juntas.
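    As a concrete, brute-force illustration of the definition above (illustrative only and feasible just for tiny $n$; all names are ours), the sketch computes Fourier coefficients over the uniform distribution on $\{-1,1\}^n$ and checks $d$-resilience. For example, the parity of all $n$ bits is $(n-1)$-resilient, while 3-bit majority is not even $1$-resilient.

```python
# Brute-force check of d-resilience for a Boolean function f: {-1,1}^n -> {-1,1}.
from itertools import combinations, product

def fourier_coefficient(f, n, S):
    """hat{f}(S) = E_x[ f(x) * prod_{i in S} x_i ] over the uniform distribution."""
    total = 0.0
    for x in product([-1, 1], repeat=n):
        chi = 1
        for i in S:
            chi *= x[i]
        total += f(x) * chi
    return total / 2 ** n

def is_d_resilient(f, n, d, tol=1e-9):
    """f is d-resilient iff every Fourier coefficient of degree at most d vanishes,
    i.e. f is uncorrelated with every parity on at most d variables."""
    return all(
        abs(fourier_coefficient(f, n, S)) <= tol
        for k in range(d + 1)
        for S in combinations(range(n), k)
    )

parity = lambda x: 1 if sum(1 for b in x if b == -1) % 2 == 0 else -1
maj3 = lambda x: 1 if sum(x) > 0 else -1
print(is_d_resilient(parity, 4, 3))  # True: 4-bit parity is 3-resilient
print(is_d_resilient(maj3, 3, 1))    # False: majority correlates with each bit
```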

    Non-Malleable Codes for Small-Depth Circuits

    We construct efficient, unconditional non-malleable codes that are secure against tampering functions computed by small-depth circuits. For constant-depth circuits of polynomial size (i.e., $\mathsf{AC}^0$ tampering functions), our codes have codeword length $n = k^{1+o(1)}$ for a $k$-bit message. This is an exponential improvement over the previous best construction due to Chattopadhyay and Li (STOC 2017), which had codeword length $2^{O(\sqrt{k})}$. Our construction remains efficient for circuit depths as large as $\Theta(\log(n)/\log\log(n))$ (indeed, our codeword length remains $n \leq k^{1+\epsilon}$), and extending our result beyond this would require separating $\mathsf{P}$ from $\mathsf{NC}^1$. We obtain our codes via a new efficient non-malleable reduction from small-depth tampering to split-state tampering. A novel aspect of our work is the incorporation of techniques from unconditional derandomization into the framework of non-malleable reductions. In particular, a key ingredient in our analysis is a recent pseudorandom switching lemma of Trevisan and Xue (CCC 2013), a derandomization of the influential switching lemma from circuit complexity; the randomness-efficiency of this switching lemma translates into the rate-efficiency of our codes via our non-malleable reduction.
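    For context on the target of the reduction, the following sketch (an informal illustration with hypothetical encode/decode names, not the paper's construction) spells out the split-state tampering model: the codeword consists of two shares that the adversary must tamper independently.

```python
# Split-state tampering: the codeword is stored as two parts L and R, and the
# adversary chooses functions g and h that each see only one part.
# encode/decode stand in for any split-state non-malleable code.

def split_state_tamper_experiment(message, encode, decode, g, h):
    L, R = encode(message)        # two independently stored shares
    return decode(g(L), h(R))     # g acts only on L, h acts only on R

# By contrast, an AC^0 tampering function may read the entire codeword at once;
# the non-malleable reduction maps such small-depth tampering back to pairs
# (g, h) acting independently on the two states.
```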

    A Tamper and Leakage Resilient von Neumann Architecture

    We present a universal framework for tamper- and leakage-resilient computation on a von Neumann Random Access Architecture (RAM for short). The RAM has one CPU that accesses a storage medium, which we call the disk. The disk is subject to leakage and tampering, and so is the bus connecting the CPU to the disk. We assume that the CPU is leakage- and tamper-free. For a fixed value of the security parameter, the CPU has constant size. Therefore the code of the program to be executed is stored on the disk, i.e., we consider a von Neumann architecture; the most prominent consequence of this is that the code of the executed program is itself subject to tampering. We construct a compiler for this architecture which transforms any keyed primitive into a RAM program in which the key is encoded and stored on the disk along with the program that evaluates the primitive on that key. Our compiler only assumes the existence of a so-called continuous non-malleable code, and it only needs black-box access to such a code; no further (cryptographic) assumptions are needed. This in particular means that, given an information-theoretically secure code, the overall construction is information-theoretically secure. Although the CPU is required to be tamper- and leakage-proof, its design is independent of the actual primitive being computed and its internal storage is non-persistent, i.e., all secret registers are reset between invocations. Hence, our result can be interpreted as reducing the problem of shielding arbitrarily complex computations to protecting a single, simple, yet universal component.
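    The sketch below is a loose rendering of this execution model (cnmc_encode, cnmc_decode, and the cpu_step interface are assumed names, not the actual compiler): every word on the disk is stored under a continuous non-malleable code, the CPU decodes before use, halts when tampering is detected, and keeps no persistent secret registers between invocations.

```python
# A highly simplified sketch of one invocation of a compiled RAM program.
# cnmc_encode/cnmc_decode stand in for a continuous non-malleable code, where
# cnmc_decode returns None when tampering is detected.

def run_invocation(disk, cpu_step, cnmc_encode, cnmc_decode, num_steps):
    registers = {}                                  # non-persistent CPU state
    for _ in range(num_steps):
        addr = cpu_step.next_address(registers)     # which disk word to fetch
        word = cnmc_decode(disk[addr])              # disk and bus may be tampered
        if word is None:                            # tampering detected
            return "self-destruct"                  # stop and erase all state
        registers, out_word, out_addr = cpu_step.execute(registers, word)
        if out_addr is not None:
            disk[out_addr] = cnmc_encode(out_word)  # write back re-encoded
    return registers.get("output")                  # registers are then reset
```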

    Enhanced Written Instructions for Creating Publication-Quality Single-Case Design Graphs in Microsoft Excel

    Graphs are visual descriptors of functional relations between behavior and the environment, and behavioral researchers and clinicians make informed decisions about current and future procedures by evaluating such functional relations. Thus, graphically depicting and visually inspecting single-subject data is foundational to the science of behavior. Microsoft Excel is the most prevalent program used by behavior analysts to create single-case design graphs; however, the Excel literature consists mostly of brief tutorials and descriptions of its utility, neither of which evaluates methods of training Excel’s comprehensive capabilities. Self-directed training with enhanced written instructions (EWI) is a viable option for training graphing skills because it reduces the resources required for in-person training. However, published evaluations of EWI as a method for training graphing are limited in number, rely on permanent-product measures, and exclude assessments of maintenance and generalization. We used a multiple baseline across participants design to evaluate the effects of EWI in training seven undergraduate students to create publication-quality single-subject design graphs in Excel. We measured graphing accuracy and latency to graph completion using real-time, live-Excel, and permanent-product measures. We also assessed response maintenance and generalization. EWI produced immediate, robust effects, and we observed generalization and maintenance across all participants. We discuss these results and their implications for staff training, Excel’s utility, and data measurement.

    On the Impossibility of Sender-Deniable Public Key Encryption

    The primitive of deniable encryption was first introduced by Canetti et al. (CRYPTO 1997). Deniable encryption is a regular public key encryption scheme with the added feature that after running the protocol honestly and transmitting a message $m$, both Sender and Receiver may produce random coins showing that the transmitted ciphertext was an encryption of any message $m'$ in the message space. Deniable encryption is a key tool for constructing incoercible protocols, since it allows a party to send one message and later provide apparent evidence to a coercer that a different message was sent. In addition, deniable encryption may be used to obtain adaptively secure multiparty computation (MPC) protocols and is secure under selective-opening attacks. Different flavors, such as sender-deniable and receiver-deniable encryption, where only the Sender or the Receiver can produce fake random coins, have been considered. Recently, several open questions regarding the feasibility of deniable encryption have been resolved (cf. O'Neill et al., CRYPTO 2011; Bendlin et al., ASIACRYPT 2011). A fundamental remaining open question is whether it is possible to construct sender-deniable encryption schemes with super-polynomial security, where an adversary has negligible advantage in distinguishing real and fake openings. The primitive of simulatable public key encryption (PKE), introduced by Damgård and Nielsen (CRYPTO 2000), is a public key encryption scheme with additional properties that allow oblivious sampling of public keys and ciphertexts. It is one of the low-level primitives used to construct adaptively secure MPC protocols and was used by O'Neill et al. in their construction of bi-deniable encryption in the multi-distributional model (CRYPTO 2011). Moreover, the original construction of sender-deniable encryption with polynomial security given by Canetti et al. can be instantiated with simulatable PKE. Thus, a natural question to ask is whether it is possible to construct sender-deniable encryption with super-polynomial security from simulatable PKE. In this work, we investigate the possibility of constructing sender-deniable public key encryption from the primitive of simulatable PKE in a black-box manner. We show that, in fact, there is no black-box construction of sender-deniable encryption with super-polynomial security from simulatable PKE. This indicates that the original construction of sender-deniable public key encryption given by Canetti et al. is in some sense optimal, since improving on it will require the use of non-black-box techniques, stronger underlying assumptions, or interaction.
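    To make the target notion concrete, the sketch below (an informal illustration; enc and fake_coins are hypothetical placeholders for a candidate scheme, not a construction from this work) spells out the sender-deniability game: a distinguisher receives either an honest opening of the claimed message or a faked opening produced after encrypting a different message.

```python
import os

def deniability_game(enc, fake_coins, pk, m_real, m_claimed, bit):
    """Sender-deniability: the view (pk, m_claimed, coins, c) should look the
    same whether the opening is honest (bit = 0) or faked (bit = 1).
    Super-polynomial security asks that any distinguisher's advantage be negligible."""
    r = os.urandom(32)                    # sender's random coins
    if bit == 0:
        c = enc(pk, m_claimed, r)         # honestly encrypt the claimed message
        return (pk, m_claimed, r, c)
    else:
        c = enc(pk, m_real, r)            # really encrypt m_real ...
        r_fake = fake_coins(pk, m_real, r, c, m_claimed)
        return (pk, m_claimed, r_fake, c) # ... then lie about it
```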

    An Analysis of a Comprehensive and Collaborative Truancy Prevention and Diversion Program

    Education is fundamental for developing the skills required for academic and social success. When students fail to attend school regularly, adverse consequences result at the individual, school, and societal levels. Truancy, or not attending school as required by law, has been linked to academic failure, school dropout, substance use and abuse, delinquency, and problems that persist into adulthood (e.g., job problems, marital issues, adult criminality, incarceration). Past research demonstrates the need for a collaborative and comprehensive approach to combating truancy that includes monitoring attendance, mentoring, providing meaningful consequences, increasing parental and school involvement, and ongoing evaluation. The present study evaluates the effects of a truancy prevention and diversion program (TPDP) on reducing the unexcused absences accumulated by students in violation of the compulsory education law. The TPDP is recognized as an appropriate alternative to formal court involvement and has been offered to truant students and their parents for 40 years. The program is a collaborative effort among public schools, the district attorney’s office, a child protective services agency, a youth services agency, and a midwestern university. Undergraduate practicum students act as mentors for truant students by developing positive relationships, monitoring attendance, and providing incentives through a behavioral contract. The program includes a review team led by an assistant district attorney. The primary investigator analyzed group data (i.e., unexcused absences) collected over the past 10 years, along with a representative sample of individual participants' pre- and post-intervention data from the same period, using single-subject methodology. Results demonstrate the effectiveness of the TPDP in reducing truancy across participants and years.