
    A Lockdown Technique to Prevent Machine Learning on PUFs for Lightweight Authentication

    We present a lightweight PUF-based authentication approach that is practical in settings where a server authenticates a device, and for use cases where the number of authentications is limited over a device's lifetime. Our scheme uses a server-managed challenge/response pair (CRP) lockdown protocol: unlike prior approaches, an adaptive chosen-challenge adversary with machine learning capabilities cannot obtain new CRPs without the server's implicit permission. The adversary is thus faced with the problem of deriving a PUF model from a limited amount of machine learning training data. Our system-level approach allows a so-called strong PUF to be used for lightweight authentication in a manner that is heuristically secure against today's best machine learning methods, validated algorithmically against worst-case CRP exposure. We also present a degenerate instantiation using a weak PUF that is secure against computationally unrestricted adversaries, including any learning adversary, for practical device lifetimes and read-out rates. We validate our approach using silicon PUF data and demonstrate the feasibility of supporting 10, 1,000, and 1M authentications, including practical configurations that are not learnable with polynomial resources (e.g., in the number of CRPs and the attack runtime), using recent results based on the probably-approximately-correct (PAC) complexity-theoretic framework.
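    A minimal sketch of the lockdown idea in Python, assuming a toy PUF modelled as a keyed hash; the class and method names are illustrative, not the paper's implementation:

        import hashlib
        import secrets

        class ToyPUF:
            """Stand-in for a silicon strong PUF: a fixed, device-unique mapping."""
            def __init__(self):
                self._key = secrets.token_bytes(16)  # models device-unique variation

            def respond(self, challenge: bytes) -> int:
                return hashlib.sha256(self._key + challenge).digest()[0] & 1

        class Server:
            """Enrolls a fixed CRP budget, then releases each challenge at most once."""
            def __init__(self, puf: ToyPUF, budget: int):
                challenges = [secrets.token_bytes(8) for _ in range(budget)]
                self._crps = {c: puf.respond(c) for c in challenges}  # enrollment

            def authenticate(self, device: ToyPUF) -> bool:
                if not self._crps:
                    raise RuntimeError("CRP budget exhausted; device is locked down")
                c, expected = self._crps.popitem()  # a challenge is never re-issued
                return device.respond(c) == expected

        puf = ToyPUF()
        server = Server(puf, budget=10)  # e.g., 10 lifetime authentications
        print(all(server.authenticate(puf) for _ in range(10)))  # True; an 11th call raises

    Because the device only ever answers challenges the server chose to release, an adaptive adversary cannot grow its training set beyond the enrolled budget.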

    Polynomial Bounds for Learning Noisy Optical Physical Unclonable Functions and Connections to Learning With Errors

    It is shown that a class of optical physical unclonable functions (PUFs) can be learned to arbitrary precision with arbitrarily high probability, even in the presence of noise, given access to polynomially many challenge-response pairs and polynomially bounded computational power, under mild assumptions about the distributions of the noise and challenge vectors. This extends the results of Rührmair et al. (2013), who showed a subset of this class of PUFs to be learnable in polynomial time in the absence of noise, under the assumption that the optics of the PUF were either linear or had negligible nonlinear effects. We derive polynomial bounds for the required number of samples and the computational complexity of a linear regression algorithm, based on size parameters of the PUF, the distributions of the challenge and noise vectors, and the probability and accuracy of the regression algorithm, with an analysis similar to that of Bootle et al. (2018), who demonstrated a learning attack on a poorly implemented version of the Learning With Errors problem.
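    A minimal sketch of the attack setting analysed here, assuming a linearised optical PUF with additive Gaussian noise and illustrative sizes (not the paper's parameters); ordinary least squares plays the role of the regression algorithm:

        import numpy as np

        rng = np.random.default_rng(0)
        n, m = 64, 4 * 64             # challenge length; m = poly(n) samples
        w_true = rng.normal(size=n)   # hidden (linearised) PUF behaviour

        X = rng.normal(size=(m, n))                  # challenge vectors
        y = X @ w_true + 0.1 * rng.normal(size=m)    # noisy analogue responses

        # Least-squares regression recovers the model despite the noise.
        w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        print(np.linalg.norm(w_hat - w_true) / np.linalg.norm(w_true))  # small; shrinks as m grows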

    Interpose PUF can be PAC Learned

    In this work, we prove that the Interpose PUF is learnable in the PAC model. First, we show that the Interpose PUF can be approximated by a Linear Threshold Function (LTF), assuming the interpose bit to be random. We translate the randomness of the interpose bit into classification noise on the hypothesis. Using the classification noise model, we prove that the resulting LTF can be learned with a number of labelled examples (challenge-response pairs) polynomial in the number of stages and the PAC model parameters.
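    A minimal sketch of this reduction, assuming a toy (1,1)-Interpose PUF built from two linear arbiter-PUF models (illustrative, not the paper's exact construction); a single LTF is fitted on the outer challenge's parity features, with the hidden interpose bit acting as classification noise:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 32

        def features(C):
            # Arbiter-PUF parity transform: phi_i = prod_{j>=i} (1 - 2*c_j)
            return np.cumprod((1 - 2 * C)[:, ::-1], axis=1)[:, ::-1]

        def arbiter(w, C):
            return (features(C) @ w > 0).astype(int)

        w_up = rng.normal(size=n)      # upper PUF: produces the interpose bit
        w_lo = rng.normal(size=n + 1)  # lower PUF: sees the interposed challenge

        def ipuf(C):
            b = arbiter(w_up, C)
            mid = n // 2
            return arbiter(w_lo, np.concatenate([C[:, :mid], b[:, None], C[:, mid:]], axis=1))

        C = rng.integers(0, 2, size=(20000, n))
        # Least-squares fit of an LTF on the outer features; the unknown interpose
        # bit behaves like label noise that the linear fit averages out.
        w_hat, *_ = np.linalg.lstsq(features(C), 2 * ipuf(C) - 1, rcond=None)

        C_test = rng.integers(0, 2, size=(5000, n))
        acc = ((features(C_test) @ w_hat > 0).astype(int) == ipuf(C_test)).mean()
        print(f"LTF accuracy on the toy iPUF: {acc:.2f}")  # noticeably above the 0.5 baseline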

    A Fourier Analysis Based Attack against Physically Unclonable Functions

    Electronic payment systems have leveraged the advantages offered by RFID technology, whose security is expected to be improved by applying the notion of Physically Unclonable Functions (PUFs). Along with the evolution of PUFs, numerous successful attacks against them have been proposed in the literature. Among these, machine learning (ML) attacks, ranging from heuristic approaches to provable algorithms, have attracted great attention. Our paper pursues this line of research by introducing a Fourier analysis based attack against PUFs. More specifically, this paper focuses on two main aspects of ML attacks, namely being provable and noise tolerant. In this regard, we prove that our attack fits naturally into the provable Probably Approximately Correct (PAC) model. Moreover, we show that our attacks against known PUF families are effective and applicable even in the presence of noise. Our proof relies heavily on the intrinsic properties of these PUF families, namely the Arbiter, Ring Oscillator (RO), and Bistable Ring (BR) PUF families. We believe that our new style of ML algorithms, which take advantage of Fourier analysis principles, can offer better measures of PUF security.
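    A minimal sketch of the Fourier-analytic idea, in the style of "low-degree" PAC learning via the Fourier spectrum, with a noisy LTF standing in for an arbiter-type PUF (all sizes and the noise rate are illustrative assumptions):

        import numpy as np

        rng = np.random.default_rng(2)
        n, m = 16, 50000
        w = rng.normal(size=n)  # hidden LTF weights of the toy PUF

        def puf(X):  # X has +/-1 entries; returns noisy +/-1 responses
            flips = rng.random(len(X)) < 0.05  # 5% evaluation noise
            return np.sign(X @ w) * np.where(flips, -1.0, 1.0)

        X = rng.choice([-1.0, 1.0], size=(m, n))  # uniform random challenges
        y = puf(X)

        # Empirical Fourier coefficients of degree <= 1:
        # f_hat(empty) = E[y],  f_hat({i}) = E[y * x_i]
        f0 = y.mean()
        f1 = (X * y[:, None]).mean(axis=0)

        Xt = rng.choice([-1.0, 1.0], size=(5000, n))
        pred = np.sign(f0 + Xt @ f1)  # truncated Fourier expansion as predictor
        acc = (pred == np.sign(Xt @ w)).mean()
        print(f"degree-1 Fourier predictor accuracy: {acc:.2f}")  # high despite noise

    For an LTF the degree-1 coefficients are roughly proportional to the weights, which is why the truncated expansion already predicts well; uniform response-flip noise only scales the estimated coefficients by a constant factor without changing their signs.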

    PAC Learnability of iPUF Variants

    The Interpose PUF (iPUF) is a strong PUF construction that has been shown to be vulnerable to both empirical machine learning and PAC learning attacks. In this work, we extend the PAC learning results for the Interpose PUF to prove that iPUF variants are also learnable in the PAC model under the Linear Threshold Function representation class.
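    For reference, a minimal statement of the representation class in question, written in the standard arbiter-PUF parity notation (assumed here, not quoted from the paper):

        \mathcal{H}_{\mathrm{LTF}} = \bigl\{\, c \mapsto \operatorname{sgn}\langle w, \Phi(c)\rangle \;:\; w \in \mathbb{R}^{n+1} \,\bigr\},
        \qquad
        \Phi_i(c) = \prod_{j=i}^{n-1} (1 - 2c_j) \quad (0 \le i \le n-1), \qquad \Phi_n(c) = 1.

    Under random classification noise of rate \eta < 1/2, standard results allow such an LTF to be PAC learned with sample complexity polynomial in n, 1/\varepsilon, 1/\delta, and 1/(1 - 2\eta), which is the sense in which the iPUF variants remain learnable.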