
    Emerging Opportunities and Challenges in Hardware Security

    Recent years have seen the rapid development of many emerging technologies across computer engineering, including new devices, new fabrication techniques for integrated circuits (ICs), and new computation frameworks. In this dissertation, we study both the security challenges these emerging technologies face and the security opportunities they bring. Specifically, we investigate the security opportunities in double patterning lithography, the security challenges in physical unclonable functions, and security issues in machine learning.

    Double patterning lithography (DPL) is an emerging IC fabrication technique. We study the security opportunities DPL brings at the layout level. DPL can be used to set up two independent mask development lines that need not share any information. Under this setup, we consider an attack model in which untrusted employees with access to only one mask try to infer the entire circuit design or insert additional malicious circuitry into the design. As a countermeasure, we customize DPL to decompose the layout into two sub-layouts in such a way that each sub-layout individually exposes minimal information about the other, thereby protecting the entire layout from any untrusted personnel.

    Physical unclonable functions (PUFs) are circuits for which each copy of the same circuit structure has a unique and unpredictable functionality. The unpredictable behavior is caused by the manufacturing variations of electronic devices. However, for many state-of-the-art PUF designs, we show that the device variations can be estimated using an optimization-theoretic formulation, making the PUF's input-output behavior predictable. Simulations show a substantial reduction in attack complexity compared to previously proposed machine learning based attacks.

    Neural networks (NNs) are an emerging computation framework for machine learning (ML). It is increasingly popular for system developers to use pre-trained NN models instead of training their own, because training is painstaking and sometimes requires private data. We call these pre-trained neural models neural intellectual properties (IPs). Neural IPs raise multiple security concerns. On the one hand, because the IP user does not know about the training process, it is crucial to ensure the integrity of the neural IP. To this end, we investigate possible hidden malicious functionality, i.e., neural Trojans, that can be embedded into neural IPs and propose effective mitigation techniques. On the other hand, the neural IP owner may want to protect the NN model from reverse engineering attacks. However, it has been shown that hardware side-channels can be exploited to decipher the structure of neural networks. We propose both a novel attack based on the cache timing side-channel and a defensive memory access mechanism.

    NNs also raise challenges for conventional hardware security techniques. Specifically, we focus on their challenge to logic locking, a key-based protection of hardware IP against untrusted foundries that injects incorrect behavior into the digital functionality whenever a wrong key is applied. We formally prove a trade-off between the amount of injected error and the complexity of Boolean satisfiability (SAT) based attacks that search for the correct key. Due to the inherent error resiliency of NNs, state-of-the-art logic locking schemes fail to inject enough error to derail NN-based applications while maintaining exponential SAT complexity. To fix this issue, we propose a novel secure and effective logic locking scheme, called Strong Anti-SAT (SAS), to lock the hardware and ensure that NN models undergo significant accuracy loss when any wrong key is applied.
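
    The modeling attack on delay-based PUFs summarized above can be conveyed with a toy experiment. The sketch below simulates an arbiter PUF under the standard linear additive delay model and fits the hidden delay parameters from observed challenge-response pairs; it is a minimal, assumed setup for illustration (all names and parameters are hypothetical), not the dissertation's optimization-theoretic formulation.

```python
# Toy modeling attack on a simulated arbiter PUF (hypothetical setup; the
# dissertation's optimization-theoretic formulation is not reproduced here).
import numpy as np

rng = np.random.default_rng(0)
N_STAGES = 32                      # delay stages in the toy arbiter PUF
N_TRAIN, N_TEST = 2000, 500        # challenge-response pairs seen by the attacker

def features(challenges):
    """Map 0/1 challenges to the +/-1 parity features of the standard
    linear additive delay model: phi_i = prod_{j >= i} (1 - 2*c_j)."""
    signs = 1 - 2 * challenges                         # 0/1 -> +1/-1
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]  # suffix products
    return np.hstack([phi, np.ones((len(challenges), 1))])  # bias column

# The "physical" PUF: hidden stage-delay differences drawn at random.
w_true = rng.normal(size=N_STAGES + 1)

def puf_response(challenges):
    return (features(challenges) @ w_true > 0).astype(float)

# The attacker fits a logistic model to observed challenge-response pairs.
C_train = rng.integers(0, 2, size=(N_TRAIN, N_STAGES))
C_test = rng.integers(0, 2, size=(N_TEST, N_STAGES))
X, y = features(C_train), puf_response(C_train)

w = np.zeros(N_STAGES + 1)
for _ in range(500):               # plain gradient descent on logistic loss
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

pred = (features(C_test) @ w > 0).astype(float)
print("attacker's prediction accuracy:", (pred == puf_response(C_test)).mean())
```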

    Advances in Logic Locking: Past, Present, and Prospects

    Logic locking is a design concealment mechanism for protecting the IPs integrated into modern System-on-Chip (SoC) architectures from a wide range of hardware security threats across the IC manufacturing supply chain. Logic locking primarily helps the designer protect IPs against reverse engineering, IP piracy, overproduction, and unauthorized activation. For more than a decade, research on this paradigm has been immense, investigating the applicability, feasibility, and efficacy of logic locking, including metrics to assess efficacy, the impact of locking at different levels of abstraction, threat model definitions, resiliency against physical attacks and tampering, and the application of machine learning. However, the security and strength of existing logic locking techniques have been constantly questioned by logical and physical attacks that evolve in sophistication at the same rate as logic locking countermeasures. In this survey paper, we provide a comprehensive definition of the metrics, assumptions, and principles of logic locking, categorize the existing defenses and attacks so that the most benefit can be drawn from logic locking techniques for IP protection, and illuminate the need for and give direction to future research on this topic. The survey serves as a guide for quickly navigating and identifying the state of the art that should be considered and investigated in further studies of logic locking, helping IP vendors, SoC designers, and researchers stay informed of the principles, fundamentals, and properties of logic locking.
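
    As a concrete illustration of the basic mechanism the survey builds on, the sketch below locks a toy combinational function with XOR key gates: the correct key makes the gates transparent, while any wrong key corrupts the output for some input patterns. This is a minimal assumed example, not any particular scheme from the survey.

```python
# Minimal XOR key-gate locking of a toy combinational function (illustrative
# only; real logic locking operates on gate-level netlists).

def original_circuit(a, b, c):
    """The IP owner's intended function."""
    return (a & b) ^ c

KEY = (1, 0)  # correct key, programmed into tamper-proof memory after fabrication

def locked_circuit(a, b, c, key):
    """Two XOR key gates inserted into the netlist. XORing with KEY emulates
    the inversions the locked netlist absorbs so that the correct key makes
    each key gate transparent; any wrong key inverts internal signals."""
    k1, k2 = key
    w1 = (a ^ k1 ^ KEY[0]) & b        # key gate on the a-input wire
    return (w1 ^ c) ^ k2 ^ KEY[1]     # key gate on the output wire

inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
# Correct key: the locked circuit matches the original on every input.
assert all(locked_circuit(a, b, c, KEY) == original_circuit(a, b, c)
           for a, b, c in inputs)
# Wrong keys inject incorrect behavior on some (or all) input patterns.
for wrong in [(0, 0), (0, 1), (1, 1)]:
    errs = sum(locked_circuit(a, b, c, wrong) != original_circuit(a, b, c)
               for a, b, c in inputs)
    print(f"key {wrong}: {errs}/8 input patterns corrupted")
```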

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers from the affiliated competition SV-COMP and 1 competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.

    EF/CF: High Performance Smart Contract Fuzzing for Exploit Generation

    Smart contracts are increasingly being used to manage large numbers of high-value cryptocurrency accounts, creating strong demand for automated, efficient, and comprehensive methods to detect security vulnerabilities in a given contract. While the literature features a plethora of analysis methods for smart contracts, the existing proposals do not address the increasing complexity of contracts: existing analysis tools suffer from false alarms and missed bugs in today's smart contracts, which are increasingly defined by complexity and interdependencies. To scale accurate analysis to modern smart contracts, we introduce EF/CF, a high-performance fuzzer for Ethereum smart contracts. In contrast to previous work, EF/CF efficiently and accurately models complex smart contract interactions, such as reentrancy and cross-contract interactions, at a very high fuzzing throughput rate. To achieve this, EF/CF transpiles smart contract bytecode into native C++ code, thereby enabling the reuse of existing, optimized fuzzing toolchains. Furthermore, EF/CF increases fuzzing efficiency by employing a structure-aware mutation engine for smart contract transaction sequences and by using a contract's ABI to generate valid transaction inputs. In a comprehensive evaluation, we show that EF/CF scales better to complex contracts than state-of-the-art approaches, including other fuzzers, symbolic/concolic execution, and hybrid approaches, without compromising accuracy. Moreover, we show that EF/CF can automatically generate transaction sequences that exploit reentrancy bugs to steal Ether.
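
    The structure-aware mutation idea can be sketched as follows: rather than flipping raw bytes, the fuzzer mutates a sequence of ABI-valid transactions. The toy mutator below is a hypothetical simplification for illustration (EF/CF itself operates on compiled C++ harnesses and real ABI encodings); the contract interface and operators are assumptions.

```python
# Toy structure-aware mutator for smart-contract transaction sequences
# (hypothetical simplification; the contract "ABI" below is assumed).
import random

ABI = {                            # function name -> parameter types
    "deposit":  ["uint256"],
    "withdraw": ["uint256"],
    "transfer": ["address", "uint256"],
}

def random_arg(ty, rng):
    if ty == "uint256":
        # boundary-biased values surface overflow/underflow bugs more often
        return rng.choice([0, 1, 2**256 - 1, rng.randrange(2**64)])
    if ty == "address":
        return "0x" + "%040x" % rng.randrange(2**160)
    raise ValueError(ty)

def random_tx(rng):
    fn = rng.choice(list(ABI))
    return (fn, [random_arg(t, rng) for t in ABI[fn]])

def mutate(seq, rng):
    """Mutate at the transaction level -- insert, delete, duplicate, reorder,
    or re-roll arguments -- so every call in the sequence stays ABI-valid."""
    seq = list(seq)
    op = rng.choice(["insert", "delete", "dup", "swap", "args"])
    if op == "insert" or not seq:
        seq.insert(rng.randrange(len(seq) + 1), random_tx(rng))
    elif op == "delete":
        seq.pop(rng.randrange(len(seq)))
    elif op == "dup":              # repeated calls exercise reentrancy-style paths
        i = rng.randrange(len(seq))
        seq.insert(i, seq[i])
    elif op == "swap" and len(seq) > 1:
        i, j = rng.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]
    else:                          # re-roll one call's arguments
        i = rng.randrange(len(seq))
        fn, _ = seq[i]
        seq[i] = (fn, [random_arg(t, rng) for t in ABI[fn]])
    return seq

rng = random.Random(1)
seq = [random_tx(rng) for _ in range(3)]
for _ in range(3):
    seq = mutate(seq, rng)
    print(seq)
```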

    CazDataProvider: a solution to the object-relational mismatch

    Master's dissertation in Informatics Engineering.

    Today, most software applications require mechanisms to store information persistently. For decades, Relational Database Management Systems (RDBMSs) have been the most common technology to provide efficient and reliable persistence. Due to the object-relational paradigm mismatch, object-oriented applications that store data in relational databases have to deal with Object Relational Mapping (ORM) problems. Since the emergence of new ORM frameworks, there has been an attempt to lure developers into a radical paradigm shift. However, developers still often have trouble finding the best persistence mechanism for their applications, especially when they have to cope with legacy database systems. The aim of this dissertation is to discuss the persistence problem in object-oriented applications and to find the best solutions. The main focus lies on ORM limitations, patterns, technologies, and alternatives. The project supporting this dissertation was implemented at Cachapuz under the Project Global Weighting Solutions (GWS). Essentially, the objectives of GWS were centred on finding the optimal persistence layer for CazFramework, mostly providing database interoperability with close-to-Structured Query Language (SQL) querying. This work therefore provides analyses of ORM patterns and frameworks, as well as alternatives to ORM such as Object-Oriented Database Management Systems (OODBMSs). It also describes the implementation of CazDataProvider, a .NET library providing database interoperability and dynamic query features. Finally, there is a performance comparison of all the technologies debated in this dissertation. The result of this dissertation provides guidance for adopting the best persistence technology or implementing the most suitable ORM architectures.
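
    To make the object-relational mismatch concrete, the sketch below shows a minimal Data Mapper, one of the ORM patterns the dissertation analyzes: a plain domain object is kept free of persistence concerns, and a mapper translates it to and from relational rows. The example is illustrative Python over sqlite3, not the actual .NET CazDataProvider API.

```python
# Minimal Data Mapper sketch of object-relational mapping (illustrative
# Python over sqlite3; class and table names are assumptions).
import sqlite3
from dataclasses import dataclass

@dataclass
class Product:
    """Plain domain object, deliberately unaware of the database."""
    id: int | None
    name: str
    price: float

class ProductMapper:
    """Translates between Product objects and rows of the products table,
    keeping persistence concerns out of the domain class."""
    def __init__(self, conn):
        self.conn = conn
        conn.execute("CREATE TABLE IF NOT EXISTS products "
                     "(id INTEGER PRIMARY KEY, name TEXT, price REAL)")

    def insert(self, p):
        cur = self.conn.execute(
            "INSERT INTO products (name, price) VALUES (?, ?)",
            (p.name, p.price))
        p.id = cur.lastrowid       # relational identity flows back to the object

    def find(self, pid):
        row = self.conn.execute(
            "SELECT id, name, price FROM products WHERE id = ?",
            (pid,)).fetchone()
        return Product(*row) if row else None

conn = sqlite3.connect(":memory:")
mapper = ProductMapper(conn)
item = Product(id=None, name="load cell", price=149.9)
mapper.insert(item)
print(mapper.find(item.id))
```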

    Collaborative analysis of large time series data sets

    The recent expansion of everyday metrification has led to the production of massive quantities of data, and in many cases these collected metrics are only useful for knowledge building when seen as a full sequence of data ordered by time, which constitutes a time series. To find and interpret meaningful behavioral patterns in time series, a multitude of analysis software tools have been developed. Many of the existing solutions use annotations to enable the curation of a knowledge base that is shared between a group of researchers over a network. However, these tools lack appropriate mechanisms to handle a high number of concurrent requests and to properly store massive data sets and ontologies, as well as suitable representations for annotated data that are visually interpretable by humans and explorable by automated systems.

    The goal of the work presented in this dissertation is to iterate on existing time series analysis software and build a platform for the collaborative analysis of massive time series data sets, leveraging state-of-the-art technologies for querying, storing, and displaying time series and annotations. A theoretical, domain-agnostic model is proposed to enable the implementation of a distributed, extensible, secure, and high-performance architecture that handles multiple annotation proposals simultaneously and avoids any data loss from overlapping contributions or unsanctioned changes. Analysts can share annotation projects with peers, restricting a set of collaborators to a smaller scope of analysis and a limited catalog of annotation semantics. Annotations can express meaning not only over a segment of time but also over a subset of the series that coexist in the same segment. A novel visual encoding for annotations is proposed, in which annotations are rendered as arcs traced only over the affected series' curves in order to reduce visual clutter. Moreover, a full-stack prototype with a reactive web interface was implemented, directly following the proposed architectural and visualization model as applied to the HVAC domain. The performance of the prototype under different architectural approaches was benchmarked, and the interface was tested for usability. Overall, the work described in this dissertation contributes a more versatile, intuitive, and scalable time series annotation platform that streamlines the knowledge-discovery workflow.
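
    The annotation model described above can be sketched as a small data structure: an annotation carries its semantics over a time segment and over a subset of the series coexisting in that segment, and two annotations conflict only when both dimensions intersect. The record and field names below are hypothetical, not the platform's actual API.

```python
# Hypothetical sketch of the annotation record described above (names are
# illustrative, not the platform's actual API).
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class Annotation:
    start: datetime            # time segment the annotation covers
    end: datetime
    series: frozenset          # only these series' curves receive the arc
    label: str                 # semantic type from the project's catalog
    author: str

def conflicts(a, b):
    """Two annotations can clash only if their time segments intersect
    AND they touch at least one common series."""
    return a.start < b.end and b.start < a.end and bool(a.series & b.series)

a1 = Annotation(datetime(2023, 1, 1, 8), datetime(2023, 1, 1, 9),
                frozenset({"supply_temp", "return_temp"}), "defrost cycle", "alice")
a2 = Annotation(datetime(2023, 1, 1, 8, 30), datetime(2023, 1, 1, 10),
                frozenset({"return_temp"}), "sensor fault", "bob")
print(conflicts(a1, a2))       # True: overlapping segment and a shared series
```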

    Safety Assurance in Interlocking Design

    This thesis takes a pedagogical stance in demonstrating how results from theoretical computer science may be applied to yield significant insight into the behaviour of the devices computer systems engineering practice seeks to put in place, and that this is immediately attainable with the present state of the art. The focus for this detailed study is provided by the type of solid state signalling systems currently being deployed throughout mainline British railways. Safety and system reliability concerns dominate in this domain. With such motivation, two issues are tackled: the special problem of software quality assurance in these data-driven control systems, and the broader problem of design dependability. In the former case, the analysis is directed towards proving safety properties of the geographic data which encode the control logic for the railway interlocking; the latter examines the fidelity of the communication protocols upon which the distributed control system depends.

    The starting point for both avenues of attack is a mathematical model of the interlocking logic that is derived by interpreting the geographic data in process algebra. Thus, the emphasis is on the semantics of the programming language in question, and the kinds of safety properties which can be expressed as invariants of the system's ongoing behaviour. Although the model so derived turns out to be too concrete to be effectual in program verification in general, a careful analysis of the safety proof reveals a simple co-induction argument that leads to a highly efficient proof methodology. From this understanding it is straightforward to mechanise the safety arguments, and a prototype verification system is realised in higher-order logic which uses the proof tactics of the theorem prover to achieve full automation.

    The other line of inquiry considers whether the integrity of the overall design that coordinates the activities of many concurrent control elements can be compromised. Therefore, the formal model is developed to specifically answer safety-related concerns about the protocol employed to achieve distributed control in the management of larger railway networks. The exercise reveals that moderately serious design flaws do exist, but the real value of the mathematical model is twofold: it makes explicit one's assumptions about the conditions under which the faults can and cannot be activated, and it provides a framework in which to prove a simple modification to the design recovers complete security at negligible cost to performance.
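
    The flavour of the safety properties proved over the geographic data can be conveyed with a toy invariant check: every pair of routes that physically share a track section must be declared mutually exclusive in the control data. The sketch below is an assumed, drastically simplified rendition; the thesis itself derives the model in a process algebra and proves the invariant by a co-induction argument.

```python
# Toy rendition of an interlocking safety invariant: every pair of routes
# that physically share a track section must be declared mutually exclusive
# in the control data (route and section names are assumed for illustration).
from itertools import combinations

ROUTES = {                     # hypothetical route -> track sections it uses
    "R1": {"T1", "T2", "T3"},
    "R2": {"T3", "T4"},
    "R3": {"T5", "T6"},
}

EXCLUSIONS = {frozenset({"R1", "R2"})}   # pairs the control table locks out

def conflicting(r1, r2):
    """Routes conflict physically if they share any track section."""
    return bool(ROUTES[r1] & ROUTES[r2])

def check_safety():
    """Return every physically conflicting pair the control data fails
    to declare mutually exclusive (empty list = invariant holds)."""
    return [(r1, r2) for r1, r2 in combinations(ROUTES, 2)
            if conflicting(r1, r2) and frozenset({r1, r2}) not in EXCLUSIONS]

print(check_safety() or "safety invariant holds")
```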