
    M-MAP: Multi-Factor Memory Authentication for Secure Embedded Processors

    The challenges faced in securing embedded computing systems against multifaceted memory safety vulnerabilities have prompted great interest in the development of memory safety countermeasures. These countermeasures either protect only against their corresponding type of vulnerability, or incur substantial architectural modifications and overheads in order to provide complete safety, which makes them infeasible for embedded systems. In this paper, we propose M-MAP: a comprehensive system based on multi-factor memory authentication for complete memory safety, inspired by everyday user authentication factors. We examine crucial theoretical and practical implications of composing memory integrity verification and bounds checking protection schemes in a comprehensive system. Based on these implications, we implement M-MAP with hardware-based memory integrity verification and software-based bounds checking to strike a balance between hardware modifications and performance. We demonstrate that M-MAP, implemented on top of a lightweight out-of-order processor, delivers complete memory safety with only 32% performance overhead on average, and incurs minimal hardware modifications and area overhead.
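    As a rough illustration of the two-factor idea (not M-MAP's actual mechanism, whose integrity verification is done in hardware), the following Python sketch models a memory in which every access must pass both a bounds check and a keyed-MAC integrity check; all names and parameters are invented for the example.

```python
# Conceptual sketch only: a toy memory where every access must pass two
# independent "authentication factors":
#   1) a bounds check against the object's allocated range, and
#   2) an integrity check of the memory block against a keyed MAC.
import hmac, hashlib

KEY = b"demo-key"            # stand-in for a hardware-held secret
BLOCK = 16                   # integrity granularity in bytes

class TwoFactorMemory:
    def __init__(self, size):
        self.mem = bytearray(size)
        self.macs = {}       # block index -> MAC (the "integrity factor")
        self.bounds = {}     # object id -> (base, limit) (the "bounds factor")

    def _mac(self, blk):
        data = bytes(self.mem[blk * BLOCK:(blk + 1) * BLOCK])
        return hmac.new(KEY, blk.to_bytes(4, "little") + data,
                        hashlib.sha256).digest()

    def alloc(self, oid, base, length):
        self.bounds[oid] = (base, base + length)

    def store(self, oid, addr, value):
        base, limit = self.bounds[oid]
        if not (base <= addr < limit):                 # factor 1: bounds
            raise MemoryError("bounds violation")
        self.mem[addr] = value
        self.macs[addr // BLOCK] = self._mac(addr // BLOCK)

    def load(self, oid, addr):
        base, limit = self.bounds[oid]
        if not (base <= addr < limit):                 # factor 1: bounds
            raise MemoryError("bounds violation")
        blk = addr // BLOCK
        if blk in self.macs and self.macs[blk] != self._mac(blk):
            raise MemoryError("integrity violation")   # factor 2: MAC
        return self.mem[addr]

m = TwoFactorMemory(64)
m.alloc("buf", base=0, length=16)
m.store("buf", 3, 0xAB)
assert m.load("buf", 3) == 0xAB
```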

    FPGA-Augmented Secure Crash-Consistent Non-Volatile Memory

    Emerging byte-addressable Non-Volatile Memory (NVM) technology, although promising superior memory density and ultra-low energy consumption, poses unique challenges to achieving persistent data privacy and computing security, both of which are critically important to embedded and IoT applications. Specifically, to successfully restore NVMs to their working states after unexpected system crashes or power failure, maintaining and recovering all the necessary security-related metadata can severely increase memory traffic, degrade runtime performance, exacerbate the write endurance problem, and demand costly hardware changes to off-the-shelf processors. In this thesis, we summarize and expand upon two of our innovative works, ARES and HERMES, to design a new FPGA-assisted processor-transparent security mechanism aiming at efficiently and effectively achieving all three aspects of a security triad—confidentiality, integrity, and recoverability—in modern embedded computing. Given the growing prominence of CPU-FPGA heterogeneous computing architectures, ARES leverages FPGA's hardware reconfigurability to offload performance-critical and security-related functions to the programmable hardware without the microprocessor's involvement. In particular, recognizing that the traditional Merkle tree caching scheme cannot fully exploit FPGA's parallelism due to its sequential and recursive function calls, ARES proposes a new Merkle tree cache architecture and a novel Merkle tree scheme that flattens and reorganizes the computation of the traditional Merkle tree verification and update processes to fully exploit the parallel cache ports and to fully pipeline the time-consuming hashing operations. To further optimize the throughput of BMT operations, HERMES proposes an optimally efficient dataflow architecture that processes multiple outstanding counter requests simultaneously. Specifically, HERMES explores and addresses three technical challenges in exploiting the task-level parallelism of BMT and proposes a speculative execution approach with both low latency and high throughput.
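    The flattening idea behind ARES can be pictured in software: the usual recursive Merkle verification can be rewritten as a level-by-level loop whose iterations carry no call-stack state, which is what makes the hash operations pipelineable in hardware. The sketch below is only a minimal software analogue, not the ARES cache architecture.

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build_tree(leaves):
    """Build a full binary Merkle tree; levels[0] are the leaves."""
    levels = [list(leaves)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(prev[i] + prev[i + 1])
                       for i in range(0, len(prev), 2)])
    return levels

def verify_leaf_iterative(levels, idx, leaf):
    """Walk toward the root with a flat loop instead of recursive calls.

    Each iteration depends only on (node, idx), not on call-stack state,
    the property that lets hardware pipeline the hashing steps.
    """
    node = leaf
    for level in levels[:-1]:
        sib = level[idx ^ 1]          # sibling within the current level
        node = H(node + sib) if idx % 2 == 0 else H(sib + node)
        idx //= 2
    return node == levels[-1][0]      # compare against the trusted root

leaves = [H(bytes([i])) for i in range(8)]
levels = build_tree(leaves)
assert verify_leaf_iterative(levels, 5, leaves[5])
```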

    Security plane for data authentication in information-centric networks

    Advisors: Maurício Ferreira Magalhães, Jussi Kangasharju. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. Abstract: Information security is responsible for protecting information against unauthorized access, use, modification, or destruction. To protect data against such attacks, many security protocols have been developed, for example, Internet Protocol Security (IPSec) and Transport Layer Security (TLS), providing mechanisms for data authentication, integrity, and confidentiality. These protocols use the IP address as the host identifier on the Internet, making it the reference point when establishing secure connections for data exchange between applications on the network. With the advent of the Web and the exponential increase in content consumption (e.g., video and audio), there is evidence of a gradual shift in the predominant use of the Internet, moving the emphasis from connections between hosts to the retrieval of content from the network, a paradigm known as information-centric networking. In this paradigm, users look for documents and resources on the Internet without explicit knowledge of the content's location. As a result, the IP address that previously served as the reference point of a data provider becomes merely an ephemeral identifier of where the content is stored, with implications for correct data authentication. In this context, simply authenticating an IP address does not guarantee the authenticity of the data, because the server identified by a given IP address is not necessarily the producer of the requested content. For information-centric networks, some proposals in the literature provide authentication mechanisms based on the content itself, for example, digital signatures over each data block or hash trees built over the data blocks. The main idea of these approaches is to attach information from the original provider to the transported data blocks, for example, a digital signature, enabling data authentication directly with the original provider, regardless of the host from which the data was obtained. Although such mechanisms allow this verification, the procedure is very costly in terms of processing, especially when the number of blocks is large, making it unfeasible in practice. This thesis proposes a new authentication mechanism using hash trees that provides efficient data authentication, tied explicitly to the original provider and independent of the host from which the data was obtained. We propose two hash-tree-based techniques, called the skewed hash tree (SHT) and the composite hash tree (CHT), for data authentication in information-centric networks. Once created, part of the authentication data is stored in a security plane and the rest remains attached to the data itself, allowing verification based on the content rather than on the source host. In addition, this thesis presents the formal model, specification, and implementation of the two hash tree techniques for data authentication in information-centric networks through a security plane. Finally, it details the instantiation of the proposed security plane model in two data authentication scenarios: 1) peer-to-peer networks and 2) parallel data authentication over HTTP. Doutorado: Engenharia de Computação. Doutor em Engenharia Elétrica.
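    A minimal sketch of the hash-tree split the thesis builds on (illustrative of the general technique, not of the SHT or CHT constructions themselves): the root alone would live in the security plane, while each block travels with its sibling path.

```python
import hashlib

def H(x): return hashlib.sha256(x).digest()

def merkle_levels(blocks):
    levels = [[H(b) for b in blocks]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        if len(prev) % 2: prev = prev + [prev[-1]]    # pad odd levels
        levels.append([H(prev[i] + prev[i + 1])
                       for i in range(0, len(prev), 2)])
    return levels

def auth_path(levels, idx):
    """Sibling hashes shipped alongside block idx (the part attached to
    the data); only the root is published via the security plane."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2: level = level + [level[-1]]
        path.append(level[idx ^ 1])
        idx //= 2
    return path

def verify(block, idx, path, root):
    node = H(block)
    for sib in path:
        node = H(node + sib) if idx % 2 == 0 else H(sib + node)
        idx //= 2
    return node == root

blocks = [b"chunk-%d" % i for i in range(5)]
lv = merkle_levels(blocks)
root = lv[-1][0]               # stored once in the security plane
assert verify(blocks[3], 3, auth_path(lv, 3), root)
```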

    Computer Security by Hardware-Intrinsic Authentication

    Advisors: Guido Costa Souza de Araújo, Mario Lúcio Côrtes, and Diego de Freitas Aranha. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação. Abstract: This work presents Computer Security by Hardware-Intrinsic Authentication (CSHIA), a secure computer architecture for embedded systems that aims at providing authenticity and integrity for code and data. The work encompassed three phases: Design, Implementation, and Security Evaluation. In design, we laid out the basic ideas behind CSHIA, namely, how integrity and authenticity are enforced through the use of Physical Unclonable Functions (PUFs), and we proposed an algorithm to extract cryptographic keys from the intrinsic memories of processors. In implementation, we made CSHIA's design more flexible, allowing different configurations without compromising security. Then, we evaluated CSHIA's performance and overheads, such as area, energy, and memory, for multiple configurations. Finally, we evaluated the security of PUFs, which led us to develop a new side-channel-based attack that enabled us to circumvent PUFs' uniqueness property through their architectural elements. Doutorado: Ciência da Computação. Doutor em Ciência da Computação. FAPESP grants 2015/06829-2 and 2016/25532-3; CNPq grant 147614/2014-7.
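    The key-extraction step can be pictured with a generic SRAM-PUF-style example: noisy power-up bits are stabilized by majority voting over repeated readouts. This is only the textbook idea, not the thesis's cache-based extraction algorithm, and real designs add error-correcting codes on top.

```python
import random

def powerup_readout(bias, noise=0.05):
    """Simulate one noisy power-up readout of SRAM cells: each cell has a
    stable bias (its device 'fingerprint') flipped with small probability."""
    return [b ^ (random.random() < noise) for b in bias]

def enroll(bias, readouts=15):
    """Majority-vote repeated readouts into a reference key."""
    samples = [powerup_readout(bias) for _ in range(readouts)]
    return [int(sum(col) * 2 > len(samples)) for col in zip(*samples)]

random.seed(1)
bias = [random.randint(0, 1) for _ in range(128)]   # device fingerprint
key = enroll(bias)
# Later reconstructions should match the enrolled key for most cells;
# a real design adds error-correcting codes to remove residual mismatches.
rekey = enroll(bias)
mismatches = sum(a != b for a, b in zip(key, rekey))
print(f"differing bits after majority vote: {mismatches}/128")
```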

    Communication-Efficient Probabilistic Algorithms: Selection, Sampling, and Checking

    This dissertation addresses three fundamental classes of problems in big-data systems, for which we develop communication-efficient probabilistic algorithms: in the first part, various selection problems; in the second part, weighted sampling; and in the third part, the probabilistic checking of the correctness of basic operations in big-data frameworks. The work is motivated by a growing need for communication efficiency, as the share of supercomputer acquisition costs, energy consumption, and distributed-application running time attributable to the network and its use keeps growing. Surprisingly few communication-efficient algorithms are known for fundamental big-data problems, and this work closes some of these gaps. We first consider several selection problems, beginning with the distributed version of the classical selection problem, i.e., finding the element of rank k in a large distributed input. We show how this problem can be solved communication-efficiently without assuming that the input elements are randomly distributed: we replace the pivot-selection method in a long-known algorithm and show that this suffices. We then show that selection from locally sorted sequences (multisequence selection) can be solved considerably faster when the exact rank of the output element may vary within a certain range, and we use this result to construct a distributed priority queue with bulk operations, which we later apply to weighted reservoir sampling from data streams. Finally, we consider the problem of identifying, with a sampling-based approach, the globally most frequent objects as well as those whose associated values have the largest sums. The chapter on weighted sampling first presents new construction algorithms for a classical data structure for this problem, the alias table. We give the first linear-time construction algorithm for this data structure that needs only constant auxiliary space, then parallelize it for shared memory, obtaining the first parallel alias-table construction algorithm, and show how the problem can be tackled on distributed systems with a two-level algorithm. We also present an output-sensitive algorithm for weighted sampling with replacement, where output-sensitive means that the running time depends on the number of unique elements in the output rather than on the sample size; it can be used sequentially, on shared-memory machines, and on distributed systems, and is the first such algorithm in each of these three settings. We adapt it to weighted sampling without replacement by combining it with an estimator for the number of unique elements in a sample drawn with replacement. Poisson sampling, a generalization of Bernoulli sampling to weighted elements, can be reduced to integer sorting, and we show how an existing approach can be parallelized.
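    For reference, this is the classical alias-table method that the new construction algorithms above improve upon: Vose's standard O(n) construction with O(n) auxiliary space, not the dissertation's constant-auxiliary-space or parallel variants.

```python
import random

def build_alias(weights):
    """Vose's O(n) alias-table construction for weighted sampling."""
    n = len(weights)
    total = sum(weights)
    prob = [w * n / total for w in weights]   # scaled so the mean is 1
    alias = [0] * n
    small = [i for i, p in enumerate(prob) if p < 1.0]
    large = [i for i, p in enumerate(prob) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                          # top up bucket s from l
        prob[l] -= 1.0 - prob[s]
        (small if prob[l] < 1.0 else large).append(l)
    return prob, alias

def sample(prob, alias):
    """Draw one index in O(1): pick a bucket, then flip a biased coin."""
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]

prob, alias = build_alias([5, 1, 2, 2])
counts = [0] * 4
for _ in range(100_000):
    counts[sample(prob, alias)] += 1
print(counts)   # roughly proportional to 5:1:2:2
```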
    For sampling from data streams, we adapt a sequential algorithm and show how it can be parallelized in a mini-batch model using the bulk priority queue introduced in the selection chapter. The chapter closes with an extensive evaluation of our alias-table construction algorithms, our output-sensitive algorithm for weighted sampling with replacement, and our algorithm for weighted reservoir sampling. To verify the correctness of distributed algorithms probabilistically, we propose checkers for basic operations of big-data frameworks. We show that checking numerous operations can be reduced to two core checkers: checking aggregations and checking whether one sequence is a permutation of another. While several approaches to the latter problem have long been known and are easy to parallelize, our sum-aggregation checker is a novel application of the data structure that also underlies counting Bloom filters and the count-min sketch. We implemented both checkers in Thrill, a big-data framework. Experiments with deliberately injected errors confirm the detection accuracy predicted by our theoretical analysis, even with commonly used fast hash functions whose properties are suboptimal in theory. Scaling experiments on a supercomputer show that our checkers incur only a very small runtime overhead, on the order of 2%, while all but guaranteeing the correctness of the result.
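    One long-known approach to the permutation-checking problem mentioned above compares polynomial fingerprints of the two sequences at a random evaluation point; the sketch below shows that classical technique and is not necessarily the exact checker implemented in Thrill.

```python
import random

def permutation_fingerprint(seq, r, p):
    """Two sequences are permutations of each other iff the polynomials
    prod(x - a_i) over their elements agree; evaluating both at a random
    point r modulo a prime p catches any difference with high probability
    (Schwartz-Zippel)."""
    fp = 1
    for a in seq:
        fp = fp * (r - a) % p
    return fp

p = (1 << 61) - 1                  # a Mersenne prime
r = random.randrange(p)
xs = [3, 1, 4, 1, 5, 9, 2, 6]
ys = [9, 1, 2, 6, 4, 1, 5, 3]      # a permutation of xs
zs = [9, 1, 2, 6, 4, 1, 5, 5]      # not a permutation of xs
print(permutation_fingerprint(xs, r, p) == permutation_fingerprint(ys, r, p))  # True
print(permutation_fingerprint(xs, r, p) == permutation_fingerprint(zs, r, p))  # almost surely False
```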

    Enhancing Security and Transparency of User Data Systems with Blockchain Technology

    As blockchain is increasingly gaining popularity, interest in corporate use is also gaining traction. One area where blockchain has seen increased use is systems that manage sensitive information, such as user data systems. In this thesis, commissioned by a stakeholder, blockchain is implemented in a user data system as a proof-of-concept prototype, aiming to show that the implementation can enhance data security and transparency in user data systems. The first half of the thesis builds a theoretical framework. First, the fundamental theory of blockchain is examined, including an overview of blockchain's architecture and its security features: general architecture and functionality, consensus methodology, and other security algorithms such as hashes. Second, existing blockchain solutions that could inform the design of the proof-of-concept prototype are explored: a patient data system and an Internet-of-Things system. The patient data system's case provided a solution that implements blockchain as a separate component of the patient information system, while the Internet-of-Things solution provided insight into storing functional data in the blockchain while keeping the actual raw data in a separate database with restricted access. These solutions formed an adequate foundation; however, they could not be applied as-is, which led to the need to design a new solution. The latter half reports the implementation process. First, the research method used in this thesis, the constructive research approach, is introduced; it aims to create a practical solution for a real-life problem. The prototype's primary requirement is to record a log of activity in the user data system, telling who did what and to whom, without revealing any confidential information. The prototype is implemented in a test environment using a separate database for storing the actual user data and a blockchain for storing data about activity in the user data system. The prototype's validity was tested using software testing methods, specifically integration testing and user acceptance testing. The research benefits the stakeholder with a working example showing a potential way of implementing a blockchain solution in commercial software. It aims to show that the implemented blockchain solution can help monitor actions committed by users, enforce honest usage, and help spot malicious activity, thereby improving transparency and security. The research also has value for the scientific community through its practical approach, demonstrating how a blockchain system could be implemented in sensitive user data systems and what its potential security benefits are. The next step for the study is evaluating the actual value of the implementation in the commercial software, i.e., proof-of-value research.
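    A hypothetical miniature of the prototype's logging idea, with every concrete detail invented for illustration: activity entries are hash-chained for tamper evidence, identities are stored only as hashes, and the raw user data would remain in the separate database.

```python
import hashlib, json, time

def record_activity(chain, actor, action, subject):
    """Append a tamper-evident log entry. Identities are stored as hashes
    so the chain reveals no confidential data (a hypothetical scheme, not
    the thesis prototype); raw user data stays in the separate database."""
    entry = {
        "actor": hashlib.sha256(actor.encode()).hexdigest(),
        "action": action,
        "subject": hashlib.sha256(subject.encode()).hexdigest(),
        "time": time.time(),
        "prev": chain[-1]["hash"] if chain else "0" * 64,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)

def verify_chain(chain):
    prev = "0" * 64
    for e in chain:
        body = {k: v for k, v in e.items() if k != "hash"}
        if e["prev"] != prev or e["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev = e["hash"]
    return True

chain = []
record_activity(chain, "admin1", "read", "user42")
record_activity(chain, "clerk7", "update-address", "user42")
print(verify_chain(chain))   # True; editing any entry breaks the chain
```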

    A patient agent controlled customized blockchain based framework for internet of things

    Although Blockchain implementations have emerged as revolutionary technologies for various industrial applications including cryptocurrencies, they have not been widely deployed to store data streaming from sensors to remote servers in architectures known as the Internet of Things. New Blockchain-for-the-Internet-of-Things models promise secure solutions for eHealth, smart cities, and other applications. These models pave the way for continuous monitoring of patients' physiological signs with wearable sensors to augment traditional medical practice without recourse to storing data with a trusted authority. However, existing Blockchain algorithms cannot accommodate the huge volumes, security, and privacy requirements of health data. In this thesis, our first contribution is an end-to-end secure eHealth architecture that introduces an intelligent Patient Centric Agent. The Patient Centric Agent, executing on dedicated hardware, manages the storage and access of streams of sensor-generated health data in a customized Blockchain and other less secure repositories. As IoT devices cannot host Blockchain technology due to their limited memory, power, and computational resources, the Patient Centric Agent coordinates and communicates with a private customized Blockchain on behalf of the wearable devices. While the adoption of a Patient Centric Agent offers solutions for continuous monitoring of patients' health and for storage, data privacy, and network security issues, the architecture is vulnerable to Denial of Service (DoS) and single-point-of-failure attacks. To address this, we advance a second contribution: a decentralised eHealth system in which the Patient Centric Agent is replicated at three levels: the Sensing Layer, the NEAR Processing Layer, and the FAR Processing Layer. The functionalities of the Patient Centric Agent are customized to the tasks of the three levels. Simulations confirm protection of the architecture against DoS attacks. Few patients require all their health data to be stored in Blockchain repositories; instead, an appropriate storage medium must be selected for each chunk of data by matching personal needs and preferences with the features of candidate storage mediums. Motivated by this context, we advance a third contribution: a recommendation model for health data storage that can accommodate patient preferences and make storage decisions rapidly, in real time, even with streamed data. The mapping between health data features and the characteristics of each repository is learned using machine learning. The Blockchain's capacity to make transactions and store records without central oversight enables its application to IoT networks outside health, such as underwater IoT networks, where the unattended nature of the nodes threatens their security and privacy. However, underwater IoT differs from terrestrial IoT in that acoustic signals are the communication medium, leading to high propagation delays and high error rates, exacerbated by turbulent water currents. Our fourth contribution is a customized Blockchain-leveraged framework in which the Patient Centric Agent model, renamed the Smart Agent, securely monitors underwater IoT. Finally, the Smart Agent has been investigated for developing an IoT smart-home and smart-city monitoring framework. The key algorithms underpinning each contribution have been implemented and analysed using simulators. Doctor of Philosophy.
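    The recommendation model itself is learned with machine learning in the thesis; as a purely hypothetical stand-in, a preference-weighted scoring of candidate storage mediums conveys the matching idea. All feature names and weights below are invented for the sketch.

```python
# Illustrative stand-in for the storage-recommendation idea: score each
# candidate repository against a data chunk's requirements and the
# patient's preferences. The thesis learns this mapping with ML.
MEDIUMS = {
    "blockchain":  {"privacy": 0.9, "cost": 0.2, "latency": 0.3},
    "cloud_db":    {"privacy": 0.5, "cost": 0.7, "latency": 0.8},
    "local_store": {"privacy": 0.7, "cost": 0.9, "latency": 0.9},
}

def recommend(chunk_needs):
    """Pick the medium maximizing the preference-weighted feature score."""
    def score(features):
        return sum(chunk_needs[k] * features[k] for k in chunk_needs)
    return max(MEDIUMS, key=lambda m: score(MEDIUMS[m]))

# A highly sensitive vital-signs chunk where privacy dominates:
print(recommend({"privacy": 0.8, "cost": 0.1, "latency": 0.1}))  # blockchain
```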

    SoK: Layer-Two Blockchain Protocols

    Blockchains have the potential to revolutionize markets and services. However, they currently exhibit high latencies and fail to handle transaction loads comparable to those managed by traditional financial systems. Layer-two protocols, built on top of layer-one blockchains, avoid disseminating every transaction to the whole network by exchanging authenticated transactions off-chain; they utilize the expensive and low-rate blockchain only as a recourse for disputes. The promise of layer-two protocols is to complete off-chain transactions in sub-second time rather than minutes or hours, while retaining asset security, reducing fees, and allowing blockchains to scale. We systematize the evolution of layer-two protocols over the period from the inception of cryptocurrencies in 2009 until today, structuring the multifaceted body of research on layer-two transactions. Categorizing the research into payment and state channels, commit-chains, and protocols for refereed delegation, we provide a comparison of the protocols and their properties. We also provide a systematization of the associated synchronization and routing protocols along with their privacy and security aspects. This Systematization of Knowledge (SoK) clears the layer-two fog, highlights the potential of layer-two solutions, and identifies their unsolved challenges, indicating propitious avenues of future work.
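    To make the channel idea concrete, here is a toy off-chain state update in Python. It is a deliberately simplified model: real payment channels use on-chain multisig contracts and digital signatures rather than the shared-key MACs used here, and dispute handling is omitted.

```python
import hashlib, hmac

# Toy payment-channel state machine (illustrative only).
KEYS = {"alice": b"alice-key", "bob": b"bob-key"}

def sign(party, state):
    msg = repr(sorted(state.items())).encode()
    return hmac.new(KEYS[party], msg, hashlib.sha256).hexdigest()

def update(state, payer, amount):
    """Off-chain update: shift balance, bump the sequence number, and have
    both parties endorse the new state. Only the latest mutually endorsed
    state would be enforceable on-chain in a dispute."""
    payee = "bob" if payer == "alice" else "alice"
    assert state[payer] >= amount, "insufficient channel balance"
    new = dict(state, **{payer: state[payer] - amount,
                         payee: state[payee] + amount,
                         "seq": state["seq"] + 1})
    return new, {p: sign(p, new) for p in ("alice", "bob")}

state = {"alice": 10, "bob": 0, "seq": 0}        # funded on-chain once
state, endorsements = update(state, "alice", 3)  # instant, off-chain
state, endorsements = update(state, "alice", 2)
print(state)   # {'alice': 5, 'bob': 5, 'seq': 2}
```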