16 research outputs found

    Challenges and applications of assembly level software model checking

    This thesis addresses the application of a formal method called Model Checking to the domain of software verification. Here, exploration algorithms are used to search for errors in a program. In contrast to the majority of other approaches, we claim that the search should be applied to the actual source code of the program, rather than to some formal model. There are several challenges that need to be overcome to build such a model checker. First, the tool must be capable of handling the full semantics of the underlying programming language. This implies a considerable amount of additional work unless the interpretation of the program is done by some existing infrastructure. The second challenge lies in the increased memory requirements needed to memorize entire program configurations. This additionally aggravates the problem of large state spaces that every model checker faces anyway. As a remedy to the first problem, the thesis proposes to use an existing virtual machine to interpret the program. This takes the burden off the developer, who can fully concentrate on the model checking algorithms. To address the problem of large program states, we call attention to the fact that most transitions in a program only change small fractions of the entire program state. Based on this observation, we devise an incremental storing of states which considerably lowers the memory requirements of program exploration. To further alleviate the per-state memory requirement, we apply state reconstruction, where states are no longer memorized explicitly but through their generating path. Another problem that results from the large state description of a program lies in the computational effort of hashing, which is exceptionally high for the approach used. Based on the same observation as for the incremental storing of states, we devise an incremental hash function which only needs to process the changed parts of the program's state. Due to the dynamic nature of computer programs, this is not a trivial task and constitutes a considerable part of the overall thesis. Moreover, the thesis addresses a more general problem of model checking - state explosion: the number of reachable states grows exponentially with the number of state components. To minimize the number of states to be memorized, the thesis concentrates on the use of heuristic search. It turns out that only a fraction of all reachable states needs to be visited to find a specific error in the program. Heuristics can greatly help to direct the search towards the error state. As another effective way to reduce the number of memorized states, the thesis proposes a technique that skips intermediate states that do not affect shared resources of the program. By merging several consecutive state transitions into a single transition, the technique may considerably truncate the search tree. The proposed approach is realized in StEAM, a model checker for concurrent C++ programs, which was developed in the course of the thesis. Building on an existing virtual machine, the tool provides a set of blind and directed search algorithms for the detection of errors in the actual C++ implementation of a program. StEAM implements all of the aforesaid techniques, whose effectiveness is experimentally evaluated at the end of the thesis. Moreover, we exploit the relation between model checking and planning. The claim is that the two fields of research have great similarities and that technical advances in one field can easily carry over to the other. The claim is supported by a case study where StEAM is used as a planner for concurrent multi-agent systems. The thesis also contains a user manual for StEAM and technical details that facilitate understanding of the engineering process of the tool.
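
    The incremental hash function described above can be illustrated with a minimal Python sketch, assuming the program state is a flat sequence of cells and each transition reports which cells it changed; the names and the additive hash construction are illustrative assumptions, not StEAM's actual implementation.

        import hashlib

        M = 2**64  # hash values live in a fixed modular ring

        def cell_hash(index: int, value: int) -> int:
            # Position-dependent hash of a single state cell.
            digest = hashlib.blake2b(f"{index}:{value}".encode(), digest_size=8)
            return int.from_bytes(digest.digest(), "big")

        def full_hash(state: list[int]) -> int:
            # O(n) hash of the whole state; needed only for the initial state.
            return sum(cell_hash(i, v) for i, v in enumerate(state)) % M

        def incremental_hash(old_hash: int, state: list[int],
                             changes: dict[int, int]) -> int:
            # O(|changes|) update: subtract each overwritten cell's
            # contribution and add the contribution of its new value.
            h = old_hash
            for i, new_value in changes.items():
                h = (h - cell_hash(i, state[i]) + cell_hash(i, new_value)) % M
            return h

    Because the per-cell contributions are combined with modular addition, the update is independent of the order in which cells changed; the dynamically growing state (stacks, heap) mentioned above is what makes the real construction non-trivial.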

    Decentralized SDN Control Plane for a Distributed Cloud-Edge Infrastructure: A Survey

    Today's emerging needs (Internet of Things applications, Network Function Virtualization services, Mobile Edge computing, etc.) are challenging the classic approach of deploying a few large data centers to provide cloud services. A massively distributed Cloud-Edge architecture could better fit these new trends' requirements and constraints by deploying on-demand infrastructure services in Points-of-Presence within backbone networks. In this context, a key feature is establishing connectivity among several resource managers, each one in charge of operating a subset of the infrastructure. After explaining the network management challenges related to distributed Cloud-Edge infrastructures, this article surveys and analyzes the characteristics and limitations of existing technologies in the Software Defined Network field that could be used to provide the intersite connectivity feature. We also introduce Kubernetes, the de facto container orchestration platform, and analyze its use in the proposed context. The survey concludes with a discussion of some research directions in the field of SDN applied to the management of distributed Cloud-Edge infrastructures.

    Toward High-Performance Blockchains

    The decentralized nature of blockchains has attracted many applications to build atop them, such as cryptocurrencies, smart contracts, and non-fungible tokens. The health and performance of the underlying blockchain systems considerably influence these applications. Bootstrapping new nodes by replaying all transactions on the ledger is not sustainable for ever-growing blockchains. In addition, poor performance impedes the adoption of blockchains in large-scale applications with high transaction rates. First, to address the bootstrapping problem of already-deployed UTXO-based blockchains, this thesis proposes a snapshot synchronization approach. This approach allows new nodes to synchronize themselves with the rest of the network by downloading a snapshot of the system state, thereby avoiding re-verifying every transaction since the genesis block. In addition, snapshots are stored efficiently on disk by taking advantage of the system state database. Second, although sharding improves the performance of blockchains by distributing the workload among shards, it leaves duplicated effort within a shard unaddressed. Specifically, every node has to verify all transactions on the ledger of its shard, thus limiting shard performance to the processing power of individual nodes. Aiming to improve the performance of individual shards, this thesis proposes Collaborative Transaction Verification, which enables nodes to share transaction verification results and thus reduces the per-node workload. Dependency graphs are employed to ensure that nodes reach the same system state despite different transaction verification and execution orders. Finally, cross-shard transactions rely on expensive atomic commit protocols to ensure inter-shard state consistency, thus impairing the performance of sharded blockchains. This thesis explores ways of lessening the impact of cross-shard transactions. On the one hand, a dependency-aware transaction placement algorithm is proposed to reduce cross-shard transactions. On the other hand, the processing cost of the remaining cross-shard transactions is reduced by optimizing the atomic commit protocol and by parallelizing dependent transaction verification with the atomic commit protocol. The above techniques address the bootstrapping and performance problems of blockchains. Our evaluation shows that the first technique can significantly expedite the initial synchronization of new nodes, and the other techniques can greatly boost the performance of sharded blockchains.
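
    A minimal sketch of the dependency-graph idea, assuming each transaction declares read and write sets; conflicting transactions are ordered by their position in the block, while independent ones may be verified in any order. This is an illustration of the technique, not the thesis' actual code.

        from graphlib import TopologicalSorter

        def conflicts(a, b):
            # Two transactions conflict if either writes a key
            # that the other reads or writes.
            return bool(a["writes"] & (b["reads"] | b["writes"]) or
                        b["writes"] & (a["reads"] | a["writes"]))

        def deterministic_order(txs):
            # Edge j -> i: tx j (earlier in the block) must apply before tx i.
            graph = {i: set() for i in range(len(txs))}
            for i in range(len(txs)):
                for j in range(i):
                    if conflicts(txs[i], txs[j]):
                        graph[i].add(j)
            return list(TopologicalSorter(graph).static_order())

        # t1 reads what t0 writes; t2 is independent of both.
        txs = [{"reads": set(), "writes": {"a"}},
               {"reads": {"a"}, "writes": {"b"}},
               {"reads": {"c"}, "writes": {"d"}}]
        print(deterministic_order(txs))  # 0 before 1; 2 may go anywhere

    Every node that respects this partial order reaches the same final state, which is why verification results can be shared safely.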

    Scaling Permissioned Blockchains via Sharding

    Traditional distributed systems, such as those used in banking and real estate, require a trusted third party to operate and maintain them, which makes them highly dependent on the reliability of the operator. Since Bitcoin was introduced by Nakamoto in 2008, blockchain technology has been considered a promising solution to the trust issue raised by the traditional centralized approach. Blockchain is now used by most cryptocurrencies and has meaningful applications in other areas, such as logistics and supply chain management. However, scalability remains a major limitation, and various techniques are being investigated to tackle it. Sharding is an intuitive approach to improving the scalability of blockchain systems. This thesis explores sharding techniques in permissioned blockchains. First, two techniques are examined for interleaving the shards of permissioned blockchains, referred to as strong temporal coupling and weak temporal coupling. The analysis and experimental results show that strong coupling loses performance when different shards grow unevenly, but outperforms weak coupling in a wide-area environment due to its inherent efficiency. Weak coupling, in contrast, deals naturally with load imbalance across shards and in fact tolerates shard failures without any additional effort, but loses performance on a high-latency network due to the additional coordination performed. Second, we propose Antipaxos, a leaderless consensus protocol that reaches agreement on multiple proposals with a fast-path solution in the failure-free case and falls back on a slow path to handle other cases. A new agreement problem, termed k-Interactive Consistency, is formalized first. Then, two algorithms to solve this problem are proposed under the crash failure model and the Byzantine failure model, respectively. We prove the safety and liveness of the proposed algorithms and present an experimental evaluation of their performance in the Amazon cloud. Both the crash-tolerant and Byzantine-tolerant designs reach agreement on n batches of proposals with Θ(n²) messages. This yields linear complexity per batch in one consensus cycle, rather than a single batch of proposals per cycle as in conventional solutions. The experiments show that our algorithms achieve not only lower execution latency but also higher peak throughput in the failure-free case when deployed in a geo-distributed environment. Lastly, we introduce a full sharding protocol, Geochain, for permissioned blockchains. Transaction latency is minimized by clustering participants using their geographical properties, i.e. locality. Locality is also used to decide transaction placement, which yields a low ratio of cross-shard transactions for applications such as everyday banking, retail payments, and electric vehicle charging. We also propose an efficient client-driven mechanism to handle cross-shard transactions and present an analysis. This enables clients to manage their assets across different shards directly. A prototype is implemented on top of Hyperledger Fabric v2.3 and evaluated on Amazon EC2. The experiments show that our protocol doubles peak throughput even with a high ratio of cross-shard transactions, while minimizing transaction latency.
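
    The claimed amortized cost can be made concrete with a back-of-the-envelope calculation; the all-to-all broadcast pattern below is an illustrative assumption about the fast path, not the protocol's exact message flow.

        def per_batch_messages(n: int) -> float:
            # n nodes each broadcast their proposal batch to the n - 1 others:
            # n * (n - 1) messages in total, i.e. Theta(n^2) per cycle.
            all_to_all = n * (n - 1)
            # One cycle decides n batches (one per node), so the amortized
            # cost per batch is Theta(n), linear rather than quadratic.
            return all_to_all / n

        for n in (4, 16, 64):
            print(n, per_batch_messages(n))  # 3.0, 15.0, 63.0

    A single-batch-per-cycle protocol would pay the full Θ(n²) for each batch; deciding n batches per cycle is what brings the per-batch cost down to linear.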

    Tennison: A Distributed SDN Framework for Scalable Network Security

    Despite the relative maturity of the Internet, the computer networks of today are still susceptible to attack. The necessarily distributed nature of networks for wide-area connectivity has traditionally led to high cost and complexity in designing and implementing secure networks. With the introduction of software-defined networks (SDNs) and network functions virtualization, there are opportunities for efficient network threat detection and protection. SDN's global view provides a means of monitoring and defense across the entire network. However, current SDN-based security systems are limited by a centralized framework that introduces significant control plane overhead, leading to the saturation of vital control links. In this paper, we introduce TENNISON, a novel distributed SDN security framework that combines the efficiency of SDN control and monitoring with the resilience and scalability of a distributed system. TENNISON offers effective and proportionate monitoring and remediation, compatibility with widely available networking hardware, support for legacy networks, and a modular and extensible distributed design. We demonstrate the effectiveness and capabilities of the TENNISON framework through the use of four attack scenarios. These highlight multiple levels of monitoring, rapid detection, and remediation, and provide a unique insight into the impact of multiple controllers on network attack detection at scale.

    Understanding Scalability Issues in Sharded Blockchains

    Since the release of Bitcoin in 2008, cryptocurrencies have attracted attention from academia, government, and enterprises. Blockchain, the backbone ledger in many cryptocurrencies, has shown its potential as a data structure that carries information over the network securely without the need for a centralized trusted party. In this thesis, I delve into the consensus protocols used in permissioned blockchains and analyze the sharding technique that aims to improve the scalability of blockchain systems. I discuss a permissioned sharded blockchain that I use to examine different methods of interleaving blocks, referred to as strong temporal coupling and weak temporal coupling. I provide empirical experiments to show the role of lightweight nodes in solving the scalability issues in sharded blockchain systems. The results suggest that the weak temporal coupling method performs worse than the strong temporal coupling method and is more susceptible to an increase in network latency. The results also show the importance of separating the roles of nodes and adding lightweight nodes to improve the performance and scalability of sharded blockchain systems.

    Explainable, Security-Aware and Dependency-Aware Framework for Intelligent Software Refactoring

    As software systems continue to grow in size and complexity, their maintenance becomes more challenging and costly. Even for the most technologically sophisticated and competent organizations, building and maintaining high-performing software applications with high-quality code is an extremely challenging and expensive endeavor. Software refactoring is widely recognized as the key component for maintaining high-quality software by restructuring existing code and reducing technical debt. However, refactoring is difficult to achieve and often neglected due to several limitations of existing refactoring techniques that reduce their effectiveness. These limitations include, but are not limited to, detecting refactoring opportunities, recommending specific refactoring activities, and explaining the recommended changes. Existing techniques mainly focus on the use of quality metrics such as coupling, cohesion, and the Quality Metrics for Object Oriented Design (QMOOD). However, this work identifies many other factors that assist and facilitate different maintenance activities for developers:

    1. To structure the refactoring field and existing research results, this dissertation provides the most scalable and comprehensive systematic literature review to date, analyzing the results of 3183 research papers on refactoring covering the last three decades. Based on this survey, we created a taxonomy to classify the existing research, identified research trends, and highlighted gaps in the literature for further research.

    2. To draw attention to what the current refactoring research focus should be from the developers' perspective, we carried out the first large-scale refactoring study on the most popular online Q&A forum for developers, Stack Overflow. We collected and analyzed posts to identify what developers ask about refactoring and the challenges that practitioners face when refactoring software systems.

    3. To improve the detection of refactoring opportunities in terms of quality and security in the context of mobile apps, we designed a framework that recommends the files to be refactored based on user reviews. We also considered the detection of refactoring opportunities in the context of web services, proposing a machine learning-based approach that helps service providers and subscribers predict the quality of service at the least cost. Furthermore, to help developers accurately assess the quality of their software systems and decide whether the code should be refactored, we propose a clustering-based approach to automatically identify the preferred benchmark to use for the quality assessment of a project.

    4. Regarding the refactoring generation process, we proposed different techniques to enhance the change operators and seeding mechanism by using the history of applied refactorings and incorporating refactoring dependencies, in order to improve the quality of the refactoring solutions. We also introduced the security aspect when generating refactoring recommendations, by investigating the possible impact of improving different quality attributes on a set of security metrics and finding the best trade-off between them. In another approach, we recommend refactorings that prioritize fixing quality issues in security-critical files, improve quality attributes, and remove code smells.
    All of the above contributions were validated at large scale on thousands of open-source and industry projects, in collaboration with industry partners and the open-source community. The contributions of this dissertation are integrated into a cloud-based refactoring framework that is currently used by practitioners.

    Reconfigurable Antenna Systems: Platform implementation and low-power matters

    Antennas are a necessary and often critical component of all wireless systems, and they share those systems' ever-increasing complexity and the challenges of present and emerging trends. 5G, massive low-orbit satellite architectures (e.g. OneWeb), Industry 4.0, the Internet of Things (IoT), satcom on-the-move, Advanced Driver Assistance Systems (ADAS), and autonomous vehicles all call for highly flexible systems, and antenna reconfigurability is an enabling part of these advances. The terminal segment is particularly crucial in this sense, encompassing both very compact and low-profile antennas, with various adaptability/reconfigurability requirements. This thesis work has dealt with the hardware implementation issues of Radio Frequency (RF) antenna reconfigurability, and in particular with low-power General Purpose Platforms (GPP); the work has encompassed Software Defined Radio (SDR) implementation as well as embedded low-power platforms (in particular the STM32 Nucleo family of microcontrollers). The hardware-software platform work has been complemented with the design and fabrication of reconfigurable antennas in standard technology, and the resulting systems have been tested. The selected antenna technology was an antenna array with a continuously steerable beam, controlled by voltage-driven phase-shifting circuits. Applications notably included a Wireless Sensor Network (WSN) deployed in the Italian scientific mission in Antarctica, a traffic-monitoring case study (EU H2020 project), and an innovative Global Navigation Satellite Systems (GNSS) antenna concept (patent application submitted). The SDR implementation focused on a low-cost, low-power open-source Software Defined Radio platform with IEEE 802.11 a/g/p wireless communication capability. In a second embodiment, the flexibility of the SDR paradigm was traded off to avoid the power consumption associated with the underlying operating system. The application field of reconfigurable antennas is, however, not limited to better management of energy consumption. The analysis has also been extended to satellite positioning applications. A novel beamforming method is presented, demonstrating improvements in the quality of signals received from satellites. For those working on positioning algorithms, this advancement helps improve the precision of the estimated position.
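
    The continuously steerable array described above relies on the standard progressive phase relation for a uniform linear array; the sketch below, assuming half-wavelength element spacing and an ideal phase shifter per element, illustrates that relation but not the thesis' specific array geometry or control-voltage mapping.

        import math

        def element_phases(n_elements: int, spacing_m: float,
                           wavelength_m: float, steer_deg: float) -> list[float]:
            # Phase (radians) applied to element n so the main beam points
            # steer_deg away from broadside: phi_n = -2*pi*n*d*sin(theta)/lambda.
            theta = math.radians(steer_deg)
            k = 2 * math.pi / wavelength_m  # free-space wavenumber
            return [-k * n * spacing_m * math.sin(theta)
                    for n in range(n_elements)]

        # Four elements at half-wavelength spacing, 2.45 GHz, steered 20 degrees.
        wl = 3e8 / 2.45e9
        print([round(p, 2) for p in element_phases(4, wl / 2, wl, 20.0)])

    In a voltage-driven implementation, each phase value is then mapped to the control voltage of the corresponding phase-shifting circuit.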

    SoK: Understanding BFT Consensus in the Age of Blockchains

    Blockchain, as an enabler of current Internet infrastructure, has provided many unique features and revolutionized distributed systems, ushering in a new era. Its decentralization, immutability, and transparency have attracted many applications to adopt the design philosophy of blockchain and customize various replicated solutions. Under the hood of blockchain, consensus protocols play the most important role in achieving distributed replication. The distributed systems community has extensively studied the technical components of consensus for reaching agreement among a group of nodes. Because of trust issues and the existence of various faults, it is hard to design a resilient system in practical situations. Byzantine fault-tolerant (BFT) state machine replication (SMR) is regarded as an ideal candidate that can tolerate arbitrary faulty behaviors. However, the inherent complexity of BFT consensus protocols and their rapid evolution make them hard to adapt to application domains in practice. Many excellent Byzantine-tolerant replicated solutions and ideas have been contributed to improve performance, availability, or resource efficiency. This paper conducts a systematic and comprehensive study of BFT consensus protocols with a specific focus on the blockchain era. We explore both general principles and practical schemes for achieving consensus under Byzantine settings. We then survey, compare, and categorize the state-of-the-art solutions to understand BFT consensus in detail. For each representative protocol, we conduct an in-depth discussion of its most important architectural building blocks as well as the key techniques it uses. We aim for this paper to provide system researchers and developers with a concrete view of the current design landscape and to help them find solutions to concrete problems. Finally, we present several critical challenges and some potential research directions to advance research on BFT consensus protocols in the age of blockchains.
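
    As background for the protocols surveyed, the classic BFT resilience arithmetic can be sketched in a few lines: tolerating f Byzantine replicas requires n >= 3f + 1 nodes and quorums of 2f + 1, so that any two quorums intersect in at least one honest node. This is standard PBFT-style reasoning, not a result specific to this paper.

        def bft_sizes(f: int) -> tuple[int, int]:
            # Minimum cluster and quorum sizes to tolerate f Byzantine nodes.
            n = 3 * f + 1
            quorum = 2 * f + 1
            return n, quorum

        for f in (1, 2, 3):
            n, q = bft_sizes(f)
            # Any two quorums share 2q - n = f + 1 nodes, hence at least
            # one honest node even if all f faulty nodes sit in the overlap.
            print(f"f={f}: n={n}, quorum={q}, overlap={2 * q - n}")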

    Integrating protein structural information

    Dissertation presented to obtain the degree of Doctor in Biochemistry (Structural Biochemistry) from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia. The central theme of this work is the application of constraint programming and other artificial intelligence techniques to protein structure problems, with the goal of better combining experimental data with structure prediction methods. Part one of the dissertation introduces the main subjects of protein structure and constraint programming, summarises the state of the art in the modelling of protein structures and complexes, sets the context for the techniques described later on, and outlines the main point of the thesis: the integration of experimental data in modelling. The first chapter, Protein Structure, introduces the reader to the basic notions of amino acid structure, protein chains, and protein folding and interaction. These are important concepts for understanding the work described in parts two and three. Chapter two, Protein Modelling, gives a brief overview of experimental and theoretical techniques for modelling protein structures. The information in this chapter provides the context of the investigations described in parts two and three, but is not essential to understanding the methods developed. Chapter three, Constraint Programming, outlines the main concepts of this programming technique. Understanding variable modelling, the notions of consistency and propagation, and search methods should greatly help the reader interested in the details of the algorithms described in part two. The fourth chapter, Integrating Structural Information, is a summary of the thesis proposed here. This chapter is an overview of the objectives of this work, and gives an idea of how the algorithms developed here could help in modelling protein structures. The main goal is to provide a flexible and continuously evolving framework for the integration of structural information from a diversity of experimental techniques and theoretical predictions. Part two describes the algorithms developed, which constitute the main original contribution of this work. This part is aimed especially at developers interested in the details of the algorithms, in replicating the results, in improving the method, or in integrating the algorithms into other applications. Biochemical aspects are dealt with briefly and as necessary; the emphasis is on the algorithms and the code.
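
    The consistency-and-propagation loop mentioned in chapter three can be illustrated with a toy one-dimensional example: pruning the interval domains of two atom coordinates under a maximum-distance constraint until a fixed point is reached. The dissertation's algorithms operate on richer three-dimensional position domains; this sketch only conveys the flavor of the technique.

        def narrow(dom, other, d):
            # x must lie within distance d of some value in `other`.
            lo = max(dom[0], other[0] - d)
            hi = min(dom[1], other[1] + d)
            return (lo, hi) if lo <= hi else None

        def propagate_distance(dom_a, dom_b, d):
            # Alternate narrowing until a fixed point (arc consistency)
            # or an empty domain (inconsistency, triggering backtracking).
            while True:
                new_a = narrow(dom_a, dom_b, d)
                if new_a is None:
                    return None
                new_b = narrow(dom_b, new_a, d)
                if new_b is None:
                    return None
                if (new_a, new_b) == (dom_a, dom_b):
                    return new_a, new_b
                dom_a, dom_b = new_a, new_b

        # Two atom coordinates known to lie within 4 angstroms of each other:
        print(propagate_distance((0.0, 10.0), (7.0, 20.0), 4.0))
        # -> ((3.0, 10.0), (7.0, 14.0))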