
    Breeding unicorns: Developing trustworthy and scalable randomness beacons

    Randomness beacons are services that periodically emit a random number, allowing users to base decisions on the same random value without trusting anyone: ideally, a randomness beacon not only produces unpredictable values, but is also bias-resistant, publicly verifiable, and computationally cheap for its users. Such randomness beacons can serve as an important primitive for smart contracts in a variety of contexts. This paper first presents a structured security analysis, based on which we then design, implement, and evaluate a trustworthy and efficient randomness beacon. Our approach does not require users to register or to run any computationally intensive operations. We then compare different implementation and deployment options on distributed ledgers, and report on an Ethereum smart-contract-based lottery using our beacon.
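    As one concrete illustration of how such a beacon can be consumed, the sketch below shows a hypothetical lottery draw in which every participant independently derives the same winner from a published beacon output; the function name, ticket list, and placeholder beacon bytes are assumptions for illustration, not the protocol proposed in the paper.

        # Hypothetical consumer of a randomness beacon (illustrative only):
        # every user runs the same deterministic computation on the same
        # public beacon output, so nobody has to trust an announcer.
        import hashlib

        def pick_winner(beacon_value: bytes, tickets: list[str]) -> str:
            """Map a public random value to exactly one ticket."""
            ordered = sorted(tickets)                      # canonical ticket order
            material = beacon_value + b"|" + ",".join(ordered).encode()
            digest = hashlib.sha256(material).digest()
            return ordered[int.from_bytes(digest, "big") % len(ordered)]

        # Placeholder beacon output for some round (not real beacon data).
        beacon_output = bytes.fromhex("9f86d081884c7d65" * 4)
        print(pick_winner(beacon_output, ["alice", "bob", "carol"]))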

    Securely Scaling Blockchain Base Layers

    This thesis presents the design, implementation and evaluation of techniques to scale the base layers of decentralised blockchain networks, where transactions are posted directly on the chain. The key challenge is to scale the base layer without sacrificing properties such as decentralisation, security and public verifiability. It proposes Chainspace, a blockchain sharding system in which nodes process and reach consensus on transactions in parallel, thereby scaling block production and increasing on-chain throughput. To make the actions of consensus-participating nodes efficiently verifiable despite the increase in on-chain data, a system of fraud and data availability proofs is proposed so that invalid blocks can be efficiently challenged and rejected without requiring all users to download all transactions, thereby scaling block verification. The thesis then explores blockchain and application design paradigms that enable on-chain scalability from the outset. This is in contrast to sharding, which scales blockchains designed under the traditional state machine replication paradigm, where consensus and transaction execution are coupled. LazyLedger, a blockchain design in which the consensus layer is separated from the execution layer, is proposed: the consensus layer is responsible only for checking the availability of the data in blocks via data availability proofs. Transactions are instead executed off-chain, eliminating the need for nodes to execute on-chain transactions in order to verify blocks. Finally, as an example of a blockchain use case that does not require an execution layer, Contour, a scalable design for software binary transparency, is proposed on top of the existing Bitcoin blockchain, where not all software binary records need to be posted on-chain.
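    To give a feel for the data availability proofs mentioned above, the following sketch shows random sampling by a light client; the chunk counts, the sampling routine, and the assumption of independent samples are illustrative simplifications, not the constructions from the thesis.

        # Illustrative data-availability sampling (simplified assumptions):
        # a light client fetches a few random chunks of an erasure-coded
        # block and accepts only if every sampled chunk is retrievable.
        import random

        def accepts_block(chunk_available: list[bool], num_samples: int) -> bool:
            indices = random.sample(range(len(chunk_available)), num_samples)
            return all(chunk_available[i] for i in indices)

        # If a block producer withholds half of the chunks (enough to block
        # reconstruction at typical erasure-coding rates), each sample misses
        # with probability ~1/2, so 20 samples all succeed with probability
        # roughly 2**-20.
        total = 256
        withheld = set(random.sample(range(total), total // 2))
        availability = [i not in withheld for i in range(total)]
        print(accepts_block(availability, num_samples=20))   # almost surely False
        print("false-accept probability ~", 2.0 ** -20)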

    Advanced flight control system study

    The architecture, requirements, and system elements of an ultrareliable, advanced flight control system are described. The basic criteria are a functional reliability of 10^-10 failures per hour of flight and scheduled maintenance only every 6 months. A distributed system architecture is described, including a multiplexed communication system, a reliable bus controller, the use of skewed sensor arrays, and actuator interfaces. A test bed and a flight evaluation program are proposed.
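    As a back-of-the-envelope illustration of the 10^-10 per flight hour target (numbers assumed, not taken from the report), the sketch below shows how channel redundancy drives down the probability that all channels fail in the same hour, under the strong simplification of independent failures with no common-mode faults.

        # Illustrative redundancy arithmetic (independent-failure assumption;
        # ignores common-mode faults, voting logic and fault coverage).
        def all_channels_fail(per_channel_failure_prob: float, channels: int) -> float:
            return per_channel_failure_prob ** channels

        single_channel = 1e-4   # assumed per-hour failure probability of one channel
        for n in (1, 2, 3, 4):
            print(f"{n} channel(s): {all_channels_fail(single_channel, n):.1e} per hour")
        # Under these idealised assumptions, at least three independent
        # channels are needed to get below a 1e-10 per flight hour figure.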

    Tools and Algorithms for the Construction and Analysis of Systems

    This book is Open Access under a CC BY licence. The LNCS 11427 and 11428 proceedings set constitutes the proceedings of the 25th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2019, which took place in Prague, Czech Republic, in April 2019, held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2019. A total of 42 full papers and 8 short tool-demo papers presented in these volumes were carefully reviewed and selected from 164 submissions. The papers are organized in topical sections as follows: Part I: SAT and SMT, SAT solving and theorem proving; verification and analysis; model checking; tool demo; and machine learning. Part II: concurrent and distributed systems; monitoring and runtime verification; hybrid and stochastic systems; synthesis; symbolic verification; and safety and fault-tolerant systems.

    Performance assessment of real-time data management on wireless sensor networks

    Technological advances in recent years have allowed the maturity of Wireless Sensor Networks (WSNs), which aim at performing environmental monitoring and data collection. This sort of network is composed of hundreds, thousands or even millions of tiny smart computers known as wireless sensor nodes, which may be battery-powered and are equipped with sensors, a radio transceiver, a Central Processing Unit (CPU) and some memory. However, due to their small size and low-cost requirements, sensor node resources such as processing power, storage and especially energy are very limited. Once the sensors perform their measurements of the environment, the problem of data storage and querying arises. In fact, the sensors have restricted storage capacity and the ongoing interaction between sensors and environment results in huge amounts of data. Techniques for data storage and querying in WSNs can be based on either external or local storage. External storage, called the warehousing approach, is a centralized system in which the data gathered by the sensors are periodically sent to a central database server where user queries are processed. Local storage, on the other hand, called the distributed approach, exploits the computational capabilities of the sensors, which act as local databases. Data may then be stored both in a central database server and in the devices themselves, enabling queries against both. WSNs are used in a wide variety of applications, which may perform various operations on the collected sensor data. For certain applications, such as real-time applications, the sensor data must closely reflect the current state of the targeted environment. However, the environment changes constantly and the data are collected at discrete moments in time. As such, the collected data have a temporal validity and, as time advances, become less accurate, until they no longer reflect the state of the environment. Thus, such applications, for example in industrial automation and aviation, must query and analyze the data within a bounded time in order to make decisions and react efficiently. In this context, the design of efficient real-time data management solutions is necessary to deal with both time constraints and energy consumption. This thesis studies real-time data management techniques for WSNs. In particular, it focuses on the challenges of handling real-time data storage and querying in WSNs and on efficient real-time data management solutions for them. First, the main specifications of real-time data management are identified and the real-time data management solutions for WSNs available in the literature are presented. Secondly, in order to provide an energy-efficient real-time data management solution, the techniques used to manage data and queries in WSNs based on the distributed paradigm are studied in depth. In fact, many research works argue that the distributed approach is the most energy-efficient way of managing data and queries in WSNs, rather than warehousing. In addition, this approach can provide quasi real-time query processing because the most current data will be retrieved from the network. Thirdly, based on these two studies and considering the complexity of developing, testing, and debugging such complex systems, a model for a simulation framework for real-time database management on WSNs using the distributed approach, together with its implementation, is proposed. This helps to explore various real-time database techniques for WSNs before deployment, saving money and time. Moreover, the proposed model can be improved by adding the simulation of protocols or by porting part of the simulator to another available simulator. To validate the model, a case study considering both real-time and energy constraints is discussed. Fourth, a new architecture that combines statistical modeling techniques with the distributed approach, together with a query processing algorithm to optimize real-time user query processing, is proposed. This combination enables a query processing algorithm based on admission control that uses the error tolerance and the probabilistic confidence interval as admission parameters. Experiments based on real-world as well as synthetic data sets demonstrate that the proposed solution optimizes real-time query processing to save more energy while maintaining low latency.
    Fundação para a Ciência e a Tecnologia
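    A minimal sketch of the admission-control idea described in the last part of the abstract is given below; the Gaussian model, the 95% default confidence level, and all names are assumptions for illustration rather than the thesis's actual algorithm.

        # Illustrative admission control for real-time queries: answer from a
        # statistical model when its confidence interval fits within the
        # user's error tolerance, otherwise pay the energy cost of querying
        # the sensor nodes themselves.
        from statistics import NormalDist

        def answer_query(model_mean: float, model_std: float,
                         error_tolerance: float, confidence: float = 0.95):
            z = NormalDist().inv_cdf(0.5 + confidence / 2)   # ~1.96 for 95%
            half_width = z * model_std                       # CI half-width
            if half_width <= error_tolerance:
                return model_mean, "answered from model (no radio traffic)"
            return None, "tolerance too tight: forward query to sensor nodes"

        print(answer_query(model_mean=22.4, model_std=0.3, error_tolerance=1.0))
        print(answer_query(model_mean=22.4, model_std=0.3, error_tolerance=0.2))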

    AICPA Technical Practice Aids, as of June 1, 2003, Volume 2


    AICPA Technical Practice Aids, as of June 1, 2005, Volume 2
