
    LightChain: A DHT-based Blockchain for Resource Constrained Environments

    As an append-only distributed database, blockchain is used in a wide variety of applications, including cryptocurrencies and the Internet of Things (IoT). Existing blockchain solutions have drawbacks in communication and storage efficiency, a tendency toward centralization, and consistency problems. In this paper, we propose LightChain, the first blockchain architecture that operates over a Distributed Hash Table (DHT) of participating peers. LightChain is a permissionless blockchain that provides addressable blocks and transactions within the network, making them efficiently accessible to all peers. Each block and transaction is replicated within the DHT of peers and is retrieved on demand. Hence, peers in LightChain are not required to retrieve or keep the entire blockchain. LightChain is fair in that all participating peers have a uniform chance of being involved in consensus, regardless of resources such as hashing power or stake. LightChain provides a deterministic fork-resolving strategy as well as a blacklisting mechanism, and it is secure against colluding adversarial peers attacking the availability and integrity of the system. We provide mathematical analysis and experimental results on scenarios involving 10K nodes to demonstrate the security and fairness of LightChain. As we show experimentally, compared with mainstream blockchains such as Bitcoin and Ethereum, LightChain requires around 66 times less per-node storage and is around 380 times faster at bootstrapping a new node into the system, while every LightChain node is equally likely to be rewarded for participating in the protocol.
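    The core data-layout idea is that blocks and transactions are content-addressed in the overlay, so a peer fetches them on demand instead of storing the whole chain. A minimal Java sketch of that storage pattern follows; the class name DhtBlockStore and the single in-memory map standing in for the peer-to-peer overlay are illustrative assumptions, not LightChain's actual API.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy in-memory stand-in for a DHT overlay: block contents are keyed by
// their hash, so any peer can look a block up on demand instead of
// keeping the whole chain locally.
public class DhtBlockStore {
    private final Map<String, byte[]> overlay = new ConcurrentHashMap<>();

    private static String hashKey(byte[] payload) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(payload);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    // "put": replicate the block under its content address.
    public String publishBlock(byte[] block) throws Exception {
        String key = hashKey(block);
        overlay.put(key, block);   // a real DHT would replicate to several peers
        return key;
    }

    // "get": any peer retrieves the block on demand by its address.
    public byte[] fetchBlock(String key) {
        return overlay.get(key);
    }

    public static void main(String[] args) throws Exception {
        DhtBlockStore store = new DhtBlockStore();
        String key = store.publishBlock("block #1 payload".getBytes(StandardCharsets.UTF_8));
        System.out.println("stored at " + key + ", length " + store.fetchBlock(key).length);
    }
}
```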

    Database for structural steel experiments under a distributed collaboration environment

    Because of the stringent earthquake-resistance requirements for civil infrastructure in Japan, many research organizations have for years been conducting structural steel experiments, in particular seismic tests such as cyclic loading tests and pseudo-dynamic tests, to determine the seismic performance of steel structures. However, the original test data obtained by most organizations are not stored in a manner suitable for distribution and reuse by others. Although a Numerical Database of Steel Structures (NDSS) was developed some years ago to preserve and share experimental data from the ultimate strength tests conducted at Nagoya University, the database was not easy to access from other computer platforms because it lacked support for suitable communication media. With the rapid development of information networks and web browsers, structural engineers and researchers can now exchange various types of test data over the Internet. This paper presents the development of a distributed collaborative database system for structural steel experiments. The database is made available on the World Wide Web, and the Java language enables efficient interactive retrieval. The application of the developed database system to the retrieval of experimental data and to seismic numerical analysis is validated through examples.
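    As an illustration of the kind of interactive retrieval such a shared database enables, here is a minimal Java/JDBC sketch; the connection URL, credentials, table name, and column names are hypothetical and are not the NDSS schema.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Hypothetical client-side query against a shared experiments database.
// URL, table, and column names are illustrative placeholders only.
public class SteelTestQuery {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://example.org/steel_experiments"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, "reader", "secret");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT specimen_id, test_type, max_load_kn "
                   + "FROM cyclic_loading_tests WHERE organization = ?")) {
            stmt.setString(1, "Nagoya University");
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("%s  %s  %.1f kN%n",
                            rs.getString("specimen_id"),
                            rs.getString("test_type"),
                            rs.getDouble("max_load_kn"));
                }
            }
        }
    }
}
```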

    Generic performance management of multiservice networks

    This paper discusses various approaches to the development of an integrated and automated network performance measurement tool. Adopting an object-oriented approach to the entire system design helps provide the required measurement intelligence in a distributed manner. The authors have found the Java language to greatly assist in this task, and it has been used for all aspects of the system, from traffic generation and reception to the database and display systems. Finally, examples of the system have been developed and implemented at various levels, from experimental operations on ATM networks to a prototype operational system on BT's commercial SMDS (Switched Multi-megabit Data Service) network.
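    As a rough illustration of the traffic generation and reception side of such a measurement tool, the following Java sketch sends a timestamped UDP probe and reports the round-trip delay; the responder host, port, and packet layout are assumptions for the example, not part of the system described in the paper.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

// Minimal round-trip probe: send a timestamped UDP packet to an echo
// responder and report the delay.  Host, port, and payload format are
// illustrative placeholders.
public class RttProbe {
    public static void main(String[] args) throws Exception {
        InetAddress target = InetAddress.getByName("probe-responder.example.net");
        int port = 9000;
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setSoTimeout(2000);
            byte[] payload = ByteBuffer.allocate(Long.BYTES)
                                       .putLong(System.nanoTime()).array();
            socket.send(new DatagramPacket(payload, payload.length, target, port));

            byte[] reply = new byte[Long.BYTES];
            DatagramPacket echo = new DatagramPacket(reply, reply.length);
            socket.receive(echo);   // assumes the responder echoes the payload back
            long sentAt = ByteBuffer.wrap(echo.getData()).getLong();
            System.out.printf("round-trip time: %.3f ms%n",
                    (System.nanoTime() - sentAt) / 1e6);
        }
    }
}
```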

    Prognostic Reasoner based adaptive power management system for a more electric aircraft

    This research work presents a novel approach to the design and development of an adaptive power management system framed in the Prognostics and Health Monitoring (PHM) perspective of an Electrical Power Generation and Distribution System (EPGS). PHM algorithms were developed to detect the health status of EPGS components, accurately predict failures, calculate the Remaining Useful Life (RUL), and, in many cases, reconfigure the system around identified system and subsystem faults. By introducing this approach in the electrical power management system controller, we gain a lead time of a few minutes before failures, with an accurate prediction horizon on critical system and subsystem components whose failure could cause catastrophic secondary damage, including loss of the aircraft. As a minimum criterion, the warning time on critical components and the related system reconfiguration must permit a safe return to landing, and would enhance safety. A distributed architecture has been developed for dynamic power management of the electrical distribution system, through which all electrically supplied loads can be effectively controlled. A hybrid mathematical model based on the direct-quadrature (d-q) axis transformation of the generator has been formulated for studying various structural and parametric faults. The different failure modes were generated by injecting faults into the electrical power system using a fault injection mechanism. The data captured during these studies were recorded to form a “Failure Database” for the electrical system. A hardware-in-the-loop experimental study was carried out to validate the power management algorithm with an FPGA-DSP controller. To meet the reliability requirements, a tri-redundant electrical power management system based on DSP and FPGA has been developed.
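    For reference, the direct-quadrature (d-q) transformation mentioned above maps three-phase generator quantities into a rotating reference frame. The Java sketch below uses the common amplitude-invariant 2/3 scaling; the paper's model may adopt a different convention, and the class and method names are illustrative.

```java
// Direct-quadrature (Park) transformation of three-phase quantities,
// the model-side step the abstract refers to.  Amplitude-invariant
// 2/3 scaling is assumed here; other conventions exist.
public class DqTransform {
    static final double TWO_THIRDS_PI = 2.0 * Math.PI / 3.0;

    /** Returns {d, q} for phase quantities a, b, c at rotor angle theta (rad). */
    static double[] abcToDq(double a, double b, double c, double theta) {
        double d = 2.0 / 3.0 * (a * Math.cos(theta)
                              + b * Math.cos(theta - TWO_THIRDS_PI)
                              + c * Math.cos(theta + TWO_THIRDS_PI));
        double q = -2.0 / 3.0 * (a * Math.sin(theta)
                               + b * Math.sin(theta - TWO_THIRDS_PI)
                               + c * Math.sin(theta + TWO_THIRDS_PI));
        return new double[] {d, q};
    }

    public static void main(String[] args) {
        // Balanced 1 p.u. three-phase set sampled at theta = 0.3 rad.
        double theta = 0.3;
        double a = Math.cos(theta);
        double b = Math.cos(theta - TWO_THIRDS_PI);
        double c = Math.cos(theta + TWO_THIRDS_PI);
        double[] dq = abcToDq(a, b, c, theta);
        System.out.printf("d = %.3f, q = %.3f%n", dq[0], dq[1]); // expect d ≈ 1, q ≈ 0
    }
}
```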

    Design and Analysis of a Logless Dynamic Reconfiguration Protocol

    Distributed replication systems based on the replicated state machine model have become ubiquitous as the foundation of modern database systems. To ensure availability in the presence of faults, these systems must be able to dynamically replace failed nodes with healthy ones via dynamic reconfiguration. MongoDB is a document-oriented database with a distributed replication mechanism derived from the Raft protocol. In this paper, we present MongoRaftReconfig, a novel dynamic reconfiguration protocol for the MongoDB replication system. MongoRaftReconfig uses a logless approach to managing configuration state and decouples the processing of configuration changes from the main database operation log. The protocol's design was influenced by engineering constraints faced when redesigning an unsafe, legacy reconfiguration mechanism that previously existed in MongoDB. We provide a safety proof of MongoRaftReconfig, along with a formal specification in TLA+. To our knowledge, this is the first published safety proof and formal specification of a reconfiguration protocol for a Raft-based system. We also present results from model checking its safety properties on finite protocol instances. Finally, we discuss the conceptual novelties of MongoRaftReconfig, explain how it can be understood as an optimized and generalized version of Raft's single-server reconfiguration algorithm, and present an experimental evaluation of how its optimizations can provide performance benefits for reconfigurations. Comment: 35 pages, 2 figures.
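    To make the logless idea concrete, the following Java sketch keeps the configuration as a single (version, term) pair and installs an incoming configuration only if it is strictly newer, comparing term first and then version. This ordering rule is a plausible reading of a Raft-style reconfiguration scheme, not a verbatim excerpt from MongoRaftReconfig.

```java
// Logless configuration state carried as one (version, term) pair rather
// than as entries in the operation log.  The "newer than" rule below is
// an assumption for illustration.
public class ReplicaConfig {
    final long version;   // bumped on every reconfiguration
    final long term;      // term of the primary that installed the config

    ReplicaConfig(long version, long term) {
        this.version = version;
        this.term = term;
    }

    /** A node replaces its local config only with a strictly newer one. */
    boolean isNewerThan(ReplicaConfig other) {
        if (this.term != other.term) return this.term > other.term;
        return this.version > other.version;
    }

    public static void main(String[] args) {
        ReplicaConfig current  = new ReplicaConfig(4, 2);
        ReplicaConfig incoming = new ReplicaConfig(3, 3); // later term wins
        if (incoming.isNewerThan(current)) {
            System.out.println("install incoming config");
        }
    }
}
```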

    SANNS: Scaling Up Secure Approximate k-Nearest Neighbors Search

    The k-Nearest Neighbor Search (k-NNS) is the backbone of several cloud-based services such as recommender systems, face recognition, and database search on text and images. In these services, the client sends the query to the cloud server and receives the response, so both the query and the response are revealed to the service provider. Such data disclosures are unacceptable in several scenarios due to the sensitivity of the data and/or privacy laws. In this paper, we introduce SANNS, a system for secure k-NNS that keeps the client's query and the search result confidential. SANNS comprises two protocols: an optimized linear scan and a protocol based on a novel sublinear-time clustering-based algorithm. We prove the security of both protocols in the standard semi-honest model. The protocols are built upon several state-of-the-art cryptographic primitives such as lattice-based additively homomorphic encryption, distributed oblivious RAM, and garbled circuits. We provide several contributions to each of these primitives that are applicable to other secure computation tasks. Both of our protocols rely on a new circuit for approximate top-k selection from n numbers that is built from O(n + k^2) comparators. We have implemented our proposed system and performed an extensive experimental evaluation on four datasets in two different computation environments, demonstrating 18-31× faster response times compared to optimally implemented protocols from prior work. Moreover, SANNS is the first work that scales to a database of 10 million entries, pushing the limit by more than two orders of magnitude. Comment: 18 pages, to appear at USENIX Security Symposium 2020.
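    One way to picture a top-k selection built from roughly n + k^2 comparators is to take the maximum of each of k random bins (about n comparisons) and then order those k maxima with a naive comparison network (about k^2/2 comparisons). The Java sketch below illustrates only that comparator budget; it is not the circuit construction used in SANNS.

```java
import java.util.Arrays;
import java.util.Random;

// Illustration only: an approximate top-k from roughly n + k^2 pairwise
// comparisons -- max of each of k random bins, then a simple selection
// sort of the k bin maxima.  Not the SANNS circuit.
public class ApproxTopK {
    static double[] approxTopK(double[] values, int k, long seed) {
        // Randomly shuffle, then assign elements to k bins round-robin.
        double[] shuffled = values.clone();
        Random rng = new Random(seed);
        for (int i = shuffled.length - 1; i > 0; i--) {
            int j = rng.nextInt(i + 1);
            double tmp = shuffled[i]; shuffled[i] = shuffled[j]; shuffled[j] = tmp;
        }
        double[] binMax = new double[k];
        Arrays.fill(binMax, Double.NEGATIVE_INFINITY);
        for (int i = 0; i < shuffled.length; i++) {
            int bin = i % k;
            if (shuffled[i] > binMax[bin]) binMax[bin] = shuffled[i];  // ~n comparisons
        }
        // Naive descending sort of the k maxima: ~k^2/2 comparisons.
        for (int i = 0; i < k; i++) {
            for (int j = i + 1; j < k; j++) {
                if (binMax[j] > binMax[i]) {
                    double tmp = binMax[i]; binMax[i] = binMax[j]; binMax[j] = tmp;
                }
            }
        }
        return binMax;   // approximate k largest values, descending
    }

    public static void main(String[] args) {
        double[] data = new Random(7).doubles(1000).toArray();
        System.out.println(Arrays.toString(approxTopK(data, 5, 42)));
    }
}
```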