ARPA Whitepaper
We propose a secure computation solution for blockchain networks. The
correctness of computation is verifiable even under a malicious-majority
condition using an information-theoretic Message Authentication Code (MAC),
and privacy is preserved using secret sharing. With a state-of-the-art
multiparty computation (MPC) protocol and a layer-2 solution, our
privacy-preserving computation cryptographically guarantees data security on
the blockchain while delegating the heavy computational work to a few nodes.
This breakthrough has several implications for the future of decentralized
networks. First, secure computation can be used to support Private Smart
Contracts, where consensus is reached without exposing the information in the
public contract. Second, it enables data to be shared and used in a trustless
network without disclosing the raw data while it is in use, so that data
ownership and data usage are safely separated. Last but not least, the
computation and verification processes are separated, which can be viewed as
computational sharding; this effectively makes the transaction processing
speed linear in the number of participating nodes. Our objective is to deploy
our secure computation network as a layer-2 solution on top of any blockchain
system. Smart Contracts\cite{smartcontract} will be used as a bridge linking
the blockchain and computation networks. Additionally, they will be used as
verifiers to ensure that outsourced computation is completed correctly. To
achieve this, we first develop a general MPC network with advanced features,
such as: 1) Secure Computation, 2) Off-chain Computation, 3) Verifiable
Computation, and 4) Support for dApps' needs such as privacy-preserving data
exchange.
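The two primitives the abstract combines, additive secret sharing for privacy and an information-theoretic MAC for correctness, can be sketched briefly. This is a minimal illustration in the style of SPDZ-like protocols, not ARPA's actual implementation; the prime `P` and the cleartext handling of the MAC key `alpha` (which a real protocol would itself secret-share) are simplifying assumptions.

```python
import random

P = 2**61 - 1  # prime modulus for the share arithmetic (illustrative choice)

def share(value, n):
    """Split `value` into n additive shares over Z_P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine additive shares into the original value."""
    return sum(shares) % P

def mac_share(value, alpha, n):
    """Share a value together with its information-theoretic MAC, alpha*value.

    In SPDZ-style protocols the key `alpha` is itself secret-shared; it is
    held in the clear here only to keep the sketch short.
    """
    return share(value % P, n), share(alpha * value % P, n)

def add_shared(xs, ys):
    """Addition is purely local: each party adds its value and tag shares."""
    return [(a + b) % P for a, b in zip(xs, ys)]
```

After reconstruction, the result `z` is accepted only if the reconstructed tag equals `alpha * z % P`; a cheating party that shifts even one share makes this check fail, which is how correctness survives a malicious majority.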
Low tech connections into the ARPA Internet: the RawPacket split-gateway
This report describes a "low technology" method for connecting into the ARPA Internet. The use of a RawPacket interface in a system which supports IP makes possible the construction of a split-gateway between two hosts. The RawPacket interface permits a user-level process to introduce arbitrary packets into the IP layer, resulting in a virtual network interface. Since the split-gateway is implemented using a RawPacket interface, two networks may be connected using a convenient medium which does not require explicit kernel support. Hence, split-gateways are well-suited for use as stub-gateways, connecting a local network to a long-haul network such as the ARPA backbone. In particular, the split-gateway discussed in this report achieves a reasonable level of connectivity for a comparatively small expenditure. This report details how the RawPacket software and split-gateways are implemented. In addition, various daemon configurations are presented, modifications to the operating environment are discussed, and some performance measurements are given.
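The core of a split-gateway is a user-level relay loop between a packet source and a packet sink. The sketch below uses plain file descriptors as a stand-in for both ends (an assumption: the real RawPacket device is a kernel facility specific to the systems in the report), which is enough to show the shape of the forwarding process.

```python
import os
import threading

def forward(src_fd, dst_fd, bufsize=4096):
    """Relay raw packet data from src_fd to dst_fd until EOF.

    Each half of a split-gateway runs loops like this between the RawPacket
    virtual interface and the link to the peer host; no kernel routing
    support is needed because the copy happens in user space.
    """
    while True:
        data = os.read(src_fd, bufsize)
        if not data:
            break
        os.write(dst_fd, data)

def run_half_gateway(link_in, link_out):
    """Start one forwarding direction of the gateway on a background thread."""
    t = threading.Thread(target=forward, args=(link_in, link_out), daemon=True)
    t.start()
    return t
```

A full gateway would run two such threads, one per direction, on each host; the virtual-interface trick is that packets written to the RawPacket device re-enter the local IP layer as if they had arrived on a real network interface.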
Robot computer problem solving system
The conceptual, experimental, and practical phases of developing a robot computer problem solving system are outlined. Robot intelligence, conversion of the programming language SAIL to run under the TENEX monitor, and the use of the network to run several cooperating jobs at different sites are discussed.
How robust are distributed systems
A distributed system is made up of large numbers of components operating asynchronously from one another, and hence with incomplete and inaccurate views of one another's state. Load fluctuations are common as new tasks arrive and active tasks terminate. Jointly, these aspects make it nearly impossible to arrive at detailed predictions for a system's behavior. For distributed systems to be used successfully in situations where humans cannot provide the predictable real-time responsiveness of a computer, the system must be robust. The technology of today can too easily be affected by worm programs or by seemingly trivial mechanisms that, for example, can trigger stock market disasters. Inventors of a technology have an obligation to overcome flaws that can exact a human cost. A set of principles for guiding solutions to distributed computing problems is presented.
Monkeys, typewriters and networks: the internet in the light of the theory of accidental excellence
Viewed in the light of the theory of accidental excellence, there is much to suggest that the success of the Internet and its various protocols derives from a communications technology accident, or better, a series of accidents. In the early 1990s, many experts still saw the Internet as an academic toy that would soon vanish into thin air again. The Internet probably gained its reputation as an academic toy largely because it violated the basic principles of traditional communications networks. The quarrel about paradigms that erupted in the 1970s between the telephony world and the newly emerging Internet community was not, however, only about transmission technology doctrines. It was also about the question (still unresolved today) as to who actually governs the flow of information: the operators or the users of the network? The paper first describes various network architectures in relation to the communication cultures expressed in their make-up. It then examines the creative environment found at the nodes of the network, whose coincidental importance for the Internet boom must not be forgotten. Finally, the example of Usenet is taken to look at the kind of regulatory practices that have emerged in the communications services provided within the framework of a decentralised network architecture.
Evaluation of an Internet Document Delivery Service
An Internet-based Document Delivery Service (DDS) has been developed within the framework of the CNR (the Italian National Research Council) Project BiblioMIME, in order to take advantage of new Internet technologies and to promote cooperation among CNR and Italian university libraries. Adopting such technologies changes the traditional organisation of DDS and may drastically reduce costs and delivery times.
An information system managing DDS requests and monitoring the temporal evolution of the service has been implemented, running on the local-area network of a test-site library. It aims to track number and types of documents requested and received, user distribution, delivery times and types (surface mail, fax, Internet), to automate repetitive manual procedures and to deal with the various accounting methods used by other libraries. Transmission of documents is carried out by means of an e-mail/Web gateway system supporting document exchange via Internet, which assists receiving libraries in retrieving requested documents.
This paper describes the architecture and main design features of the e-mail/Web gateway server (the BiblioMime server). This approach permits librarians to continue using the e-mail service to send large documents, while resolving problems that users may encounter when downloading large files with e-mail agents. The library operator sends the document as an attachment to the destination address; on the fly, the e-mail server extracts the attachments, saves them in a web-server disk file, and substitutes them with a new message part that includes a URL pointing to the saved document. The receiver can then download these large objects by means of a user-friendly browser.
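The attachment-to-URL rewriting step described above can be sketched with the standard library's MIME tools. This is an illustrative reconstruction, not the BiblioMime server's code; `save_dir`, `base_url`, and the flat traversal of top-level parts are assumptions made to keep the example short.

```python
import email.mime.application
import email.mime.multipart
import email.mime.text
import os

def externalize_attachments(msg, save_dir, base_url):
    """Replace each attachment in a multipart message with a URL note.

    Mirrors the gateway behaviour described in the paper: the attachment
    body is saved under the web server's document root (`save_dir`) and
    the message part is rewritten to point at `base_url`.
    """
    new_parts = []
    for part in msg.get_payload():
        filename = part.get_filename()
        if filename:
            # Save the decoded attachment where the web server can serve it.
            path = os.path.join(save_dir, filename)
            with open(path, "wb") as fh:
                fh.write(part.get_payload(decode=True))
            # Substitute the bulky part with a small text part holding a URL.
            note = email.mime.text.MIMEText(
                f"Attachment available at {base_url}/{filename}\n")
            new_parts.append(note)
        else:
            new_parts.append(part)
    msg.set_payload(new_parts)
    return msg
```

The recipient's mail agent then fetches only a short message, and the large object is downloaded over HTTP at the reader's convenience.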
We further discuss the data gathered during the triennium 1998-2000; this consists of about 5,000 DDS transactions per annum with 300 other Italian scientific and bio-medical libraries and commercial document suppliers. Use of the instruments described above allowed us to evaluate the performance of the service "before" and "after" the use of Internet Document Delivery and to extract some critical data regarding DDS. These include:
a) libraries with which we have greater numbers of exchanges and their turnaround times;
b) extraordinary reduction in costs and delivery times;
c) the most frequently requested serial titles (allowing cost-effective decisions on new subscriptions);
d) impact on DDS of library participation in consortia which allow user access to greater numbers of online serials.