Architectural mismatch tolerance
The integrity of complex software systems built from existing components increasingly depends on the integrity of the mechanisms used to interconnect these components and, in particular, on the ability of these mechanisms to cope with architectural mismatches that may exist between components. There is a need to detect and handle (i.e., tolerate) architectural mismatches at runtime, because in the majority of practical situations it is impossible to localize and correct all such mismatches at development time. When developing complex software systems, the problem is not only to identify the appropriate components, but also to make sure that these components are interconnected in a way that allows mismatches to be tolerated. The resulting architectural solution should be a system based on the existing components, which are independent in their nature but able to interact in well-understood ways. To find such a solution, we apply general principles of fault tolerance to dealing with architectural mismatches.
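As a loose illustration of this idea (not the mechanism proposed in the paper), the Python sketch below applies the classic fault-tolerance cycle of error detection, containment, and recovery to an interface mismatch between two hypothetical components; every name and the units-mismatch scenario are invented for illustration.

    # Illustrative sketch only: a runtime connector that tolerates a simple
    # architectural mismatch (disagreement on units) between two hypothetical
    # components, using the detect/contain/recover cycle from fault tolerance.

    class MismatchError(Exception):
        """Raised when the connector detects an unhandled mismatch."""

    class SensorA:
        """Hypothetical producer that reports temperature in Fahrenheit."""
        def read(self):
            return {"unit": "F", "value": 98.6}

    class ControllerB:
        """Hypothetical consumer that expects temperature in Celsius."""
        def accept(self, celsius):
            assert -50.0 <= celsius <= 150.0, "out of plausible range"
            return f"set-point updated to {celsius:.1f} C"

    class TolerantConnector:
        """Detects a unit mismatch at runtime and recovers by converting."""
        def __init__(self, producer, consumer):
            self.producer = producer
            self.consumer = consumer

        def relay(self):
            msg = self.producer.read()
            if msg["unit"] == "C":                       # no mismatch
                return self.consumer.accept(msg["value"])
            if msg["unit"] == "F":                       # detect, then recover
                return self.consumer.accept((msg["value"] - 32.0) * 5.0 / 9.0)
            raise MismatchError(f"unhandled unit {msg['unit']!r}")  # contain

    print(TolerantConnector(SensorA(), ControllerB()).relay())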
Architectural Techniques to Enable Reliable and Scalable Memory Systems
High capacity and scalable memory systems play a vital role in enabling our
desktops, smartphones, and pervasive technologies like Internet of Things
(IoT). Unfortunately, memory systems are becoming increasingly prone to faults.
This is because we rely on technology scaling to improve memory density, and at
small feature sizes, memory cells tend to break easily. Today, memory
reliability is seen as the key impediment towards using high-density devices,
adopting new technologies, and even building the next Exascale supercomputer.
To ensure even a bare-minimum level of reliability, present-day solutions tend
to have high performance, power and area overheads. Ideally, we would like
memory systems to remain robust, scalable, and implementable while keeping the
overheads to a minimum. This dissertation describes how simple cross-layer
architectural techniques can provide orders of magnitude higher reliability and
enable seamless scalability for memory systems while incurring negligible
overheads.
Comment: PhD thesis, Georgia Institute of Technology (May 2017)
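The dissertation's own techniques are not reproduced here, but the baseline they improve on is conventional per-word error-correcting codes. As a minimal sketch of that baseline only, the fragment below implements the textbook Hamming(7,4) single-error-correcting code, illustrating the kind of redundancy overhead memory reliability schemes pay.

    # Textbook Hamming(7,4) SEC code, shown only to illustrate the per-word
    # redundancy that conventional memory ECC pays; it is not the cross-layer
    # technique proposed in the dissertation.

    def hamming74_encode(d):
        """Encode 4 data bits [d1, d2, d3, d4] into 7 bits with 3 parity bits."""
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

    def hamming74_correct(c):
        """Locate and flip a single flipped bit via the parity syndrome."""
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]        # checks positions 1,3,5,7
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]        # checks positions 2,3,6,7
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]        # checks positions 4,5,6,7
        syndrome = s1 + 2 * s2 + 4 * s3       # 0 means no error detected
        if syndrome:
            c[syndrome - 1] ^= 1              # syndrome encodes the bad position
        return [c[2], c[4], c[5], c[6]]       # extract d1..d4

    word = hamming74_encode([1, 0, 1, 1])
    word[5] ^= 1                              # inject a single-bit fault
    assert hamming74_correct(word) == [1, 0, 1, 1]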
Multi-party Quantum Computation
We investigate definitions of and protocols for multi-party quantum computing
in the scenario where the secret data are quantum systems. We work in the
quantum information-theoretic model, where no assumptions are made on the
computational power of the adversary. For the slightly weaker task of
verifiable quantum secret sharing, we give a protocol which tolerates any t <
n/4 cheating parties (out of n). This is shown to be optimal. We use this new
tool to establish that any multi-party quantum computation can be securely
performed as long as the number of dishonest players is less than n/6.Comment: Masters Thesis. Based on Joint work with Claude Crepeau and Daniel
Gottesman. Full version is in preparatio
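The quantum protocols themselves do not fit in a snippet, but the classical intuition behind threshold secret sharing is Shamir's polynomial scheme; the sketch below shows a toy (t+1)-out-of-n Shamir scheme over a prime field, offered only as a classical analogue of the sharing step, with all parameters chosen arbitrarily.

    # Classical analogue only: Shamir secret sharing over GF(p), the intuition
    # behind threshold bounds such as t < n/4 in verifiable (quantum) secret
    # sharing. The modulus and parameters below are arbitrary toy values.
    import random

    P = 2_147_483_647                    # a Mersenne prime, toy field modulus

    def share(secret, t, n):
        """Split `secret` into n shares; any t+1 of them reconstruct it."""
        coeffs = [secret] + [random.randrange(P) for _ in range(t)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 recovers the secret."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shares = share(42, t=2, n=9)          # t = 2 satisfies t < 9/4
    assert reconstruct(shares[:3]) == 42  # any t+1 = 3 shares suffice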
Improving Independence of Failures in BFT
Independence of failures is a basic assumption underlying the correctness of BFT protocols. In the literature, this subject has been addressed by providing N-version-like abstractions. Though this can provide a good level of obfuscation against semantic-based attacks, if the replicas know each other's identities then non-semantic attacks like DoS can still compromise all replicas together. In this paper, we address the obfuscation problem in a different way, by keeping replicas unaware of each other. This makes it harder for attackers to sneak from one replica to another and reduces the impact of simultaneous attacks on all replicas. To this end, we present a new obfuscated BFT protocol, called OBFT, where the replicas remain unaware of each other by exchanging their messages through the clients. Thus, OBFT assumes honest, but possibly crash-prone, clients. We show that obfuscation in our context could not be achieved without this assumption, and we give possible applications where this assumption can be accepted. We evaluated our protocol on an Emulab cluster with a wide-area topology. Our experiments show that the scalability and throughput of OBFT remain comparable to existing BFT protocols despite the obfuscation overhead.
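Purely to illustrate the communication pattern described above (not the actual OBFT message flow or its cryptographic machinery), the sketch below shows replicas that never address one another: the client collects each replica's protocol message and relays it to the others. Every class and message shape here is invented.

    # Illustration of the relay pattern only (not the OBFT protocol itself):
    # replicas never learn each other's addresses; the honest but possibly
    # crash-prone client forwards all inter-replica traffic. Names invented.

    class Replica:
        def __init__(self, rid):
            self.rid = rid
            self.seen = []                 # messages relayed to us by the client

        def handle(self, request):
            """Process a client request; emit a message meant for the others."""
            return {"from": self.rid, "digest": hash(request)}

        def deliver(self, msg):
            self.seen.append(msg)

    class RelayingClient:
        """Relays protocol traffic so replicas stay unaware of one another."""
        def __init__(self, replicas):
            self.replicas = replicas

        def invoke(self, request):
            msgs = [r.handle(request) for r in self.replicas]
            for r in self.replicas:        # forward every message to every peer
                for m in msgs:
                    if m["from"] != r.rid:
                        r.deliver(m)
            digests = [m["digest"] for m in msgs]
            return max(set(digests), key=digests.count)   # majority digest

    client = RelayingClient([Replica(i) for i in range(4)])
    print(client.invoke("op: transfer 10"))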
Study of a unified hardware and software fault-tolerant architecture
A unified architectural concept, called the Fault Tolerant Processor Attached Processor (FTP-AP), that can tolerate hardware as well as software faults is proposed for applications requiring ultrareliable computation capability. An emulation of the FTP-AP architecture, consisting of a breadboard Motorola 68010-based quadruply redundant Fault Tolerant Processor, four VAX 750s as attached processors, and four versions of a transport aircraft yaw damper control law, is used as a testbed in the AIRLAB to examine a number of critical issues. Solutions to several basic problems associated with N-Version software are proposed and implemented on the testbed, including a confidence voter to resolve coincident errors in N-Version software. A reliability model of N-Version software, based on the recent understanding of software failure mechanisms, is also developed. The basic FTP-AP architectural concept appears suitable for hosting N-Version application software while simultaneously tolerating hardware failures. Architectural enhancements for greater efficiency, software reliability modeling, and N-Version issues that merit further research are identified.
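The abstract mentions a confidence voter for resolving coincident errors among the N versions; its actual design is not given here, so the sketch below only shows a generic weighted voter over N version outputs with tolerance-based matching (natural for floating-point control-law outputs). All details are assumptions, not the FTP-AP mechanism.

    # Generic N-version voter sketch; the FTP-AP "confidence voter" is not
    # specified in the abstract, so this merely illustrates voting over N
    # redundant outputs with per-version weights and tolerance matching.

    def vote(outputs, weights=None, tol=1e-6):
        """Return the output backed by the largest total weight.

        outputs -- one float per software version
        weights -- per-version confidence weights (defaults to equal weight)
        tol     -- two outputs within tol are treated as agreeing
        """
        weights = weights or [1.0] * len(outputs)
        best_value, best_weight = None, -1.0
        for i, v in enumerate(outputs):
            # total confidence of all versions agreeing with version i
            w = sum(weights[j] for j, u in enumerate(outputs) if abs(u - v) <= tol)
            if w > best_weight:
                best_value, best_weight = v, w
        return best_value

    # Three of four versions agree; the outlier is outvoted.
    print(vote([0.501, 0.502, 0.501, 9.900], tol=0.01))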