
    Tolerating multiple faults in multistage interconnection networks with minimal extra stages

    Adams and Siegel (1982) proposed an extra-stage cube interconnection network that tolerates one switch failure with one extra stage. We extend their results and discover a class of extra-stage interconnection networks that tolerate multiple switch failures with a minimal number of extra stages. Adopting the same fault model as Adams and Siegel, the faulty switches can be bypassed by a pair of demultiplexer/multiplexer combinations. It is easy to show that, to maintain point-to-point and broadcast connectivity, there must be at least f extra stages to tolerate f switch failures. We present the first known construction of an extra-stage interconnection network that meets this lower bound. This n-dimensional multistage interconnection network has n+f stages and tolerates f switch failures. An n-bit label called the mask is used for each stage; it indicates the bit differences between the two inputs coming into a common switch. We designed the fault-tolerant construction so that it repeatedly uses the singleton basis of the n-dimensional vector space as the stage mask vectors. This construction is further generalized, and we prove that an n-dimensional multistage interconnection network is optimally fault-tolerant if and only if the mask vectors of every n consecutive stages span the n-dimensional vector space.
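
    For illustration, here is a minimal Python sketch of the spanning condition stated above: stage masks are modeled as n-bit integers, and a construction is checked by verifying that every n consecutive masks have full rank over GF(2). The function names and the integer encoding are assumptions for this sketch, not from the paper.

```python
def gf2_rank(vectors):
    """Rank of a set of bit-vectors (n-bit ints) over GF(2)."""
    basis = {}  # leading bit position -> basis vector with that leading bit
    for v in vectors:
        while v:
            lead = v.bit_length() - 1
            if lead in basis:
                v ^= basis[lead]   # eliminate the leading bit
            else:
                basis[lead] = v    # v extends the basis
                break
    return len(basis)

def is_optimally_fault_tolerant(stage_masks, n):
    """Condition from the paper: every n consecutive stage masks span GF(2)^n."""
    return all(gf2_rank(stage_masks[i:i + n]) == n
               for i in range(len(stage_masks) - n + 1))

# The construction described above repeats the singleton basis e_0..e_{n-1};
# here n = 3 with f = 2 extra stages, i.e. masks 001, 010, 100, 001, 010.
n, f = 3, 2
masks = [1 << (i % n) for i in range(n + f)]
print(is_optimally_fault_tolerant(masks, n))  # -> True
```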

    Fault-tolerant interconnection networks for multiprocessor systems

    Interconnection networks represent the backbone of multiprocessor systems. A failure in the network, therefore, could seriously degrade system performance. For this reason, fault tolerance has been regarded as a major consideration in interconnection network design. This thesis presents two novel techniques to provide fault-tolerance capabilities to three major networks: the Baseline network, the Benes network, and the Clos network. First, the Simple Fault Tolerance Technique (SFT) is presented. The SFT technique is the result of merging two widely known interconnection mechanisms: a normal interconnection network and a shared bus. This technique is most suitable for networks with small switches, such as the Baseline network and the Benes network. For the Clos network, whose switches may be too large for the SFT, another technique is developed to produce the Fault-Tolerant Clos (FTC) network. In the FTC, one switch is added to each stage. The two techniques are described and thoroughly analyzed.
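
    As a rough illustration of the SFT fallback idea (illustrative names and details, not the thesis's design): each source-destination pair uses a fixed switch path through the network, and a packet whose path touches a faulty switch is diverted to the shared bus, which preserves connectivity at reduced speed.

```python
def deliver(packet, path_switches, faulty_switches, bus):
    """Return the route taken: the network path if it is fault-free,
    otherwise the shared bus (the SFT-style fallback)."""
    if path_switches & faulty_switches:   # path hits a faulty switch
        bus.append(packet)                # serialized fallback medium
        return "bus"
    return "network"

bus = []
faulty = {(1, 1)}                         # (stage, switch) marked as failed
print(deliver("pkt0", {(0, 3), (1, 1)}, faulty, bus))  # -> bus
print(deliver("pkt1", {(0, 2), (1, 0)}, faulty, bus))  # -> network
```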

    In-depth Analysis On Parallel Processing Patterns for High-Performance Dataframes

    The Data Science domain has expanded monumentally in both research and industry communities during the past decade, predominantly owing to the Big Data revolution. Artificial Intelligence (AI) and Machine Learning (ML) are bringing more complexity to data engineering applications, which are now integrated into data processing pipelines to process terabytes of data. Typically, a significant amount of time is spent on data preprocessing in these pipelines, and hence improving its efficiency directly impacts overall pipeline performance. The community has recently embraced the concept of Dataframes as the de facto data structure for data representation and manipulation. However, the most widely used serial Dataframes today (R, pandas) experience performance limitations while working on even moderately large data sets. We believe that there is plenty of room for improvement by looking at this problem from a high-performance computing point of view. In a prior publication, we presented a set of parallel processing patterns for distributed dataframe operators and the reference runtime implementation, Cylon [1]. In this paper, we expand on the initial concept by introducing a cost model for evaluating the said patterns. Furthermore, we evaluate the performance of Cylon on the ORNL Summit supercomputer.
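
    To make the notion of a parallel processing pattern concrete, here is a toy Python sketch (not Cylon's API; partition layout and names are assumptions): a distributed groupby-aggregate is typically expressed as a hash shuffle on the key, so equal keys land in the same partition, followed by a purely local aggregation.

```python
from collections import defaultdict

def shuffle(partitions, key, num_partitions):
    """Redistribute rows so that all rows with the same key share a partition."""
    out = [[] for _ in range(num_partitions)]
    for part in partitions:
        for row in part:
            out[hash(row[key]) % num_partitions].append(row)
    return out

def local_groupby_sum(partition, key, value):
    """Local aggregation; globally correct only after a shuffle on `key`."""
    acc = defaultdict(int)
    for row in partition:
        acc[row[key]] += row[value]
    return acc

# Two input partitions of a toy dataframe.
parts = [
    [{"k": "a", "v": 1}, {"k": "b", "v": 2}],
    [{"k": "a", "v": 3}, {"k": "c", "v": 4}],
]
shuffled = shuffle(parts, "k", num_partitions=2)
print([dict(local_groupby_sum(p, "k", "v")) for p in shuffled])
```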

    Balancing Security, Performance and Deployability in Encrypted Search

    Encryption is an important tool for protecting data, especially data stored in the cloud. However, standard encryption techniques prevent efficient search. Searchable encryption attempts to solve this issue, protecting the data while still providing search functionality. Retaining the ability to search comes at a cost in security, performance, and/or utility. An important practical aspect of utility is compatibility with legacy systems. Unfortunately, the efficient searchable encryption constructions that are compatible with these systems have been proven vulnerable to attack, even against weaker adversary models. The goal of this work is to address the security problem inherent in efficient, legacy-compatible constructions. First, we present attacks on previous constructions that are compatible with legacy systems, demonstrating their vulnerability. Then we present two new searchable encryption constructions. The first, weakly randomized encryption, provides superior security to prior easily deployable constructions, while offering similar ease of deployment and query performance nearly identical to that of unencrypted databases. The second construction, EDDiES, provides much stronger security at the expense of a slight regression in performance. These constructions show that it is possible to achieve a better balance of security and performance under the utility constraints that come with deployment in legacy systems.
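
    To make the weakly randomized idea concrete, here is a hedged toy sketch of the general approach (not the paper's construction; the HMAC tagging and salt-space size are assumptions): each keyword maps to one of a small set of deterministic tags, smoothing frequency leakage, while a query expands into an equality match on all of the keyword's tags, keeping it compatible with a standard database.

```python
import hmac, hashlib, random

NUM_SALTS = 4  # small salt space: more salts, less leakage, slower queries

def tag(key: bytes, word: str, salt: int) -> bytes:
    """Deterministic tag for (word, salt); equality-searchable by a legacy DB."""
    return hmac.new(key, f"{word}|{salt}".encode(), hashlib.sha256).digest()

def encrypt_keyword(key: bytes, word: str) -> bytes:
    # A real scheme weights the salt choice by keyword frequency to flatten
    # the ciphertext histogram; uniform sampling keeps this sketch short.
    return tag(key, word, random.randrange(NUM_SALTS))

def query_tokens(key: bytes, word: str) -> list[bytes]:
    # Legacy-compatible query: an equality match on any of the word's tags.
    return [tag(key, word, s) for s in range(NUM_SALTS)]

key = b"\x00" * 32
index = [encrypt_keyword(key, w) for w in ["cat", "dog", "cat"]]
hits = [t for t in index if t in query_tokens(key, "cat")]
print(len(hits))  # -> 2
```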

    Secure Network-on-Chip Against Black Hole and Tampering Attacks

    The Network-on-Chip (NoC) has become the communication heart of the Multiprocessor System-on-Chip (MPSoC). It has therefore been subject to a plethora of security threats that aim to degrade system performance or steal sensitive information. Due to the globalization of the modern semiconductor industry, many different parties take part in the hardware design of a system. As a result, the NoC can be infected with a malicious circuit, known as a Hardware Trojan (HT), that leaves a back door for security breaches. HTs are designed to be too small to be uncovered by offline circuit-level testing, so the system requires online monitoring to detect and prevent the HT at runtime. This dissertation focuses on HTs inside the router of a NoC designed by a third party. It explores two HT-based threat models for the MPSoC, in which the NoC experiences packet loss and packet tampering once the HT in the infected router is activated and in the attacking state. Extensive experiments for each proposed architecture were conducted using a cycle-accurate simulator to demonstrate its effect on the performance of the NoC-based system. The first threat model is the Black Hole Router (BHR) attack, in which the router silently discards packets passing through it without any notification. The effect of the BHR is presented and analyzed to show the potency of the attack on a NoC-based system. A countermeasure protocol is proposed to detect the BHR at runtime and counteract the deliberate packet-dropping attack, with a 26.9% area overhead, an average 21.31% performance overhead, and a 22% energy-consumption overhead. The protocol is extended into an efficient, power-gated scheme that enhances NoC throughput and reduces energy consumption by using an end-to-end (e2e) approach. The power-gated e2e technique locates the BHR and avoids it, with a 1% performance overhead and a 2% energy-consumption overhead. The second threat model is a packet-integrity attack, in which the HT tampers with packets to mount a denial-of-service attack, steal sensitive information, gain unauthorized access, or misroute packets to an unintended node. An authentic and secure NoC platform is proposed to detect and counter the packet-tampering attack, maintaining data integrity and authenticity while keeping data secret, with a 24.21% area overhead. The proposed NoC architecture not only detects the attack but also locates the infected router and isolates it from the network.
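
    The following toy sketch illustrates the end-to-end flavor of such a detection scheme (an assumed mechanism for illustration, not the dissertation's protocol): the source tracks a per-flow deficit between packets sent and acknowledgments received, and a persistent deficit beyond a tolerance threshold implicates a black hole on the path.

```python
class Flow:
    """Per source-destination flow state kept at the sender."""
    def __init__(self, threshold=3):
        self.sent = 0
        self.acked = 0
        self.threshold = threshold  # tolerated in-flight / lost-packet gap

    def send(self):
        self.sent += 1

    def ack(self):
        self.acked += 1

    def suspect_black_hole(self):
        return self.sent - self.acked > self.threshold

flow = Flow()
for i in range(10):
    flow.send()
    if i % 2 == 0:        # a BHR on the path silently drops every other packet
        flow.ack()
print(flow.suspect_black_hole())  # -> True: deficit exceeds the threshold
```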

    Three Highly Parallel Computer Architectures and Their Suitability for Three Representative Artificial Intelligence Problems

    Virtually all current Artificial Intelligence (AI) applications are designed to run on sequential (von Neumann) computer architectures. As a result, current systems do not scale up: as knowledge is added to these systems, a point is reached where their performance quickly degrades. The performance of a von Neumann machine is limited by the bandwidth between memory and processor (the von Neumann bottleneck). The bottleneck is avoided by distributing the processing power across the memory of the computer; in this scheme the memory becomes the processor (a "smart memory"). This paper highlights the relationship between three representative AI application domains, namely knowledge representation, rule-based expert systems, and vision, and their parallel hardware realizations. Three machines, covering a wide range of fundamental properties of parallel processors, namely module granularity, concurrency control, and communication geometry, are reviewed: the Connection Machine (a fine-grained SIMD hypercube), DADO (a medium-grained MIMD/SIMD/MSIMD tree machine), and the Butterfly (a coarse-grained MIMD machine built around a butterfly switch).

    Applying Secure Multi-party Computation in Practice

    In this work, we present solutions to technical difficulties in deploying secure multi-party computation in real-world applications. We first give a brief overview of the current state of the art, point out several shortcomings, and address them. The main contribution of this work is an end-to-end process description of deploying secure multi-party computation for the first large-scale registry-based statistical study on linked databases. Involving large stakeholders like government institutions also introduces non-technical requirements, such as signing contracts and negotiating with the Data Protection Agency.
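
    For readers unfamiliar with the underlying primitive, here is a generic additive secret-sharing sketch in Python. It illustrates the kind of building block such deployments rest on, not the specific framework used in the study; the modulus and party count are assumptions for the example.

```python
import random

P = 2**61 - 1  # a prime modulus chosen for this example

def share(secret: int, parties: int) -> list[int]:
    """Split a secret into additive shares that sum to it modulo P."""
    shares = [random.randrange(P) for _ in range(parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % P

# Each party holds one share of each input; adding shares locally and then
# reconstructing yields the sum without revealing the individual inputs.
a, b = share(20, 3), share(22, 3)
local_sums = [(x + y) % P for x, y in zip(a, b)]
print(reconstruct(local_sums))  # -> 42
```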

    The connection machine

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1988. By William Daniel Hillis. Bibliography: leaves 134-157.

    Parallel and Distributed Computing

    The 14 chapters presented in this book cover a wide variety of representative works ranging from hardware design to application development. In particular, the topics addressed are programmable and reconfigurable devices and systems, dependability of GPUs (Graphics Processing Units), network topologies, cache-coherence protocols, resource allocation, scheduling algorithms, peer-to-peer networks, large-scale network simulation, and parallel routines and algorithms. In this way, the articles included in this book constitute an excellent reference for engineers and researchers who have particular interests in each of these topics in parallel and distributed computing.