Decentralized Algorithm for Communication Efficient Distributed Shared Memory
A sequential computer executes one CPU instruction at a time. Over the years, sequential computers have increased steadily in performance, primarily as a result of improvements in digital hardware technology. A major concern of computer designers, however, is that logic and memory devices are approaching ultimate physical limits on their size and speed. While size reductions and speed increases of a few orders of magnitude beyond present levels seem feasible, further improvements in the performance of sequential computers may not be achievable at acceptable cost. A more economical solution is to design systems that can process more than one CPU instruction at a time, an approach known as parallel processing.

Parallel processors are also referred to as distributed systems: interconnected collections of autonomous computers [Sta84]. Distributed systems can be classified in many ways according to their structure or behavior. In Flynn's taxonomy of computer architectures, they belong to the MIMD (multiple instruction, multiple data) class [Tan92]. The MIMD class consists of two categories: machines that have shared memory (tightly coupled) and machines that do not (loosely coupled). As shown in Figure 1.1, each category can be further divided according to the architecture of the interconnection network. In bus-based systems, a single network, backplane, bus, cable, or other medium connects all machines; switched systems, by contrast, connect machines by individual point-to-point wires.
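To make the tightly coupled versus loosely coupled distinction concrete, here is a minimal sketch in Go (a language chosen purely for illustration; the thesis does not prescribe one). The first function has concurrent workers cooperate through a single shared variable guarded by a lock, the shared-memory style; the second has workers that share nothing and exchange partial results as messages over a channel, the message-passing style.

package main

import (
	"fmt"
	"sync"
)

// Tightly coupled style: workers communicate through one shared
// variable protected by a lock, as on a shared-memory machine.
func sharedMemorySum(values []int) int {
	var (
		total int
		mu    sync.Mutex
		wg    sync.WaitGroup
	)
	for _, v := range values {
		wg.Add(1)
		go func(v int) {
			defer wg.Done()
			mu.Lock()
			total += v // every worker updates the same memory location
			mu.Unlock()
		}(v)
	}
	wg.Wait()
	return total
}

// Loosely coupled style: workers share no memory and instead send
// partial results over a channel, as in a message-passing system.
func messagePassingSum(values []int) int {
	results := make(chan int, len(values))
	for _, v := range values {
		go func(v int) { results <- v }(v)
	}
	total := 0
	for range values {
		total += <-results
	}
	return total
}

func main() {
	values := []int{1, 2, 3, 4}
	fmt.Println(sharedMemorySum(values))   // 10
	fmt.Println(messagePassingSum(values)) // 10
}

In the first style all workers see a single memory image and must synchronize their access to it; in the second, the only interaction is the explicit transfer of messages, which is why loosely coupled systems map naturally onto networks of autonomous machines.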