3 research outputs found

    Management of fault tolerance and traffic congestion in cloud data center

    In this era of ubiquitous computing, coupled with the emergence of big data and the internet of things, there have been constant changes in every aspect of cloud data center communications: network connectivity, data storage, data transfer, and architectural design. As a result, the volume of data transferred and the frequency of data transfers have increased tremendously, causing device failures and traffic congestion. To cope with these changes so that performance can be sustained amid device failures and traffic congestion, the design of a fault-tolerant cloud data center is important. A fault-tolerant cloud data center network should provide alternative paths from source to destination during failures so that there is no abrupt drop in performance. Yet despite ongoing research in this area, no robust cloud data center design has emerged that adequately alleviates the poor fault tolerance of the cloud data center. In this paper, we propose improved versions of fat-tree interconnection hybrid designs, derived from the structure called the Z-fat tree, to address the issue of fault tolerance. We then compare these designs against a single fat-tree architecture with the same amount of resources, using a client-server communication pattern such as an email application in a cloud data center. The simulation results, obtained under failed switches and links, show that our proposed hybrid designs outperform the single fat-tree design as the inter-arrival time of the packets decreases.
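    The fault-tolerance idea described above (alternative paths from source to destination during failures) can be sketched as follows. This is a minimal illustrative example, not the paper's Z-fat-tree routing: the path representation, node names, and `select_path` helper are assumptions made here purely for demonstration.

    ```python
    # Sketch: failover path selection among equal-cost fat-tree paths.
    # Each path is a list of (node, node) links; a path is usable only
    # if none of its links is in the failed set.

    def select_path(paths, failed_links):
        """Return the first candidate path avoiding every failed link."""
        for path in paths:
            if not any(link in failed_links for link in path):
                return path
        return None  # all candidates hit a failure: source and destination cut off

    # Two equal-cost paths between hosts h1 and h2, through different
    # aggregation switches a1 and a2 (names are hypothetical).
    paths = [
        [("h1", "e1"), ("e1", "a1"), ("a1", "e2"), ("e2", "h2")],
        [("h1", "e1"), ("e1", "a2"), ("a2", "e2"), ("e2", "h2")],
    ]
    ```

    With no failures the first path is chosen; if link ("e1", "a1") fails, traffic fails over to the path through a2, so performance degrades gracefully rather than abruptly.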

    Can Cooling Technology Save Many-Core Parallel Programming from Its Programming Woes?

    An abstract of this work will be presented at the Compiler, Architecture and Tools Conference (CATC), Intel Development Center, Haifa, Israel, November 23, 2015. This paper advances the following premise (henceforth, "vision"): that it is feasible to greatly enhance data movement in the short term, and to do so in ways that would be both power-efficient and pragmatic in the long term. The paper spells this premise out in greater detail: 1. it is feasible to build first generations of a variety of (power-inefficient) designs for which data movement will not be a restriction, and to begin application software development for them; 2. growing reliance on silicon-compatible photonic technologies, and feasible advances in them with proper investment, will allow reduction of power consumption in these designs by several orders of magnitude; 3. successful high-performance application software, the ease of programming demonstrated, and growing adoption by customers, software vendors, and programmers will incentivize (hardware vendor) investment in new application-software-compatible generations of these designs (a new "software spiral" a la former Intel CEO Andy Grove), with further reduction of power consumption in each generation; 4. microfluidic cooling is instrumental for enabling item 1, as well as for midwifing this overall vision. The opening paragraph of the paper provides a preamble to that vision, the body of the paper supports it, and the paragraph "Moore's-Law-type vision" summarizes it. The scope of the paper is somewhat forward-looking, and it may not exactly fit any particular community. However, its new directions for interaction between architecture and programming may suggest new horizons for representing and exposing a greater variety of data and task parallelism. National Science Foundation

    Quasi Fat Trees for HPC Clouds and Their Fault-Resilient Closed-Form Routing

    No full text