
    Joint Bandwidth Assignment and Routing for Power Saving on Large File Transfer with Time Constraints

    The increase in network traffic in recent years has led to increased power consumption. Accordingly, many studies have tried to reduce the energy consumption of network devices. Various types of data have become available in large quantities via large high-speed computer networks. Time-constrained file transfer is receiving much attention as an advanced service. In this model, a request must be completed within a user-specified deadline or rejected if the requested deadline cannot be met. Several bandwidth assignment and routing methods have been proposed to accept more requests. However, these existing methods do not consider energy consumption. Herein, we propose a joint bandwidth assignment and routing method that reduces energy consumption for time-constrained large file transfer. The bandwidth assignment method reduces the power consumption of intermediate nodes, typically routers, by accumulating requests and transferring several of them at the same time. The routing method reduces power consumption by selecting the path with the least predicted energy consumption. Finally, we evaluate the proposed method through simulation experiments.
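
    The routing side of this idea can be illustrated with a small sketch. The C code below is not the paper's algorithm; it is a minimal, hypothetical illustration of choosing, among precomputed candidate paths, the one with the least predicted energy for a transfer of a given size at an assigned bandwidth. All names, the per-node power model, and the example numbers are assumptions made for illustration.

        /* Hypothetical sketch: pick the candidate path with the lowest
         * predicted energy, assuming each intermediate node draws a fixed
         * active power while forwarding. Not the authors' algorithm. */
        #include <float.h>
        #include <stdio.h>

        #define MAX_HOPS 16

        struct path {
            int    num_hops;                /* number of intermediate nodes */
            double active_power[MAX_HOPS];  /* watts drawn while forwarding */
        };

        /* Predicted energy (joules) to move `bytes` over `p` at `rate_bps`. */
        static double predicted_energy(const struct path *p, double bytes,
                                       double rate_bps)
        {
            double seconds = bytes * 8.0 / rate_bps;
            double watts = 0.0;
            for (int i = 0; i < p->num_hops; i++)
                watts += p->active_power[i];
            return watts * seconds;
        }

        /* Return the index of the candidate path with the least predicted energy. */
        static int select_path(const struct path *cands, int n, double bytes,
                               double rate_bps)
        {
            int best = -1;
            double best_energy = DBL_MAX;
            for (int i = 0; i < n; i++) {
                double e = predicted_energy(&cands[i], bytes, rate_bps);
                if (e < best_energy) { best_energy = e; best = i; }
            }
            return best;
        }

        int main(void)
        {
            struct path cands[2] = {
                { 3, { 20.0, 20.0, 20.0 } },  /* three routers at 20 W each */
                { 2, { 35.0, 35.0 } },        /* two routers at 35 W each   */
            };
            printf("chosen path: %d\n",
                   select_path(cands, 2, 10e9 /* 10 GB */, 1e9 /* 1 Gbit/s */));
            return 0;
        }

    Under these made-up numbers the three-hop path wins (60 W for 80 s versus 70 W for 80 s); the request batching performed by the bandwidth assignment method is not modelled here.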

    WebWave: Globally Load Balanced Fully Distributed Caching of Hot Published Documents

    Document publication over a network as large as the Internet challenges us to harness available server and network resources to meet fast-growing demand. In this paper, we show that large-scale dynamic caching can be employed to globally minimize server idle time, and hence maximize the aggregate server throughput of the whole service. To be efficient, scalable and robust, a successful caching mechanism must have three properties: (1) it maximizes the global throughput of the system, (2) it finds cache copies without recourse to a directory service or a discovery protocol, and (3) it is completely distributed in the sense of operating only on the basis of local information. We develop a precise definition, which we call tree load-balance (TLB), of what it means for a mechanism to satisfy these three goals. We present an algorithm that computes TLB off-line, and a distributed protocol that induces a load distribution that converges quickly to a TLB one. Both algorithms place cache copies of immutable documents on the routing tree that connects the cached document's home server to its clients, thus enabling requests to stumble on cache copies en route to the home server.
    Harvard University; The Saudi Cultural Mission to the U.S.A.
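
    As a rough illustration of property (3), the toy C function below makes a caching decision from purely local information: a node on the routing tree creates a copy only if absorbing the requests it currently forwards would still leave it less utilized than its parent. This is an assumption-laden sketch, not WebWave's protocol or the TLB condition; all field names and thresholds are invented for illustration.

        /* Toy local caching rule: cache the document here only if serving the
         * forwarded requests locally would still leave this node less utilized
         * than its parent currently is (parent utilization assumed to be
         * learned locally, e.g. piggybacked on replies). Illustrative only. */
        #include <stdbool.h>
        #include <stdio.h>

        struct node_state {
            double served_rate;     /* requests/s this node already serves     */
            double forwarded_rate;  /* requests/s it forwards toward the home  */
            double capacity;        /* requests/s it could serve at full load  */
            double parent_util;     /* parent's current utilization in [0, 1]  */
        };

        static bool should_cache(const struct node_state *n)
        {
            double util_after = (n->served_rate + n->forwarded_rate) / n->capacity;
            return util_after < n->parent_util;
        }

        int main(void)
        {
            struct node_state n = { .served_rate = 30.0, .forwarded_rate = 25.0,
                                    .capacity = 100.0, .parent_util = 0.90 };
            printf("create cache copy: %s\n", should_cache(&n) ? "yes" : "no");
            return 0;
        }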

    Minimum cost mirror sites using network coding: Replication vs. coding at the source nodes

    Content distribution over networks is often achieved by using mirror sites that hold copies of files or portions thereof to avoid congestion and delay issues arising from excessive demands to a single location. Accordingly, there are distributed storage solutions that divide the file into pieces and place copies of the pieces (replication) or coded versions of the pieces (coding) at multiple source nodes. We consider a network which uses network coding for multicasting the file. There is a set of source nodes that contains either subsets or coded versions of the pieces of the file. The cost of a given storage solution is defined as the sum of the storage cost and the cost of the flows required to support the multicast. Our interest is in finding the storage capacities and flows at minimum combined cost. We formulate the corresponding optimization problems by using the theory of information measures. In particular, we show that when there are two source nodes, there is no loss in considering subset sources. For three source nodes, we derive a tight upper bound on the cost gap between the coded and uncoded cases. We also present algorithms for determining the content of the source nodes.
    Comment: IEEE Trans. on Information Theory (to appear), 201
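
    To make the combined objective concrete, a generic formulation of this kind of problem can be sketched as the linear program below (in LaTeX). The notation is my own and the constraints are only schematic; the paper's exact information-theoretic formulation may differ.

        \begin{align*}
          \min_{x,\,f}\quad & \sum_{i \in S} d_i\, x_i \;+\; \sum_{e \in E} c_e\, z_e \\
          \text{s.t.}\quad  & z_e \ \ge\ f_e^{(t)} \qquad \forall\, e \in E,\ t \in T,\\
                            & f^{(t)} \text{ is a flow of value } R \text{ from the source set } S \text{ to terminal } t,\\
                            & \text{the flow drawn from source } i \text{ by any terminal is at most } x_i,\\
                            & 0 \le f_e^{(t)} \le \mathrm{cap}(e), \qquad x_i \ge 0,
        \end{align*}

    where $x_i$ is the amount stored at source $i$ with per-unit storage cost $d_i$, $f_e^{(t)}$ is the virtual flow on edge $e$ toward terminal $t$, $z_e$ is the capacity provisioned on $e$ (the maximum over terminals, since network coding lets flows to different terminals share an edge), and $c_e$ is its per-unit flow cost.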

    dOpenCL: Towards a Uniform Programming Approach for Distributed Heterogeneous Multi-/Many-Core Systems

    Modern computer systems are becoming increasingly heterogeneous, comprising multi-core CPUs, GPUs, and other accelerators. Current programming approaches for such systems usually require the application developer to use a combination of several programming models (e.g., MPI with OpenCL or CUDA) in order to exploit the full compute capability of a system. In this paper, we present dOpenCL (Distributed OpenCL), a uniform approach to programming distributed heterogeneous systems with accelerators. dOpenCL extends the OpenCL standard such that arbitrary computing devices installed on any node of a distributed system can be used together within a single application. dOpenCL allows moving data and program code to these devices in a transparent, portable manner. Since dOpenCL is designed as a fully-fledged implementation of the OpenCL API, it allows running existing OpenCL applications in a heterogeneous distributed environment without any modifications. We describe in detail the mechanisms that are required to implement OpenCL for distributed systems, including a device management mechanism for running multiple applications concurrently. Using three application studies, we compare the performance of dOpenCL with MPI+OpenCL and a standard OpenCL implementation.
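
    Because the abstract states that dOpenCL is a fully-fledged implementation of the OpenCL API, ordinary OpenCL host code should run on it unchanged; the devices it enumerates may simply live on remote nodes. The following is plain OpenCL 1.x host code for a small vector-scale kernel, shown only to make that point; nothing in it is dOpenCL-specific, and error handling is omitted for brevity.

        /* Ordinary OpenCL 1.x host code: scale a vector on whatever device the
         * platform reports first. Under a distributed OpenCL implementation such
         * as dOpenCL, that device may reside on a remote node, but the code does
         * not change. Error handling omitted for brevity. */
        #include <CL/cl.h>
        #include <stdio.h>

        static const char *src =
            "__kernel void scale(__global float *v, float a) {"
            "    size_t i = get_global_id(0);"
            "    v[i] *= a;"
            "}";

        int main(void)
        {
            float data[1024];
            for (int i = 0; i < 1024; i++) data[i] = (float)i;

            cl_platform_id platform;
            clGetPlatformIDs(1, &platform, NULL);

            cl_device_id device;   /* local GPU/CPU, or a remote one under dOpenCL */
            clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

            cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
            cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);

            cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
            clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
            cl_kernel k = clCreateKernel(prog, "scale", NULL);

            cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                        sizeof(data), data, NULL);
            float factor = 2.0f;
            clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
            clSetKernelArg(k, 1, sizeof(float), &factor);

            size_t global = 1024;
            clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
            clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

            printf("data[3] = %f\n", data[3]);   /* expect 6.0 */

            clReleaseMemObject(buf);
            clReleaseKernel(k);
            clReleaseProgram(prog);
            clReleaseCommandQueue(q);
            clReleaseContext(ctx);
            return 0;
        }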