
    File Fragmentation over an Unreliable Channel

    It has recently been discovered that heavy-tailed file completion times can result from protocol interaction even when file sizes are light-tailed. A key to this phenomenon is the RESTART feature: if a file transfer is interrupted before it completes, the transfer must restart from the beginning. In this paper, we show that independent or bounded fragmentation guarantees light-tailed file completion time as long as the file size is light-tailed, i.e., in this case heavy-tailed file completion time can only originate from heavy-tailed file sizes. If the file size is heavy-tailed, then the file completion time is necessarily heavy-tailed. For this case, we show that when the file size distribution is regularly varying, then under independent or bounded fragmentation the completion time tail distribution function is asymptotically upper bounded by that of the original file size stretched by a constant factor. We then prove that if the failure distribution has a non-decreasing failure rate, the expected completion time is minimized by dividing the file into equal-sized fragments; this optimal fragment size is unique but depends on the file size. We also present a simple blind fragmentation policy, where the fragment sizes are constant and independent of the file size, and prove that it is asymptotically optimal. Finally, we bound the error in expected completion time due to errors in modeling the failure process.
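The equal-fragment result can be illustrated numerically. The sketch below assumes Poisson failures of rate `lam` at unit transmission speed, for which the RESTART model gives E[T] = (e^{λx} − 1)/λ for a piece of size x, plus a fixed per-fragment overhead `c`; the overhead value and the specific failure model are illustrative assumptions, not details taken from the abstract.

```python
import math

def expected_completion(x, lam):
    # RESTART under Poisson failures of rate lam, unit speed:
    # E[T] = (e^(lam * x) - 1) / lam -- note the exponential blow-up in x.
    return (math.exp(lam * x) - 1.0) / lam

def equal_split_cost(x, lam, c, n):
    # n equal fragments, each carrying a fixed overhead c and
    # restarting independently after a failure.
    return n * expected_completion(x / n + c, lam)

def best_fragment_count(x, lam, c, n_max=1000):
    # Without the overhead c, finer fragmentation always helps;
    # with it, a unique finite optimum emerges, as in the paper.
    return min(range(1, n_max + 1),
               key=lambda n: equal_split_cost(x, lam, c, n))
```

For example, with x = 10, lam = 1 and c = 0.5, splitting into the optimal number of equal fragments cuts the expected completion time from roughly e^10.5 time units to a few tens, about three orders of magnitude.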

    Aspects of land consolidation in Bulgaria


    Topology-aware GPU scheduling for learning workloads in cloud environments

    Recent advances in hardware, such as systems with multiple GPUs and their availability in the cloud, are enabling deep learning in various domains including health care, autonomous vehicles, and Internet of Things. Multi-GPU systems exhibit complex connectivity among GPUs and between GPUs and CPUs. Workload schedulers must consider hardware topology and workload communication requirements in order to allocate CPU and GPU resources for optimal execution time and improved utilization in shared cloud environments. This paper presents a new topology-aware workload placement strategy to schedule deep learning jobs on multi-GPU systems. The placement strategy is evaluated with a prototype on a Power8 machine with Tesla P100 cards, showing speedups of up to ≈1.30x compared to state-of-the-art strategies; the proposed algorithm achieves this result by allocating GPUs that satisfy workload requirements while preventing interference. Additionally, a large-scale simulation shows that the proposed strategy provides higher resource utilization and performance in cloud systems.

    This project is supported by the IBM/BSC Technology Center for Supercomputing collaboration agreement. It has also received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 639595). It is also partially supported by the Ministry of Economy of Spain under contract TIN2015-65316-P and Generalitat de Catalunya under contract 2014SGR1051, by the ICREA Academia program, and by the BSC-CNS Severo Ochoa program (SEV-2015-0493). We thank our IBM Research colleagues Alaa Youssef and Asser Tantawi for the valuable discussions. We also thank SC17 committee member Blair Bethwaite of Monash University for his constructive feedback on the earlier drafts of this paper.
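A topology-aware placement decision of the kind described can be sketched as a scoring problem over the hardware interconnect. The bandwidth matrix, scoring rule, and brute-force search below are illustrative assumptions, not the paper's actual algorithm:

```python
from itertools import combinations

# Hypothetical pairwise bandwidth matrix (GB/s) for a 4-GPU node:
# GPUs 0-1 and 2-3 share a fast NVLink; cross pairs go over PCIe/QPI.
BW = [
    [0, 40, 10, 10],
    [40, 0, 10, 10],
    [10, 10, 0, 40],
    [10, 10, 40, 0],
]

def placement_score(gpus):
    # Score a candidate GPU set by its worst pairwise link, since
    # collectives such as all-reduce are bottlenecked by the slowest link.
    return min(BW[a][b] for a, b in combinations(gpus, 2))

def place_job(num_gpus, free_gpus):
    # Brute force over subsets of free GPUs; cheap for single-node counts.
    return max(combinations(free_gpus, num_gpus), key=placement_score)

# A 2-GPU job on an idle node lands on an NVLink-connected pair.
print(place_job(2, [0, 1, 2, 3]))
```

The min-over-links score is one simple way to encode "satisfy workload communication requirements while preventing interference"; a real scheduler would also track current utilization of shared links.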

    Bridging “the Great Divide”: Countering Financial Repression in Transition

    The large and widening gap between economic performance in Eastern European transition economies and those of the former Soviet Union has been dubbed "the Great Divide" by Berglof and Bolton (2002). This paper provides a rationale for the gap based upon the concept of financial repression. The magnified effects of transition to the market can be attributed to government manipulation of financial markets in these countries, with the divide defined by the length of time that governments relied upon financial-market manipulation to finance government fiscal policy. Policies undertaken to assist in financing government expenditures caused financial repression and financial fragmentation, to use the terms introduced by McKinnon (1973). After an introductory section, I present a theoretical model of real and financial sectors in transition. The dynamic path to equilibrium from transition is derived. It is shown to have a tendency toward output contraction and hyperinflation when government policies promote financial repression. In the third section this hypothesis is examined with macroeconomic data from Ukraine for the period 1992–2001. These data are consistent with the hypothesis, although other factors (e.g., recession in trading partners) are also shown to be important.
    http://deepblue.lib.umich.edu/bitstream/2027.42/39895/3/wp510.pd

    International Fragmentation: Boon or Bane for Domestic Employment?

    In this paper, we introduce the fairness approach to efficiency wages into a standard model of international fragmentation. This gives us a theoretical framework in which wage inequality and unemployment rates are co-determined, allowing us to address the public concern that international fragmentation and outsourcing to low-wage countries lead to domestic job losses. We develop a novel diagrammatic tool to illustrate the main labour market effects of international fragmentation. We also explore how preferences for fair wages and the size of unemployment benefits govern the employment effects of outsourcing, and critically assess the role of political intervention that aims to reduce unemployment benefits under internationally fragmented production.
    Keywords: international fragmentation, unemployment, fair wages

    Learning Scheduling Algorithms for Data Processing Clusters

    Efficiently scheduling data processing jobs on distributed compute clusters requires complex algorithms. Current systems, however, use simple generalized heuristics and ignore workload characteristics, since developing and tuning a scheduling policy for each workload is infeasible. In this paper, we show that modern machine learning techniques can generate highly efficient policies automatically. Decima uses reinforcement learning (RL) and neural networks to learn workload-specific scheduling algorithms without any human instruction beyond a high-level objective such as minimizing average job completion time. Off-the-shelf RL techniques, however, cannot handle the complexity and scale of the scheduling problem. To build Decima, we had to develop new representations for jobs' dependency graphs, design scalable RL models, and invent RL training methods for dealing with continuous stochastic job arrivals. Our prototype integration with Spark on a 25-node cluster shows that Decima improves the average job completion time over hand-tuned scheduling heuristics by at least 21%, achieving up to 2x improvement during periods of high cluster load.
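The RL setup Decima builds on can be illustrated on a toy problem. The sketch below is emphatically not Decima (no graph neural network, no Spark, no job arrivals): it applies plain REINFORCE with a one-parameter softmax policy to non-preemptive single-server scheduling, where learning theta > 0 recovers shortest-job-first, the policy minimizing total completion time.

```python
import math
import random

random.seed(0)

def run_episode(theta, sizes):
    # Softmax policy over pending jobs with logit -theta * size.
    # Returns the cost (sum of completion times, i.e. the average-JCT
    # objective up to a constant) and per-step log-prob gradients.
    pending, t, cost, grads = list(sizes), 0.0, 0.0, []
    while pending:
        logits = [-theta * s for s in pending]
        m = max(logits)
        w = [math.exp(l - m) for l in logits]
        z = sum(w)
        probs = [p / z for p in w]
        i = random.choices(range(len(pending)), probs)[0]
        # d/dtheta log pi(i) = -size_i + E_pi[size]
        grads.append(-pending[i] + sum(p * s for p, s in zip(probs, pending)))
        t += pending.pop(i)
        cost += t
    return cost, grads

theta, lr, baseline = 0.0, 0.001, None
for _ in range(2000):
    sizes = [random.uniform(1, 10) for _ in range(5)]
    cost, grads = run_episode(theta, sizes)
    baseline = cost if baseline is None else 0.9 * baseline + 0.1 * cost
    # REINFORCE with a running baseline: lower-than-baseline cost
    # reinforces the actions taken in this episode.
    theta += lr * (baseline - cost) * sum(grads)
```

Decima's contribution is making this basic recipe work at scale: graph embeddings of job DAGs replace the scalar feature, and specialized variance-reduction handles stochastic arrivals.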

    Farm size, land fragmentation and economic efficiency in southern Rwanda

    Butare, where this study was conducted, exhibits one of the highest population densities in Rwanda. As a direct result of population growth, most peasants have small fields and land fragmentation is common. The purpose of this article is to examine the effect of land fragmentation on economic efficiency. Regression analysis shows that area operated is primarily determined by the population-land ratio, non-agricultural employment opportunities, ownership certainty and adequate information through agricultural training. Results from a block-recursive regression analysis indicate that the level of net farm income per hectare, which indirectly reflects greater economic efficiency, is determined by the area operated, use of farm information, field extension staff visits, formal education of the farm operator, and the fragmentation of land holdings. Economies of size are evident in the data. The results obtained using ridge regression support the findings of two-stage least squares. Policies should be implemented to improve the functioning of land rental markets in order to reduce land fragmentation, improve rural education and access to relevant information, and strengthen extension services to individual farmers.
    Keywords: Productivity Analysis
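The ridge-regression step mentioned above is a standard remedy for collinear regressors (e.g. farm size and fragmentation tend to move together). The sketch below is a generic illustration on synthetic data, not the study's actual specification:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)   # nearly collinear with x1
X = np.column_stack([np.ones(n), x1, x2])
y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)

def ridge(X, y, k):
    # Closed form: beta = (X'X + k I)^{-1} X'y; k = 0 reduces to OLS.
    # The penalty k stabilizes the near-singular X'X that collinearity
    # produces, shrinking coefficients toward zero.
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

ols = ridge(X, y, 0.0)
shrunk = ridge(X, y, 10.0)
```

With collinear regressors, OLS coefficient estimates are unstable; the ridge fit trades a little bias for a large variance reduction, which is why it serves as a robustness check alongside two-stage least squares.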