
    Management information systems in social safety net programs : a look at accountability and control mechanisms

    This paper is intended to provide task managers and World Bank Group clients working on Social Safety Net (SSN) programs with practical and systematic ways to use information management practices to mitigate risks by strengthening control and accountability mechanisms. It lays out practices and options to consider in the design and implementation of the Management Information System (MIS), and explains how to evaluate and mitigate the operational risks of running an MIS. The findings are based on a review of several Conditional Cash Transfer (CCT) programs in the Latin American region and various World Bank publications on CCTs. The paper presents a framework for the implementation of MIS and cross-cutting information management systems that is based on industry standards and information management practices. The framework applies both to programs that use information and communications technology (ICT) and to programs that are paper based. It includes examples of MIS practices that can strengthen the control and accountability mechanisms of SSN programs, and presents a roadmap for the design and implementation of an MIS in these programs. The application of the framework is illustrated through case studies from three fictitious countries. The paper concludes with considerations and recommendations for task managers and government officials in charge of implementing CCTs and other safety net programs, and with a checklist for the implementation and monitoring of an MIS.
    Keywords: E-Business; Technology Industry; Education for Development (superseded); Labor Policies; Knowledge Economy

    A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing

    Data Grids have been adopted as the platform for scientific communities that need to share, access, transport, process, and manage large data collections distributed worldwide. They combine high-end computing technologies with high-performance networking and wide-area storage management techniques. In this paper, we discuss the key concepts behind Data Grids and compare them with other data sharing and distribution paradigms such as content delivery networks, peer-to-peer networks, and distributed databases. We then provide comprehensive taxonomies that cover various aspects of architecture, data transportation, data replication, and resource allocation and scheduling. Finally, we map the proposed taxonomy to various Data Grid systems, not only to validate the taxonomy but also to identify areas for future exploration. Through this taxonomy, we aim to categorise existing systems so as to better understand their goals and their methodology, which helps in evaluating their applicability to similar problems. The taxonomy also provides a "gap analysis" of the area, through which researchers can identify new issues for investigation. Finally, we hope that the proposed taxonomy and mapping give new practitioners an easy way to understand this complex area of research.
    Comment: 46 pages, 16 figures, Technical Report
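    The "gap analysis" the survey describes can be mechanised once the taxonomy is written down: classify each system along the taxonomy's axes and list the axis values no surveyed system occupies. The sketch below is a hypothetical illustration of that idea; the axes, axis values, and system entries are invented placeholders, not the paper's actual taxonomy or its mapping of real systems.

    ```python
    # Hypothetical taxonomy axes and system classifications (placeholders).
    TAXONOMY = {
        "architecture": {"hierarchical", "federated", "hybrid"},
        "replication": {"static", "dynamic"},
    }

    systems = {
        "SystemA": {"architecture": "hierarchical", "replication": "static"},
        "SystemB": {"architecture": "federated", "replication": "static"},
    }

    def gaps(taxonomy, classified_systems):
        """Return, per axis, the taxonomy values that no system occupies."""
        covered = {axis: {s[axis] for s in classified_systems.values()}
                   for axis in taxonomy}
        return {axis: taxonomy[axis] - covered[axis] for axis in taxonomy}
    ```

    Here `gaps(TAXONOMY, systems)` would flag "hybrid" architectures and "dynamic" replication as unexplored, which is exactly the kind of pointer to future work the survey's gap analysis is meant to produce.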

    Cloud Index Tracking: Enabling Predictable Costs in Cloud Spot Markets

    Cloud spot markets rent VMs for a variable price that is typically much lower than the price of on-demand VMs, which makes them attractive for a wide range of large-scale applications. However, applications that run on spot VMs suffer from cost uncertainty, since spot prices fluctuate, in part, based on supply, demand, or both. The difficulty in predicting spot prices affects users and applications: the former cannot effectively plan their IT expenditures, while the latter cannot infer the availability and performance of spot VMs, which are a function of their variable price. To address the problem, we use properties of cloud infrastructure and workloads to show that prices become more stable and predictable as they are aggregated together. We leverage this observation to define an aggregate index price for spot VMs that serves as a reference for what users should expect to pay. We show that, even when the spot prices for individual VMs are volatile, the index price remains stable and predictable. We then introduce cloud index tracking: a migration policy that tracks the index price to ensure that applications running on spot VMs incur a predictable cost, by migrating to a new spot VM whenever the current VM's price deviates significantly from the index price.
    Comment: ACM Symposium on Cloud Computing 2018
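    The core mechanism can be sketched in a few lines: the index aggregates many individual spot prices, and an application migrates when its current VM's price drifts too far from that index. The prices and the 20% deviation threshold below are illustrative assumptions, not figures from the paper.

    ```python
    def index_price(spot_prices):
        """Aggregate index: the mean of the individual VM spot prices."""
        return sum(spot_prices) / len(spot_prices)

    def should_migrate(current_price, index, threshold=0.20):
        """Migrate when the current VM's relative deviation from the
        index exceeds `threshold`."""
        return abs(current_price - index) / index > threshold

    # Nine stable VMs and one volatile VM: the index barely moves, but
    # the volatile VM is flagged for migration while stable VMs are not.
    prices = [0.030] * 9 + [0.090]
    idx = index_price(prices)                      # 0.036
    migrate_volatile = should_migrate(0.090, idx)  # True
    migrate_stable = should_migrate(0.030, idx)    # False
    ```

    This captures why aggregation helps: one VM's price spike barely moves the index, so the index remains a stable cost reference even when individual prices are volatile.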

    An Analytical Approach to Cycle Time Evaluation in an Unreliable Multi-Product Production Line with Finite Buffers

    This thesis develops an analytical approximation method to measure the performance of a multi-product unreliable production line with finite buffers between workstations. The performance measure used is Total Cycle Time. The proposed approximation method generalizes the processing times to relax the variation of product types in a multi-product system. A decomposition method is then employed to approximate the production rate of a multi-product production line; it accommodates generally distributed processing times as well as random failures and repairs. A GI/G/1/N queuing model is applied to obtain parameters, such as blocking and starving probabilities, that the approximation procedure requires. Several numerical experiments under different scenarios are performed, and the results are validated against simulation models to assess the accuracy and strength of the approximation method. A subsequent analysis and discussion of the results is also presented.
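    The GI/G/1/N analysis the thesis uses has no simple closed form, but the blocking and starving probabilities it feeds into the decomposition can be illustrated with the simpler M/M/1/N special case, whose formulas are standard. This is a sketch of the quantities involved, not the thesis's actual method.

    ```python
    def mm1n_blocking(lam, mu, n):
        """P(arrival finds the buffer of size n full) in an M/M/1/N queue;
        with this probability the upstream workstation is blocked."""
        rho = lam / mu
        if rho == 1.0:
            return 1.0 / (n + 1)
        return (1 - rho) * rho**n / (1 - rho**(n + 1))

    def mm1n_starving(lam, mu, n):
        """P(queue empty); with this probability the downstream
        workstation is starved."""
        rho = lam / mu
        if rho == 1.0:
            return 1.0 / (n + 1)
        return (1 - rho) / (1 - rho**(n + 1))
    ```

    For example, with arrival rate 0.8, service rate 1.0, and a buffer of 4, the blocking probability is about 0.12 and the starving probability about 0.30; the decomposition method iterates estimates like these across every buffer in the line.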

    Routing and transfers amongst parallel queues

    This thesis is concerned with maximizing the performance of policies for routing and transferring jobs in systems of heterogeneous servers. The tools used are probabilistic modelling, optimization, and simulation. First, a system is studied where incoming jobs are allocated to the queue belonging to one of a number of servers, each of which goes through alternating periods of being operative and inoperative. The objective is to evaluate and optimize performance and cost metrics. Jobs incur costs for the amount of time they spend in a queue before commencing service. The optimal routing policy for incoming jobs is obtained by numerically solving the corresponding dynamic programming equations. A number of heuristic policies are compared against the optimal one, and one dynamic routing policy is shown to perform well over a large range of parameters. Next, the problem of how best to deal with the transfer of jobs is considered. Jobs arrive externally into the queue attached to one of a number of servers and on arrival are assigned a time-out period. A job whose time-out period expires before it commences service is instantaneously transferred to the end of another queue, chosen according to a routing policy; each transfer incurs a transfer cost. An approximation to the optimal routing policy is computed and compared with a number of heuristic policies; one heuristic policy is found to perform well over a large range of parameters. The last model considered combines the two: incoming jobs are allocated to the queue attached to one of a number of servers, each of which goes through periods of being operative and inoperative, and each job is additionally assigned a time-out on arrival into a queue. Any job whose time-out period expires before it commences service is instantaneously transferred to the end of another queue, based on a transfer policy. The objective is again to evaluate and optimize performance and cost metrics: jobs incur costs for the amount of time they spend in a queue before commencing service, and additionally incur a cost for each transfer they experience. A number of heuristic transfer policies are evaluated, and one heuristic that performs well over a wide range of parameters is identified.
    EThOS - Electronic Theses Online Service, United Kingdom
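    As a concrete illustration of the kind of dynamic routing heuristic compared in such studies (this particular rule is an assumption for illustration, not one taken from the thesis): route each arriving job to the server with the smallest estimated wait, discounting each server's service rate by the fraction of time it is operative.

    ```python
    def route(queue_lengths, service_rates, availabilities):
        """Pick the server index minimizing (jobs ahead + 1) / effective
        service rate, where the effective rate is the raw rate scaled by
        the server's long-run fraction of operative time."""
        def est_wait(i):
            return (queue_lengths[i] + 1) / (service_rates[i] * availabilities[i])
        return min(range(len(queue_lengths)), key=est_wait)

    # A fast, reliable server with a longer queue can still beat a slower
    # or frequently inoperative server with a shorter queue:
    # waits are 4/2.0 = 2.0 vs 2/(1.0*0.5) = 4.0, so server 0 wins.
    best = route([3, 1], [2.0, 1.0], [1.0, 0.5])  # 0
    ```

    Heuristics of this shape are attractive precisely because they use only current queue lengths and long-run server statistics, avoiding the state-space explosion of the exact optimal policy.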

    MOON: MapReduce On Opportunistic eNvironments

    MapReduce offers a flexible programming model for processing and generating large data sets on dedicated resources, where only a small fraction of such resources are ever unavailable at any given time. In contrast, when MapReduce is run on volunteer computing systems, which opportunistically harness idle desktop computers via frameworks like Condor, it results in poor performance due to the volatility of the resources, in particular the high rate of node unavailability. Specifically, the data and task replication scheme adopted by existing MapReduce implementations is woefully inadequate for resources with high unavailability. To address this, we propose MOON, short for MapReduce On Opportunistic eNvironments. MOON extends Hadoop, an open-source implementation of MapReduce, with adaptive task and data scheduling algorithms in order to offer reliable MapReduce services on a hybrid resource architecture, where volunteer computing systems are supplemented by a small set of dedicated nodes. The adaptive task and data scheduling algorithms in MOON distinguish between (1) different types of MapReduce data and (2) different types of node outages in order to strategically place tasks and data on both volatile and dedicated nodes. Our tests demonstrate that MOON can deliver a 3-fold performance improvement over Hadoop in volatile, volunteer computing environments.
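    The replication trade-off MOON addresses can be sketched with a back-of-the-envelope calculation (independent node outages and the probabilities below are simplifying assumptions for illustration, not MOON's actual model): if each node is unavailable with probability p, all r replicas of a data block are simultaneously unavailable with probability p^r, so volatile volunteer nodes need far more replicas than dedicated nodes to meet the same availability target.

    ```python
    def replicas_needed(p_unavailable, target, max_replicas=20):
        """Smallest replica count r with p_unavailable**r <= target,
        i.e. the chance that every replica is down at once (assuming
        independent outages) falls below the target."""
        for r in range(1, max_replicas + 1):
            if p_unavailable**r <= target:
                return r
        return max_replicas

    # To keep the all-replicas-down probability below 1%:
    volunteer = replicas_needed(0.4, 0.01)   # volatile node -> 6 replicas
    dedicated = replicas_needed(0.01, 0.01)  # dedicated node -> 1 replica
    ```

    The gap between the two numbers is why a hybrid architecture helps: placing one copy of critical data on a dedicated node is far cheaper than blanketing volunteer nodes with replicas.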