
    On Production and Subcontracting Strategies for Manufacturers with Limited Capacity and Backlog-Dependent Demand

    We study a manufacturing firm that builds a product to stock to meet a random demand. If there is a positive surplus of finished goods, the customers make their purchases without delay and leave. If there is a backlog, the customers are sensitive to the quoted lead time, and some choose not to order if they feel that the lead time is excessive. A set of subcontractors, who have different costs and capacities, are available to supplement the firm's own production capacity. We derive a feedback policy that determines the production rate and the rate at which the subcontractors are requested to deliver products, and we evaluate the performance of the system when it is managed according to this policy. The subcontractors represent a set of capacity options, and we calculate the values of these options.
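    To make the flavor of such a feedback policy concrete, below is a minimal sketch of a surplus-based ("hedging point") control rule; the thresholds, rates, and the rule itself are illustrative assumptions, not the policy derived in the paper.

    ```python
    # Illustrative surplus-based feedback policy (all parameters hypothetical).
    def feedback_policy(surplus, own_capacity, subcontractors, z_own=10.0):
        """Return (own production rate, requested rate per subcontractor).

        surplus        : finished goods minus backlog (negative = backlog)
        own_capacity   : the firm's maximum production rate
        subcontractors : list of (capacity, activation_threshold), cheapest first
        z_own          : hedging point above which own production stops
        """
        # Produce at full rate while the surplus is below the hedging point.
        own_rate = own_capacity if surplus < z_own else 0.0
        # Engage subcontractors, cheapest first, only when the surplus falls
        # below their individual thresholds, i.e. as the backlog deepens.
        requests = [cap if surplus < thr else 0.0 for cap, thr in subcontractors]
        return own_rate, requests

    # Example: two subcontractors activated at successively deeper backlogs.
    rate, reqs = feedback_policy(surplus=-4.0, own_capacity=1.0,
                                 subcontractors=[(0.5, -2.0), (0.8, -6.0)])
    ```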

    Workload Interleaving with Performance Guarantees in Data Centers

    In the era of global, large-scale data centers residing in clouds, many applications and users share the same pool of resources to reduce energy and operating costs and to improve availability and reliability. Along with these benefits, resource sharing also introduces performance challenges: when multiple workloads access the same resources concurrently, contention may occur and delay individual workloads. Providing performance isolation to individual workloads requires effective management methodologies. The challenge in deriving such methodologies lies in finding accurate, robust, and compact metrics and models to drive algorithms that can meet different performance objectives while using resources efficiently. This dissertation proposes a set of methodologies for solving the performance isolation problem in workload interleaving in data centers, focusing on both storage components and computing components.

    At the storage node level, we focus on methodologies for better interleaving user traffic with background workloads, such as tasks for improving reliability, availability, and power savings. More specifically, we develop a scheduling policy for background workloads based on the statistical characteristics of the system's busy periods, and a methodology that quantitatively estimates the performance impact of power savings.

    At the storage cluster level, we consider how to efficiently consolidate work and schedule asynchronous updates without violating user performance targets. More specifically, we develop a framework that estimates beforehand the benefits and overheads of each option, in order to automate intelligent consolidation decisions while achieving faster eventual consistency.

    At the computing node level, we focus on improving workload interleaving on off-the-shelf servers, as they are the basic building blocks of large-scale data centers. We develop priority scheduling middleware that employs different policies to schedule background tasks based on the instantaneous resource requirements of the high-priority applications running on the server node.

    Finally, at the computing cluster level, we investigate popular computing frameworks for large-scale data-intensive distributed processing, such as MapReduce and its Hadoop implementation. We develop a new Hadoop scheduler, DyScale, that exploits the capabilities offered by heterogeneous cores in order to achieve a variety of performance objectives.
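    As a hedged illustration of the computing-node idea, the sketch below admits background work only when the high-priority workload leaves CPU headroom; the budget, polling interval, and foreground_cpu probe are hypothetical stand-ins for the middleware's actual policies.

    ```python
    import time

    # Illustrative headroom-based background scheduler (parameters hypothetical).
    def run_background(tasks, foreground_cpu, cpu_budget=0.8, poll=0.1):
        """tasks: iterable of callables, each one unit of background work.
        foreground_cpu: callable returning the current CPU fraction (0.0-1.0)
        consumed by the high-priority applications on this node."""
        for task in tasks:
            # Defer background work until the foreground leaves headroom.
            while foreground_cpu() >= cpu_budget:
                time.sleep(poll)
            task()  # run one background unit, then re-check before the next
    ```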

    A Simulation-Based Evaluation Of Efficiency Strategies For A Primary Care Clinic With Unscheduled Visits

    In the health care industry, there are strategies, called efficiency strategies, for removing inefficiencies from the health delivery process. This dissertation proposes a simulation model to evaluate the impact of efficiency strategies on a primary care clinic with unscheduled walk-in patient visits. The simulation model captures the complex characteristics of the Orlando Veterans Affairs Medical Center (VAMC) primary care clinic. This clinic system includes different types of patients, patient paths, and the multiple resources that serve them. Adding to the problem's complexity are patient no-show characteristics and unscheduled patient arrivals, a problem that has, until recently, been largely neglected. The main objectives of this research were to develop a model that captures the complexities of the Orlando VAMC, to evaluate alternative scenarios for working in unscheduled patient visits, and to examine the impact of patient flow, appointment scheduling, and capacity management decisions on the performance of the primary care clinic system. The main results show that only a joint policy of appointment scheduling rules and patient flow decisions has a significant impact on the wait time of scheduled patients. It is recommended that in the future the clinic address the problem of serving additional walk-in patients from an integrated scheduling and patient flow viewpoint.
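    A toy discrete-event sketch of the modeled situation appears below; the single provider, rates, and slot length are illustrative assumptions, not the dissertation's calibrated Orlando VAMC model.

    ```python
    import heapq
    import random

    # Toy clinic: scheduled appointments on a fixed grid plus Poisson walk-ins,
    # all served first-come-first-served by one provider.
    def simulate(horizon=480, slot=20, walkin_rate=1 / 15, mean_service=12):
        arrivals = [(float(t), "scheduled") for t in range(0, horizon, slot)]
        t = 0.0
        while True:
            t += random.expovariate(walkin_rate)   # superimpose walk-in arrivals
            if t >= horizon:
                break
            arrivals.append((t, "walk-in"))
        heapq.heapify(arrivals)
        free_at, waits = 0.0, []
        while arrivals:
            arrived, kind = heapq.heappop(arrivals)
            start = max(arrived, free_at)          # wait if the provider is busy
            waits.append((kind, start - arrived))
            free_at = start + random.expovariate(1 / mean_service)
        return waits  # per-patient (type, wait) pairs for comparing policies
    ```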

    Sojourn Time Analysis in a QBD Model of a Thread Pool

    Modern web or database servers are usually designed with a thread pool as the major servicing component. Controlling such servers, as well as defining adequate resource management policies with the aim of minimizing requests' sojourn times, presupposes the existence of performance models of thread-pooled systems. In this paper a queueing model of a thread pool is formulated, along with the set of underlying assumptions and definitions used. Requests are abstracted in such a way that they are characterized by a service time distribution and a CPU consumption parameter. The model is defined as a Quasi-Birth-and-Death (QBD) process. Stability conditions for the model are derived, and an analytic method, based on generating functions, for calculating expected sojourn times is presented. The analytical results thus obtained are evaluated in a purpose-built experimental environment containing a synthetic workload generator and an instrumented server application based on a standard Java 7 ThreadPoolExecutor thread pool. Sojourn time measurements confirm the theoretical results and also give additional insight into sojourn times for more realistic workload cases that would otherwise be difficult to analyze formally.
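    A companion measurement sketch (in Python, whose concurrent.futures.ThreadPoolExecutor mirrors the Java pool used in the paper's experiments) shows how per-request sojourn times can be sampled empirically; the workload parameters here are made up.

    ```python
    import concurrent.futures
    import time

    # Each request spins for a fixed CPU-consumption interval; its sojourn
    # time is queueing delay plus service time, measured from submission.
    def request(submitted, cpu_work=0.005):
        busy_until = time.perf_counter() + cpu_work
        while time.perf_counter() < busy_until:
            pass                                   # busy-wait to model CPU demand
        return time.perf_counter() - submitted

    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(request, time.perf_counter()) for _ in range(200)]
        sojourns = [f.result() for f in futures]

    print(f"mean sojourn time: {sum(sojourns) / len(sojourns):.4f} s")
    ```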

    Experimenting with Gnutella Communities

    Computer networks and distributed systems in general may be regarded as communities where the individual components, be they entire systems, application software, or users, interact in a shared environment. Such communities evolve dynamically as components or nodes join and leave the system; their individual activities affect the community's behaviour and vice versa. This paper discusses various experiments undertaken to investigate the behaviour of a real system, the Gnutella network, which represents such a community. Gnutella is a distributed peer-to-peer data-sharing system without any central control. It turns out that most interactions between nodes do not last long, and much of their activity is devoted to finding appropriate partners in the network. Good, longer-lasting connections appear only as rare events: out of 42,000 connections, only 57 hosts were found to be available on a regular basis. This means that, contrary to the common belief that such peer-to-peer networks or sub-communities are always large, they are actually quite small. However, these sub-communities exemplify very dynamic behaviour, because their actual composition can change very quickly. The experimental results presented have been obtained from a Java implementation of Gnutella running in the open Internet environment, and thus in unknown and quickly changing network structures heavily dependent on chance.

    Keywords: Gnutella, peer-to-peer networks, Internet communities, distributed systems, protocols

    Analysis of Proper Ground Slot Capacity for Receiving-Delivery Operation in Indonesia Kendaraan Terminal


    Web Server Performance of Apache and Nginx: A Systematic Literature Review

    Web server performance is cardinal to effective and efficient information communication. Performance measures include response time, service rate, memory usage, and CPU utilization, among others. A review of various studies indicates close comparisons among different web servers, including Apache, IIS, Nginx, and Lighttpd. The results of these studies indicate that response time, CPU utilization, and memory usage varied across web servers depending on the model used. However, Nginx was found to outperform Apache on many metrics, including response time, CPU utilization, and memory usage; notably, its memory usage does not increase with the number of requests. It was concluded that although Nginx outperformed Apache, both web servers are powerful, flexible, and capable, and the decision of which web server to adopt depends entirely on the needs of the user. Since metrics such as uptime (the amount of time that a server stays up and running properly), which reflects the reliability and availability of the server, and landing page speed were not included, we propose that future studies consider uptime and landing page speed when testing web server performance. Keywords: web server, web server performance, Apache, Nginx
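    As an illustration of how the headline metric, response time, can be sampled against any of the compared servers, consider the sketch below; the URL and sample size are placeholders.

    ```python
    import time
    import urllib.request

    # Sample end-to-end response times for a web server under test.
    def sample_response_times(url="http://localhost:8080/", n=50):
        samples = []
        for _ in range(n):
            t0 = time.perf_counter()
            with urllib.request.urlopen(url) as resp:
                resp.read()                        # include the full transfer
            samples.append(time.perf_counter() - t0)
        samples.sort()
        return {"mean": sum(samples) / n,
                "p95": samples[max(0, int(0.95 * n) - 1)]}  # rough percentile
    ```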

    Dynamic Composite Data Physicalization Using Wheeled Micro-Robots

    This paper introduces dynamic composite physicalizations, a new class of physical visualizations that use collections of self-propelled objects to represent data. Dynamic composite physicalizations can be used both to give physical form to well-known interactive visualization techniques and to explore new visualizations and interaction paradigms. We first propose a design space characterizing composite physicalizations, based on previous work in the fields of Information Visualization and Human-Computer Interaction. We illustrate dynamic composite physicalizations in two scenarios demonstrating potential benefits for collaboration and decision making, as well as new opportunities for physical interaction. We then describe our implementation using wheeled micro-robots capable of locating themselves and sensing user input, before discussing limitations and opportunities for future work.
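    As a rough illustration of the underlying mapping, the sketch below converts a data table into 2D arena targets for robots acting as a physical scatterplot; the arena size and the scatterplot layout itself are our assumptions, not the paper's specific design.

    ```python
    # Map data rows to (x, y) targets, in millimetres, for wheeled micro-robots.
    def scatter_targets(rows, x_key, y_key, width_mm=300.0, height_mm=300.0):
        xs = [row[x_key] for row in rows]
        ys = [row[y_key] for row in rows]

        def scale(v, lo, hi, size):                # linear map: data -> arena
            return size * (v - lo) / (hi - lo) if hi > lo else size / 2

        return [(scale(row[x_key], min(xs), max(xs), width_mm),
                 scale(row[y_key], min(ys), max(ys), height_mm)) for row in rows]
    ```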

    Exploiting Data Mining Techniques for Broadcasting Data in Mobile Computing Environments

    Mobile computers can be equipped with wireless communication devices that enable users to access data services from any location. In wireless communication, the server-to-client (downlink) bandwidth is much higher than the client-to-server (uplink) bandwidth. This asymmetry makes the dissemination of data to client machines a desirable approach. However, dissemination by broadcasting may induce high access latency when the number of broadcast data items is large. In this paper, we propose two methods aiming to reduce client access latency for broadcast data. Our methods are based on analyzing the broadcast history (i.e., the chronological sequence of items that have been requested by clients) using data mining techniques. With the first method, the data items on the broadcast disk are organized so that items requested subsequently are placed close to each other. The second method focuses on improving the cache hit ratio in order to decrease access latency: it enables clients to prefetch data from the broadcast disk based on rules extracted from previous data request patterns. The proposed methods are evaluated on a Web log to estimate their effectiveness. Performance experiments show that the proposed rule-based methods are effective in improving system performance in terms of both the average latency and the cache hit ratio of mobile clients.
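    The first method's flavor can be sketched as follows: mine pairwise co-request counts from the broadcast history, then greedily place frequently co-requested items adjacently on the broadcast disk. The greedy heuristic below is an illustrative stand-in, not necessarily the paper's exact algorithm.

    ```python
    from collections import Counter
    from itertools import combinations

    # Order broadcast items so items requested together sit close together.
    def order_broadcast(history):
        """history: list of per-client request sequences (lists of item ids)."""
        co = Counter()
        for seq in history:                        # pairwise co-request counts
            for a, b in combinations(sorted(set(seq)), 2):
                co[(a, b)] += 1
        items = sorted({item for seq in history for item in seq})
        order = [items.pop(0)]
        while items:                               # greedily extend with the item
            last = order[-1]                       # most co-requested with the tail
            nxt = max(items, key=lambda i: co[tuple(sorted((last, i)))])
            items.remove(nxt)
            order.append(nxt)
        return order

    print(order_broadcast([["a", "b", "c"], ["b", "c"], ["a", "d"]]))
    ```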