6,680 research outputs found

    Cost minimization for unstable concurrent products in multi-stage production line using queueing analysis

    This research is a result of work at Assumption University of Thailand, which partially supported the publication financially. Purpose: The paper applies queueing theory to evaluate a multi-stage production line with concurrent goods; the aim is to assess the efficiency of product assembly in the line. Design/Methodology/Approach: To improve the efficiency of the assembly line, the performance of the individual stations must be controlled. Arriving concurrent products accumulate in a queue before flowing to each station. All experiments are based on queueing network analysis. Findings: A performance analysis for unstable concurrent sub-items in the production line is presented. The proposed analysis improves the total sub-production time by reducing the queue time at each station. Practical implications: The collected data are the number of workers, incoming and outgoing sub-products, throughput rate, and individual station processing time. At the front-loading station, an operator unpacks product items into concurrent sub-items, which are automatically sorted by RFID tag or bar code identifiers. Simulation-based experiments are compared with and validated against results from a real-world approximation. Originality/Value: The approach offers an alternative way to increase the efficiency of operation at each station with minimum cost.
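    The abstract does not give its formulas, but the per-station queue-time reasoning can be illustrated with a standard M/M/1 approximation. The sketch below is only an assumption-laden illustration: the station names, arrival rates, and service rates are hypothetical, and the paper's actual queueing-network model may differ.

    # Minimal sketch: per-station delay in a serial production line, assuming each
    # station behaves approximately like an M/M/1 queue (Poisson arrivals,
    # exponential service). Station names and rates are illustrative only.

    def mm1_metrics(arrival_rate, service_rate):
        """Return (utilization, mean queue wait, mean time in station) for an M/M/1 queue."""
        if arrival_rate >= service_rate:
            raise ValueError("Unstable station: arrival rate must stay below service rate")
        rho = arrival_rate / service_rate            # utilization
        wq = rho / (service_rate - arrival_rate)     # mean time waiting in the queue
        w = 1.0 / (service_rate - arrival_rate)      # mean time in station (queue + service)
        return rho, wq, w

    # Hypothetical stations: (name, arrival rate [items/min], service rate [items/min])
    stations = [("unpack/sort", 4.0, 6.0), ("assembly", 4.0, 5.0), ("inspection", 4.0, 7.0)]

    total_time = 0.0
    for name, lam, mu in stations:
        rho, wq, w = mm1_metrics(lam, mu)
        total_time += w
        print(f"{name}: utilization={rho:.2f}, queue wait={wq:.2f} min, time in station={w:.2f} min")

    print(f"Total expected flow time through the line: {total_time:.2f} min")

    Under this approximation, the most heavily utilized station dominates the total flow time, so reducing its queue wait yields the largest improvement, which matches the stated goal of lessening queue time at each station.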

    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Datacenters provide cost-effective and flexible access to the scalable compute and storage resources that today's cloud computing needs. A typical datacenter is made up of thousands of servers connected by a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted on datacenters and to maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements, including user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, the various topologies proposed for them, their traffic properties, and the general traffic control challenges and objectives in datacenters. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to provide readers with a wide range of options and factors to consider when evaluating traffic control mechanisms. We discuss various aspects of datacenter traffic control, including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of the paper, we briefly review inter-datacenter networks, which connect geographically dispersed datacenters, have been receiving increasing attention recently, and pose interesting and novel research problems. (Accepted for publication in IEEE Communications Surveys and Tutorials.)
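    As one concrete illustration of the prioritization mechanisms such a survey covers, the sketch below shows strict class-based priority scheduling for the three traffic classes named in the abstract (interactive, deadline-bound, long-running/bulk). It is a simplified assumption of how such a scheduler might look, not a mechanism taken from the paper; the class names and packet fields are illustrative.

    # Minimal sketch of class-based strict-priority scheduling: interactive traffic
    # is served before deadline traffic, which is served before bulk transfers.
    import heapq
    import itertools

    PRIORITY = {"interactive": 0, "deadline": 1, "bulk": 2}  # lower value = served first
    _counter = itertools.count()  # FIFO tie-breaker within a class

    queue = []

    def enqueue(packet_id, traffic_class):
        """Queue a packet under its traffic class."""
        heapq.heappush(queue, (PRIORITY[traffic_class], next(_counter), packet_id))

    def dequeue():
        """Return the next packet to transmit, highest-priority class first."""
        if not queue:
            return None
        _, _, packet_id = heapq.heappop(queue)
        return packet_id

    enqueue("p1", "bulk")
    enqueue("p2", "interactive")
    enqueue("p3", "deadline")
    print(dequeue(), dequeue(), dequeue())  # p2 p3 p1

    Strict priority like this keeps interactive latency low but can starve bulk traffic under load; weighted or deadline-aware schedulers are common refinements.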

    Comprehensive characterization of an open source document search engine

    This work performs a thorough characterization and analysis of the open source Lucene search library. The article describes in detail the architecture, functionality, and micro-architectural behavior of the search engine, and investigates prominent online document search research issues. In particular, we study how intra-server index partitioning affects response time and throughput, explore the potential use of low-power servers for document search, and examine the sources of performance degradation and the causes of tail latencies. Some of our main conclusions are the following: (a) intra-server index partitioning can reduce tail latencies, but with diminishing benefits as incoming query traffic increases; (b) low-power servers, given enough partitioning, can provide the same average and tail response times as conventional high-performance servers; (c) index search is a CPU-intensive, cache-friendly application; and (d) C-states are the main culprits for performance degradation in document search.
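    As a rough illustration of intra-server index partitioning, the sketch below fans a query out to several index partitions in parallel and merges the per-partition top-k hits; the response time of such a search is bounded by the slowest partition, which is one reason partitioning interacts with tail latency. The partition layout and the search_partition helper are hypothetical stand-ins, not the Lucene API studied in the paper.

    # Minimal sketch of intra-server index partitioning: a query is fanned out to
    # every partition in parallel and the per-partition top-k hits are merged.
    import heapq
    from concurrent.futures import ThreadPoolExecutor

    def search_partition(partition, query, k):
        """Placeholder searcher: return the top-k (score, doc_id) hits in one partition."""
        return sorted(partition.get(query, []), reverse=True)[:k]

    def partitioned_search(partitions, query, k=10):
        with ThreadPoolExecutor(max_workers=len(partitions)) as pool:
            futures = [pool.submit(search_partition, p, query, k) for p in partitions]
            all_hits = [hit for f in futures for hit in f.result()]
        # Merge: overall top-k across partitions. The query finishes only when the
        # slowest partition responds, so more partitions shorten each lookup but
        # expose the search to the worst-case partition latency.
        return heapq.nlargest(k, all_hits)

    # Toy inverted-index partitions: query -> list of (score, doc_id)
    partitions = [
        {"lucene": [(0.9, "doc1"), (0.4, "doc7")]},
        {"lucene": [(0.8, "doc3")]},
    ]
    print(partitioned_search(partitions, "lucene", k=2))  # [(0.9, 'doc1'), (0.8, 'doc3')]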