17,669 research outputs found

    EUROPEAN CONFERENCE ON QUEUEING THEORY 2016

    Get PDF
    This booklet contains the proceedings of the second European Conference on Queueing Theory (ECQT), held from the 18th to the 20th of July 2016 at the engineering school ENSEEIHT, Toulouse, France. ECQT is a biennial event where scientists and technicians in queueing theory and related areas get together to promote research, encourage interaction and exchange ideas. The spirit of the conference is to be a queueing event organized from within Europe, but open to participants from all over the world. The technical program of the 2016 edition consisted of 112 presentations organized in 29 sessions covering all trends in queueing theory, including the development of the theory, methodology advances, computational aspects and applications. Another exciting feature of ECQT 2016 was the institution of the Takács Award for an outstanding PhD thesis on "Queueing Theory and its Applications".

    Call Center Capacity Planning

    Get PDF

    Holistic assessment of call centre performance

    Get PDF
    In modern call centres, 60–70% of the operational costs come in the form of the human agents who take the calls. Ensuring that the call centre operates at the lowest cost and maximum efficiency involves trading off the cost of agents against lost revenue and increased customer dissatisfaction due to lost calls. Modelling the performance characteristics of a call centre in terms of the agent queue alone misses key performance influencers, specifically the interaction between channel availability at the media gateway and the time a call is queued. A blocking probability at the media gateway as low as 0.45% has a significant impact on the degree of queuing observed, and therefore on the cost and performance of the call centre. Our analysis also shows how abandonment affects queuing delay; however, the call centre manager has less control over abandonment than over the level of contention at the media gateway. Our commercial assessment evaluates the balance between abandonment and contention, and shows that the difference in cost between the best and worst strategy is £130K per annum; this must, however, be balanced against a possible additional £2.98m exposure in lost calls if abandonment alone is used.
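
    The abstract does not spell out the underlying queueing model, so the sketch below only illustrates the kind of gateway/agent interaction it describes, using the classical Erlang-B formula for blocking at the media gateway channels and Erlang-C for queuing at the agent pool; all sizing numbers are hypothetical.

        def erlang_b(servers: int, offered_load: float) -> float:
            """Blocking probability of an M/M/c/c loss system (recursive form)."""
            b = 1.0
            for k in range(1, servers + 1):
                b = offered_load * b / (k + offered_load * b)
            return b

        def erlang_c(agents: int, offered_load: float) -> float:
            """Probability that an arriving call has to queue in an M/M/c system."""
            b = erlang_b(agents, offered_load)
            return agents * b / (agents - offered_load * (1.0 - b))

        # Hypothetical sizing: 120 gateway channels, 100 agents, 95 Erlangs offered.
        channels, agents, load = 120, 100, 95.0
        print(f"gateway blocking: {erlang_b(channels, load):.4%}")
        print(f"calls that queue: {erlang_c(agents, load):.2%}")

    Even a fraction-of-a-percent blocking figure at the gateway changes how many calls ever reach the agent queue, which is the interaction the assessment quantifies.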

    High-Throughput Computing on High-Performance Platforms: A Case Study

    Full text link
    The computing systems used by LHC experiments have historically consisted of a federation of hundreds to thousands of distributed resources, ranging from small to mid-size. In spite of the impressive scale of the existing distributed computing solutions, the federation of small to mid-size resources will be insufficient to meet projected future demands. This paper is a case study of how the ATLAS experiment has embraced Titan---a DOE leadership facility---in conjunction with traditional distributed high-throughput computing to reach sustained production scales of approximately 52M core-hours a year. The three main contributions of this paper are: (i) a critical evaluation of design and operational considerations to support the sustained, scalable and production usage of Titan; (ii) a preliminary characterization of a next-generation executor for PanDA to support new workloads and advanced execution modes; and (iii) early lessons for how current and future experimental and observational systems can be integrated with production supercomputers and other platforms in a general and extensible manner.

    Computing server power modeling in a data center: survey, taxonomy and performance evaluation

    Full text link
    Data centers are large-scale, energy-hungry infrastructures serving the increasing computational demands of a world that is becoming more connected through smart cities. The emergence of advanced technologies such as cloud-based services, the Internet of Things (IoT) and big data analytics has driven the growth of global data centers, leading to high energy consumption. This upsurge in data center energy consumption not only drives up operational and maintenance costs but also has an adverse effect on the environment. Dynamic power management in a data center environment requires awareness of the correlation between system- and hardware-level performance counters and power consumption. Power consumption modeling captures this correlation and is crucial for designing energy-efficient optimization strategies based on resource utilization. Several power models have been proposed and used in the literature. However, these power models have been evaluated using different benchmarking applications, power measurement techniques and error calculation formulas on different machines. In this work, we present a taxonomy and evaluation of 24 software-based power models using a unified environment, benchmarking applications, power measurement technique and error formula, with the aim of achieving an objective comparison. We use different server architectures to assess the impact of heterogeneity on the comparison of the models. The performance analysis of these models is elaborated in the paper.
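
    For readers unfamiliar with software-based power models, the sketch below shows one of the simplest forms such a model can take: a linear fit of measured wall power against CPU utilization, scored with a mean absolute percentage error (MAPE). The 24 models surveyed in the paper are more varied, and the calibration numbers here are purely hypothetical.

        import numpy as np

        def fit_linear_power_model(utilization, power_watts):
            """Least-squares fit of P = p_idle + slope * utilization."""
            slope, p_idle = np.polyfit(utilization, power_watts, 1)
            return p_idle, slope

        def mape(actual, predicted):
            """Mean absolute percentage error, a common accuracy metric for power models."""
            actual, predicted = np.asarray(actual), np.asarray(predicted)
            return float(np.mean(np.abs((actual - predicted) / actual)) * 100.0)

        # Hypothetical calibration samples: CPU utilization vs. measured power (watts).
        util = np.array([0.05, 0.20, 0.40, 0.60, 0.80, 0.95])
        power = np.array([95.0, 118.0, 142.0, 170.0, 196.0, 214.0])
        p_idle, slope = fit_linear_power_model(util, power)
        predicted = p_idle + slope * util
        print(f"P = {p_idle:.1f} W + {slope:.1f} W * utilization, MAPE = {mape(power, predicted):.2f}%")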

    Diluting the Scalability Boundaries: Exploring the Use of Disaggregated Architectures for High-Level Network Data Analysis

    Get PDF
    Traditional data centers are designed with a rigid architecture of fit-for-purpose servers that provision resources beyond the average workload in order to deal with occasional peaks of data. Heterogeneous data centers are pushing towards more cost-efficient architectures with better resource provisioning. In this paper we study the feasibility of using disaggregated architectures for intensive data applications, in contrast to the monolithic approach of server-oriented architectures. In particular, we have tested a proactive network analysis system in which the workload demands are highly variable. In the context of the dReDBox disaggregated architecture, the results show that the overhead caused by using remote memory resources is significant, between 66% and 80%, but we have also observed that memory usage is one order of magnitude higher for the stress case than for average workloads. Dimensioning memory for the worst case in conventional systems will therefore result in a notable waste of resources. Finally, we found that, for the selected use case, parallelism is limited by memory. Using a disaggregated architecture will therefore allow for increased parallelism, which, at the same time, will mitigate the overhead caused by remote memory.
    Comment: 8 pages, 6 figures, 2 tables, 32 references. Pre-print. The paper will be presented at the IEEE International Conference on High Performance Computing and Communications in Bangkok, Thailand, 18–20 December 2017, and published in the conference proceedings.
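
    The provisioning argument can be made concrete with a back-of-the-envelope calculation; the figures below are assumed for illustration and are not taken from the paper. If the stress-case working set is an order of magnitude above the average, sizing every monolithic server for the peak leaves most of that memory idle, while a disaggregated pool only needs headroom for the peaks that actually coincide.

        # Hypothetical figures comparing worst-case per-node sizing with a shared pool.
        avg_gb_per_node = 8        # average working set per analysis node
        peak_gb_per_node = 80      # stress case: one order of magnitude higher
        nodes = 100
        coincident_peaks = 5       # assumed number of nodes peaking at the same time

        monolithic_total = nodes * peak_gb_per_node
        pooled_total = nodes * avg_gb_per_node + coincident_peaks * peak_gb_per_node

        print(f"per-node worst-case sizing: {monolithic_total} GB")
        print(f"disaggregated pool sizing: {pooled_total} GB")
        print(f"memory idle at average load under per-node sizing: "
              f"{1 - avg_gb_per_node / peak_gb_per_node:.0%}")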

    The Camp View of Inflation Forecasts

    Get PDF
    Analyzing sample moments of survey forecasts, we derive disagreement and uncertainty measures for the short- and medium-term inflation outlook. The latter provide insights into the development of inflation forecast uncertainty in the context of a changing macroeconomic environment since the beginning of 2008. Motivated by the debate on the role of monetary aggregates and cyclical variables describing a Phillips-curve logic, we develop a macroeconomic indicator spread which is assumed to drive forecasters' judgments. Empirical evidence suggests procyclical dynamics between disagreement among forecasters, individual forecast uncertainty and the macro-spread. We call this approach the camp view of inflation forecasts and show that camps form whenever the spread widens.
    Keywords: monetary policy, survey forecasts, inflation uncertainty, heterogeneous beliefs and expectations, monetary aggregates
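
    The abstract does not state the exact moment-based definitions, so the sketch below uses a common textbook-style reading, assumed here for illustration: disagreement as the cross-sectional variance of the point forecasts, and aggregate uncertainty as the average individual forecast variance plus that disagreement.

        import numpy as np

        def disagreement(point_forecasts):
            """Cross-sectional variance of the forecasters' point forecasts."""
            return float(np.var(point_forecasts, ddof=1))

        def aggregate_uncertainty(point_forecasts, individual_variances):
            """Average individual forecast variance plus disagreement."""
            return float(np.mean(individual_variances)) + disagreement(point_forecasts)

        # Hypothetical survey round: point forecasts (percent) and individual variances.
        points = [1.8, 2.0, 2.3, 2.6, 3.1]
        variances = [0.20, 0.15, 0.25, 0.30, 0.40]
        print(f"disagreement: {disagreement(points):.3f}")
        print(f"uncertainty: {aggregate_uncertainty(points, variances):.3f}")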