4,344 research outputs found

    Metascheduling of HPC Jobs in Day-Ahead Electricity Markets

    High performance grid computing is a key enabler of large-scale collaborative computational science. With the promise of exascale computing, high performance grid systems are expected to incur electricity bills that grow super-linearly over time. To achieve cost effectiveness in these systems, scheduling algorithms must exploit the electricity price variations, both in space and time, that are prevalent in dynamic electricity markets. In this paper, we present a metascheduling algorithm to optimize the placement of jobs in a compute grid that consumes electricity from the day-ahead wholesale market. We formulate the scheduling problem as a Minimum Cost Maximum Flow problem and leverage queue waiting time and electricity price predictions to accurately estimate the cost of job execution at a system. Using trace-based simulation with real and synthetic workload traces and real electricity price data sets, we demonstrate our approach on two currently operational grids, XSEDE and NorduGrid. Our experimental setup collectively constitutes more than 433K processors spread across 58 compute systems in 17 geographically distributed locations. Experiments show that our approach simultaneously optimizes the total electricity cost and the average response time of the grid, without being unfair to users of the local batch systems.
    Comment: Appears in IEEE Transactions on Parallel and Distributed Systems
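The Minimum Cost Maximum Flow formulation described in the abstract can be sketched with a toy example. This is a minimal illustration, not the paper's actual model: job and site names, capacities, and per-job costs are all hypothetical, and the edge weights stand in for the paper's combined electricity-price and queue-wait cost estimates.

```python
import networkx as nx

# Hypothetical instance: 3 jobs, 2 compute sites. Edge weights model the
# estimated cost of running a job at a site; site capacities bound how
# many jobs each site can accept.
G = nx.DiGraph()
jobs = ["job1", "job2", "job3"]
sites = {"siteA": {"capacity": 2, "cost": 3},
         "siteB": {"capacity": 2, "cost": 5}}

G.add_node("source", demand=-len(jobs))  # all jobs must be placed
G.add_node("sink", demand=len(jobs))
for j in jobs:
    G.add_edge("source", j, capacity=1, weight=0)
    for s, info in sites.items():
        G.add_edge(j, s, capacity=1, weight=info["cost"])
for s, info in sites.items():
    G.add_edge(s, "sink", capacity=info["capacity"], weight=0)

flow = nx.min_cost_flow(G)
# Each job sends its single unit of flow to exactly one site.
placement = {j: s for j in jobs for s, f in flow[j].items() if f == 1}
```

With these numbers the optimum fills the cheaper site first: two jobs land on siteA and one on siteB, for a total cost of 11.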

    Efficient Service for Next Generation Network Slicing Architecture and Mobile Traffic Analysis Using Machine Learning Technique

    The tremendous growth of mobile devices, IoT devices, applications, and other services has placed high demand on mobile and wireless network infrastructures. Research and development of 5G mobile networks has sought ways to support the huge volume of traffic, the extraction of fine-grained analytics, and the agile management of mobile network elements, so as to maximize the user experience. These tasks are challenging to accomplish as mobile networks grow in complexity, due to increases in data volume, devices, and applications. Advanced machine learning techniques offer one solution, helping to cope with the large amounts of data and algorithm-driven applications. This work focuses on an extensive analysis of mobile traffic for improving performance, key performance indicators, and quality of service from an operations perspective. The work includes the collection of datasets and log files using different kinds of tools at different network layers, and the implementation of machine learning techniques to analyze the datasets and predict mobile traffic activity. A wide range of algorithms was implemented and compared in order to identify the best-performing one. Moreover, this thesis also discusses the network slicing architecture, its use cases, and how to use network slicing efficiently to meet distinct demands.
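The abstract's comparison of algorithms for traffic prediction can be sketched on synthetic data. This is an illustrative toy, not the thesis's pipeline or datasets: the diurnal traffic signal, the lag-feature construction, and the two candidate models are all assumptions.

```python
# Compare two regressors for predicting the next hour's traffic volume
# from the previous three hours, scoring each by mean absolute error.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
hours = np.arange(24 * 60)
# Synthetic traffic: a daily sinusoidal cycle plus noise.
traffic = 100 + 40 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)

# Lag features: hours t-3, t-2, t-1 predict hour t.
X = np.column_stack([traffic[i:i - 3] for i in range(3)])
y = traffic[3:]
split = int(0.8 * len(y))  # chronological train/test split

results = {}
for name, model in [("linear", LinearRegression()),
                    ("forest", RandomForestRegressor(random_state=0))]:
    model.fit(X[:split], y[:split])
    pred = model.predict(X[split:])
    results[name] = mean_absolute_error(y[split:], pred)

best = min(results, key=results.get)
```

The same loop extends to any number of candidate models, which is the shape of comparison the thesis describes.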

    Cutting the Electric Bill for Internet-Scale Systems

    Energy expenses are becoming an increasingly important fraction of data center operating costs. At the same time, the energy expense per unit of computation can vary significantly between two different locations. In this paper, we characterize the variation due to fluctuating electricity prices and argue that existing distributed systems should be able to exploit this variation for significant economic gains. Electricity prices exhibit both temporal and geographic variation, due to regional demand differences, transmission inefficiencies, and generation diversity. Starting with historical electricity prices for twenty-nine locations in the US, and network traffic data collected on Akamai's CDN, we use simulation to quantify the possible economic gains for a realistic workload. Our results imply that existing systems may be able to save millions of dollars a year in electricity costs by being cognizant of locational computation cost differences.
    Funding: Nokia; National Science Foundation
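The savings mechanism the abstract describes, shifting load to wherever electricity is momentarily cheapest, can be shown in a few lines. The price series, location names, and load figures below are invented for illustration; the paper's simulation uses real historical prices and Akamai traffic data.

```python
# Hypothetical hourly prices ($/MWh) at two locations, and an hourly load.
prices = {
    "virginia": [42.0, 38.5, 51.0, 47.2],
    "oregon":   [35.5, 44.0, 33.8, 49.9],
}
load_mwh = [10, 10, 10, 10]  # energy drawn each hour

# Static placement: all load served from a single site.
static_cost = sum(p * l for p, l in zip(prices["virginia"], load_mwh))

# Price-aware routing: each hour, serve the load from the cheaper site.
routed_cost = sum(min(prices[s][h] for s in prices) * load_mwh[h]
                  for h in range(len(load_mwh)))

savings = static_cost - routed_cost
```

Even in this tiny example the routed schedule beats the static one whenever the per-hour price ranking flips between sites, which is exactly the temporal-plus-geographic variation the paper exploits.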

    Identifying clients’ bad experiences with their internet service

    Internship Report presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics.
    Identifying clients who have experienced bad internet service is important for network providers, as bad service experiences may lead to lower client satisfaction. It is possible to measure quality of service by looking at objective network quality measures. However, a decrease in the quality of service will not translate into a bad quality of experience for all clients at all times. This is because a) if a client does not try to use the internet, he or she will not notice the deterioration of the service; and b) different clients have different needs in terms of service quality: a slight decrease in network quality may be noticed by an intensive user but not by a light user, even if the latter is using the internet. In the present report, we describe the work we have done to develop a) a segmentation of the clients according to their typical internet usage, and b) a probability that a given client would use the internet at a given time. These two features were then fed to a classifier, along with the objective network quality measures. This classifier, a gradient boosted model, was able to classify clients who filed a service request due to lack of access to the internet with an accuracy of 0.98, a sensitivity of 0.87, and a specificity of 0.98. The results of the classifier and the role of the special features we developed are discussed, along with future directions for this work.
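The report's approach, feeding a usage segment and a usage probability alongside objective network quality measures into a gradient boosted classifier, can be sketched as follows. Everything here is synthetic and illustrative: the feature names, the label-generating rule, and the model settings are assumptions, not the report's actual data or pipeline.

```python
# Sketch: usage-segment and usage-probability features combined with a
# network quality measure, fit with a gradient boosted classifier.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
segment = rng.integers(0, 3, n)    # usage segment (light / medium / heavy)
p_online = rng.random(n)           # probability the client is online
quality = rng.normal(0.0, 1.0, n)  # objective network quality measure

# Synthetic label: a complaint is likelier when quality is poor AND the
# client was probably using the internet at the time -- mirroring the
# report's point that degradation only hurts clients who are online.
y = ((quality < -0.5) & (p_online > 0.5)).astype(int)

X = np.column_stack([segment, p_online, quality])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Because the synthetic label depends jointly on quality and the online probability, a model given only `quality` would do worse, which is the rationale for the report's extra features.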
