
    A Novel Approach for Performance Characterization of IaaS Clouds

    The ability to predict the energy consumption of an HPC task as the number of assigned nodes varies makes it possible to assign the right number of nodes to each task, saving large amounts of energy. In this paper we present LBM, a model that predicts the resource usage (applicable to different resources, such as completion time and energy consumption) of programs following a black-box approach: only passive measurements of the running program are used to build the prediction model, without requiring its source code or static analysis of the binary. LBM builds the prediction model using other programs as benchmarks. We tested LBM by predicting the energy consumption of pitzDaily, a test case from the OpenFOAM CFD suite, using only three benchmarks, and obtained highly accurate predictions.
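
    A minimal sketch of the black-box idea described above, under assumed data: scaling profiles measured for a few benchmark programs are used to extrapolate the target program's energy from only a couple of its own passive measurements. The benchmark names, energy values, and the profile-matching rule are illustrative assumptions, not the paper's actual LBM formulation.

        # Illustrative sketch (not the paper's actual LBM algorithm): scaling
        # profiles measured for benchmark programs are used to extrapolate the
        # target program's energy from only two of its own measurements.
        import numpy as np

        # Hypothetical energy measurements (joules) at node counts 2, 4, 8, 16
        node_counts = np.array([2, 4, 8, 16])
        benchmarks = {
            "bench_a": np.array([1000.0, 560.0, 330.0, 210.0]),
            "bench_b": np.array([1000.0, 700.0, 520.0, 430.0]),
            "bench_c": np.array([1000.0, 510.0, 270.0, 150.0]),
        }

        # Passive measurements of the target program at the first two node counts only
        target_partial = np.array([950.0, 515.0])

        def normalized(curve):
            """Scale a measurement curve relative to its first point."""
            return curve / curve[0]

        # Pick the benchmark whose normalized scaling best matches the target ...
        errors = {name: np.sum((normalized(curve)[:2] - normalized(target_partial)) ** 2)
                  for name, curve in benchmarks.items()}
        best = min(errors, key=errors.get)

        # ... and reuse its scaling curve to predict the target at unmeasured node counts
        prediction = target_partial[0] * normalized(benchmarks[best])
        print(best, prediction)  # predicted energy for 2, 4, 8 and 16 nodes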

    Adaptive Boltzmann Medical Dataset Machine Learning

    The Restricted Boltzmann Machine (RBM) is a stochastic, energy-based, unsupervised neural network model and a key pre-training method for Deep Learning. The structure of an RBM comprises the weights and coefficients of its neurons, and a better network structure allows the data to be examined more thoroughly. To address this, we examine the variance of the parameters during learning on demand, and use it to determine why the RBM's energy function fluctuates. A neuron generation and annihilation algorithm is combined with an adaptive RBM learning method to determine the optimal number of hidden neurons for attribute imputation during training: a hidden neuron is generated when the energy function has not converged and the parameter variance is high, and a hidden neuron is annihilated when removing it does not disrupt learning. In this study, the method was tested on the benchmark PIMA datasets.
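
    A minimal sketch of the generation/annihilation rule described above, under assumed thresholds: a hidden neuron is generated while the energy function has not converged and the parameter variance stays high, and hidden neurons whose weights barely influence learning are annihilated. The thresholds, the variance estimate, and the contribution test are illustrative assumptions, not the paper's exact criteria.

        # Illustrative sketch of an adaptive hidden-layer size rule for an RBM.
        # Thresholds and criteria are assumptions, not the paper's exact method.
        import numpy as np

        def adapt_hidden_layer(W, grad_W_history, energy_history,
                               gen_var_threshold=1e-2, ann_norm_threshold=1e-3):
            """Return the weight matrix after one generation/annihilation step.

            W              : (n_visible, n_hidden) weight matrix
            grad_W_history : list of recent weight-gradient matrices
            energy_history : list of recent energy-function values
            """
            # Generation: energy not yet converged and parameter variance still high
            energy_not_converged = abs(energy_history[-1] - energy_history[-2]) > 1e-4
            param_variance = np.var(np.stack(grad_W_history), axis=0).mean()
            if energy_not_converged and param_variance > gen_var_threshold:
                new_neuron = 0.01 * np.random.randn(W.shape[0], 1)
                W = np.hstack([W, new_neuron])

            # Annihilation: drop hidden neurons whose weights barely influence learning
            keep = np.linalg.norm(W, axis=0) > ann_norm_threshold
            return W[:, keep]

        # Tiny demo with random data (purely illustrative)
        W0 = 0.1 * np.random.randn(6, 3)
        grads = [0.2 * np.random.randn(6, 3) for _ in range(5)]
        print(adapt_hidden_layer(W0, grads, [-10.0, -9.5]).shape)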

    Power Modeling and Resource Optimization in Virtualized Environments

    The provisioning of on-demand cloud services has revolutionized the IT industry. This emerging paradigm has drastically increased the growth of data centers (DCs) worldwide, and this rising number of DCs now accounts for a large share of the world's total power consumption. This has directed the attention of researchers and service providers towards power-aware solutions for the deployment and management of these systems and networks. However, such solutions can only be beneficial if they are derived from power consumption that is precisely estimated at run time. Accurate power estimation is a challenge in virtualized environments due to the uncertainty about the actual resources consumed by virtualized entities and about their impact on applications' performance. The heterogeneous cloud, with its multi-tenant architecture, has also raised several management challenges for both service providers and their clients. Task scheduling and resource allocation in such a system are NP-hard problems, and the inappropriate allocation of resources causes under-utilization of servers, reducing throughput and energy efficiency. In this context, the cloud framework needs an effective management solution to maximize the use of available resources and capacity, and to reduce the environmental carbon footprint through reduced power consumption. This thesis addresses the issues of power measurement and resource utilization in virtualized environments as its two primary objectives. First, a survey of prior work on server power modeling and methods in virtualization architectures is carried out. This helps identify the key challenges that limit the precision of power estimation when dealing with virtualized entities. A systematic approach is then presented to improve prediction accuracy in these networks, considering resource abstraction at different architectural levels. Monitoring resource usage at both the host and the guest helps identify the difference in performance between the two. Using virtual Performance Monitoring Counters (vPMCs) at the guest level provides detailed information that improves prediction accuracy and can be further used for resource optimization, consolidation and load balancing. The research then targets the critical issue of optimal resource utilization in cloud computing. This study seeks a generic, robust, yet simple approach to resource allocation in cloud computing and networking. Inappropriate scheduling in the cloud causes under- and over-utilization of resources, which in turn increases power consumption and degrades system performance. This work first addresses some of the major challenges related to task scheduling in heterogeneous systems. After a critical analysis of existing approaches, this thesis presents a rather simple scheduling scheme based on a combination of heuristic solutions. Improved resource utilization with reduced processing time can be achieved using the proposed energy-efficient scheduling algorithm.
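
    As a rough illustration of the counter-based power estimation discussed above, a linear model can be fitted from performance-counter samples to measured power readings and then used to estimate power at run time. The counter names, sample values, and the linear form are assumptions made for the sketch, not the thesis's actual model.

        # Illustrative sketch: fit a linear power model P = w0 + w1*cpu + w2*llc + w3*mem
        # from (virtual) performance-counter samples. Counters and data are made up.
        import numpy as np

        # Each row: [cpu_cycles, llc_misses, mem_bandwidth], normalized per interval
        counters = np.array([
            [0.20, 0.05, 0.10],
            [0.55, 0.20, 0.30],
            [0.80, 0.35, 0.55],
            [0.95, 0.50, 0.70],
        ])
        measured_power_w = np.array([62.0, 90.0, 118.0, 135.0])  # wall power (hypothetical)

        # Least-squares fit with an intercept term representing idle power
        X = np.hstack([np.ones((counters.shape[0], 1)), counters])
        weights, *_ = np.linalg.lstsq(X, measured_power_w, rcond=None)

        def estimate_power(sample):
            """Estimate power draw (W) from one counter sample."""
            return float(weights[0] + weights[1:] @ np.asarray(sample))

        print(estimate_power([0.70, 0.30, 0.45]))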

    Application Driven Models for Resource Management in Cloud Environments

    Deploying and executing large-scale applications on distributed systems with adequate Quality of Service parameters requires efficient management of the computational resources. To decouple the functional and non-functional (or operational) requirements of such applications, two levels of abstraction can be distinguished: i) the functional level, which covers the requirements related to the application's functionality; and ii) the operational level, which depends on the distributed system where the application is deployed and guarantees parameters such as Quality of Service, availability, fault tolerance and economic cost, among others. Among the different alternatives at the operational level, this thesis considers a cloud environment based on container virtualization, as offered by Kubernetes. Using models to design applications at both levels makes it possible to guarantee that these requirements are satisfied. Depending on the complexity of the model that describes the application, or on how much the operational level knows about it, three types of applications are distinguished: i) model-driven applications, as in discrete-event simulation, where the model itself, for example High-Level Petri Nets, describes the application; ii) data-driven applications, as in analytics executed over a Data Stream; and iii) system-driven applications, where the operational level governs the deployment and treats the application as a black box. This doctoral thesis proposes a specific scheduler for each type of application and model, with concrete examples, so that the client of the infrastructure can use information from both the descriptive model and the operational model. This solution fills the conceptual gap between the two levels. To this end, different methods and techniques are proposed to deploy different applications: a simulation of an Electric Vehicle system described with Petri Nets; the processing of algorithms over a graph that arrives following the Data Stream paradigm; and the operational system itself as the subject of study. In this last case study, we analyse how certain parameters of the operational level (for example, the grouping of containers, or the sharing of resources between containers hosted on the same machine) impact performance. To analyse this impact, a formal model of a concrete operational infrastructure (Kubernetes) is proposed. Finally, a methodology is proposed to build interference indices that characterize applications and estimate the performance degradation incurred when two containers are deployed and executed together. These indices model how the resources of the operational level are used by the applications, so that the operational level handles information close to the application and can make better deployment and placement decisions.
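
    A minimal sketch of the interference-index idea described above, under assumed per-resource usage profiles: each application is characterized by how heavily it presses on each shared resource, and the expected degradation of a co-location is estimated from the overlap of the two profiles. The application names, profile values, and the combination rule are illustrative assumptions, not the thesis's actual indices.

        # Illustrative sketch: estimate slowdown when two containers share a node,
        # from per-resource interference indices. Values and the rule are assumptions.

        # Normalized pressure each application puts on shared resources (hypothetical)
        profiles = {
            "ev-simulation": {"cpu": 0.8, "mem_bw": 0.3, "disk": 0.1},
            "stream-analytics": {"cpu": 0.5, "mem_bw": 0.6, "disk": 0.2},
            "batch-report": {"cpu": 0.2, "mem_bw": 0.1, "disk": 0.7},
        }

        def estimated_degradation(app_a, app_b):
            """Estimate relative slowdown of app_a when co-located with app_b."""
            a, b = profiles[app_a], profiles[app_b]
            # Contention grows when both applications press on the same resource
            return sum(a[r] * b[r] for r in a)

        # Pick the co-location candidate with the lowest expected interference
        candidates = ["stream-analytics", "batch-report"]
        best = min(candidates, key=lambda c: estimated_degradation("ev-simulation", c))
        print(best, estimated_degradation("ev-simulation", best))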

    Towards Measuring and Understanding Performance in Infrastructure- and Function-as-a-Service Clouds

    Context. Cloud computing has become the de facto standard for deploying modern software systems, which makes its performance crucial to the efficient functioning of many applications. However, the unabated growth of established cloud services, such as Infrastructure-as-a-Service (IaaS), and the emergence of new services, such as Function-as-a-Service (FaaS), have led to an unprecedented diversity of cloud services with different performance characteristics. Objective. The goal of this licentiate thesis is to measure and understand performance in IaaS and FaaS clouds. My PhD thesis will extend and leverage this understanding to propose solutions for building performance-optimized FaaS cloud applications. Method. To achieve this goal, quantitative and qualitative research methods are used, including experimental research, artifact analysis, and literature review. Findings. The thesis proposes a cloud benchmarking methodology to estimate application performance in IaaS clouds, characterizes typical FaaS applications, identifies gaps in the literature on FaaS performance evaluations, and examines the reproducibility of reported FaaS performance experiments. The evaluation of the benchmarking methodology yielded promising results for benchmark-based application performance estimation under selected conditions. Characterizing 89 FaaS applications revealed that they are most commonly used for short-running tasks with low data volume and bursty workloads. The review of 112 FaaS performance studies from academic and industrial sources found a strong focus on a single cloud platform using artificial micro-benchmarks, and discovered that the majority of studies do not follow reproducibility principles for cloud experimentation. Future Work. Future work will propose a suite of application performance benchmarks for FaaS, which is instrumental for evaluating candidate solutions towards building performance-optimized FaaS applications.
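
    A minimal sketch of benchmark-based application performance estimation as outlined above, with made-up numbers: micro-benchmark scores measured on a reference and a candidate instance type are used to scale the application's measured runtime on the reference into an estimate for the candidate. The benchmark names, scores, weights, and the weighted-speedup rule are assumptions for illustration, not the thesis's methodology.

        # Illustrative sketch: estimate application runtime on an unseen instance type
        # from benchmark scores. Benchmarks, scores and weights are hypothetical.

        # Higher score = faster, measured on each instance type (hypothetical numbers)
        benchmark_scores = {
            "reference.m5": {"cpu_bench": 100.0, "disk_bench": 100.0},
            "candidate.c5": {"cpu_bench": 140.0, "disk_bench": 95.0},
        }

        # How sensitive the application is to each benchmark (assumed weights, sum to 1)
        app_weights = {"cpu_bench": 0.7, "disk_bench": 0.3}

        measured_runtime_s = 120.0  # application runtime on the reference instance

        def estimate_runtime(reference, candidate):
            """Scale the reference runtime by the weighted relative speedup."""
            ref, cand = benchmark_scores[reference], benchmark_scores[candidate]
            speedup = sum(app_weights[b] * cand[b] / ref[b] for b in app_weights)
            return measured_runtime_s / speedup

        print(estimate_runtime("reference.m5", "candidate.c5"))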

    Design, Development and Evaluation of 5G-Enabled Vehicular Services: The 5G-HEART Perspective

    The ongoing transition towards 5G technology expedites the emergence of a variety of mobile applications that pertain to different vertical industries. Delivering on the key commitment of 5G, these diverse service streams, along with their distinct requirements, should be facilitated under the same unified network infrastructure. Consequently, in order to unleash the benefits brought by 5G technology, a holistic approach towards the requirement analysis and the design, development, and evaluation of multiple concurrent vertical services should be followed. In this paper, we focus on the Transport vertical industry, and we study four novel vehicular service categories, each one consisting of one or more related specific scenarios, within the framework of the “5G Health, Aquaculture and Transport (5G-HEART)” 5G PPP ICT-19 (Phase 3) project. In contrast to the majority of the literature, we provide a holistic overview of the overall life-cycle management required for the realization of the examined vehicular use cases. This comprises the definition and analysis of the network Key Performance Indicators (KPIs) resulting from high-level user requirements and their interpretation in terms of the underlying network infrastructure tasked with meeting their conflicting or converging needs. Our approach is complemented by the experimental investigation of the real unified 5G pilot’s characteristics that enable the delivery of the considered vehicular services and the initial trialling results that verify the effectiveness and feasibility of the presented theoretical analysis

    Model-driven Scheduling for Distributed Stream Processing Systems

    Distributed Stream Processing frameworks are commonly used with the evolution of the Internet of Things (IoT). These frameworks are designed to adapt to dynamic input message rates by scaling in/out. Apache Storm, originally developed by Twitter, is a widely used stream processing engine, while others include Flink and Spark Streaming. To run streaming applications successfully, the optimal resource requirement needs to be known, since over-estimating resources adds extra cost, so a strategy is needed to derive the optimal resource requirement for a given streaming application. In this article, we propose a model-driven approach for scheduling streaming applications that effectively utilizes a priori knowledge of the applications to provide predictable scheduling behavior. Specifically, we use application performance models to offer reliable estimates of the resource allocation required. Further, this intuition also drives resource mapping, and helps narrow the gap between estimated and actual dataflow performance and resource utilization. Together, this model-driven scheduling approach gives predictable application performance and resource utilization behavior for executing a given DSPS application at a target input stream rate on distributed resources.
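
    A minimal sketch of the model-driven estimation described above, under assumed per-task figures: each dataflow task's peak throughput per instance, obtained from an offline performance model, is used to compute how many parallel instances are needed to sustain a target input stream rate. The task names, rates, and selectivities are illustrative assumptions, not values from the article.

        # Illustrative sketch: derive per-task parallelism for a streaming dataflow
        # from offline performance-model figures. All numbers are hypothetical.
        import math

        # Peak sustainable throughput of one instance of each task (msgs/sec, assumed)
        peak_rate_per_instance = {"parse": 4000.0, "enrich": 1500.0, "sink": 2500.0}
        # Output/input ratio of each task (selectivity, assumed)
        selectivity = {"parse": 1.0, "enrich": 0.8, "sink": 1.0}

        def required_parallelism(target_input_rate, task_order):
            """Instances needed per task so the pipeline sustains the target rate."""
            plan, rate = {}, target_input_rate
            for task in task_order:
                plan[task] = math.ceil(rate / peak_rate_per_instance[task])
                rate *= selectivity[task]  # rate flowing into the next task
            return plan

        print(required_parallelism(10000.0, ["parse", "enrich", "sink"]))
        # e.g. {'parse': 3, 'enrich': 7, 'sink': 4}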

    Driving Manufacturing Systems for the Fourth Industrial Revolution

    It has been a long way since the emergence of Industry 4.0, and companies' reality is not yet aligned with this new concept. Industry 4.0 is progressing slowly, as its maturity level was expected to be higher by now. Company managers should take a different approach to adopting the Industry 4.0 enabling technologies in their manufacturing systems, creating smart networks along the whole production process by connecting the elements of the manufacturing system, such as machines, employees, and systems. These smart networks can exercise control and make autonomous decisions efficiently. Moreover, in the Industry 4.0 environment, companies can predict problems and failures along the whole production process and react sooner, for instance regarding maintenance or production changes. The Industry 4.0 environment is a challenging area because it changes the relation between humans and machines. The scope of this thesis is therefore to help companies adopt the Industry 4.0 enabling technologies in their manufacturing systems and improve their competitiveness to face the coming future. For this purpose, this thesis follows a research line oriented towards i) understanding the Industry 4.0 concepts and the enabling technologies that realize the vision of the smart factory, ii) analysing the Industry 4.0 maturity level of a regional industrial sector and understanding how companies are facing the digital transformation challenges and its barriers, iii) analysing in depth the adoption of Industry 4.0 in one company and understanding how this company can reach higher maturity levels, and iv) developing strategic scenarios to help companies in the digital transition, proposing risk mitigation plans and a methodology to develop strategic scenarios. This thesis highlights several barriers to Industry 4.0 adoption and also brings new ones to the academic and practitioner discussion. Companies' perception of these barriers is also discussed in this thesis. The findings of this thesis are of significant interest to companies and managers, as they can position themselves along this research line and take advantage of it, using all phases of this thesis to gain a better knowledge of this industrial revolution and of how to reach higher Industry 4.0 maturity levels, and they can position themselves in the proposed strategic scenarios to take the actions needed to better face this industrial revolution. In this way, this research line is proposed for companies to accelerate their digital transformation.