
    Cloud Computing cost and energy optimization through Federated Cloud SoS

    2017 Fall. Includes bibliographical references. The two most significant differentiators among contemporary Cloud Computing service providers are green energy use and datacenter resource utilization. This work addresses these two issues from a systems-architecture optimization viewpoint. The proposed approach allows multiple cloud providers to utilize their individual computing resources in three ways: (1) cutting the number of datacenters needed, (2) scheduling available datacenter grid energy via aggregators to reduce costs and power outages, and (3) utilizing, where appropriate, more renewable and carbon-free energy sources. Altogether, the proposal creates an alternative paradigm, a Federated Cloud SoS, which employs a novel control methodology tuned to obtain both financial and environmental advantages. It also supports dynamic expansion and contraction of computing capability to handle sudden variations in service demand and to maximize the use of time-varying green energy supplies. This work analyzes the core SoS requirements, concept synthesis, and functional architecture with an eye on avoiding inadvertent cascading conditions, and it suggests a physical architecture that simulates the primary SoS emergent behavior to diminish unwanted outcomes while encouraging desirable results. In this approach, the constituent cloud services retain their independent ownership, objectives, funding, and sustainability means. The report also analyzes optimal computing generation methods, optimal energy utilization for computing, and a procedure for building optimal datacenters using a hardware computing system design based on the OpenCompute community as an illustrative collaboration platform. Finally, the research concludes with the security features a cloud federation must support to protect its constituents, its constituents' tenants, and itself from security risks.
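
    As a rough illustration of the kind of placement decision such a federation has to make, the hypothetical sketch below spreads a workload across federated datacenters by weighing grid price against available renewable supply. The site names, weights, and scoring rule are invented for the example and are not the thesis's actual control methodology.

    # Illustrative only: greedy placement of workload across federated datacenters,
    # preferring sites with cheap grid energy and spare renewable supply.
    # All names, numbers, and the scoring rule are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Datacenter:
        name: str
        grid_price: float        # $/kWh offered via the energy aggregator
        renewable_kw: float      # currently available carbon-free power
        free_capacity_kw: float  # spare compute capacity expressed as power headroom

    def place(workload_kw, sites, renewable_weight=0.7):
        """Return (site, share) pairs covering workload_kw, greenest/cheapest sites first."""
        def score(dc):
            # Lower is better: grid price penalised, usable renewable headroom rewarded.
            return dc.grid_price - renewable_weight * min(dc.renewable_kw, dc.free_capacity_kw) / 100.0
        plan, remaining = [], workload_kw
        for dc in sorted(sites, key=score):
            if remaining <= 0:
                break
            share = min(remaining, dc.free_capacity_kw)
            plan.append((dc.name, share))
            remaining -= share
        return plan

    sites = [Datacenter("dc-solar", 0.09, 400.0, 300.0),
             Datacenter("dc-grid", 0.07, 0.0, 500.0)]
    print(place(250.0, sites))  # favours the renewable-backed site despite its higher price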

    Federated Robust Embedded Systems: Concepts and Challenges

    The development within the area of embedded systems (ESs) is moving rapidly, not least due to falling costs of computation and communication equipment. It is believed that increased communication opportunities will lead to future ESs no longer being parts of isolated products, but rather parts of larger communities or federations of ESs, within which information is exchanged for the benefit of all participants. This vision is shared by a number of interrelated research topics, such as the Internet of Things, cyber-physical systems, systems of systems, and multi-agent systems. In this work, the focus is primarily on ESs, with their specific real-time and safety requirements. While the vision of interconnected ESs is quite promising, it also brings great challenges to the development of future systems in an efficient, safe, and reliable way. In this work, a pre-study has been carried out in order to gain a better understanding of common concepts and challenges that naturally arise in federations of ESs. The work was organized around a series of workshops, with contributions from both academic participants and industrial partners with strong experience in ES development. During the workshops, a portfolio of possible ES federation scenarios was collected, and a number of application examples were discussed more thoroughly at different abstraction levels, starting from screening the nature of interactions at the federation level and proceeding down to the implementation details within each ES. These discussions led to a better understanding of what can be expected of future federated ESs. In this report, the discussed applications are summarized, together with their characteristics, challenges, and necessary solution elements, providing a ground for future research within the area of communicating ESs.

    Smart City IoT Data Management with Proactive Middleware

    With the increased emergence of cloud-based services, users are frequently perplexed as to which cloud service to use and whether it will be beneficial to them. The user must compare various services, which can be a time-consuming task if the user is unsure of what their application actually needs. This paper proposes a middleware solution for storing Internet of Things (IoT) data produced by various sensors, such as traffic, air quality, and temperature sensors, on multiple cloud service providers depending on the type of data. Standard cloud computing technologies become insufficient to handle the data as the volume generated by smart city devices grows. The middleware was created after a comparative study of various existing middleware and uses the concept of the federated cloud for storing data. The solution described in this paper makes it easier to classify IoT data and distribute it to various cloud environments based on its type. The middleware was evaluated using a series of tests, which demonstrated its ability to properly manage smart city data across multiple cloud environments. Overall, this research contributes to the development of middleware solutions that can improve the management of IoT data in settings such as smart cities.
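
    A minimal sketch of the kind of type-based routing such a middleware could perform is shown below. The category-to-provider mapping, backend names, and client stubs are assumptions made for the example and are not taken from the paper.

    # Hypothetical sketch: route smart-city sensor readings to a cloud backend by data type.
    # The routing table and client stubs are illustrative, not the paper's design.
    ROUTING_TABLE = {
        "traffic": "provider_a_object_store",
        "air_quality": "provider_b_timeseries_db",
        "temperature": "provider_b_timeseries_db",
    }
    DEFAULT_BACKEND = "provider_c_blob_store"

    def store(reading: dict, clients: dict) -> str:
        """Pick a backend from the reading's declared type and hand the payload to its client."""
        backend = ROUTING_TABLE.get(reading.get("type"), DEFAULT_BACKEND)
        clients[backend].put(reading)      # each client wraps one provider's storage API
        return backend

    class _FakeClient:                     # stand-in so the sketch runs without real credentials
        def __init__(self): self.items = []
        def put(self, item): self.items.append(item)

    clients = {name: _FakeClient() for name in set(ROUTING_TABLE.values()) | {DEFAULT_BACKEND}}
    print(store({"type": "air_quality", "pm25": 12.4}, clients))  # -> provider_b_timeseries_db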

    Resource Orchestration of 5G Transport Networks for Vertical Industries

    Future 5G transport networks are envisioned to support a variety of vertical services through network slicing and efficient orchestration over multiple administrative domains. In this paper, we propose an orchestrator architecture to support vertical services and meet their diverse resource and service requirements. We then present a system model for resource orchestration of transport networks, as well as low-complexity algorithms that aim at minimizing service deployment cost and/or service latency. Importantly, the proposed model can work with any level of abstraction exposed by the underlying network or the federated domains, depending on their representation of resources. This work has been partially funded by the EU H2020 5G-Transformer Project (grant no. 761536).
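
    As an illustration of the kind of low-complexity placement step such algorithms perform, the sketch below maps each service component to the cheapest transport domain that still meets its latency bound. The domain abstraction, cost and latency figures, and the greedy rule are assumptions for the example, not the paper's algorithms.

    # Illustrative greedy placement: assign each service component to the transport domain
    # with the lowest deployment cost that still satisfies its latency bound.
    # Domains, costs, and latency figures are made up for the example.
    domains = [
        {"name": "domain_a", "cost_per_unit": 1.0, "latency_ms": 12.0, "free_units": 40},
        {"name": "domain_b", "cost_per_unit": 0.6, "latency_ms": 25.0, "free_units": 30},
    ]

    def place_service(components):
        """components: list of (demand_units, max_latency_ms). Returns component->domain map."""
        placement = {}
        for idx, (demand, max_latency) in enumerate(components):
            feasible = [d for d in domains
                        if d["latency_ms"] <= max_latency and d["free_units"] >= demand]
            if not feasible:
                raise RuntimeError(f"component {idx}: no feasible domain")
            best = min(feasible, key=lambda d: d["cost_per_unit"])
            best["free_units"] -= demand
            placement[idx] = best["name"]
        return placement

    print(place_service([(10, 30.0), (5, 15.0)]))  # -> {0: 'domain_b', 1: 'domain_a'}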

    Hybrid Simulation and Test of Vessel Traffic Systems on the Cloud

    This paper presents a cloud-based hybrid simulation platform to test large-scale distributed Systems-of-Systems (SoS) for the management and control of maritime traffic, the so-called Vessel Traffic Systems (VTS). A VTS consists of multiple heterogeneous, distributed, and interoperating systems, including radar, automatic identification systems, direction finders, electro-optical sensors, gateways to external VTSs, and information systems; identifying, representing, and analyzing their interactions is a challenge for evaluating the real risks to the safety and security of the marine environment. The need to reproduce in a test facility the system behaviors that could occur in situ demands the ability to integrate emulated and simulated environments, so as to cope with the different testability requirements of the involved systems and to keep testing costs sustainable. The platform exploits hybrid simulation and virtualization technologies, and it is deployable on a private cloud, reducing the cost of setting up realistic and effective testing scenarios.
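
    A toy sketch of how the emulated and simulated parts of such a platform can be coupled in lock step is given below. The component interface and the time-stepping loop are assumptions made for illustration, not the platform's actual API.

    # Hypothetical sketch of a hybrid-simulation step: a simulated traffic model produces
    # vessel tracks, which are pushed to an emulated (virtualized) VTS component under test.
    import random

    def simulate_tracks(t, n_vessels=3):
        """Pure simulation side: fabricate vessel positions at time step t."""
        return [{"id": i, "t": t, "lat": 44.0 + random.random(), "lon": 8.0 + random.random()}
                for i in range(n_vessels)]

    class EmulatedRadarService:
        """Stand-in for a real VTS component running in a VM/container on the private cloud."""
        def __init__(self): self.received = []
        def ingest(self, tracks): self.received.extend(tracks)

    def run_scenario(steps=5):
        radar = EmulatedRadarService()
        for t in range(steps):             # lock-step coupling of simulated and emulated parts
            radar.ingest(simulate_tracks(t))
        return len(radar.received)

    print(run_scenario())  # 15 track reports delivered to the emulated component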

    Artificial Intelligence Models for Scheduling Big Data Services on the Cloud

    The widespread adoption of Internet of Things (IoT) applications in many critical sectors (e.g., healthcare, unmanned autonomous systems, etc.) and the huge volumes of data that such applications generate have led to an unprecedented reliance on the cloud computing platform to store and process these data. Moreover, cloud providers tend to receive massive waves of demands on their storage and computing resources. To help providers deal with such demands without sacrificing performance, the concept of cloud automation has recently arisen to improve performance and reduce the manual effort involved in managing cloud computing workloads. However, several challenges have to be taken into consideration in order to guarantee optimal performance for big data storage and analytics in cloud computing environments. In this context, this thesis proposes a smart scheduling model as an automated big data task scheduling approach in cloud computing environments. The scheduling model combines Deep Reinforcement Learning (DRL), Federated Learning (FL), and Transfer Learning (TL) to automatically predict the IoT devices to which each incoming big data task should be scheduled, so as to improve performance and reduce execution cost. Furthermore, the long execution time and data shortage problems are addressed by introducing an FL-based solution that also preserves privacy and reduces training and data complexity. The motivation of this thesis stems from four main observations/research gaps drawn from our literature reviews and/or experiments: (1) most existing cloud-based scheduling solutions consider the scheduling problem only from the task-priority viewpoint, which increases the amount of wasted resources in the case of malicious or compromised IoT devices; (2) existing scheduling solutions in the domain of cloud and edge computing are still ineffective at making real-time decisions concerning resource allocation and management in cloud systems; (3) it is quite difficult to schedule tasks or learning models from servers in areas that are far from the objects and IoT devices, which entails significant delay and response time for data transmission; and (4) none of the existing scheduling solutions has yet addressed dynamic task scheduling automation in complex and large-scale edge computing settings. In this thesis, we address the scheduling challenges related to the cloud and edge computing environment. To this end, we argue that trust should be an integral part of the decision-making process and therefore design a trust establishment mechanism between the edge server and IoT devices. The trust mechanism aims to detect those IoT devices that over-utilize or under-utilize their resources. Thereafter, we design a smart scheduling algorithm that automates the scheduling of large-scale workloads onto edge cloud computing resources while taking into account the trust scores, task waiting time, and energy levels of the IoT devices to make appropriate scheduling decisions. Finally, we apply our scheduling strategy in the healthcare domain to investigate its applicability in a real-world scenario (COVID-19).
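
    As a rough sketch of how trust scores, task waiting time, and device energy could be combined when ranking candidate devices, the snippet below uses an invented weighted score. The weights, field names, and scoring rule are illustrative assumptions; the thesis itself learns the scheduling policy with DRL, FL, and TL.

    # Illustrative only: rank candidate IoT/edge devices for an incoming task by a weighted
    # combination of trust score, expected waiting time, and remaining energy.
    def rank_devices(devices, w_trust=0.5, w_wait=0.3, w_energy=0.2):
        """devices: list of dicts with 'trust' (0-1), 'wait_s', 'energy' (0-1). Best first."""
        max_wait = max(d["wait_s"] for d in devices) or 1.0
        def score(d):
            return (w_trust * d["trust"]
                    + w_wait * (1.0 - d["wait_s"] / max_wait)  # shorter queue is better
                    + w_energy * d["energy"])
        return sorted(devices, key=score, reverse=True)

    candidates = [
        {"id": "dev1", "trust": 0.9, "wait_s": 4.0, "energy": 0.6},
        {"id": "dev2", "trust": 0.3, "wait_s": 1.0, "energy": 0.9},
    ]
    print([d["id"] for d in rank_devices(candidates)])  # higher trust wins -> ['dev1', 'dev2']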

    Quality of Service in Cloud Computing: Modeling Techniques and Their Applications

    Recent years have seen the massive migration of enterprise applications to the cloud. One of the challenges posed by cloud applications is Quality-of-Service (QoS) management, which is the problem of allocating resources to the application to guarantee a service level along dimensions such as performance, availability, and reliability. This paper aims at supporting research in this area by providing a survey of the state of the art of QoS modeling approaches suitable for cloud systems. We also review and classify their early application to some decision-making problems arising in cloud QoS management.
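
    For a flavor of the simplest class of QoS performance models that such surveys cover, an M/M/1 queueing approximation of a cloud service's mean response time is sketched below; the model choice and numbers are an illustrative assumption, not material from this particular paper.

    % Illustrative M/M/1 response-time model (an assumption for illustration, not from the paper):
    % lambda = request arrival rate, mu = service rate of one VM, rho = utilization.
    \[
      \rho = \frac{\lambda}{\mu}, \qquad
      \mathbb{E}[R] = \frac{1}{\mu - \lambda}, \qquad \text{valid for } \rho < 1 .
    \]
    % Example: with mu = 100 req/s and lambda = 80 req/s, E[R] = 1/(100 - 80) s = 50 ms,
    % so meeting a 50 ms mean-response-time target requires keeping utilization at or below 0.8.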

    Resource Management From Single-domain 5G to End-to-End 6G Network Slicing: A Survey

    Network Slicing (NS) is one of the pillars of the fifth/sixth generation (5G/6G) of mobile networks. It provides the means for Mobile Network Operators (MNOs) to leverage physical infrastructure across different technological domains to support different applications. This survey analyzes the progress made on NS resource management across these domains, with a focus on the interdependence between domains and unique issues that arise in cross-domain and End-to-End (E2E) settings. Based on a generic problem formulation, NS resource management functionalities (e.g., resource allocation and orchestration) are examined across domains, revealing their limits when applied separately per domain. The appropriateness of different problem-solving methodologies is critically analyzed, and practical insights are provided, explaining how resource management should be rethought in cross-domain and E2E contexts. Furthermore, the latest advancements are reported through a detailed analysis of the most relevant research projects and experimental testbeds. Finally, the core issues facing NS resource management are dissected, and the most pertinent research directions are identified, providing practical guidelines for new researchers.
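
    To make the notion of a generic problem formulation concrete, a bare-bones slice resource-allocation program is sketched below; the notation and objective are an illustrative assumption rather than the survey's own formulation.

    % Illustrative generic NS resource-allocation formulation (notation assumed, not the survey's):
    % x_{s,d} = resources of domain d assigned to slice s, c_d = unit cost in domain d,
    % R_s = end-to-end demand of slice s, C_d = capacity of domain d.
    \[
    \begin{aligned}
      \min_{x \ge 0} \quad & \sum_{s \in \mathcal{S}} \sum_{d \in \mathcal{D}} c_d \, x_{s,d} \\
      \text{s.t.} \quad    & \sum_{d \in \mathcal{D}} x_{s,d} \ge R_s && \forall s \in \mathcal{S}
                             \quad \text{(each slice's E2E demand is met)} \\
                           & \sum_{s \in \mathcal{S}} x_{s,d} \le C_d && \forall d \in \mathcal{D}
                             \quad \text{(no domain is overcommitted)}
    \end{aligned}
    \]
    % Cross-domain/E2E coupling enters through the per-slice demand constraint spanning all domains.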

    Platform as a service integration for scientific computing using DIRAC

    The demand for computing resources required by researchers grows every day, and these computing needs coexist with the ever-growing volume of data currently being generated. Researchers are demanding a High Performance Computing (HPC) service that allows their simulations to be executed in a way that delocalizes the resources, so that as many as possible can be accessed, in the most convenient and secure manner for them. On the other hand, universities are connected with research centers through networks whose speed and reliability make it possible to run scientific computing jobs. The computing capacity available at universities ranges from computer rooms used for teaching, laboratories, and so on, to computer clusters belonging to research groups. Using grid and cloud technologies, these heterogeneous computational resources could be reused by researchers to run simulations, adding computing power to what already exists and delocalizing resources across different places around the planet. The objective of this thesis is to adapt the DIRAC distributed computing framework, developed for the LHCb project at CERN, for use by several user communities based on cloud and big data technologies. This framework would provide centralized software repositories that supply the software needed so that, through cloud environments, researchers' applications can be executed anywhere on the planet in a scalable way, exploiting both dedicated and non-dedicated resources; the execution of this platform for scientific computing is then evaluated. The work starts with requirements gathering and then moves to a basic integration process. Subsequently, the use of the scientific software employed is optimized for cloud environments, adapting it to virtualized settings. To that end, a statistical study as close as possible to production environments is required in order to determine and create suitable infrastructures, thus avoiding performance loss within the resources. The final step is to use these virtualization technologies, adapting the architectures created, to build systems that allow the submission of jobs requiring large amounts of data, in the big data domain, in a distributed way.
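
    A minimal sketch of submitting a payload through DIRAC's Python job API is shown below, assuming a configured DIRAC client and a valid proxy; the job name, payload, and sandbox contents are placeholders, and exact import paths may differ between DIRAC releases.

    # Minimal sketch of a DIRAC job submission (assumes a configured client and valid proxy).
    from DIRAC.Core.Base.Script import Script
    Script.parseCommandLine(ignoreErrors=True)   # initialize the DIRAC client environment

    from DIRAC.Interfaces.API.Dirac import Dirac
    from DIRAC.Interfaces.API.Job import Job

    job = Job()
    job.setName("simulation_test")                              # placeholder job name
    job.setExecutable("/bin/echo", arguments="hello from a federated resource")
    job.setOutputSandbox(["StdOut", "StdErr"])                  # retrieve the job's standard streams

    dirac = Dirac()
    result = dirac.submitJob(job)                               # returns an S_OK/S_ERROR style dict
    print(result)

    On a site with a working DIRAC installation, this would queue the placeholder payload on whichever federated resource matches the job requirements.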