
    Technical Report: A Trace-Based Performance Study of Autoscaling Workloads of Workflows in Datacenters

    To improve customer experience, datacenter operators offer support for simplifying application and resource management. For example, running workloads of workflows on behalf of customers is desirable, but requires increasingly sophisticated autoscaling policies, that is, policies that dynamically provision resources for the customer. Although selecting and tuning autoscaling policies is a challenging task for datacenter operators, relatively few studies so far investigate the performance of autoscaling for workloads of workflows. Complementing previous knowledge, in this work we propose the first comprehensive performance study in the field. Using trace-based simulation, we compare state-of-the-art autoscaling policies across multiple application domains, workload arrival patterns (e.g., burstiness), and system utilization levels. We further investigate the interplay between autoscaling and regular allocation policies, and the complexity cost of autoscaling. Our quantitative study focuses not only on traditional performance metrics and on state-of-the-art elasticity metrics, but also on time- and memory-related autoscaling-complexity metrics. Our main results give strong and quantitative evidence about previously unreported operational behavior, for example, that autoscaling policies perform differently across application domains, and by how much they differ. Comment: Technical Report for the CCGrid 2018 submission "A Trace-Based Performance Study of Autoscaling Workloads of Workflows in Datacenters".
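    As background for readers, the decision such a policy makes at each evaluation interval can be captured in a few lines. The following Java sketch of a simple threshold-based reactive autoscaler is our own illustration, with assumed class and parameter names; it is not one of the state-of-the-art policies the report evaluates.

    // Minimal threshold-based reactive autoscaler: scale out when observed
    // utilization is high, scale in when it is low, hold steady otherwise.
    public final class ThresholdAutoscaler {
        private final double scaleUpThreshold;   // e.g., 0.80
        private final double scaleDownThreshold; // e.g., 0.30
        private final int minServers;
        private final int maxServers;

        public ThresholdAutoscaler(double up, double down, int min, int max) {
            this.scaleUpThreshold = up;
            this.scaleDownThreshold = down;
            this.minServers = min;
            this.maxServers = max;
        }

        /** Returns the number of servers to provision for the next interval. */
        public int decide(int currentServers, double observedUtilization) {
            if (observedUtilization > scaleUpThreshold) {
                return Math.min(maxServers, currentServers + 1); // provision one more
            }
            if (observedUtilization < scaleDownThreshold) {
                return Math.max(minServers, currentServers - 1); // release one
            }
            return currentServers; // within the target band: do nothing
        }
    }

    Real policies differ mainly in how they estimate demand, for example from workflow structure or arrival history, rather than in this basic control loop.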

    Parallel programming paradigms and frameworks in big data era

    With Cloud Computing emerging as a promising new approach for ad-hoc parallel data processing, major companies have started to integrate frameworks for parallel data processing into their product portfolios, making it easy for customers to access these services and to deploy their programs. We have entered the Era of Big Data. The explosion and profusion of available data in a wide range of application domains raise new challenges and opportunities in a plethora of disciplines, ranging from science and engineering to biology and business. One major challenge is how to take advantage of the unprecedented scale of data, typically of heterogeneous nature, in order to acquire further insights and knowledge for improving the quality of the offered services. To exploit this new resource, we need to scale up and scale out both our infrastructures and standard techniques. Our society is already data-rich, but the question remains whether or not we have the conceptual tools to handle it. In this paper we discuss and analyze opportunities and challenges for efficient parallel data processing. Big Data is the next frontier for innovation, competition, and productivity, and many solutions continue to appear, partly supported by the considerable enthusiasm around the MapReduce paradigm for large-scale data analysis. We review various parallel and distributed programming paradigms, analyzing how they fit into the Big Data era, and present modern emerging paradigms and frameworks. To better support practitioners interested in this domain, we end with an analysis of ongoing research challenges towards a truly fourth-generation data-intensive science.
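    Since the MapReduce paradigm anchors much of the discussion, a minimal single-machine sketch may help; the Java 8 streams below mirror the map, shuffle and reduce phases of the canonical word-count example. This is our own illustration, not code from the paper or from any framework it reviews.

    import java.util.Arrays;
    import java.util.Map;
    import java.util.stream.Collectors;

    // Word count in the MapReduce shape: flatMap plays the map phase,
    // groupingBy the shuffle, counting() the reduce.
    public final class WordCount {
        public static Map<String, Long> count(String... documents) {
            return Arrays.stream(documents)
                    .flatMap(doc -> Arrays.stream(doc.toLowerCase().split("\\W+"))) // map: emit words
                    .filter(word -> !word.isEmpty())
                    .collect(Collectors.groupingBy(w -> w, Collectors.counting())); // shuffle + reduce
        }

        public static void main(String[] args) {
            // Prints, e.g., {big=3, data=2, clouds=1} (map order not guaranteed).
            System.out.println(count("big data", "big clouds, big data"));
        }
    }

    A distributed framework runs the same two phases across many machines, which is precisely what makes the paradigm attractive at Big Data scale.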

    Virtual Networks Comparison Solutions for Community Clouds

    Cloud computing has huge importance and a big impact on today's IT world. The idea of community clouds has emerged recently in order to satisfy several user expectations. Clouds are distributed technology platforms that leverage sophisticated technology innovations to provide highly scalable and resilient environments that can be remotely utilized by organizations in a multitude of powerful ways. To successfully build upon, integrate with, or even create a cloud environment requires an understanding of its common inner mechanics, architectural layers, and models, as well as an understanding of the business and economic factors that result from the adoption and real-world use of cloud-based services. Albanian Cloud Community is an Albanian project that aims to provide a design and implementation of a self-configured, fully distributed, decentralized, scalable and robust cloud for a community of users across a community network. One of the aspects to analyze in this design is which kind of Virtual Private Network (VPN) is going to be used to interconnect the nodes of the community members interested in accessing cloud services. In this thesis we study, compare and analyze the possibility of using Tinc, IPOP or SDN-based solutions such as OpenFlow to establish such a VPN.
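    Whichever overlay is chosen, the comparison ultimately rests on measurements taken across it. A tiny Java probe of the kind one might use is sketched below; the host address and port are placeholders, and a real study would also measure throughput and CPU overhead for each of Tinc, IPOP and the OpenFlow-based setup.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    // Measures TCP connect latency to a peer reached through the VPN overlay.
    public final class OverlayProbe {
        public static long connectMillis(String host, int port) throws IOException {
            long start = System.nanoTime();
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), 5_000); // 5 s timeout
            }
            return (System.nanoTime() - start) / 1_000_000;
        }

        public static void main(String[] args) throws IOException {
            // "10.0.0.2" stands in for a community node's overlay address.
            System.out.println(connectMillis("10.0.0.2", 22) + " ms");
        }
    }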

    ACTiCLOUD: Enabling the Next Generation of Cloud Applications

    Despite their proliferation as a dominant computing paradigm, cloud computing systems lack effective mechanisms to manage their vast amounts of resources efficiently. Resources are stranded and fragmented, ultimately limiting cloud systems' applicability to large classes of critical applications that pose substantial resource demands. Eliminating the current technological barriers to actual fluidity and scalability of cloud resources is essential to strengthen cloud computing's role as a critical cornerstone for the digital economy. ACTiCLOUD proposes a novel cloud architecture that breaks the existing scale-up and share-nothing barriers and enables the holistic management of physical resources both at the local cloud site and at distributed levels. Specifically, it advances the cloud resource management stack by extending state-of-the-art hypervisor technology beyond the physical server boundary and by extending localized cloud management systems to provide holistic resource management within a rack, within a site, and across distributed cloud sites. On top of this, ACTiCLOUD will adapt and optimize system libraries and runtimes (e.g., the JVM) as well as ACTiCLOUD-native applications, extremely demanding and critical classes of applications that currently face severe difficulties in matching their resource requirements to state-of-the-art cloud offerings.

    Beyond The Cloud, How Should Next Generation Utility Computing Infrastructures Be Designed?

    To accommodate the ever-increasing demand for Utility Computing (UC) resources, while taking into account both energy and economic issues, the current trend consists in building larger and larger data centers in a few strategic locations. Although such an approach makes it possible to cope with the actual demand while continuing to operate UC resources through centralized software systems, it is far from delivering sustainable and efficient UC infrastructures. We claim that a disruptive change in UC infrastructures is required: UC resources should be managed differently, considering locality as a primary concern. We propose to leverage any facilities available through the Internet in order to deliver widely distributed UC platforms that can better match the geographical dispersal of users as well as the unending demand. Critical to the emergence of such locality-based UC (LUC) platforms is the availability of appropriate operating mechanisms. In this paper, we advocate the implementation of a unified system driving the use of resources at an unprecedented scale by turning a complex and diverse infrastructure into a collection of abstracted computing facilities that is both easy to operate and reliable. By deploying and using such a LUC Operating System on backbones, our ultimate vision is to make it possible to host and operate a large part of the Internet within its own internal structure: a scalable and nearly infinite set of resources delivered by any computing facilities forming the Internet, from the larger hubs operated by ISPs, governments and academic institutions down to any idle resources provided by end-users. Unlike previous research on distributed operating systems, we propose to consider virtual machines (VMs), instead of processes, as the basic element. System virtualization offers several capabilities that increase the flexibility of resource management, allowing us to investigate novel decentralized schemes.

    An implementation of task processing on 4G-based mobile-edge computing systems

    Mobile Edge Computing (MEC) is a new technology that facilitates low-latency cloud services to mobile devices (MDs) by pushing mobile computing, storage and network control to the network edge (closer to MDs), thereby prolonging the battery lifetime of MDs. One of the main objectives of MEC is to reduce latency and permit delay-sensitive applications in 4G and, in the future, 5G communications. To achieve this, MEC aims to build up a computing platform by deploying edge servers (ESs) at the network edge. There is, therefore, a push to test MEC performance on existing cellular systems. With SINET, the mobile platform recently made available to academia, NII can now connect MDs to ESs through 4G. This project focuses on the implementation of a physical 4G-based MEC system for task offloading, in which, with the goal of achieving face detection, the MD partially offloads tasks to the ES as dictated by the offloading algorithms. Accordingly, the objectives of this thesis are to demonstrate the efficiency of LTE-based MEC systems in the real world, focusing on performance in terms of latency and battery consumption.
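    The core of any offloading policy is the per-task decision between local execution and shipping work to the ES. The Java sketch below captures that decision in its simplest latency-only form; the estimates and numbers are illustrative assumptions, not the algorithms or measurements of the thesis, which also weighs battery consumption.

    // Offload a task only if the estimated round trip through the edge
    // server beats local execution on the mobile device.
    public final class OffloadDecision {
        public static boolean shouldOffload(double localMillis, double uplinkMillis,
                                            double remoteMillis, double downlinkMillis) {
            double offloadMillis = uplinkMillis + remoteMillis + downlinkMillis;
            return offloadMillis < localMillis;
        }

        public static void main(String[] args) {
            // A face-detection frame: slow on the MD, fast on the ES over LTE.
            System.out.println(shouldOffload(180.0, 25.0, 40.0, 5.0)); // true
        }
    }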

    A design pattern for optimizations in data intensive applications using ABS and JAVA 8

    Cloud environments have become a standard method for enterprises to offer their applications by means of web services, data management systems, or simply renting out computing resources. In our previous work, we presented how we can use a modeling language together with the new features of JAVA 8 to overcome certain drawbacks of data structures and synchronization mechanisms in parallel applications. We extend this solution into a design pattern that allows application-specific optimizations in a distributed setting. We validate this integration using our previous case study of the Prime Sieve of Eratosthenes and illustrate the performance improvements in terms of speed-up and memory consumption.
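    For concreteness, the case study's algorithm itself is compact. The Java rendering below is our own sketch of the classic sieve; it does not reproduce the ABS model or the distribution-specific optimizations the design pattern adds.

    import java.util.stream.IntStream;

    // Sieve of Eratosthenes: mark every composite up to the limit; the
    // unmarked indices that remain are the primes.
    public final class PrimeSieve {
        public static boolean[] sieve(int limit) {
            boolean[] composite = new boolean[limit + 1];
            for (int p = 2; (long) p * p <= limit; p++) {
                if (!composite[p]) {
                    for (int multiple = p * p; multiple <= limit; multiple += p) {
                        composite[multiple] = true; // cross out multiples of p
                    }
                }
            }
            return composite;
        }

        public static void main(String[] args) {
            boolean[] composite = sieve(50);
            IntStream.rangeClosed(2, 50)
                    .filter(n -> !composite[n])
                    .forEach(n -> System.out.print(n + " ")); // 2 3 5 7 ... 47
        }
    }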

    A survey of multi-access edge computing in 5G and beyond : fundamentals, technology integration, and state-of-the-art

    Driven by the emergence of new compute-intensive applications and the vision of the Internet of Things (IoT), it is foreseen that the emerging 5G network will face an unprecedented increase in traffic volume and computation demands. However, end users mostly have limited storage capacities and finite processing capabilities, so how to run compute-intensive applications on resource-constrained devices has recently become a natural concern. Mobile edge computing (MEC), a key technology in the emerging fifth-generation (5G) network, can optimize mobile resources by hosting compute-intensive applications, process large data before sending it to the cloud, provide cloud-computing capabilities within the radio access network (RAN) in close proximity to mobile users, and offer context-aware services with the help of RAN information. Therefore, MEC enables a wide variety of applications where real-time response is strictly required, e.g., driverless vehicles, augmented reality, robotics, and immersive media. Indeed, the paradigm shift from 4G to 5G could become a reality with the advent of new technological concepts. The successful realization of MEC in the 5G network is still in its infancy and demands constant effort from both the academic and industry communities. In this survey, we first provide a holistic overview of MEC technology and its potential use cases and applications. Then, we outline up-to-date research on the integration of MEC with the new technologies that will be deployed in 5G and beyond. We also summarize testbeds, experimental evaluations, and open-source activities for edge computing. We further summarize lessons learned from state-of-the-art research works and discuss challenges and potential future directions for MEC research.

    Software development by abstract behavioural specification

    The development process of any software has become extremely important, not just in the IT industry, but in almost every business and domain of research. The effort to make this process quick, efficient, reliable and automated has steadily evolved into a flow that delivers software incrementally, based on both the developer's best skills and the end user's feedback. Software modeling and modeling languages have the purpose of facilitating product development by designing correct and reliable applications. The concurrency model of the Abstract Behavioural Specification (ABS) Language, with features for asynchronous programming and cooperative scheduling, is an important example of how modeling contributes to the reliability and robustness of a product. By abstracting from implementation details, program complexity and the inner workings of libraries, software modeling, and specifically ABS, allows for an easier use of formal analysis techniques and proofs to support product design. However, a gap still exists between modeling languages and programming languages, with the process of software development often proceeding along two separate paths for modeling and implementation. This potentially introduces errors and doubles the development effort. The overall objective of this research is to bridge the gap between modeling and programming in order to provide a smooth integration between formal methods and two of the best-known and most widely used languages for software development, Java and Scala. The research focuses mainly on sequential and highly parallelizable applications, but part of it also involves theoretical proposals for distributed systems. It is a first step towards having a programming language with support for formal models.
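    The ABS feature the abstract highlights, asynchronous method calls that return futures, has a close Java 8 analogue in CompletableFuture; the correspondence sketched below is our own illustration, not the thesis's ABS-to-Java translation.

    import java.util.concurrent.CompletableFuture;

    // ABS-style asynchronous call rendered in Java 8. In ABS one would write:
    //   Fut<Int> f = o!compute(7);  await f?;  Int r = f.get;
    public final class AsyncCall {
        static int compute(int n) {
            return n * n; // stand-in for real work
        }

        public static void main(String[] args) {
            CompletableFuture<Integer> f =
                    CompletableFuture.supplyAsync(() -> compute(7)); // o!compute(7)
            // The caller keeps running; joining later mirrors ABS's await/get.
            int r = f.join();
            System.out.println(r); // 49
        }
    }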