604 research outputs found

    Infrastructure management in multicloud environments

    With the increasing number of cloud service providers and data centres around the world, cloud service users are becoming increasingly concerned about where their data is stored and who has access to it. The legal reach of a customer's country does not extend beyond its borders without special agreements that can take a long time to obtain. Because it is safer for a cloud service customer to use a cloud service provider that is domestically legally accountable, customers are moving to such providers. For the case company this causes both a technical and a managerial problem. The technical problem is how to manage cloud environments when the business expands to multiple countries whose customers require that data is stored within their country. Different cloud service providers can also be heterogeneous in their infrastructure-management features, which makes managing and developing the infrastructure even more difficult. For example, the application programming interfaces (APIs) that make automation easier can vary between providers. From a management point of view, different time zones also make it harder to respond quickly to issues in the IT infrastructure when the case company's employees all work in the same time zone. The objective of this thesis is to address the issue by investigating which tools and functionalities are commonly used for automating IT infrastructure, are supported by cloud service providers, and are compatible with the specific requirements of the organization in question. The research will help the case organization replace and add tools to help maintain the IT infrastructure. This thesis will not investigate the managerial problem of case company employees working in the same time zone.
The thesis will also not research security, version control, desktop and laptop management, or log collection tools, nor produce a code-based solution for setting up an IT environment, since further research needs to be done after the tools presented in this thesis have been decided upon. The research also does not investigate every cloud service provider in every country, as case company business strategies can change and the scope of the thesis would grow too much. A qualitative research method is used for this thesis, and the data gathered comes from literature and articles from various sources. Both the literature and article reviews provided the theoretical aspects of this research. Data was also gathered by looking at a few countries that have cloud service provider companies and comparing the findings regarding infrastructure management and automation. The research is divided into five parts. The first part introduces the background, research objective and structure of the research, while the second part explains the theoretical background. The third part explains the research methodology, what material was used and how it was gathered, and describes the results; the fourth part analyses the results, while the fifth and final part concludes the research.
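The heterogeneity problem the abstract describes — each provider exposing a different API for the same operation — is commonly tackled with an adapter layer. The following is a minimal sketch of that idea; the class and provider names are hypothetical, not tools examined in the thesis, and a real adapter would call each vendor's SDK instead of returning a string.

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """Provider-agnostic interface; one adapter per vendor hides API differences."""

    @abstractmethod
    def create_vm(self, name: str, region: str) -> str: ...

class ProviderA(CloudProvider):
    def create_vm(self, name: str, region: str) -> str:
        # A real adapter would call this vendor's SDK here.
        return f"providerA:{region}:{name}"

class ProviderB(CloudProvider):
    def create_vm(self, name: str, region: str) -> str:
        return f"providerB:{region}:{name}"

def provision(provider: CloudProvider, name: str, region: str) -> str:
    """Automation code depends only on the shared interface, not a vendor API."""
    return provider.create_vm(name, region)

print(provision(ProviderA(), "web-1", "fi-hel"))  # providerA:fi-hel:web-1
```

With such a layer, expanding to a new country's provider means writing one new adapter rather than rewriting the automation that uses it.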

    RADON: Rational decomposition and orchestration for serverless computing

    Emerging serverless computing technologies, such as function as a service (FaaS), enable developers to virtualize the internal logic of an application, simplifying the management of cloud-native services and allowing cost savings through billing and scaling at the level of individual functions. Serverless computing is therefore rapidly shifting the attention of software vendors to the challenge of developing cloud applications deployable on FaaS platforms. In this vision paper, we present the research agenda of the RADON project (http://radon-h2020.eu), which aims to develop a model-driven DevOps framework for creating and managing applications based on serverless computing. RADON applications will consist of fine-grained and independent microservices that can efficiently and optimally exploit FaaS and container technologies. Our methodology strives to tackle complexity in designing such applications, including the solution of optimal decomposition, the reuse of serverless functions as well as the abstraction and actuation of event processing chains, while avoiding cloud vendor lock-in through models.
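The "billing at the level of individual functions" mentioned above is typically metered in memory-time units (GB-seconds). A minimal sketch of a stateless FaaS-style handler and that billing arithmetic follows; the handler shape and the unit rate are illustrative assumptions, not RADON's model or any specific provider's price.

```python
def handler(event: dict) -> dict:
    """Minimal FaaS-style handler: stateless, event in, result out."""
    return {"greeting": f"Hello, {event.get('name', 'world')}"}

def invocation_cost(memory_mb: int, duration_ms: int,
                    rate_per_gb_s: float = 0.0000166667) -> float:
    """Per-invocation billing: memory (GB) x duration (s) x unit rate.
    The default rate is illustrative only."""
    return (memory_mb / 1024) * (duration_ms / 1000) * rate_per_gb_s

print(handler({"name": "RADON"}))
print(invocation_cost(128, 200))
```

Because each invocation is billed independently, decomposing an application into fine-grained functions lets idle components cost nothing, which is the cost-saving lever the abstract refers to.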

    Continuous QoS-compliant Orchestration in the Cloud-Edge Continuum

    The problem of managing multi-service applications on top of Cloud-Edge networks in a QoS-aware manner has been thoroughly studied in recent years from a decision-making perspective. However, only a few studies addressed the problem of actively enforcing such decisions while orchestrating multi-service applications and considering infrastructure and application variations. In this article, we propose a next-gen orchestrator prototype based on Docker to achieve the continuous and QoS-compliant management of multi-service applications on top of geographically distributed Cloud-Edge resources, in continuity with CI/CD pipelines and infrastructure monitoring tools. Finally, we assess our proposal over a geographically distributed testbed across Italy. Comment: 25 pages, 8 figures.
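The "continuous and QoS-compliant management" described above boils down to a control loop: observe each service's metrics, compare them against a QoS target, and flag services for redeployment or migration when the target is violated. A stdlib-only sketch of that loop follows; the SLO value, service names and latency figures are invented for illustration and do not come from the article's prototype.

```python
import statistics

LATENCY_SLO_MS = 100.0  # assumed QoS target, purely illustrative

def violates_slo(latency_samples_ms: list, slo_ms: float = LATENCY_SLO_MS) -> bool:
    """Flag a violation when the median observed latency exceeds the SLO."""
    return statistics.median(latency_samples_ms) > slo_ms

def reconcile(services: dict) -> list:
    """Return the services the orchestrator should migrate or redeploy."""
    return [name for name, samples in services.items() if violates_slo(samples)]

observed = {"frontend": [40, 55, 60], "video-transcoder": [150, 210, 180]}
print(reconcile(observed))  # ['video-transcoder']
```

A real orchestrator would feed `observed` from its monitoring stack and act on the returned list through the container runtime, but the decision step has exactly this shape.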

    Orchestration of music emotion recognition services - automating deployment, scaling and management

    Every day, thousands of new songs are created and distributed over the internet. These ever-increasing databases introduced the need for automatic search and organization methods that allow users to better filter and browse such collections. However, fundamental research in the music emotion recognition (MER) field is very academic, with the typical work presenting results in the form of classification metrics (how well the approach worked on the tested datasets) and providing access to the data and methods. To overcome this problem, we built and deployed a platform to orchestrate a distributed, resilient and scalable MER application using Kubernetes that can be easily expanded in the future. The solution developed is based on a proof of concept that explored the usage of containers and microservices in MER but had some gaps. We reengineered and expanded it, proposing a properly orchestrated, container-based solution and adopting a DevOps development culture with continuous integration (CI) and continuous delivery (CD) that, in an automated way, makes it easy for the different teams to focus on developing new blocks separately. At the application level, instead of analysing the audio signal using only three audio features, the system now combines a large number of audio and lyric (text) features, explores different parts of the audio (vocals, accompaniment) in segments (e.g., 30-second segments instead of the full song) and uses properly trained machine learning (ML) classifiers, a contribution by Tiago António. At the orchestration level, it uses Kubernetes with Calico as the networking plugin, providing networking for the containers and pods, and Rook with Ceph for persistent block and file storage. To allow external traffic into the cluster, the solution uses HAProxy as an external ingress controller on an external node, with BIRD providing BGP peering with Calico and allowing communication between the pods and the external node.
ArgoCD was selected as the continuous delivery tool, constantly syncing with a git repository and thus keeping the cluster manifests up to date, which allows totally abstracting developers from the infrastructure. A monitoring stack combining Prometheus, Alertmanager and Grafana allows the constant monitoring of running applications and cluster status, collecting metrics that help to understand the state of operations. The administration of the cluster can be carried out in a simplified way using Portainer. The continuous integration pipelines run on GitHub Actions, integrating software and security tests, automatically building new versions of the containers based on tag releases and publishing them on DockerHub. This implementation is fully cloud native and backed only by open source software.
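The GitOps pattern the abstract describes — a tool like ArgoCD continuously syncing cluster state with manifests in git — reduces to computing the difference between desired and actual state and applying it. A minimal, tool-agnostic sketch of that reconciliation step follows; the workload names and versions are invented for illustration, and this is not ArgoCD's actual algorithm or API.

```python
def diff_state(desired: dict, actual: dict) -> dict:
    """Compare desired manifests (from git) with the running cluster state.
    Keys are workload names, values are deployed versions."""
    return {
        "create": sorted(set(desired) - set(actual)),
        "delete": sorted(set(actual) - set(desired)),
        "update": sorted(k for k in desired.keys() & actual.keys()
                         if desired[k] != actual[k]),
    }

desired = {"mer-api": "v2", "worker": "v1"}   # what git says should run
actual = {"mer-api": "v1", "legacy-job": "v1"}  # what the cluster reports
print(diff_state(desired, actual))
# {'create': ['worker'], 'delete': ['legacy-job'], 'update': ['mer-api']}
```

Running this diff in a loop and applying the result is what keeps the cluster converged on the repository, which is why developers can be "totally abstracted" from the infrastructure: they only ever edit git.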

    Mapping Cloud-Edge-IoT opportunities and challenges in Europe

    While current data processing predominantly occurs in centralized facilities, with a minor portion handled by smart objects, a shift is anticipated, with a surge in data originating from smart devices. This evolution necessitates reconfiguring the infrastructure, emphasising computing capabilities at the cloud's "edge", closer to data sources. This change symbolises the merging of cloud, edge, and IoT technologies into a unified network infrastructure - a Computing Continuum - poised to redefine tech interactions, offering novel prospects across diverse sectors. The computing continuum is emerging as a cornerstone of tech advancement in the contemporary digital era. This paper provides an in-depth exploration of the computing continuum, highlighting its potential, practical implications, and the adjustments required to tackle existing challenges. It emphasises the continuum's real-world applications, market trends, and its significance in shaping Europe's tech future.

    CloudOps: Towards the Operationalization of the Cloud Continuum: Concepts, Challenges and a Reference Framework

    The current trend of developing highly distributed, context-aware, compute-intensive and data-sensitive applications is changing the boundaries of cloud computing. Encouraged by the growing IoT paradigm and with flexible edge devices available, an ecosystem combining resources ranging from high-density compute and storage to very lightweight embedded computers running on batteries or solar power is available to DevOps teams from what is known as the Cloud Continuum. In this dynamic context, manageability is key, as are controlled operations and resource monitoring for handling anomalies. Unfortunately, the operation and management of such heterogeneous computing environments (including edge, cloud and network services) is complex, and operators face challenges such as the continuous optimization and autonomous (re-)deployment of context-aware stateless and stateful applications, where they must ensure service continuity while anticipating potential failures in the underlying infrastructure. In this paper, we propose a novel CloudOps workflow (extending the traditional DevOps pipeline), proposing techniques and methods for application operators to fully embrace the possibilities of the Cloud Continuum. Our approach will support DevOps teams in the operationalization of the Cloud Continuum. Secondly, we provide an extensive explanation of the scope, possibilities and future of CloudOps. This research was funded by the European project PIACERE (Horizon 2020 Research and Innovation Programme, under grant agreement No. 101000162).

    Algorithms for advance bandwidth reservation in media production networks

    Media production generally requires many geographically distributed actors (e.g., production houses, broadcasters, advertisers) to exchange huge amounts of raw video and audio data. Traditional distribution techniques, such as dedicated point-to-point optical links, are highly inefficient in terms of installation time and cost. To improve efficiency, shared media production networks that connect all involved actors over a large geographical area are currently being deployed. The traffic in such networks is often predictable, as the timing and bandwidth requirements of data transfers are generally known hours or even days in advance. As such, the use of advance bandwidth reservation (AR) can greatly increase resource utilization and cost efficiency. In this paper, an Integer Linear Programming formulation of the bandwidth scheduling problem is presented, which takes into account the specific characteristics of media production networks. Two novel optimization algorithms based on this model are thoroughly evaluated and compared by means of in-depth simulation results.
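The core admission question behind advance bandwidth reservation — does a requested time window and bandwidth fit on a link alongside already-accepted reservations? — can be illustrated with a small stdlib-only check. This is a sketch of the AR concept on a single link, not the paper's ILP formulation or its optimization algorithms; all capacities and reservations are invented.

```python
def fits(existing, capacity, start, end, bandwidth):
    """Check whether a new advance reservation over [start, end) of `bandwidth`
    fits on a link of `capacity`, given already-admitted reservations
    (each a (start, end, bandwidth) tuple). Since the aggregate load is
    piecewise constant and only rises where a reservation starts, it is
    enough to check the new request's start plus every overlapping start."""
    points = {start} | {s for s, e, b in existing if start <= s < end}
    for t in points:
        load = sum(b for s, e, b in existing if s <= t < e)
        if load + bandwidth > capacity:
            return False
    return True

booked = [(0, 4, 6), (2, 6, 3)]  # (start, end, bandwidth), e.g. hours and Gbit/s
print(fits(booked, capacity=10, start=1, end=3, bandwidth=2))  # False (load 9 at t=2)
print(fits(booked, capacity=10, start=4, end=8, bandwidth=7))  # True
```

The ILP in the paper generalizes this to whole networks, choosing routes and start times jointly instead of answering a single yes/no question per link.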