
    Exploring Kubernetes and DevOps in an IoT context

    Containerized solutions and container orchestration technologies have recently been of great interest to organizations as a way of accelerating both software development and delivery processes. However, adopting them is a rather complex shift that may impact an organization and teams that are already established. This is where development cultures such as DevOps emerge to ease that shift, promoting collaboration among teams and automation of development and deployment processes throughout. The purpose of this dissertation is to illustrate the path that led to the use of DevOps and containerization as means to support the development and deployment of a proof-of-concept system, Firefighter Sync, an Internet of Things based solution applied to a firefighting monitoring scenario. Besides implementing Firefighter Sync, the goal was to propose and deploy a development and operations ecosystem based on DevOps practices, achieving a fully automated pipeline for both the development and operations processes. Firefighter Sync enabled the exploration of state-of-the-art solutions such as Kubernetes, to support container-based deployment, and Jenkins, for a fully automated CI/CD pipeline. Firefighter Sync clearly illustrates that addressing the development of a system from a DevOps perspective from the very beginning, although it requires a steep learning curve due to the wide range of concepts and technologies involved, effectively improves the development process and eases future evolution of the solution. A good example is the automation pipeline: while it allows easy integration of new features within a DevOps process, which implies addressing development and operations as a whole, it abstracts specific technological concerns, making them transversal to the traditional stages from development to deployment. Mestrado em Engenharia Informática.
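    The following is a minimal sketch of the kind of container-based deployment step this abstract refers to, using the official Kubernetes Python client to declare a Deployment. The deployment name, image, replica count and port are illustrative assumptions, not details taken from the dissertation.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (use load_incluster_config() inside a cluster).
config.load_kube_config()

labels = {"app": "firefighter-sync-api"}  # hypothetical service name

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="firefighter-sync-api"),
    spec=client.V1DeploymentSpec(
        replicas=2,  # assumed replica count
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="api",
                        image="registry.example.com/firefighter-sync/api:1.0.0",  # placeholder image
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

# Create the Deployment in the default namespace.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

    In a Jenkins-driven CI/CD pipeline, a script along these lines (or an equivalent kubectl apply step) would typically run after the container image has been built and pushed.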

    Data-Driven Methods for Data Center Operations Support

    During the last decade, cloud technologies have been evolving at an impressive pace, such that we are now living in a cloud-native era where developers can leverage an unprecedented landscape of (possibly managed) services for orchestration, compute, storage, load-balancing, monitoring, etc. The possibility of on-demand access to a diverse set of configurable virtualized resources allows for building more elastic, flexible and highly resilient distributed applications. Behind the scenes, cloud providers sustain the heavy burden of maintaining the underlying infrastructures, consisting of large-scale distributed systems, partitioned and replicated across many geographically dispersed data centers to guarantee scalability, robustness to failures, high availability and low latency. The larger the scale, the more cloud providers have to deal with complex interactions among the various components, such that monitoring, diagnosing and troubleshooting issues become daunting tasks. To keep up with these challenges, development and operations practices have undergone significant transformations, especially in terms of the automation that makes releasing new software, and responding to unforeseen issues, faster and more sustainable at scale. The resulting paradigm is nowadays referred to as DevOps. However, while such automation can be very sophisticated, traditional DevOps practices fundamentally rely on reactive mechanisms that typically require careful manual tuning and supervision by human experts. To minimize the risk of outages, and the related costs, it is crucial to provide DevOps teams with suitable tools that can enable a proactive approach to data center operations. This work presents a comprehensive data-driven framework to address the most relevant problems that can be experienced in large-scale distributed cloud infrastructures. These environments are characterized by a very large availability of diverse data, collected at each level of the stack, such as: time-series (e.g., physical host measurements, virtual machine or container metrics, networking component logs, application KPIs); graphs (e.g., network topologies, fault graphs reporting dependencies among hardware and software components, performance-issue propagation networks); and text (e.g., source code, system logs, version control system history, code review feedback). Such data are also typically updated with relatively high frequency, and subject to distribution drifts caused by continuous configuration changes to the underlying infrastructure. In such a highly dynamic scenario, traditional model-driven approaches alone may be inadequate for capturing the complexity of the interactions among system components. DevOps teams would certainly benefit from robust data-driven methods to support their decisions based on historical information. For instance, effective anomaly detection capabilities may also help in conducting more precise and efficient root-cause analysis. Likewise, leveraging accurate forecasting and intelligent control strategies would improve resource management. Given their ability to deal with high-dimensional, complex data, Deep Learning-based methods are the most straightforward option for realizing the aforementioned support tools. On the other hand, because of their complexity, such models often require substantial processing power, and suitable hardware, to be operated effectively at scale. These aspects must be carefully addressed when applying such methods in the context of data center operations. Automated operations approaches must be dependable and cost-efficient, so as not to degrade the services they are built to improve.
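    As a simple illustration of the kind of data-driven support tool this abstract argues for, the sketch below flags anomalous points in a host-level time series with a rolling z-score. This is deliberately far simpler than the Deep Learning methods the thesis targets, and the metric, window size and threshold are assumptions.

```python
import numpy as np
import pandas as pd

def rolling_zscore_anomalies(series: pd.Series, window: int = 60, threshold: float = 3.0) -> pd.Series:
    """Flag points whose deviation from a rolling baseline exceeds `threshold` standard deviations."""
    baseline = series.rolling(window, min_periods=window).mean()
    spread = series.rolling(window, min_periods=window).std()
    zscores = (series - baseline) / spread
    return zscores.abs() > threshold

# Example: one day of per-minute CPU utilisation for a host, with an injected saturation event.
rng = np.random.default_rng(0)
cpu = pd.Series(rng.normal(40, 3, 1440))  # synthetic measurements
cpu.iloc[900:910] += 35                   # simulated anomaly
print(cpu[rolling_zscore_anomalies(cpu)].index.tolist())
```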

    Towards a low-cost universal access cloud framework to assess STEM students

    Government-imposed lockdowns have challenged academic institutions to transition from traditional face-to-face education into hybrid or fully remote learning models. This transition has focused on the technological challenge of guaranteeing the continuity of sound pedagogy and granting safe access to online digital university services. However, a key requisite involves adapting the evaluation process as well. In response to this need, the authors of this paper tailored and implemented a cloud deployment to provide universal access to online summative assessment of university students in a computer programming course, mirroring a traditional in-person monitored computer laboratory under strictly controlled exam conditions. This deployment proved easy to integrate with the university systems and with many commercial proctoring tools. It is not only a solution for extraordinary situations; it can also be adapted for everyday use in online collaborative coding assignments, practical lab sessions, formative assessments, and masterclasses where students connect using their own equipment. Connecting from home facilitates access to education for students with physical disabilities. It also allows students to take part in evaluation processes with their own adapted equipment, simplifying assessment for those with hearing or visual impairments. In addition to these benefits and the evident compliance with safety rules, this solution has proven cheaper and more flexible than equivalent on-premise installations. Comment: 14 pages, 8 figures, 1 appendix and 1 code repository.

    Artificial intelligence driven anomaly detection for big data systems

    The main goal of this thesis is to contribute to the research on automated performance anomaly detection and interference prediction by implementing Artificial Intelligence (AI) solutions for complex distributed systems, especially for Big Data platforms within cloud computing environments. Late detection and manual resolution of performance anomalies and system interference in Big Data systems may lead to performance violations and financial penalties. Motivated by this issue, we propose AI-based methodologies for anomaly detection and interference prediction tailored to Big Data and containerized batch platforms to better analyze system performance and effectively utilize computing resources within cloud environments. New, precise and efficient performance management methods are therefore key to handling performance anomalies and interference impacts and to improving the efficiency of data center resources. The first part of this thesis contributes to performance anomaly detection for in-memory Big Data platforms. We examine the performance of Big Data platforms and justify our choice of the in-memory Apache Spark platform. An artificial neural network-driven methodology is proposed to detect and classify performance anomalies for batch workloads based on RDD characteristics and operating system monitoring metrics. Our method is evaluated against other popular machine learning (ML) algorithms, as well as on four different monitoring datasets. The results show that our proposed method outperforms the other ML methods, typically achieving 98–99% F-scores. Moreover, we show that a random start instant, a random duration, and overlapping anomalies do not significantly impact the performance of our proposed methodology. The second contribution addresses the challenge of anomaly identification within an in-memory streaming Big Data platform by investigating agile hybrid learning techniques. We develop TRACK (neural neTwoRk Anomaly deteCtion in sparK) and TRACK-Plus, two methods to efficiently train a class of machine learning models for performance anomaly detection using a fixed number of experiments. Our approach uses artificial neural networks with Bayesian Optimization (BO) to find the optimal training dataset size and configuration parameters so that the anomaly detection model can be trained efficiently and achieve high accuracy. The objective is to accelerate the search for the training dataset size, optimize the neural network configuration, and improve the performance of anomaly classification. A validation based on several datasets from a real Apache Spark Streaming system is performed, demonstrating that the proposed methodology can efficiently identify performance anomalies, near-optimal configuration parameters, and a near-optimal training dataset size while reducing the number of experiments by up to 75% compared with naïve anomaly detection training. The last contribution addresses the challenges of predicting the completion time of containerized batch jobs and proactively avoiding performance interference by introducing an automated prediction solution that estimates interference among colocated batch jobs within the same computing environment. An AI-driven model is implemented to predict the interference among batch jobs before it occurs within the system. Our interference detection model can estimate, and help alleviate, the task slowdown caused by interference. This model assists system operators in making accurate decisions to optimize job placement. Our model is agnostic to the business logic internal to each job. Instead, it is learned from system performance data by applying artificial neural networks to predict the completion time of batch jobs within cloud environments. We compare our model with three baseline models (a queueing-theoretic model, operational analysis, and an empirical method) on historical measurements of job completion time and CPU run-queue size (i.e., the number of active threads in the system). The proposed model captures multithreading, operating system scheduling, sleeping time, and job priorities. A validation based on 4,500 experiments with the DaCapo benchmarking suite was carried out, confirming the predictive efficiency and capabilities of the proposed model, which achieves a MAPE of up to 10% compared with the other models. Open Access.
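    A minimal stand-in for the kind of neural-network anomaly classifier this abstract describes is sketched below with scikit-learn; the features and labels are synthetic assumptions standing in for operating system monitoring metrics, and the actual TRACK/TRACK-Plus methods additionally use Bayesian Optimization and Spark-specific features.

```python
import numpy as np
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for per-executor monitoring metrics
# (e.g. CPU, memory, disk I/O, network) labelled normal (0) or anomalous (1).
rng = np.random.default_rng(42)
normal = rng.normal(0.0, 1.0, size=(4000, 8))
anomalous = rng.normal(2.5, 1.5, size=(400, 8))
X = np.vstack([normal, anomalous])
y = np.concatenate([np.zeros(len(normal)), np.ones(len(anomalous))])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# Small feed-forward network; layer sizes are illustrative, not the thesis configuration.
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
)
clf.fit(X_train, y_train)
print("F1 score:", f1_score(y_test, clf.predict(X_test)))
```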

    A network application approach towards 5G and beyond critical communications use cases.

    The low latency and high bandwidth heralded by 5G networks will allow transmission of large amounts of mission-critical data over a short time period. 5G hence unlocks several capabilities for novel Public Protection and Disaster Relief (PPDR) applications, developed to support first responders in making faster and more accurate decisions during times of crisis. As various research initiatives are giving shape to the Network Application ecosystem as an interaction layer between vertical applications and the network control plane, in this article we explore how this concept can unlock finer network service management capabilities that can be leveraged by PPDR solution developers. In particular, we elaborate on the role of Network Applications as a means for developers to assure prioritization of specific emergency data flows, such as high-definition video transmission from PPDR field users to remote operators. To demonstrate this potential in future PPDR-over-5G services, we delve into the transfer of network-intensive PPDR solutions to the Network Application model. We then explore novelties in Network Application experimentation platforms, aiming to streamline the development and deployment of such integrated systems across existing 5G infrastructures by providing the reliability and multi-cluster environments they require. The authors declare that financial support was received for the research, authorship, and/or publication of this article: this project has received funding from the EU's Horizon 2020 innovation action programme under Grant Agreement No 101016521 (5G-EPICENTRE).

    Deployment of NFV and SFC scenarios

    This item contains the original work, publicly defended on 24 February 2017, as well as an improved version dated 28 February 2017. The changes introduced in the second version are 1) corrections of errors and 2) the procedure in the last annex. Telecommunications services have traditionally been designed by linking hardware devices and providing mechanisms so that they can interoperate. Those devices are usually specific to a single service and are based on proprietary technology. At the same time, the current model works by defining standards and strict protocols to achieve the high levels of quality and reliability that have defined the carrier-class provider environment. Provisioning new services presents challenges at different levels, because inserting the required devices involves changes in the network topology. This leads to slow deployment times and increased operational costs. To overcome these burdens, the processes for installing network functions and inserting them into the existing service topology need to be streamlined to allow greater flexibility. The traditional service provider model has been disrupted by over-the-top Internet content providers (Facebook, Netflix, etc.), with short product cycles and a fast development pace for new services. The irruption of content providers has created competition and stress on service providers' infrastructure and has forced telco companies to research new technologies to recover market share with flexible and revenue-generating services. Network Function Virtualization (NFV) and Service Function Chaining (SFC) are some of the initiatives led by Communication Service Providers to regain the lost leadership. This project focuses on experimenting with some of these already available new technologies, which are expected to be the foundation of new network paradigms (5G, IoT) and to support new value-added services over cost-efficient telecommunication infrastructures. Specifically, SFC scenarios have been deployed with the Open Platform for NFV (OPNFV), a Linux Foundation project. Some use cases of NFV technology applied to teaching laboratories are demonstrated. Although the current implementation does not achieve a production degree of reliability, it provides a suitable environment for the development of new functional improvements and the evaluation of the performance of virtualized network infrastructures.

    Service platform for monitoring the fish value chain

    Traceability in the food value chain is a topic of interest due to the advantages it brings to consumers, producers and regulatory authorities. This thesis describes my contributions to the design and implementation of a microservice-based middleware for the Portuguese fish value chain, considering current practices in the industry and the requirements of the stakeholders involved in the project, with the goal of integrating all the traceability information available from each operator to provide customers with the full story of the products they purchase. During this project I assumed many roles, spanning development, operations and even some security work, which allowed me to improve my skills in all these fields and to experiment with the latest cloud-native technologies, such as containers, and with DevOps practices. Mestrado em Engenharia Informática.

    Big Data and Large-scale Data Analytics: Efficiency of Sustainable Scalability and Security of Centralized Clouds and Edge Deployment Architectures

    One of the significant shifts in next-generation computing technologies will certainly be in the development of Big Data (BD) deployment architectures. Apache Hadoop, the BD landmark, has evolved into a widely deployed BD operating system. Its new features include a federation structure and many associated frameworks, which give Hadoop 3.x the maturity to serve different markets. This dissertation addresses two leading issues involved in exploiting the BD and large-scale data analytics realm using the Hadoop platform: (i) scalability, which directly affects system performance and overall throughput, addressed using portable Docker containers; and (ii) security, which spreads the adoption of data protection practices among practitioners, addressed using access controls. An Enhanced MapReduce Environment (EME), an OPportunistic and Elastic Resource Allocation (OPERA) scheduler, a BD Federation Access Broker (BDFAB), and a Secure Intelligent Transportation System (SITS) with a multi-tier architecture for data streaming to the cloud are the main contributions of this thesis.
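    As a loose illustration of the container-based scaling this abstract associates with Docker, the sketch below uses the Docker SDK for Python to launch a configurable number of worker containers. The image name, container names and environment variable are placeholders, not components of the EME, OPERA or BDFAB systems described above.

```python
import docker

def launch_workers(image: str, count: int, master_host: str) -> list:
    """Start `count` worker containers pointed at a master node and return them."""
    client = docker.from_env()  # talks to the local Docker daemon
    workers = []
    for i in range(count):
        container = client.containers.run(
            image,
            detach=True,
            name=f"bd-worker-{i}",                     # placeholder naming scheme
            environment={"MASTER_HOST": master_host},  # hypothetical configuration knob
        )
        workers.append(container)
    return workers

if __name__ == "__main__":
    # Placeholder image; a real setup would use a purpose-built Hadoop/Spark worker image.
    launch_workers("my-registry/bd-worker:latest", count=3, master_host="resource-manager.local")
```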

    Hard-Real-Time Computing Performance in a Cloud Environment

    The United States Department of Defense (DoD) is rapidly working with the DoD Services to move from multi-year (e.g., 7-10 year) traditional acquisition programs to a commercial industry-based approach for software development. While commercial technologies and approaches provide an opportunity for rapid fielding of mission capabilities to keep pace with threats, the suitability of commercial technologies for meeting hard-real-time requirements within a surface combat system is unclear. This research establishes technical data to validate the effectiveness and suitability of current commercial technologies to meet the hard-real-time demands of a DoD combat management system. Moreland Jr. (2013) conducted similar research; however, microservices, containers, and container orchestration technologies were not on the DoD radar at the time. Updated knowledge in this area will inform future DoD roadmaps and investments. A mission-based approach using Mission Engineering is used to set the context for the applied research. A hypothetical yet operationally relevant Strait Transit scenario has been established to provide context for the definition of experimental parameters to be set while assessing the hypothesis. System models are federated to form a system-of-systems architecture, and data from a cloud computing environment are collected for quantitative analysis.