
    Developer’s Perspective on Containerized Development Environments: A Case Study and Review of Gray Literature

    Context: The advent of Docker containers in 2013 provided developers with a way of bundling code and its dependencies into containers that run identically on any Docker Engine, effectively mitigating platform- and dependency-related issues. In recent years an interesting trend has emerged of developers attempting to leverage the benefits of the Docker container platform in their development environments. Objective: In this thesis we chart the motivations behind the move towards Containerized Development Environments (CDEs) and categorize claims made about the benefits and challenges experienced by developers after their adoption. The goal of this thesis is to establish the current state of the trend and lay the groundwork for future research. Methods: The study is structured into three parts. In the first part we conduct a systematic review of gray literature, using 27 sources acquired from three different websites. Relevant quotes were extracted from the sources and used to create a set of higher-level concepts for the expressed motivations, benefits, and challenges. The second part of the study is a qualitative single-case study in which we conduct semi-structured theme interviews with all members of a small software development team that had recently adopted a containerized development environment. The case team was purposefully selected for its practical relevance as well as convenient access to its members for data collection. In the last part of the study we compare the transcribed interview data against the set of concepts formed in the literature review. Results: Cross-environment consistency and a simplified initial setup, driven by a desire to increase developer happiness and productivity, were commonly expressed motivations that were also experienced in practice. Decreased performance, the required knowledge of Docker, and difficulties in the technical implementation of CDEs were mentioned as the primary challenges. Many developers experienced additional benefits from using the Docker platform for infrastructure provisioning and shared configuration management. The case team additionally used the CDE as a platform for implementing end-to-end testing, and viewed the right type of team and management as necessary preconditions for its successful adoption. Conclusions: CDEs offer many valuable benefits that come at a cost: teams have to weigh the trade-off between consistency and performance, and whether the investment of development resources in the implementation is warranted. The use of the Docker container platform as an infrastructure package manager could be considered a game-changer, enabling development teams to provision new services such as databases, load balancers and message brokers with just a few lines of code. The case study reports one account of an improved onboarding experience and points towards an area for future research. CDEs would appear to be a good fit for microservice-oriented teams that seek to foster a DevOps culture, as indicated by the experience of the case team. The implementation of CDEs is a non-trivial challenge that requires expertise from the teams and developers using them. Additionally, the case team's novel use of containers for testing appears to be an interesting research topic in its own right. ACM Computing Classification System (CCS): Software and its engineering → Software creation and management → Software development techniques
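
    The "few lines of code" claim about infrastructure provisioning can be made concrete with a short, purely illustrative sketch. The snippet below is not taken from the thesis; it assumes the Docker SDK for Python (the docker package) and a running local Docker Engine, and the image name, container name and port mapping are invented examples of how a team might spin up a throwaway database for a development environment.

        # Illustrative sketch: provisioning a throwaway database service for a
        # development environment with the Docker SDK for Python.
        # Assumes `pip install docker` and a running Docker Engine.
        import docker

        client = docker.from_env()  # connect to the local Docker Engine

        db = client.containers.run(
            "postgres:16",                        # example service image
            name="dev-db",
            detach=True,                          # run in the background
            environment={"POSTGRES_PASSWORD": "dev-only-password"},
            ports={"5432/tcp": 5432},             # expose the database to the host
        )

        print(f"database container started: {db.short_id}")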

    Towards Semantic Detection of Smells in Cloud Infrastructure Code

    Automated deployment and management of Cloud applications rely on descriptions of their deployment topologies, often referred to as Infrastructure Code. As the complexity of applications and their deployment models increases, developers inadvertently introduce software smells into such code specifications, for instance violations of good coding practices, of modular structure, and more. This paper presents a knowledge-driven approach enabling developers to identify the aforementioned smells in deployment descriptions. We detect smells with SPARQL-based rules over pattern-based OWL 2 knowledge graphs capturing deployment models. We show the feasibility of our approach with a prototype and three case studies. Comment: 5 pages, 6 figures. The 10th International Conference on Web Intelligence, Mining and Semantics (WIMS 2020).
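
    As a rough illustration of rule-based smell detection over a deployment knowledge graph (not the authors' actual ontology or rule set), the sketch below uses the rdflib package to load a tiny RDF graph describing a deployment node and runs a SPARQL query that flags a hard-coded password. All class and property names (ex:Node, ex:hasProperty, ex:key, ex:value) are invented for the example.

        # Minimal sketch of SPARQL-rule smell detection over a deployment graph.
        # The vocabulary is invented for illustration; it is not the ontology
        # used in the paper. Requires `pip install rdflib`.
        from rdflib import Graph

        TURTLE = """
        @prefix ex: <http://example.org/deploy#> .

        ex:dbServer a ex:Node ;
            ex:hasProperty [ ex:key "admin_password" ; ex:value "admin123" ] .
        """

        # A "hard-coded secret" smell rule: any property whose key looks like a
        # password and whose value is a literal embedded in the model.
        SMELL_RULE = """
        PREFIX ex: <http://example.org/deploy#>
        SELECT ?node ?key ?value WHERE {
            ?node a ex:Node ;
                  ex:hasProperty ?p .
            ?p ex:key ?key ;
               ex:value ?value .
            FILTER(CONTAINS(LCASE(STR(?key)), "password"))
        }
        """

        graph = Graph()
        graph.parse(data=TURTLE, format="turtle")

        for node, key, value in graph.query(SMELL_RULE):
            print(f"smell: hard-coded secret '{key}' on {node} (value: {value})")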

    Lynx: A knowledge-based AI service platform for content processing, enrichment and analysis for the legal domain

    The EU-funded project Lynx focuses on the creation of a knowledge graph for the legal domain (Legal Knowledge Graph, LKG) and its use for the semantic processing, analysis and enrichment of documents from the legal domain. This article describes the use cases covered in the project, the complete platform that was developed, and the semantic analysis services that operate on the documents.

    Deployment and Operation of Complex Software in Heterogeneous Execution Environments

    This open access book provides an overview of the work developed within the SODALITE project, which aims at facilitating the deployment and operation of distributed software on top of heterogeneous infrastructures, including cloud, HPC and edge resources. The experts participating in the project describe how SODALITE works and how it can be exploited by end users. While multiple languages and tools are available in the literature to support DevOps teams in the automation of deployment and operation steps, these activities still require specific know-how and skills that cannot be found in average teams. The SODALITE framework tackles this problem by offering modelling and smart editing features that allow those we call Application Ops Experts to work without knowing low-level details about the adopted, potentially heterogeneous, infrastructures. The framework also offers mechanisms to verify the quality of the defined models, generate the corresponding executable infrastructural code, automatically wrap application components within proper execution containers, orchestrate all activities concerned with the deployment and operation of all system components, and support on-the-fly self-adaptation and refactoring.
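
    The general idea of "verify the model, then generate executable infrastructure code" can be sketched in a few lines. The toy example below is an assumption for illustration only: the dictionary-based model format, field names and generated docker run command are invented and do not reflect SODALITE's actual modelling language or code generators.

        # Toy sketch: validate a minimal deployment model, then emit the command
        # that would wrap the component in an execution container. The model
        # format is invented for illustration purposes.
        from typing import List

        REQUIRED_FIELDS = ("name", "image", "port")

        def validate(model: dict) -> List[str]:
            """Return a list of quality problems found in the deployment model."""
            problems = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in model]
            if "port" in model and not (0 < int(model["port"]) < 65536):
                problems.append("port out of range")
            return problems

        def generate_run_command(model: dict) -> str:
            """Generate the executable 'infrastructural code' (here: docker run)."""
            return (
                f"docker run -d --name {model['name']} "
                f"-p {model['port']}:{model['port']} {model['image']}"
            )

        model = {"name": "image-classifier", "image": "registry.example/classifier:1.2", "port": 8080}
        issues = validate(model)
        print(issues or generate_run_command(model))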

    An experimental publish-subscribe monitoring assessment to Beyond 5G networks

    Collection: Wireless Technologies for the Connectivity of the Future. The fifth generation (5G) of mobile networks is designed to accommodate different types of use cases, each of them with different and stringent requirements and key performance indicators (KPIs). To support the optimization of network performance and the validation of the KPIs, there is a need for a flexible and efficient monitoring system capable of handling multi-site and multi-stakeholder scenarios. Moreover, in the evolution from 5G to 6G, the network is envisioned as a user-driven, distributed Cloud computing system whose resource pool is foreseen to integrate the participating users. In this paper, we present a distributed monitoring architecture for Beyond 5G multi-site platforms, where different stakeholders share the resource pool in a distributed environment. Taking advantage of publish-subscribe mechanisms adapted to the Edge, the developed lightweight monitoring solution can manage the large amounts of real-time traffic generated by the applications located in the resource pool. We assess the performance of the implemented paradigm, revealing some interesting insights about the platform, such as the effect of monitoring-data throughput on performance parameters such as latency and packet loss, or the presence of a saturation effect due to software limitations that impacts the performance of the system under specific conditions. The performance evaluation confirms that the monitoring platform suits the requirements of the proposed scenarios, being capable of handling similar workloads in real 5G and Beyond 5G scenarios; we then discuss how the architecture could be mapped to these real scenarios. This work was partly funded by the European Commission under the European Union's Horizon 2020 programme, Grant Agreement Number 815074 (5G EVE project). This work was also partly funded by the Community of Madrid, under the grant approved in the "Convocatoria de 2017 de Ayudas para la Realización de Doctorados Industriales en la Comunidad de Madrid (Orden 3109/2017, de 29 de agosto)", Grant Agreement Number IND2017/TIC-7732. The paper solely reflects the views of the authors. Neither the European Commission nor the Community of Madrid is responsible for the contents of this paper or any use made thereof.
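
    The publish-subscribe pattern at the core of the monitoring architecture can be sketched as follows. This is a self-contained, in-process illustration of topic-based publish/subscribe for monitoring samples only; the class, topic names and payload fields are invented and do not represent the actual broker or data model used in the 5G EVE platform.

        # In-process illustration of topic-based publish/subscribe for monitoring
        # data: agents publish metric samples to topics, collectors subscribe to
        # the topics they care about. Stands in for a real distributed broker.
        from collections import defaultdict
        from typing import Callable, Dict, List

        class MonitoringBus:
            def __init__(self) -> None:
                self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

            def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
                self._subscribers[topic].append(handler)

            def publish(self, topic: str, sample: dict) -> None:
                for handler in self._subscribers[topic]:
                    handler(sample)

        bus = MonitoringBus()

        # A collector interested in latency measurements from an edge site.
        bus.subscribe("site1/edge/latency", lambda s: print(f"latency sample: {s}"))

        # A monitoring agent on the edge node publishes a sample.
        bus.publish("site1/edge/latency", {"value_ms": 4.2, "vnf": "upf-1"})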

    Monitoring and orchestration of network slices for 5G Networks

    International Mention in the doctoral degree. This work was carried out under a grant awarded by the Community of Madrid in the "Convocatoria de 2017 de Ayudas para la Realización de Doctorados Industriales en la Comunidad de Madrid" (Orden 3109/2017, de 29 de agosto), reference IND2017/TIC-7732. This work was partly funded by the European Commission under the European Union's Horizon 2020 programme, grant agreement number 815074 (5G EVE project). The Ph.D. thesis solely reflects the views of the author. The Commission is not responsible for the contents of this Ph.D. thesis or any use made thereof. Doctoral Programme in Telematic Engineering, Universidad Carlos III de Madrid. Thesis committee. Chair: Antonio de la Oliva Delgado. Secretary: Elisa Rojas Sánchez. Member: David Manuel Gutiérrez Estéve

    Certification of IoT elements using the blockchain

    [Abstract]: Non-fungible tokens have been widely used to prove ownership of art and gaming collectibles and as utility tokens. In this work, such tokens are used to represent ownership of Internet of Things devices from the manufacturing phase onwards on a distributed and decentralized public ledger. Each physical device has an attached token that represents it on the blockchain and records its possession by an owner through a unique identifier. Hence, devices are identified by their public blockchain address, and their token associates them with their owner. In addition, this address allows Internet of Things devices to participate in the network and to establish a shared secret between owner and device. This work proposes the use of physical unclonable functions to establish a link between the physical world and the blockchain by deriving the private key of the blockchain address from the physical unclonable function response. This link is difficult to tamper with and can be traced during the lifetime of the token. Moreover, there is no need for a security module or similar to store the key, since the physical unclonable function response is generated each time the private key is needed, so the key is not stored in non-volatile memory. Once the shared secret has been established, it is used to encrypt the certificates that the owner of the devices deploys on a decentralized storage blockchain such as Filecoin or the InterPlanetary File System. These certificates are used to communicate with other devices using standard protocols such as Transport Layer Security or Datagram Transport Layer Security. An API called Powergate is part of the certification infrastructure for the Internet of Things elements, providing communication with the decentralized storage blockchains.
    [Resumo]: Non-fungible tokens are widely used to prove ownership of art and gaming collectibles and as utility tokens. In this work these tokens are used to represent, on the distributed and decentralized ledger that is the blockchain, the ownership of Internet of Things devices from the very moment of their creation, that is, during the manufacturing process. Each physical device has a token attached that identifies it on the blockchain and makes it possible to represent its possession by an owner through a unique identifier. Therefore, devices are identified by their public address on the blockchain, and their token is what associates them with their owner. In addition, this address allows Internet of Things devices to participate in the network and to establish a shared secret between owner and device. This work proposes using physical unclonable functions to establish a bond between the physical world and the blockchain by deriving the private key of the blockchain address from the physical unclonable function response. This link is difficult to tamper with and can be traced during the lifetime of the token. Moreover, there is no need to use a security module or similar to store the key, since the physical unclonable function response is regenerated whenever the key is needed rather than being kept in non-volatile memory. Once the shared secret is available, it is used to encrypt the certificates that the owner of the devices will deploy on a decentralized storage blockchain such as Filecoin or the InterPlanetary File System. These certificates are used to communicate with other devices using standard protocols such as Datagram Transport Layer Security and Transport Layer Security. An API forms the certification infrastructure for the Internet of Things elements, providing communication with the decentralized storage blockchains. Bachelor's thesis (UDC.FIC). Computer Engineering. Academic year 2021/202
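
    The key-derivation idea can be sketched briefly. The code below is an illustrative assumption, not the thesis implementation: it hashes a simulated PUF response into a secp256k1 private key with the ecdsa package and derives a stable public identifier from the public key. A real system would read the response from device hardware at each boot and use the address format of the target blockchain.

        # Illustrative sketch: derive a blockchain keypair from a PUF response so
        # the private key never needs non-volatile storage. The PUF response is
        # simulated here. Requires `pip install ecdsa`.
        import hashlib
        from ecdsa import SigningKey, SECP256k1

        def key_from_puf(puf_response: bytes) -> SigningKey:
            """Deterministically map a PUF response to a secp256k1 private key."""
            seed = hashlib.sha256(puf_response).digest()   # 32-byte private key
            return SigningKey.from_string(seed, curve=SECP256k1)

        def device_identifier(key: SigningKey) -> str:
            """A stable public identifier derived from the public key
            (illustrative, not any particular blockchain's address format)."""
            pub = key.get_verifying_key().to_string()
            return hashlib.sha256(pub).hexdigest()[:40]

        puf_response = bytes.fromhex("8f02ab40" * 8)   # stands in for raw PUF output
        sk = key_from_puf(puf_response)
        print("device identifier:", device_identifier(sk))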

    Harmonization of strategies for contract testing in microservices UI

    In the microservices world, reliable continuous integration (CI) and continuous deployment (CD) contribute significantly to the delivery speed and the success of the product. One major contributor to the outcome of a CI/CD pipeline is testing. A microservice system consists of a number of small services communicating with each other through defined interfaces. Different services may be managed by different teams and people, and thus the agreed interfaces for communication between services are potentially violated. A reliable way of testing is needed to prevent such situations. Consumer-driven contract testing (CDC), a fast and reliable test method, is introduced to test the interface of the interaction between two services. The case study project lacks this kind of interface testing, which usually leads to mismatches between the expectations of the two services in an interaction. Some API route tests already exist, but they do not help catch the problems mentioned above. The thesis implementation replaces such tests with CDC, which covers API route testing, the communication interface, and more. As a new and still maturing test method, CDC needs to be written in a systematic and robust way to ensure a good outcome. The thesis proposes a set of practices, or a guideline, for the case study project to help bring a systematic way of writing CDC tests. Some workflows for managing CDC are also introduced. In a big project, a common guideline is highly important to avoid divergence in ways of working, which would lead to potential errors and a poor-quality product. It is important to note that there is no panacea, so the guideline is adapted to and suitable for the case project only.
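
    As a concrete illustration of what a consumer-driven contract test can look like (a generic sketch, not code from the case project), the snippet below uses the pact-python library: the consumer declares the interaction it expects, the library runs a mock provider that enforces it, and the resulting contract file can later be verified against the real provider. Service names, the route and the payload are invented examples.

        # Sketch of a consumer-driven contract test with pact-python
        # (pip install pact-python requests).
        import requests
        from pact import Consumer, Provider

        pact = Consumer("web-ui").has_pact_with(Provider("user-service"))
        pact.start_service()                     # local mock of the provider

        expected = {"id": 123, "name": "Alice"}

        (pact
         .given("user 123 exists")               # provider state
         .upon_receiving("a request for user 123")
         .with_request("GET", "/users/123")
         .will_respond_with(200, body=expected))

        with pact:                               # fails if the interaction differs
            response = requests.get(pact.uri + "/users/123")
            assert response.json() == expected

        pact.stop_service()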