
    The Performance of a Second Generation Service Discovery Protocol In Response to Message Loss

    We analyze the behavior of FRODO, a second generation service discovery protocol, in response to message loss in the network. Earlier protocols, such as UPnP and Jini, rely on underlying network layers to enhance their failure recovery. A comparison with UPnP and Jini shows that FRODO maintains consistency more efficiently and with shorter latency. Because it does not rely on lower network layers for robustness, it also functions correctly on a simple, lightweight protocol stack.
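
    The robustness claim above hinges on the protocol recovering from loss itself rather than deferring to a reliable transport underneath. The following Python sketch is illustrative only (FRODO's actual recovery messages are not described in this abstract); it assumes a hypothetical announce_with_ack retry loop to show how application-layer retransmission compensates for an unreliable lower stack.

    import random

    random.seed(7)           # deterministic demo run
    LOSS_RATE = 0.4          # probability that any single message is dropped
    MAX_RETRIES = 4

    def send(msg):
        """Unreliable send: the message may silently disappear in transit."""
        return random.random() >= LOSS_RATE

    def announce_with_ack(service_record):
        """Retransmit the announcement until a request/ACK round trip survives."""
        for attempt in range(1, MAX_RETRIES + 1):
            if send(service_record) and send("ACK"):
                return attempt                      # both directions survived
        return None                                 # caller schedules a later retry

    if __name__ == "__main__":
        result = announce_with_ack({"service": "printer", "addr": "10.0.0.7"})
        if result:
            print("delivered on attempt", result)
        else:
            print("announcement lost despite retries")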

    On consistency maintenance in service discovery

    Communication and node failures degrade the ability of a service discovery protocol to ensure that users receive the correct service information when the service changes. We propose that service discovery protocols employ a set of recovery techniques to recover from failures and regain consistency. We use simulations to show that the type of recovery technique a protocol uses significantly impacts its performance. We benchmark the performance of our own service discovery protocol, FRODO, against that of the first generation service discovery protocols Jini and UPnP under increasing communication and node failures. The results show that FRODO has the best overall consistency maintenance performance.
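
    As a hedged illustration of what a recovery technique can look like (the specific mechanisms of FRODO, Jini and UPnP are not spelled out in this abstract), the Python sketch below shows a simple pull-based recovery loop: after a suspected failure, the client re-queries a registry with exponential back-off until it observes a record version newer than the one it holds. RegistryStub and its fields are hypothetical.

    import random
    import time

    random.seed(3)

    class RegistryStub:
        """Hypothetical stand-in for the node holding the authoritative record."""
        def __init__(self):
            self.version = 1
            self.record = {"addr": "10.0.0.5", "port": 9090}

        def lookup(self):
            if random.random() < 0.3:     # simulate a lost reply
                return None
            return self.version, dict(self.record)

    def poll_until_consistent(registry, known_version, retries=5, base_delay=0.05):
        """Pull-based recovery: keep querying until a newer record is observed."""
        for attempt in range(retries):
            reply = registry.lookup()
            if reply and reply[0] > known_version:
                return reply                         # consistency regained
            time.sleep(base_delay * (2 ** attempt))  # exponential back-off
        return None                                  # still inconsistent: escalate

    if __name__ == "__main__":
        print(poll_until_consistent(RegistryStub(), known_version=0))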

    A Taxonomy of Self-configuring Service Discovery Systems

    We analyze the fundamental concepts and issues in service discovery. This analysis places service discovery in the context of distributed systems by describing it as a third generation naming system. We also describe the essential architectures and functionalities in service discovery. We then show how service discovery fits into a system by characterizing its operational aspects. Subsequently, we describe how the existing state of the art performs service discovery in relation to these operational aspects and functionalities, and we identify areas for improvement.
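
    To make the "naming system" framing concrete, here is a rough sketch (not taken from the paper; all class and method names are illustrative) of the minimal functionalities such a system exposes: registration binds a service name to a description, lookup resolves it, and subscription delivers change notifications.

    from typing import Callable, Dict, List, Optional

    class ServiceDirectory:
        """Toy third-generation naming system: names map to rich descriptions."""

        def __init__(self):
            self._entries: Dict[str, dict] = {}
            self._watchers: Dict[str, List[Callable[[dict], None]]] = {}

        def register(self, name: str, description: dict) -> None:
            """Bind a service name to its description (address, attributes, ...)."""
            self._entries[name] = description
            for notify in self._watchers.get(name, []):
                notify(description)                  # push the change to subscribers

        def lookup(self, name: str) -> Optional[dict]:
            """Resolve a name to the current description, if any is registered."""
            return self._entries.get(name)

        def subscribe(self, name: str, callback: Callable[[dict], None]) -> None:
            """Ask to be notified whenever the named service changes."""
            self._watchers.setdefault(name, []).append(callback)

    if __name__ == "__main__":
        directory = ServiceDirectory()
        directory.subscribe("printer", lambda d: print("update:", d))
        directory.register("printer", {"addr": "10.0.0.7", "duplex": True})
        print(directory.lookup("printer"))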

    Understanding consistency maintenance in service discovery architectures during communication failure

    Single-Board-Computer Clusters for Cloudlet Computing in Internet of Things

    The number of connected sensors and devices is expected to increase to billions in the near future. However, centralised cloud-computing data centres present various challenges to meet the requirements inherent to Internet of Things (IoT) workloads, such as low latency, high throughput and bandwidth constraints. Edge computing is becoming the standard computing paradigm for latency-sensitive real-time IoT workloads, since it addresses the aforementioned limitations related to centralised cloud-computing models. Such a paradigm relies on bringing computation close to the source of data, which presents serious operational challenges for large-scale cloud-computing providers. In this work, we present an architecture composed of low-cost Single-Board-Computer clusters near to data sources, and centralised cloud-computing data centres. The proposed cost-efficient model may be employed as an alternative to fog computing to meet real-time IoT workload requirements while keeping scalability. We include an extensive empirical analysis to assess the suitability of single-board-computer clusters as cost-effective edge-computing micro data centres. Additionally, we compare the proposed architecture with traditional cloudlet and cloud architectures, and evaluate them through extensive simulation. We finally show that acquisition costs can be drastically reduced while keeping performance levels in data-intensive IoT use cases.
    Funding: Ministerio de Economía y Competitividad TIN2017-82113-C2-1-R; Ministerio de Economía y Competitividad RTI2018-098062-A-I00; European Union’s Horizon 2020 No. 754489; Science Foundation Ireland grant 13/RC/209
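
    As a purely illustrative back-of-the-envelope model (all numbers below are hypothetical placeholders, not measurements from the paper), the trade-off this architecture targets can be phrased as: the SBC cluster offers slower per-request processing but a much shorter network round trip than a remote data centre.

    def end_to_end_latency(rtt_ms: float, service_time_ms: float) -> float:
        """Simplified model: one request/response exchange plus processing time."""
        return rtt_ms + service_time_ms

    # Hypothetical placeholder figures, chosen only to illustrate the comparison.
    edge  = end_to_end_latency(rtt_ms=2.0,  service_time_ms=40.0)   # nearby SBC cluster
    cloud = end_to_end_latency(rtt_ms=80.0, service_time_ms=10.0)   # distant cloud DC

    print(f"edge cloudlet : {edge:.1f} ms")
    print(f"central cloud : {cloud:.1f} ms")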

    Pervasive service discovery in low-power and lossy networks

    Pervasive Service Discovery (SD) in Low-power and Lossy Networks (LLNs) is expected to play a major role in realising the Internet of Things (IoT) vision. Such a vision aims to expand the current Internet to interconnect billions of miniature smart objects that sense and act on our surroundings in a way that will revolutionise the future. The pervasiveness and heterogeneity of such low-power devices require robust, automatic, interoperable and scalable deployment and operability solutions. At the same time, the limitations of such constrained devices impose strict challenges regarding complexity, energy consumption, time-efficiency and mobility. This research contributes new lightweight solutions to facilitate automatic deployment and operability of LLNs. It mainly tackles the aforementioned challenges through the proposition of novel component-based, automatic and efficient SD solutions that ensure extensibility and adaptability to various LLN environments. Building upon this architecture, a first fully-distributed, hybrid push-pull SD solution dubbed EADP (Extensible Adaptable Discovery Protocol) is proposed based on the well-known Trickle algorithm. Motivated by EADP's achievements, new methods to optimise Trickle are introduced. Such methods allow Trickle to encompass a wide range of algorithms and extend its usage to new application domains. One of the new applications is concretised in the TrickleSD protocol, which aims to provide automatic, reliable, scalable, and time-efficient SD. To optimise the energy efficiency of TrickleSD, two mechanisms improving broadcast communication in LLNs are proposed. Finally, interoperable standards-based SD in the IoT is demonstrated, and methods combining zero-configuration operations with infrastructure-based solutions are proposed. Experimental evaluations of the above contributions reveal that it is possible to achieve automatic, cost-effective, time-efficient, lightweight, and interoperable SD in LLNs. These achievements open novel perspectives for zero-configuration capabilities in the IoT and promise to bring the 'things' to all people everywhere.
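
    Since several of the contributions above build on the Trickle algorithm (RFC 6206), a minimal single-node sketch of its timer logic may help; the service-discovery payload, the radio, and the parameter values are all stubbed or assumed here, and this is not code from the thesis.

    import random

    I_MIN = 1.0      # smallest interval in seconds (illustrative value)
    I_DOUBLINGS = 6  # max interval = I_MIN * 2**I_DOUBLINGS
    K = 1            # redundancy constant

    class TrickleTimer:
        def __init__(self):
            self.interval = I_MIN
            self._start_interval()

        def _start_interval(self):
            self.counter = 0
            # RFC 6206: pick t uniformly from the second half of the interval
            self.t = random.uniform(self.interval / 2, self.interval)

        def hear_consistent(self):
            """A neighbour advertised the same information we would send."""
            self.counter += 1

        def hear_inconsistent(self):
            """Conflicting information heard: shrink the interval to react fast."""
            if self.interval > I_MIN:
                self.interval = I_MIN
                self._start_interval()

        def should_transmit_at_t(self):
            """At the random time t: transmit only if few neighbours already did."""
            return self.counter < K

        def interval_expired(self):
            """At the interval's end: double it (up to the cap) and start over."""
            self.interval = min(self.interval * 2, I_MIN * 2 ** I_DOUBLINGS)
            self._start_interval()

    if __name__ == "__main__":
        timer = TrickleTimer()
        timer.hear_consistent()                            # one redundant advertisement heard
        print("transmit?", timer.should_transmit_at_t())   # False once counter >= K
        timer.interval_expired()
        print("next interval:", timer.interval, "s")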

    The impact of microservices: an empirical analysis of the emerging software architecture

    Master's dissertation in Informatics Engineering. The applications' development paradigm has faced changes in recent years, with modern development being characterized by the need to continuously deliver new software iterations. With great affinity with those principles, microservices is a software architectural style whose characteristics potentially promote multiple quality attributes often required by modern, large-scale applications. Its recent growth in popularity and acceptance in the industry means this architectural style is often described as a way of modernizing applications that allegedly solves all the inconveniences of traditional monolithic applications. However, there are multiple costs worth mentioning associated with its adoption, which seem to be only vaguely described in existing empirical research and are often summarized as "the complexity of a distributed system". The adoption of microservices provides the agility needed to achieve its promised benefits, but to actually reach them, several key implementation principles have to be honored. Given that it is still a fairly recent approach to developing applications, the lack of established principles and of knowledge within development teams results in misjudgment of both the costs and the value of this architectural style. The outcome is often implementations that conflict with its promised benefits. Implementing a microservices-based architecture that achieves its alleged benefits involves multiple patterns and methodologies that add a considerable amount of complexity. To evaluate this impact in a concrete and empirical way, the same e-commerce platform was developed from scratch following a monolithic architectural style and two microservices-based architectural patterns featuring distinct inter-service communication and data management mechanisms. The effort involved in dealing with eventual consistency, maintaining a communication infrastructure, and managing data in a distributed way represented significant overheads that do not exist in the development of traditional applications. Nonetheless, migrating from a monolithic architecture to a microservices-based one is currently accepted as the modern way of developing software, and this ideology is not often contested, nor are the technical challenges involved appropriately emphasized. Sometimes considered over-engineering, other times necessary, this dissertation contributes empirical data and insights that showcase the impact of the migration to microservices across several topics. From the trade-offs associated with the use of specific patterns, the development of functionalities in a distributed way, and the processes to assure a variety of quality attributes, to performance benchmark experiments and the use of observability techniques, the entire development process is described and constitutes the object of study of this dissertation.
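
    To make the eventual-consistency overhead mentioned above tangible, here is a small, purely illustrative Python sketch (not code from the dissertation): instead of updating two modules in one local transaction, an order service publishes a domain event that an inventory service consumes asynchronously, so the two views disagree until the event is processed.

    import queue

    event_bus = queue.Queue()            # stands in for a real message broker

    # --- order service ---------------------------------------------------------
    orders = {}

    def place_order(order_id: str, item: str) -> None:
        orders[order_id] = {"item": item, "status": "PLACED"}
        event_bus.put({"type": "OrderPlaced", "order_id": order_id, "item": item})

    # --- inventory service -----------------------------------------------------
    stock = {"book": 3}

    def handle_events() -> None:
        """Consume pending events; until this runs, the two services disagree."""
        while not event_bus.empty():
            event = event_bus.get()
            if event["type"] == "OrderPlaced":
                stock[event["item"]] -= 1

    if __name__ == "__main__":
        place_order("o-1", "book")
        print("before consuming:", stock)   # still 3: eventual-consistency window
        handle_events()
        print("after consuming :", stock)   # now 2: the views have converged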

    Data storage solutions for the federation of sensor networks

    In the near future, most of our everyday devices will be accessible via some network and uniquely identified for interconnection over the Internet. This new paradigm, called the Internet of Things (IoT), is already starting to influence our society and is now driving developments in many areas. There will be thousands, or even millions, of constrained devices connected using standard protocols, such as the Constrained Application Protocol (CoAP), developed to meet specifications appropriate for this type of device. In addition, there will be a need to interconnect networks of constrained devices in a reliable and scalable way, and federations of sensor networks using the Internet as a medium will be formed. To make the federation of geographically distributed CoAP-based sensor networks possible, a CoAP Usage for REsource LOcation And Discovery (RELOAD) was recently proposed. RELOAD is a peer-to-peer (P2P) protocol that provides an abstract storage and messaging service to its clients, relying on a set of cooperating peers that form a P2P overlay network for this purpose. The protocol allows so-called Usages to be defined so that applications can work on top of this overlay network. The CoAP Usage for RELOAD is, therefore, a way for CoAP-based devices to store their resources in a distributed P2P overlay. Although the CoAP Usage for RELOAD is an important step towards the federation of sensor networks, in the particular case of the IoT there will be consistency and efficiency problems. This happens because the resources of CoAP devices/Things can appear in multiple data objects stored in the overlay network, called P2P resources. Thus, updates to a Thing's resources can end up being costly, as multiple P2P resources have to be modified. Mechanisms to ensure consistency therefore become necessary. This thesis contributes to advances in the federation of sensor networks by proposing mechanisms for RELOAD/CoAP architectures that ensure consistency. An overlay network service, required for such mechanisms to operate, is also proposed.
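
    The consistency problem this thesis addresses can be sketched as follows (this illustrates the problem, not the mechanism the thesis proposes, and all keys are hypothetical): a Thing's CoAP resource may be embedded in several P2P resources stored in the overlay, so a single update must rewrite all of them, and a version counter makes stale copies detectable.

    overlay = {}   # stand-in for RELOAD's distributed storage: key -> stored object

    def store(key: str, value: dict, version: int) -> None:
        overlay[key] = {"value": value, "version": version}

    def update_thing_resource(thing_id: str, keys: list, new_value: dict, version: int) -> None:
        """Every P2P resource embedding this Thing's data must be rewritten."""
        for key in keys:
            store(key, {"thing": thing_id, **new_value}, version)

    if __name__ == "__main__":
        keys = ["index/temperature", "room-42/sensors", "things/t-17"]   # hypothetical keys
        update_thing_resource("t-17", keys, {"temperature": 21.5}, version=1)
        # If one replica were skipped here, readers could observe stale data.
        update_thing_resource("t-17", keys, {"temperature": 22.0}, version=2)
        stale = [k for k, obj in overlay.items() if obj["version"] < 2]
        print("stale copies:", stale)   # empty because every replica was updated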