Exploring heterogeneity of unreliable machines for p2p backup
P2P architecture is a viable option for enterprise backup. In contrast to
dedicated backup servers, nowadays the standard solution, making backups directly
on an organization's workstations should be cheaper (existing hardware is
reused), more efficient (there is no single bottleneck server), and more
reliable (the machines are geographically dispersed).
We present the architecture of a p2p backup system that uses pairwise
replication contracts between a data owner and a replicator. In contrast to
standard p2p storage systems that use a DHT directly, the contracts allow our
system to optimize replica placement according to a specific optimization
strategy, and thus to take advantage of the heterogeneity of the machines and the
network. Such optimization is particularly appealing in the context of backup:
replicas can be geographically dispersed, the load sent over the network can be
minimized, or the goal can be to minimize the backup/restore time.
However, managing the contracts, keeping them consistent, and adjusting them in
response to a dynamically changing environment is challenging.
We built a scientific prototype and ran experiments on 150 workstations
in the university's computer laboratories and, separately, on 50 PlanetLab
nodes. We found that the main factor affecting the quality of the system is
the availability of the machines. Yet our main conclusion is that it is
possible to build an efficient and reliable backup system on highly unreliable
machines (our computers had just 13% average availability).
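The availability-driven replica placement the abstract alludes to can be sketched as follows. The greedy strategy, the machine list, and the assumption of independent machine failures are illustrative, not the paper's actual algorithm:

```python
# Hypothetical sketch of availability-aware replica placement under pairwise
# contracts. A data owner picks replicators so that the chance of all replicas
# being offline at restore time is small (independence is assumed).

def replica_unavailability(availabilities):
    """Probability that no replica is reachable, assuming independent failures."""
    p = 1.0
    for a in availabilities:
        p *= (1.0 - a)
    return p

def place_replicas(candidates, k):
    """Greedily sign contracts with the k most-available candidate machines."""
    return sorted(candidates, key=lambda c: c[1], reverse=True)[:k]

# Machines with low average availability, as measured in the experiments.
machines = [("m1", 0.13), ("m2", 0.40), ("m3", 0.25), ("m4", 0.13)]
chosen = place_replicas(machines, 3)
print(chosen)                                           # best 3 candidates
print(replica_unavailability(a for _, a in chosen))     # joint downtime risk
```

Even with individually unreliable machines, adding replicas drives the joint unavailability down multiplicatively, which is why a backup system on 13%-available workstations can still be reliable.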
IDEA: An Infrastructure for Detection-based Adaptive Consistency Control in Replicated Services
In Internet-scale distributed systems, replication-based schemes have been widely deployed to increase the availability and efficiency of services. Consistency maintenance among replicas has therefore become an important research issue, because poor consistency results in poor QoS or even monetary loss. Recent research in this area focuses on enforcing a certain consistency level, instead of perfect consistency, to strike a balance between the consistency guarantee and the system's scalability. In this paper, we argue that, besides balancing consistency and scalability, it is equally, if not more, important to achieve adaptability of consistency maintenance; that is, the system adjusts its consistency level on the fly to suit applications' ongoing needs. This paper then presents the design, implementation, and evaluation of IDEA (an Infrastructure for DEtection-based Adaptive consistency control), which adaptively controls consistency in replicated services by utilizing an inconsistency detection framework that detects inconsistency among nodes in a timely manner. In addition, IDEA achieves high performance of inconsistency resolution in terms of resolution delay. Through two emulated distributed applications on PlanetLab, IDEA is evaluated from two aspects: its adaptive interface and its performance of inconsistency resolution. According to the experiments, IDEA achieves adaptability by adjusting the consistency level according to users' preferences on demand. As for performance, IDEA achieves low inconsistency resolution delay and communication cost.
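The feedback loop behind detection-based adaptive consistency can be sketched as below. The staleness-bound controller, its thresholds, and its limits are illustrative assumptions, not IDEA's actual interface:

```python
# Minimal sketch of a detection-driven consistency controller: a detector
# reports the observed inconsistency rate, and the controller adjusts the
# allowed staleness window on the fly. All numbers are illustrative.

class AdaptiveConsistencyController:
    def __init__(self, bound_ms=100, low=0.01, high=0.10):
        self.bound_ms = bound_ms  # current allowed staleness window (ms)
        self.low = low            # below this rate, consistency can be relaxed
        self.high = high          # above this rate, consistency must tighten

    def observe(self, inconsistency_rate):
        """Adapt the consistency level to the detector's latest measurement."""
        if inconsistency_rate > self.high:
            self.bound_ms = max(10, self.bound_ms // 2)    # tighten
        elif inconsistency_rate < self.low:
            self.bound_ms = min(1000, self.bound_ms * 2)   # relax, save cost
        return self.bound_ms

ctrl = AdaptiveConsistencyController()
print(ctrl.observe(0.50))   # bursty inconsistency -> tighter bound
print(ctrl.observe(0.001))  # quiet period -> relaxed bound
```

The point of such a controller is that the consistency level is not fixed at deployment time; it tracks the application's ongoing need, which is the adaptability argument the abstract makes.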
Doctor of Philosophy dissertation
The next generation mobile network (i.e., the 5G network) is expected to host emerging use cases with a wide range of requirements: from Internet of Things (IoT) devices that prefer a low-overhead and scalable network, to remote machine operation or remote healthcare services that require reliable end-to-end communications. Improving scalability and reliability is among the most important challenges of designing the next generation mobile architecture. The current (4G) mobile core network relies heavily on hardware-based proprietary components. The core networks are expensive and therefore are available in only a limited number of locations in the country. This leads to high end-to-end latency, due to the long latency between base stations and the mobile core, and limits innovation and the evolvability of the network. Moreover, at the protocol level, the current mobile network architecture was designed for a limited number of smartphones streaming large amounts of high-quality traffic, not for a massive number of low-capability devices sending small and sporadic traffic. This results in high-overhead control and data planes in the mobile core network that are not suitable for a massive number of future IoT devices. In terms of reliability, network operators have already deployed multiple monitoring systems to detect service disruptions and fix problems when they occur. However, detecting all service disruptions is challenging. First, there is a complex relationship between the network status and the user-perceived service experience. Second, service disruptions can happen for reasons that are beyond the network itself. With technology advancements in Software-Defined Networking (SDN) and Network Function Virtualization (NFV), the next generation mobile network is expected to be NFV-based and deployed on NFV platforms.
However, in contrast to telecom-grade hardware with built-in redundancy, commodity off-the-shelf (COTS) hardware in NFV platforms often cannot match it in terms of reliability. The availability of telecom-grade mobile core network hardware is typically 99.999% (i.e., "five-9s" availability), while most NFV platforms only guarantee "three-9s" availability: orders of magnitude less reliable. Therefore, an NFV-based mobile core network needs extra mechanisms to guarantee its availability. This Ph.D. dissertation focuses on using SDN/NFV, data analytics, and distributed systems techniques to enhance the scalability and reliability of the next generation mobile core network. The dissertation makes the following contributions. First, it presents SMORE, a practical offloading architecture that reduces end-to-end latency and enables new functionalities in mobile networks. It then presents SIMECA, a lightweight and scalable mobile core network designed for a massive number of future IoT devices. Second, it presents ABSENCE, a passive service monitoring system that uses customer usage and data analytics to detect silent failures in an operational mobile network. Lastly, it presents ECHO, a distributed mobile core network architecture that improves the availability of an NFV-based mobile core network in public clouds.
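The five-9s versus three-9s gap, and why replication is the "extra mechanism" that can close it, can be checked with back-of-the-envelope arithmetic. The independence assumption between replicas is illustrative; correlated platform failures would weaken it:

```python
# Availability arithmetic behind the "five-9s vs three-9s" comparison.
# With n independent active replicas, the service is down only when all
# replicas are down simultaneously.

SECONDS_PER_YEAR = 365 * 24 * 3600

def combined_availability(per_replica, n):
    """Availability of n independent replicas, any one of which suffices."""
    return 1.0 - (1.0 - per_replica) ** n

def downtime_seconds_per_year(availability):
    """Expected yearly downtime implied by an availability figure."""
    return (1.0 - availability) * SECONDS_PER_YEAR

three_nines = 0.999          # typical NFV platform guarantee
five_nines = 0.99999         # telecom-grade hardware

print(downtime_seconds_per_year(three_nines))  # ~8.8 hours/year
print(downtime_seconds_per_year(five_nines))   # ~5.3 minutes/year
# Two independent three-9s replicas already exceed five-9s:
print(combined_availability(three_nines, 2))   # 0.999999
```

This is the quantitative motivation for an architecture like ECHO: redundancy across unreliable COTS instances, rather than per-box reliability, restores telecom-grade availability.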
Traffic Optimization in Data Center and Software-Defined Programmable Networks
The abstract is in the attachment.
The impact of microservices: an empirical analysis of the emerging software architecture
Master's dissertation in Informatics Engineering
The applications' development paradigm has faced changes in recent years, with modern development being
characterized by the need to continuously deliver new software iterations. With great affinity with those principles,
microservices is a software architecture which features characteristics that potentially promote multiple quality
attributes often required by modern, large-scale applications. Its recent growth in popularity and acceptance in
the industry has led this architectural style to be described as a way of modernizing applications that allegedly
solves all the inconveniences of traditional monolithic applications. However, there are multiple costs worth
mentioning associated with its adoption, which seem to be only vaguely described in existing empirical research,
often being summarized as "the complexity of a distributed system". The adoption of microservices provides the
agility to achieve its promised benefits, but to actually reach them, several key implementation principles have
to be honored. Given that it is still a fairly recent approach to developing applications, the lack of established
principles and knowledge from development teams results in the misjudgment of both costs and values of this
architectural style. The outcome is often implementations that conflict with its promised benefits. In order to
implement a microservices-based architecture that achieves its alleged benefits, there are multiple patterns and
methodologies involved that add a considerable amount of complexity. To evaluate its impact in a concrete and
empirical way, one same e-commerce platform was developed from scratch following a monolithic architectural
style and two architectural patterns based on microservices, featuring distinct inter-service communication and
data management mechanisms. The effort involved in dealing with eventual consistency, maintaining a communication
infrastructure, and managing data in a distributed way portrayed significant overheads not existent in the
development of traditional applications. Nonetheless, migrating from a monolithic architecture to a microservices-based
one is currently accepted as the modern way of developing software, and this ideology is not often contested,
nor are the involved technical challenges appropriately emphasized. Sometimes considered over-engineering,
other times necessary, this dissertation contributes with empirical data from insights that showcase the impact
of the migration to microservices in several topics. From the trade-offs associated with the use of specific patterns,
the development of the functionalities in a distributed way, and the processes to assure a variety of quality
attributes, to performance benchmarks experiments and the use of observability techniques, the entire development
process is described and constitutes the object of study of this dissertation.
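The eventual-consistency overhead the abstract measures can be illustrated with a toy version of event-driven data management between services. The service names, the in-memory event bus, and the scenario are illustrative assumptions, not the dissertation's actual code:

```python
# Toy sketch of eventually consistent, event-driven communication between two
# microservices. Unlike a monolith's single transaction, the inventory view
# lags the order until the event is consumed: the "eventual consistency window".

from collections import defaultdict

class EventBus:
    """In-memory stand-in for a message broker: queue now, deliver later."""
    def __init__(self):
        self.queues = defaultdict(list)

    def publish(self, topic, event):
        self.queues[topic].append(event)

    def drain(self, topic):
        events, self.queues[topic] = self.queues[topic], []
        return events

class OrderService:
    def __init__(self, bus):
        self.bus = bus
        self.orders = {}

    def place_order(self, order_id, sku, qty):
        self.orders[order_id] = (sku, qty)
        self.bus.publish("order.placed", {"sku": sku, "qty": qty})

class InventoryService:
    def __init__(self, bus):
        self.bus = bus
        self.stock = {"sku-1": 10}

    def process_events(self):
        for e in self.bus.drain("order.placed"):
            self.stock[e["sku"]] -= e["qty"]

bus = EventBus()
orders, inventory = OrderService(bus), InventoryService(bus)
orders.place_order("o1", "sku-1", 3)
# Until the event is consumed, the two services' views disagree:
print(inventory.stock["sku-1"])   # still the stale value
inventory.process_events()
print(inventory.stock["sku-1"])   # views have converged
```

Every such interaction forces the developer to reason about the stale-read window, retries, and ordering, which is the kind of distributed-systems overhead, absent from a monolith, that the dissertation quantifies.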