17 research outputs found

    Can Component/Service-Based Systems Be Proved Correct?

    Get PDF
    Component-oriented and service-oriented approaches have generated strong enthusiasm in industry and academia, with particular interest in service-oriented approaches. A component is a software entity with given functionalities, made available by a provider and used to build other applications into which it is integrated. The service concept and its use in web-based application development have had a huge impact on reuse practices. Accordingly, a considerable share of software architectures is affected; these architectures are moving towards service-oriented architectures. Applications therefore (re)use services that are available elsewhere, and many applications interact, without knowing each other, through services made available via service servers and their published interfaces and functionalities. Industry proposes, through various consortia, languages, technologies and standards. More academic work is also under way on the semantics and formalisation of component- and service-based systems. We consider both streams of work here in order to raise research concerns that will help in building quality software. Are there new challenging problems with respect to service-based software construction? Moreover, what are the links with, and the advances compared to, distributed systems? Comment: 16 pages

    A Fault-Tolerant Algorithm For Distributed Resource Allocation

    Get PDF
    Resource allocation is a common problem that must be addressed when designing a distributed system. Despite the large number of algorithms proposed in the literature to solve this problem, most papers lack detailed descriptions of how to turn these algorithms into real-world, reliable protocols. This article presents a fault-tolerant algorithm for distributed resource allocation named SLOTS, which is implemented as an executable protocol. It allocates resources among members fairly, using simple heuristics and a donation approach. SLOTS supports the dynamic behavior of clusters and provides high-availability services. It bases its fault-tolerance properties and membership changes on atomic sets of operations (akin to transactions), using services provided by an underlying Group Communication System. Facultad de Informática

    A fault-tolerant algorithm for distributed resource allocation

    Get PDF
    Resource allocation is a common problem that must be addressed when designing a distributed system. Despite the large number of algorithms proposed in the literature to solve this problem, most papers lack detailed descriptions of how to turn these algorithms into real-world, reliable protocols. This article presents a fault-tolerant algorithm for distributed resource allocation named SLOTS, which is implemented as an executable protocol. It allocates resources among members fairly, using simple heuristics and a donation approach. SLOTS supports the dynamic behavior of clusters and provides high-availability services. It bases its fault-tolerance properties and membership changes on atomic sets of operations (akin to transactions), using services provided by an underlying Group Communication System. Toni Cortes's participation in this work was funded by the Government of Spain (grant SEV2015-0493 of the Severo Ochoa programme), the Spanish Ministry of Science and Innovation (contract TIN2015-65316) and the Generalitat de Catalunya (contract 2014-SGR-1051). Fernando G. Tinetti's participation was funded by the UNLP (Facultad de Informática) and the CIC of the Province of Buenos Aires, Argentina. Peer Reviewed. Postprint (author's final draft)
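
    The abstract does not give SLOTS's internal details, so the following is only a rough, hypothetical Python sketch of the general idea it describes: members hold sets of resource slots and rebalance them through donations, with each rebalancing step applied as an atomic, all-or-nothing batch. The names (SlotPool, plan_donations, apply_atomically) are illustrative assumptions, not the paper's API, and the real system would route such batches through a Group Communication System rather than mutate a local dictionary.

        # Hypothetical sketch of donation-based slot rebalancing; not the actual SLOTS protocol.
        from dataclasses import dataclass, field


        @dataclass
        class SlotPool:
            # member name -> number of resource slots currently held
            slots: dict[str, int] = field(default_factory=dict)

            def plan_donations(self) -> list[tuple[str, str, int]]:
                """Simple heuristic: members above the average donate surplus slots
                to members below it, one (donor, receiver, amount) triple per move."""
                members = sorted(self.slots)
                avg = sum(self.slots.values()) // len(members)
                surplus = {m: self.slots[m] - avg for m in members if self.slots[m] > avg}
                deficit = {m: avg - self.slots[m] for m in members if self.slots[m] < avg}
                moves = []
                for donor, extra in surplus.items():
                    for receiver in list(deficit):
                        if extra == 0:
                            break
                        amount = min(extra, deficit[receiver])
                        moves.append((donor, receiver, amount))
                        extra -= amount
                        deficit[receiver] -= amount
                        if deficit[receiver] == 0:
                            del deficit[receiver]
                return moves

            def apply_atomically(self, moves: list[tuple[str, str, int]]) -> None:
                """Apply a whole batch of donations or none of it, mimicking the
                transaction-like atomic sets of operations the abstract mentions."""
                staged = dict(self.slots)
                for donor, receiver, amount in moves:
                    staged[donor] -= amount
                    staged[receiver] += amount
                    if staged[donor] < 0:
                        raise ValueError("invalid donation plan; batch aborted")
                self.slots = staged  # commit only if every move was valid


        pool = SlotPool({"n1": 9, "n2": 1, "n3": 2})
        pool.apply_atomically(pool.plan_donations())
        print(pool.slots)  # {'n1': 4, 'n2': 4, 'n3': 4}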

    Designing peer-to-peer overlays:a small-world perspective

    Get PDF
    The Small-World phenomenon, well known under the phrase "six degrees of separation", has long been under investigation. The fact that our social network is closely knit and that any two people are linked by a short chain of acquaintances was confirmed experimentally by the psychologist Stanley Milgram in the sixties. However, it was only after the seminal work of Jon Kleinberg in 2000 that it was understood not only why such networks exist, but also why it is possible to navigate them efficiently. This proved to be a highly relevant discovery for peer-to-peer systems, since they share many fundamental similarities with social networks; in particular, peer-to-peer routing relies solely on local decisions, without the possibility of invoking global knowledge. In this thesis we show how peer-to-peer system designs inspired by Small-World principles can address and solve many important problems, such as balancing peer load, reducing high maintenance cost, or efficiently disseminating data in large-scale systems. We present three peer-to-peer approaches, namely Oscar, Gravity, and Fuzzynet, whose concepts stem from the design of navigable Small-World networks. Firstly, we introduce a novel theoretical model for building peer-to-peer systems which supports skewed node distributions and still preserves all the desired properties of Kleinberg's Small-World networks. With this model we set a reference base for the design of data-oriented peer-to-peer systems characterized by non-uniform distributions of keys as well as skewed query or access patterns. Based on this theoretical model we introduce Oscar, an overlay which uses a novel scalable network sampling technique for network construction, for which we provide a rigorous theoretical analysis. Simulations of our system validate the developed theory and evaluate Oscar's performance under typical conditions encountered in real-life large-scale networked systems, including participant heterogeneity, faults, and skewed, dynamic load distributions. Furthermore, we show how, by utilizing Small-World properties, it is possible to reduce the maintenance cost of most structured overlays by discarding a core network connectivity element: the ring invariant. We argue that reliance on the ring structure is a serious impediment to real-life deployment and scalability of structured overlays. We propose an overlay called Fuzzynet which does not rely on the ring invariant, yet has all the functionality of structured overlays. Fuzzynet takes the idea of lazy overlay maintenance further by eliminating the need for any explicit connectivity and data maintenance operations, relying merely on the actions performed when new Fuzzynet peers join the network. We show that, with a sufficient number of neighbors, data can be retrieved in Fuzzynet with high probability even under high churn. Finally, we show how peer-to-peer systems based on the Small-World design and capable of supporting non-uniform key distributions can be successfully employed for large-scale data dissemination tasks. We introduce Gravity, a publish/subscribe system capable of building efficient dissemination structures that induce only minimal dissemination relay overhead. This is achieved through Gravity's support for non-uniform peer key distributions, which allows subscribers to be clustered close to each other in the key space, where data dissemination is cheap. An extensive experimental study confirms the effectiveness of our system under realistic subscription patterns and shows that Gravity surpasses existing approaches in efficiency by a large margin. With the peer-to-peer systems presented in this thesis we fill an important gap in the family of structured overlays, bringing to life practical systems that can play a crucial role in enabling data-oriented applications distributed over wide-area networks.
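
    As a rough illustration of the navigability idea the thesis builds on (not code from the thesis itself), here is a hypothetical Python sketch of a Kleinberg-style ring: every node keeps its two ring neighbours plus one long-range contact drawn with probability proportional to 1/distance, and greedy routing forwards each message to the neighbour closest to the target using only local information.

        # Hypothetical Kleinberg-style small-world routing on a ring; a simplification,
        # not the Oscar, Gravity, or Fuzzynet constructions from the thesis.
        import random

        N = 1024  # number of nodes, with identifiers 0..N-1 placed on a ring


        def ring_distance(a: int, b: int) -> int:
            d = abs(a - b)
            return min(d, N - d)


        def build_links(node: int) -> list[int]:
            """Ring neighbours plus one long-range contact chosen with
            probability proportional to 1/distance (harmonic distribution)."""
            candidates = [x for x in range(N) if x != node]
            weights = [1.0 / ring_distance(node, x) for x in candidates]
            long_range = random.choices(candidates, weights=weights, k=1)[0]
            return [(node - 1) % N, (node + 1) % N, long_range]


        links = {node: build_links(node) for node in range(N)}


        def greedy_route(source: int, target: int) -> list[int]:
            """Forward to the neighbour closest to the target; purely local decisions,
            which is what makes navigable Small-World networks relevant to overlays."""
            path = [source]
            current = source
            while current != target:
                current = min(links[current], key=lambda nb: ring_distance(nb, target))
                path.append(current)
            return path


        print(len(greedy_route(0, N // 2)) - 1, "hops from node 0 to node", N // 2)

    With only a handful of links per node, paths found this way stay short, which is the property the thesis exploits and extends to skewed key and load distributions.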

    Un modelo de arquitectura para un sistema de virtualización distribuido

    Get PDF
    Although Operating Systems provide security, protection, resource management, and similar features, these appear insufficient to satisfy the requirements of computer systems that are permanently and globally connected. Current virtualization technologies have been, and continue to be, massively adopted to cover these system and application needs thanks to their resource partitioning, isolation, consolidation capabilities, security, support for legacy applications, ease of administration, and so on. One of their restrictions is that the computing power of a Virtual Machine (or a Container) is bounded by the computing power of the physical machine that hosts it. This thesis proposes to overcome this restriction by approaching the problem as a distributed system. To reach higher levels of performance and scalability, developers of cloud-native applications must split them into components whose execution is distributed across several Virtual Machines (or Containers). These components communicate through well-defined interfaces such as Web Services interfaces. The Virtual Machines (or Containers) must be configured, secured, and deployed in order to run the application. This is due, in part, to the fact that the different components do not share the same Operating System instance and therefore do not share the same abstract resources, such as message queues, mutexes, files, pipes, etc. The drawback of this application development style is that it prevents an integral, generalized view of the resources: the programmer must plan the assignment of resources to each component of the application and must therefore not only program the application but also manage the distribution of those resources. This work proposes an architecture model for a Distributed Virtualization System (DVS) that allows the boundaries of an execution domain to be extended beyond a single physical machine, exploiting the computing power of a cluster of computers. A DVS combines and integrates Virtualization, Operating System, and Distributed System technologies, each contributing its best features. This architecture, for example, gives the programmer an integrated view of the distributed resources available to the application, relieving the programmer of the responsibility of managing them. The proposed DVS model provides the features required by cloud infrastructure service providers, such as higher performance, availability, scalability, elasticity, process replication and migration, and load balancing, among others. Legacy applications can be migrated more easily, since the same Virtual Operating System instance can be made available on every node of the virtualization cluster. Applications developed under the new methodologies for designing and developing cloud software also benefit, as their use adapts to a system that is inherently distributed. This thesis is reviewed in Sedici (see related document). Facultad de Informática
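
    The thesis abstract stays at the architectural level, so the snippet below is only a hypothetical Python sketch of the contrast it draws: today an abstract resource such as a message queue exists inside a single OS instance, while a DVS-style Virtual OS spanning a cluster would let components on different physical nodes share that same resource without changing the application code. All names and interfaces here are illustrative, not taken from the thesis.

        # Hypothetical sketch of the location-transparent view a DVS aims to offer;
        # names and interfaces are illustrative only.
        from queue import Queue
        from typing import Protocol


        class MessageQueue(Protocol):
            def put(self, item: bytes) -> None: ...
            def get(self) -> bytes: ...


        class LocalQueue:
            """Today, this abstract resource lives inside one OS instance only."""

            def __init__(self) -> None:
                self._q: Queue = Queue()

            def put(self, item: bytes) -> None:
                self._q.put(item)

            def get(self) -> bytes:
                return self._q.get()


        def producer(q: MessageQueue) -> None:
            q.put(b"work item")


        def consumer(q: MessageQueue) -> bytes:
            return q.get()


        # Under a DVS-style Virtual OS spanning the cluster, producer and consumer
        # could run on different physical nodes yet still be handed the same queue
        # by the (virtual) OS, so this application code would not need to change.
        q = LocalQueue()
        producer(q)
        print(consumer(q))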

    QuickSilver Scalable Multicast

    Full text link
    Reliable multicast is useful for replication and in support of publish-subscribe notification. However, many of the most interesting applications give rise to huge numbers of multicast groups with heavily overlapping sets of receivers, large groups, or high rates of dynamism. Existing multicast systems scale poorly in one or more of these respects. This paper describes QuickSilver Scalable Multicast (QSM), a platform exhibiting significantly improved scalability. Key advances involve new ways of handling time and scheduling, adaptive response to observed traffic patterns, and better handling of disturbances

    QuickSilver Scalable Multicast (QSM)

    No full text
    QSM is a multicast engine designed to support a style of distributed programming in which application objects are replicated among clients and updated via multicast. The model requires platforms that scale in dimensions previously unexplored; in particular, to large numbers of multicast groups. Prior systems weren't optimized for such scenarios and can't take advantage of regular group overlap patterns, a key feature of our application domain. Furthermore, little is known about performance and scalability of such systems in modern managed environments. We shed light on these issues and offer architectural insights based on our experience building QSM.

    The Power of Indirection: Achieving Multicast Scalability by Mapping Groups to Regional Underlays

    Full text link
    Reliable multicast is a powerful primitive, useful for data replication, event notification (publish-subscribe), fault tolerance and other purposes. Yet many of the most interesting applications give rise to huge numbers of heavily overlapping groups, some of which may be large. Existing multicast systems scale poorly in one or both respects. We propose the QuickSilver Scalable Multicast protocol (QSM), a novel solution that delivers performance almost independent of the number of groups and introduces new mechanisms that scale well in the number of nodes, with minimal performance and delay penalties when loss occurs. Key to the solution is a level of indirection: a mapping of groups to regions of group overlap in which communication associated with different protocols can be merged. The core of QSM is a new regional multicast protocol that offers scalability and performance benefits over a wide range of region sizes.
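
    The level of indirection described here can be illustrated with a small hypothetical Python sketch (not QSM's actual data structures): nodes that belong to exactly the same set of groups fall into one region, so traffic for all of those groups can be merged and handled by one regional protocol instance.

        # Hypothetical illustration of mapping overlapping multicast groups to regions;
        # not QSM's actual implementation.
        from collections import defaultdict

        # group name -> set of member nodes
        groups = {
            "g1": {"a", "b", "c", "d"},
            "g2": {"a", "b", "c"},
            "g3": {"c", "d"},
        }

        # Invert the map: node -> the set of groups it belongs to.
        membership = defaultdict(set)
        for group, members in groups.items():
            for node in members:
                membership[node].add(group)

        # A region is a maximal set of nodes sharing exactly the same group membership,
        # so per-group communication can be merged into one regional protocol per region.
        regions = defaultdict(set)
        for node, its_groups in membership.items():
            regions[frozenset(its_groups)].add(node)

        for region_groups, nodes in regions.items():
            print(sorted(region_groups), "->", sorted(nodes))
        # ['g1', 'g2'] -> ['a', 'b']
        # ['g1', 'g2', 'g3'] -> ['c']
        # ['g1', 'g3'] -> ['d']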