13 research outputs found

    Verification of JavaSpaces (TM) Parallel Programs


    FT-GReLoSSS: a Skeletal-Based Approach towards Application Parallelization and Low-Overhead Fault Tolerance

    FT-GReLoSSS (FTG) is a C++/MPI framework that eases the development of fault-tolerant parallel applications belonging to a SPMD family termed GReLoSSS. The originality of FTG is to rely on the MoLOToF programming model to facilitate the addition of efficient checkpoint-based fault tolerance at the application level. The main features of MoLOToF are a structured application development based on fault-tolerant "skeletons" and an emphasis on collaborations between the programmer, the framework, and the underlying runtime middleware/environment. Together with the structured approach, these collaborations help reduce checkpoint sizes as well as checkpoint and recovery overhead at runtime. This paper introduces the main principles of MoLOToF and the design of the FTG framework. To assess the framework's ease of use and the efficiency of its fault tolerance, a series of benchmarks was conducted on up to 128 nodes of a multicore PC cluster. The benchmarks involved an existing parallel financial application for gas storage valuation, originally developed in collaboration with the EDF company, and a rewritten version that uses the FTG framework and its features. Experimental results show low overhead compared to existing system-level counterparts.
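    The framework's C++/MPI API is not shown in this listing; purely as a hedged illustration of the application-level checkpointing idea behind fault-tolerant skeletons, the hypothetical Java sketch below lets the programmer register the state to be saved, checkpoints it every few iterations, and resumes from the last checkpoint on restart (class and method names are invented for illustration, not taken from FTG).

```java
import java.io.*;

// Hypothetical illustration of a fault-tolerant "skeleton": the skeleton owns the
// iteration loop and checkpoints only the state the programmer registers, which is
// what keeps checkpoints small compared to system-level snapshots.
public class CheckpointedLoopSkeleton {

    // State the application explicitly registers for checkpointing.
    static class State implements Serializable {
        int iteration = 0;      // number of completed iterations
        double value = 0.0;     // application data to preserve
    }

    interface Step { void apply(State s); }   // one iteration of user code

    private final File checkpointFile;
    private final int checkpointEvery;

    CheckpointedLoopSkeleton(File checkpointFile, int checkpointEvery) {
        this.checkpointFile = checkpointFile;
        this.checkpointEvery = checkpointEvery;
    }

    // Resume from the last checkpoint if one exists, otherwise start fresh.
    State loadOrInit() {
        if (checkpointFile.exists()) {
            try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(checkpointFile))) {
                return (State) in.readObject();
            } catch (IOException | ClassNotFoundException e) {
                System.err.println("Checkpoint unreadable, starting over: " + e);
            }
        }
        return new State();
    }

    void save(State s) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(checkpointFile))) {
            out.writeObject(s);
        }
    }

    void run(int totalIterations, Step step) throws IOException {
        State s = loadOrInit();
        while (s.iteration < totalIterations) {
            step.apply(s);                          // user computation
            s.iteration++;
            if (s.iteration % checkpointEvery == 0) {
                save(s);                            // application-level checkpoint
            }
        }
        save(s);
    }

    public static void main(String[] args) throws IOException {
        CheckpointedLoopSkeleton skel =
                new CheckpointedLoopSkeleton(new File("state.ckpt"), 10);
        skel.run(100, s -> s.value += s.iteration * 0.5);   // toy workload
        System.out.println("done, value = " + skel.loadOrInit().value);
    }
}
```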

    DPAC: an object-oriented distributed and parallel computing framework for manufacturing applications


    Modal Abstraction and Replication of Processes with Data

    Fokkink, W.J. [Promotor]; Pol, J.C. van de [Copromotor]

    Coordinated collaboration for e-commerce based on the multiagent paradigm.

    Lee Ting-on. Thesis (M.Phil.)--Chinese University of Hong Kong, 2000. Includes bibliographical references (leaves 116-121). Abstracts in English and Chinese. Contents: Acknowledgments; Abstract; 1 Introduction; 2 Software Agents and Agent Frameworks (software agents, agent frameworks, communication services and concepts: message channel, remote procedure call, event channel; components); 3 Related Work (collaboration behaviors; direct, meeting-oriented, blackboard-based, and Linda-like coordination; reactive tuple spaces); 4 Background and Foundations (choice of technologies, Jini technology and the lookup service, proxies, JavaSpaces, Grasshopper architecture); 5 The CoDAC Framework (requirements for enabling collaboration: consistent group membership, atomic commitment, uniform reliable multicast, fault tolerance; system components; system infrastructure; global and local collaboration); 6 Collaboration Life Cycle (initialization, resources gathering, results delivery); 7 Protocol Suite (group membership: join, leave, recovery protocols and proof; atomic commitment; uniform reliable multicast); 8 Implementation (interfaces and classes, messaging mechanism, nested transaction, fault detection, atomic commitment protocol: message flow and timeout actions); 9 Example (system model, auction lifecycle); 10 Discussions (compatibility, hierarchical group infrastructure, flexibility, atomicity, fault tolerance); 11 Conclusion and Future Work (electronic commerce, workflow management); Bibliography; Publication List.

    Scientific High Performance Computing (HPC) Applications On The Azure Cloud Platform

    Cloud computing is emerging as a promising platform for compute- and data-intensive scientific applications. Thanks to its on-demand elastic provisioning capabilities, cloud computing has attracted interest from researchers in a wide range of disciplines. However, even though many vendors have rolled out commercial cloud infrastructures, the service offerings are usually best-effort only, without performance guarantees, and utilization of these resources is questionable if they cannot meet the performance expectations of deployed applications. Additionally, the lack of familiar development tools hampers the productivity of eScience developers who want to write robust scientific high performance computing (HPC) applications. There are no standard frameworks currently supported by any large set of cloud vendors, so portability of scientific applications among different cloud platforms is hard. Among all clouds, Microsoft's emerging Azure cloud in particular remains a challenge for HPC program development, both because it lacks support for traditional parallel programming models such as the Message Passing Interface (MPI) and map-reduce, and because of its evolving application programming interfaces (APIs). We have designed new frameworks and runtime environments that help HPC application developers by providing easy-to-use tools, similar to those known from traditional parallel and distributed computing environments such as MPI, for scientific application development on the Azure cloud platform. Creating an efficient framework for any cloud platform, including Windows Azure, is challenging because such platforms are mostly offered to users as a black box with a set of APIs to access the various service components. The primary contributions of this Ph.D. thesis are (i) a generic framework for bag-of-tasks HPC applications that serves as the basic building block for application development on the Azure cloud platform, (ii) a set of APIs for HPC application development on Azure, similar to the message passing interface (MPI) from the traditional parallel and distributed setting, and (iii) Crayons, implemented using the proposed APIs, the first end-to-end parallel scientific application to parallelize the fundamental GIS operations.
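    The thesis's Azure APIs are not reproduced here; as a rough sketch of the bag-of-tasks pattern named in contribution (i), the hypothetical Java example below drains a shared queue of independent tasks with a small worker pool, an in-memory stand-in for the durable cloud queues and worker roles such a framework would build on (all names are illustrative).

```java
import java.util.concurrent.*;

// Hypothetical bag-of-tasks sketch: independent tasks are placed in a shared queue
// and a pool of workers pulls and executes them until the bag is empty. A cloud
// framework would replace the in-memory queue with durable queue storage and the
// threads with worker-role instances.
public class BagOfTasksDemo {

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> bag = new LinkedBlockingQueue<>();
        for (int i = 1; i <= 20; i++) bag.add(i);          // 20 independent tasks

        int workers = 4;
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        ConcurrentLinkedQueue<String> results = new ConcurrentLinkedQueue<>();

        for (int w = 0; w < workers; w++) {
            final int workerId = w;
            pool.submit(() -> {
                Integer task;
                // Workers keep pulling until the bag is empty; tasks are independent,
                // so no coordination beyond the queue itself is needed.
                while ((task = bag.poll()) != null) {
                    double output = Math.sqrt(task);       // stand-in for real work
                    results.add("worker " + workerId + " -> task " + task + " = " + output);
                }
            });
        }

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        results.forEach(System.out::println);
    }
}
```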

    Services in pervasive computing environments : from design to delivery

    The work presented in this thesis is based on the assumption that modern computer technologies are already potentially pervasive: CPUs are embedded in every sort of device; the RAM and storage of a modern PDA are comparable to those of a Unix workstation of ten years ago; Wi-Fi, GPRS, and UMTS are leveraging the development of the wireless Internet. Nevertheless, computing is not pervasive, because we do not have a clear conceptual model of the pervasive computer, and we lack the tools, methodologies, and middleware to write services once and deliver them seamlessly over a multitude of heterogeneous devices and delivery contexts. This thesis addresses these issues starting from an analysis of the forces in a pervasive computing environment: user mobility, user profile, user position, and device profile. The conceptual model, or metaphor, that drives our work is to consider the environment as surrounded by a multitude of services, with objects and devices acting as the communicating gates between the real world and the virtual dimension of pervasive computing around us. The thesis is thus built upon three main “pillars”. The first pillar is a domain-object-driven methodology that allows developers to abstract from the low-level details of the final delivery platform and provides users with the ability to access services in a multi-channel way. The rationale is that domain objects are self-contained pieces of software able to represent data and to compute functions and procedures. Our approach fills the gap between users and domain objects by building an appropriate user interface, adapted both to the domain object and to the end-user device. As an example, we present how to design, implement, and deliver an electronic mail application over various platforms. The second pillar analyzes in more detail the forces that make direct object manipulation inadequate in a pervasive context: the user profile, the device profile, the context of use, and the combinatorial explosion of domain objects. From the analysis of the electronic mail application, we observe that, depending on the end-user device or on particular circumstances during access to the service (for instance, when the user accesses the service through the interactive TV while having breakfast), some functionalities are not needed and do not fit an adequate task sequence. We therefore make task models explicit in the design of a service and integrate the automatic generation of user interfaces for domain objects with the formal definition of task models adapted to the final delivery context. Finally, the third pillar concerns the lifecycle of services in a pervasive computing environment. Our solutions build upon an existing framework, the Jini connection technology, and enrich it with new services and architectures for the deployment and discovery of services, for user session management, and for the management of offline agents.
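    None of the thesis's code appears in this abstract; as a loose illustration of the domain-object-driven idea, the hypothetical Java sketch below derives a minimal textual form from a domain object's fields via reflection, standing in for the automatic generation of user interfaces described by the first pillar (the e-mail class and rendering function are invented for this example).

```java
import java.lang.reflect.Field;

// Hypothetical sketch: derive a minimal textual "form" for a domain object by
// reflecting over its fields, standing in for the idea of generating a user
// interface from the domain object itself rather than hand-coding it per device.
public class DomainObjectFormDemo {

    // A toy domain object: an e-mail message, echoing the thesis example.
    static class EmailMessage {
        String to = "alice@example.org";
        String subject = "Meeting";
        String body = "See you at 10.";
    }

    // Render each field as a labelled entry; a channel-specific renderer (web,
    // PDA, interactive TV) would consume the same field list differently.
    static String renderAsTextForm(Object domainObject) throws IllegalAccessException {
        StringBuilder form = new StringBuilder();
        for (Field f : domainObject.getClass().getDeclaredFields()) {
            f.setAccessible(true);
            form.append(f.getName()).append(": [").append(f.get(domainObject)).append("]\n");
        }
        return form.toString();
    }

    public static void main(String[] args) throws IllegalAccessException {
        System.out.println(renderAsTextForm(new EmailMessage()));
    }
}
```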

    Engineering Self-Adaptive Collective Processes for Cyber-Physical Ecosystems

    The pervasiveness of computing and networking is creating significant opportunities for building valuable socio-technical systems. However, the scale, density, heterogeneity, interdependence, and QoS constraints of many target systems pose severe operational and engineering challenges. Beyond individual smart devices, cyber-physical collectives can provide services or solve complex problems by leveraging a “system effect” while coordinating and adapting to context or environment change. Understanding and building systems that exhibit collective intelligence and autonomic capabilities is a prominent research goal, partly covered, e.g., by the field of collective adaptive systems. Therefore, drawing inspiration from and building on the long-standing research activity on coordination, multi-agent systems, autonomic/self-* systems, spatial computing, and especially on the recent aggregate computing paradigm, this thesis investigates concepts, methods, and tools for the engineering of possibly large-scale, heterogeneous ensembles of situated components that should be able to operate, adapt, and self-organise in a decentralised fashion. The primary contribution of this thesis consists of four main parts. First, we define and implement an aggregate programming language (ScaFi), internal to the mainstream Scala programming language, for describing collective adaptive behaviour based on field calculi. Second, we conceive of a “dynamic collective computation” abstraction, also called aggregate process, formalised by an extension to the field calculus and implemented in ScaFi. Third, we characterise and provide a proof-of-concept implementation of a middleware for aggregate computing that enables the development of aggregate systems according to multiple architectural styles. Fourth, we apply and evaluate aggregate computing techniques in edge computing scenarios, and characterise a design pattern, called Self-organising Coordination Regions (SCR), that supports adjustable, decentralised decision-making and activity in dynamic environments.
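    ScaFi code is not included in this listing, and the sketch below is not ScaFi; it is a hypothetical, framework-free Java illustration of the kind of self-organising field computation that aggregate programming expresses declaratively: each node repeatedly applies a purely local rule (minimum neighbour value plus one hop) until a distance-to-source gradient stabilises, the basic building block behind patterns such as Self-organising Coordination Regions.

```java
import java.util.Arrays;

// Hypothetical sketch of a self-stabilising gradient: each node repeatedly updates
// its value from purely local information (its neighbours' current values), and the
// global "distance to the nearest source" field emerges without central control.
public class GradientDemo {

    public static void main(String[] args) {
        int n = 10;
        boolean[] source = new boolean[n];
        source[0] = true;                       // node 0 is the source

        double[] field = new double[n];
        Arrays.fill(field, Double.POSITIVE_INFINITY);

        // Synchronous rounds on a line topology; in a real deployment each device
        // would run this update asynchronously against the neighbour values it has heard.
        for (int round = 0; round < n; round++) {
            double[] next = field.clone();
            for (int i = 0; i < n; i++) {
                if (source[i]) { next[i] = 0.0; continue; }
                double best = Double.POSITIVE_INFINITY;
                if (i > 0) best = Math.min(best, field[i - 1] + 1.0);      // left neighbour
                if (i < n - 1) best = Math.min(best, field[i + 1] + 1.0);  // right neighbour
                next[i] = best;
            }
            field = next;
        }
        System.out.println(Arrays.toString(field));  // stabilises at [0.0, 1.0, ..., 9.0]
    }
}
```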

    Distributed Information Systems and Data Mining in Self-Organizing Networks

    Sensors and devices that generate and collect data are spreading everywhere. The infrastructure that envelops the smart city has to react to contingent situations and to changes in the operating environment. At the same time, the complexity of a distributed system consisting of huge numbers of fixed and mobile components can generate unsustainable costs and latencies when robustness, scalability, and reliability are ensured through middleware-style architectures. The distributed system must be able to self-organize and self-repair, adapting its operating strategies to optimize resource usage and overall efficiency. Peer-to-peer (P2P) systems can offer solutions for managing, indexing, searching, and analyzing data in scalable and self-organizing fashions, for example in cloud services and big data applications, two of the most strategic technologies of the coming years. In this thesis we present G-Grid, a multi-dimensional distributed data index able to efficiently execute arbitrary multi-attribute exact and range queries in decentralized P2P environments. G-Grid is a foundational structure and can be used effectively in a wide range of application environments, including grid computing, cloud, and big data domains. We also propose improvements on the basic structure that introduce a degree of randomness by using small-world networks, structures derived from social networks that exhibit an almost uniform traffic distribution. This yields large gains in efficiency and cuts maintenance costs without losing efficacy. Experiments show that the new hybrid structure achieves the best traffic distribution and offers a good trade-off for the overall performance requirements of modern data systems.
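    G-Grid's indexing scheme is not detailed in the abstract; as a small, self-contained illustration of the multi-attribute exact and range queries such an index answers, the hypothetical Java sketch below filters two-attribute records against a rectangular range, the operation that a decentralized structure would partition across peers rather than evaluate with a central scan (record fields and values are invented).

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a multi-attribute range query: records have two attributes
// and a query selects every record whose attributes fall inside a given rectangle.
// A distributed multi-dimensional index would answer the same query without a
// central scan, involving only the peers responsible for the relevant portion of
// the attribute space (a general property of such indices, not a detail from the abstract).
public class RangeQueryDemo {

    record Record(String id, double temperature, double humidity) {}

    static List<Record> rangeQuery(List<Record> data,
                                   double tMin, double tMax,
                                   double hMin, double hMax) {
        List<Record> hits = new ArrayList<>();
        for (Record r : data) {
            if (r.temperature() >= tMin && r.temperature() <= tMax
                    && r.humidity() >= hMin && r.humidity() <= hMax) {
                hits.add(r);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        List<Record> data = List.of(
                new Record("s1", 21.5, 40.0),
                new Record("s2", 30.2, 55.0),
                new Record("s3", 18.0, 70.0));

        // All sensors reporting 20-25 degrees and 30-60% humidity: only s1 matches.
        System.out.println(rangeQuery(data, 20.0, 25.0, 30.0, 60.0));
    }
}
```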