
    The Eureka Programming Model for Speculative Task Parallelism

    In this paper, we describe the Eureka Programming Model (EuPM), which simplifies the expression of speculative parallel tasks and is especially well suited for parallel search and optimization applications. The focus of this work is to provide a clean semantics for, and to efficiently support, such "eureka-style" computations (EuSCs) in general structured task-parallel programming models. In EuSCs, a eureka event is a point in a program that announces that a result has been found. A eureka triggered by a speculative task can cause a group of related speculative tasks to become redundant, and enables them to be terminated at well-defined program points. Our approach provides a bound on the additional work done in redundant speculative tasks after such a eureka event occurs. We identify various patterns that are supported by our eureka construct, including search, optimization, convergence, and soft real-time deadlines. These different patterns of computation can also be safely combined or nested in the EuPM, along with regular task-parallel constructs, thereby enabling high degrees of composability and reusability. As demonstrated by our implementation, the EuPM can also be implemented efficiently. We use a cooperative runtime that uses delimited continuations to manage the termination of redundant tasks and their synchronization at join points. In contrast to current approaches, the EuPM obviates the need for cumbersome manual refactoring by the programmer, which may (for example) require the insertion of if checks and early return statements in every method in the call chain. Experimental results show that solutions using the EuPM simplify programmability, achieve performance comparable to hand-coded speculative task-based solutions, and outperform non-speculative task-based solutions.
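    To make the contrast with manual refactoring concrete, below is a minimal plain-Java sketch of a eureka-style parallel search, written under stated assumptions: the Eureka class and its offer/isResolved methods are illustrative stand-ins, not the EuPM's actual API, and the explicit polling checks in the worker are exactly the boilerplate that the EuPM's delimited-continuation runtime would make unnecessary.

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;
        import java.util.concurrent.atomic.AtomicReference;

        // Illustrative stand-in for a eureka: the first speculative task to
        // announce a result wins, and the others observe it and terminate.
        final class Eureka<T> {
            private final AtomicReference<T> result = new AtomicReference<>();

            boolean offer(T value) { return result.compareAndSet(null, value); } // first offer wins
            boolean isResolved()   { return result.get() != null; }
            T get()                { return result.get(); }
        }

        public class SpeculativeSearch {
            // Search one chunk for `target`; redundant work after a eureka is
            // bounded by how often the task polls isResolved().
            static void searchChunk(int[] data, int lo, int hi, int target, Eureka<Integer> eureka) {
                for (int i = lo; i < hi; i++) {
                    if (eureka.isResolved()) return; // termination point (hand-written here)
                    if (data[i] == target) { eureka.offer(i); return; }
                }
            }

            public static void main(String[] args) throws InterruptedException {
                int[] data = new int[1_000_000];
                data[777_777] = 42;
                Eureka<Integer> eureka = new Eureka<>();
                ExecutorService pool = Executors.newFixedThreadPool(4);
                int chunk = data.length / 4;
                for (int t = 0; t < 4; t++) {
                    int lo = t * chunk, hi = (t == 3) ? data.length : lo + chunk;
                    pool.execute(() -> searchChunk(data, lo, hi, 42, eureka));
                }
                pool.shutdown();                      // join point for the speculative tasks
                pool.awaitTermination(1, TimeUnit.MINUTES);
                System.out.println("found at index " + eureka.get());
            }
        }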

    Workshop on Modelling of Objects, Components, and Agents, Aarhus, Denmark, August 27-28, 2001

    This booklet contains the proceedings of the workshop on Modelling of Objects, Components, and Agents (MOCA'01), held August 27-28, 2001. The workshop is organised by the CPN group at the Department of Computer Science, University of Aarhus, Denmark, and the "Theoretical Foundations of Computer Science" group at the University of Hamburg, Germany. The papers are also available in electronic form via the web pages: http://www.daimi.au.dk/CPnets/workshop01

    Proceedings of the 2nd International Workshop on Security in Mobile Multiagent Systems

    This report contains the proceedings of the Second Workshop on Security of Mobile Multiagent Systems (SEMAS2002). The workshop was held in Montreal, Canada, as a satellite event to the 5th International Conference on Autonomous Agents in 2001. The far-reaching influence of the Internet has resulted in an increased interest in agent technologies, which are poised to play a key role in the implementation of successful Internet and WWW-based applications in the future. While there is still considerable hype concerning agent technologies, there is also an increasing awareness of the problems involved; in particular, these applications will not be successful unless security issues can be adequately handled. Although there is a large body of work on cryptographic techniques that provide basic building blocks to solve specific security problems, relatively little work has been done on investigating security in the multiagent system context. Related problems are secure communication between agents, the implementation of trust models and authentication procedures, and even reflection by agents on security mechanisms. The introduction of mobile software agents significantly increases the risks involved in Internet and WWW-based applications. For example, if we allow agents to enter our hosts or private networks, we must offer the agents a platform so that they can execute correctly, but at the same time ensure that they will not have deleterious effects on our hosts or on any other agents or processes in our network. If we send out mobile agents, we should also be able to provide guarantees about specific aspects of their behaviour; i.e., we are not only interested in whether the agents carry out their intended tasks correctly, but also in whether they can defend themselves against attacks initiated by other agents and survive in potentially malicious environments. Agent technologies can also be used to support network security. For example, in the context of intrusion detection, intelligent guardian agents may be used to analyse the behaviour of agents on a firewall, or intelligent monitoring agents can be used to analyse the behaviour of agents migrating through a network. Part of the inspiration for such multi-agent systems comes from primitive animal behaviour, such as that of guardian ants protecting their hill, or from biological immune systems.

    Coordinating Resource Use in Open Distributed Systems

    In an open distributed system, computational resources are peer-owned and distributed over time and space. The system is open to interactions with its environment, and the resources can dynamically join or leave the system, or can be discovered at runtime. This dynamicity creates opportunities to carry out computations without statically owned resources, harnessing the collective compute power of the resources connected by the Internet. However, realizing this potential requires efficient and scalable resource discovery, coordination, and control, which present challenges in a dynamic, open environment. In this thesis, I present an approach that addresses these challenges by separating the functionality concerns of concurrent computations from those of coordinating their resource use, with the purpose of reducing programming complexity and aiding the development of correct, efficient, and resource-aware concurrent programs. As a first step towards effectively coordinating distributed resources, I developed DREAM, a Distributed Resource Estimation and Allocation Model, which enables computations to reason about the future availability of resources. I then developed a fine-grained resource coordination scheme for distributed computations. The coordination scheme integrates DREAM-based resource reasoning into a distributed scheduler that decides and enforces fine-grained resource-use schedules for distributed computations. To control the overhead caused by the coordination, a tuner is implemented that explicitly balances the overhead of the control mechanisms against the extent of control exercised. The effectiveness and performance of the resource coordination approach have been evaluated using a number of case studies. Experimental results show that the approach can effectively schedule computations in support of various coordination objectives, such as ensuring Quality-of-Service, power-efficient execution, and dynamic load balancing. The overhead caused by the coordination mechanism is relatively modest, and adjustable through the tuner. In addition, the coordination mechanism does not add extra programming complexity to computations.
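    A minimal sketch of the coordination idea under assumed names: ResourceEstimator stands in for DREAM-style reasoning about future resource availability, and Scheduler for the fine-grained reservation step that a distributed scheduler would enforce. Neither interface is the thesis's actual API; this only illustrates placing a task where projected, rather than current, availability covers its demand.

        import java.time.Duration;
        import java.time.Instant;

        // Hypothetical DREAM-style estimation of future availability.
        interface ResourceEstimator {
            // Expected free CPU (in cores) on a node over the given window.
            double estimateFreeCpu(String nodeId, Instant from, Duration window);
        }

        // Hypothetical fine-grained reservation step of the distributed scheduler.
        interface Scheduler {
            // Reserve part of a node's capacity for a task; returns false if
            // the projected availability cannot honour the schedule.
            boolean reserve(String nodeId, String taskId, double cores, Instant from, Duration window);
        }

        final class CoordinatedPlacement {
            private final ResourceEstimator estimator;
            private final Scheduler scheduler;

            CoordinatedPlacement(ResourceEstimator estimator, Scheduler scheduler) {
                this.estimator = estimator;
                this.scheduler = scheduler;
            }

            // Place a task on the first node whose *future* availability,
            // not just its current load, covers the task's demand.
            String place(String taskId, double demandCores, Iterable<String> candidates) {
                Instant now = Instant.now();
                Duration window = Duration.ofSeconds(30); // assumed scheduling horizon
                for (String node : candidates) {
                    if (estimator.estimateFreeCpu(node, now, window) >= demandCores
                            && scheduler.reserve(node, taskId, demandCores, now, window)) {
                        return node;
                    }
                }
                return null; // no node can honour the schedule; caller may retry or relax QoS
            }
        }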

    An Autonomic Cross-Platform Operating Environment for On-Demand Internet Computing

    The Internet has evolved into a global and ubiquitous communication medium interconnecting powerful application servers, diverse desktop computers, and mobile notebooks. Along with recent developments in computer technology, such as the convergence of computing and communication devices, the way people use computers and the Internet has changed people's working habits and has led to new application scenarios. On the one hand, pervasive computing, ubiquitous computing, and nomadic computing become more and more important, since different computing devices like PDAs and notebooks may be used concurrently and alternately, e.g. while the user is on the move. On the other hand, the ubiquitous availability and pervasive interconnection of computing systems have fostered various trends towards the dynamic utilization and spontaneous collaboration of available remote computing resources, which are addressed by approaches like utility computing, grid computing, cloud computing, and public computing. From a general point of view, the common objective of this development is the use of Internet applications on demand, i.e. applications that are not installed in advance by a platform administrator but are dynamically deployed and run as they are requested by the application user. The heterogeneous and unmanaged nature of the Internet represents a major challenge for the on-demand use of custom Internet applications across heterogeneous hardware platforms, operating systems, and network environments. Promising remedies are autonomic computing systems that are supposed to maintain themselves without particular user or application intervention. In this thesis, an Autonomic Cross-Platform Operating Environment (ACOE) is presented that supports On Demand Internet Computing (ODIC), such as dynamic application composition and ad hoc execution migration. The approach is based on an integration middleware called crossware, which does not replace existing middleware but operates as a self-managing mediator between diverse application requirements and heterogeneous platform configurations. A Java implementation of the Crossware Development Kit (XDK) is presented, followed by a description of the On Demand Internet Computing System (ODIX). The feasibility of the approach is shown by the implementation of an Internet Application Workbench, an Internet Application Factory, and an Internet Peer Federation, which illustrate the use of ODIX to support local, remote, and distributed ODIC, respectively. Finally, the suitability of the approach is discussed with respect to the support of ODIC.
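    The mediator role of crossware can be pictured with a short, hedged sketch: an application's requirements are matched against a platform's configuration before on-demand deployment. All names here (AppRequirements, PlatformProfile, CrosswareMediator) are assumptions made for illustration and are not part of the XDK.

        import java.util.HashSet;
        import java.util.Map;
        import java.util.Set;

        // Assumed description of what an on-demand application needs.
        record AppRequirements(int minJavaFeatureVersion, Set<String> neededServices) {}

        // Assumed description of what a heterogeneous platform offers.
        record PlatformProfile(int javaFeatureVersion, Set<String> availableServices) {}

        final class CrosswareMediator {
            // Decide whether the platform can host the application as-is and,
            // if not, which services would have to be provisioned on demand.
            Map<String, Object> mediate(AppRequirements app, PlatformProfile platform) {
                boolean versionOk = platform.javaFeatureVersion() >= app.minJavaFeatureVersion();
                Set<String> missing = new HashSet<>(app.neededServices());
                missing.removeAll(platform.availableServices());
                return Map.of("deployable", versionOk && missing.isEmpty(),
                              "missingServices", missing);
            }
        }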

    Autonomic Management of Safe and Resilient Dynamic Applications

    Service-oriented architectures (SOA) are considered the most advanced way to develop and integrate modular and flexible applications. There are many SOA platforms available to software developers and architects, the most evolved of them being SCA and OSGi. An application based on one of these platforms can be assembled with only the components required for the execution of its tasks, which helps decrease its resource consumption and increase its maintainability. Furthermore, those platforms allow adding plug-ins at runtime, even if they were not known during the early stages of the development of the application. Thus, they allow updating, extending, and adapting the features of the base product, or of the technical services required for its execution, continuously and without outage. Those capabilities are applied in the DevOps paradigm and, more generally, to implement the continuous deployment of artifacts. However, the extensibility provided by those platforms can decrease the overall reliability of the system: a strong tendency in software development is the assembly of third-party components, and such components may be of unknown or even questionable quality. In case of errors, deterioration of performance, and the like, it is difficult to identify the implicated components or combinations of components. It therefore becomes essential for the software producer to determine the responsibility of the various components involved in a malfunction. This thesis aims to provide a platform, Cohorte, to design and implement extensible software products that are resilient to the malfunctions of unqualified extensions. The components of such products may be developed in various programming languages and be deployed continuously (addition, update, and withdrawal) without interruption of service. Our proposal adopts the principle of isolating the components considered unstable or insecure. The choice of the components to be isolated may be decided by the development team and the operational team, based on their expertise, or determined from a combination of indicators. The latter evolve over time to reflect the reliability of the components: for example, components can be considered reliable after a quarantine period, while an update may result in a deterioration of their stability. Therefore, it is essential to revisit the initial choices in isolating components in order, in the first case, to limit the cost of communications between components and, in the second case, to maintain the reliability of the critical core of the product.
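    The quarantine and indicator mechanism described above can be made concrete with a hypothetical sketch: reliability scores evolve with observed behaviour, a quarantine period can promote a component to trusted, and an update resets its clock. The class name, scoring rules, and thresholds below are assumptions, not Cohorte's actual mechanism.

        import java.time.Duration;
        import java.time.Instant;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // Hypothetical indicator-driven isolation policy in the spirit of Cohorte.
        final class IsolationPolicy {
            private record Status(double reliability, Instant since) {}

            private final Map<String, Status> components = new ConcurrentHashMap<>();
            private final double trustThreshold = 0.9;               // assumed threshold
            private final Duration quarantine = Duration.ofDays(7);  // assumed quarantine period

            void register(String id) {
                components.put(id, new Status(0.5, Instant.now()));  // neutral starting score
            }

            // A malfunction halves the score; repeated failures keep the component isolated.
            void reportFailure(String id) {
                components.computeIfPresent(id, (k, s) -> new Status(s.reliability() * 0.5, s.since()));
            }

            // Success slowly rebuilds trust.
            void reportSuccess(String id) {
                components.computeIfPresent(id, (k, s) ->
                    new Status(Math.min(1.0, s.reliability() + 0.01), s.since()));
            }

            // An update may degrade stability, so it restarts the quarantine clock.
            void reportUpdate(String id) {
                components.computeIfPresent(id, (k, s) -> new Status(0.5, Instant.now()));
            }

            // Keep a component out of the critical core unless it is both
            // reliable and past its quarantine period.
            boolean shouldIsolate(String id) {
                Status s = components.get(id);
                if (s == null) return true; // unknown components stay isolated
                boolean pastQuarantine =
                    Duration.between(s.since(), Instant.now()).compareTo(quarantine) >= 0;
                return !(s.reliability() >= trustThreshold && pastQuarantine);
            }
        }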