The Rise of an Emerging Diplomatic Actor? Assessing the Role of the EU Delegation to the African Union. EU Diplomacy Paper 13 / 2017
Every three years the European Union (EU) and the African Union (AU) hold a Summit of Heads of State and Government to take stock of the progress made in the implementation of the Africa-EU Partnership. The 5th African Union-EU Summit will take place on 29-30 November 2017 in Abidjan. On this occasion, this paper aims to analyse the interplay between the EU Delegation (EUDEL) and the permanent missions of the EU member states to the African Union in Addis Ababa. To what extent has the EUDEL emerged as a post-Westphalian diplomatic actor that centralizes, complements or competes with the diplomatic activities of member states' permanent missions? I argue that the EUDEL and the member states have created an 'umbrella regional diplomacy', in which member states embed their bilateral diplomatic relations in the overall European approach towards the AU. However, since it is up to the AU to grant access to its meetings, the interplay between the EUDEL and the member states' permanent missions is strongly shaped by the AU's preferences regarding its diplomatic counterpart(s).
Dependability Metrics : Research Workshop Proceedings
Justified reliance on computer systems is based on some form of evidence about such systems. This in turn implies the existence of scientific techniques to derive such evidence from given systems, or to predict such evidence for systems. In a general sense, these techniques imply a form of measurement. The workshop "Dependability Metrics", which was held on November 10, 2008, at the University of Mannheim, dealt with all aspects of measuring dependability.
04511 Abstracts Collection -- Architecting Systems with Trustworthy Components
From 12.12.04 to 17.12.04, the Dagstuhl Seminar 04511 "Architecting Systems with Trustworthy Components" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available
Optimisation of patch distribution strategies for AMR applications
As core counts increase in the world's most powerful supercomputers, applications are becoming limited not only by computational power, but also by data availability. In the race to exascale, efficient and effective communication policies are key to achieving optimal application performance. Applications using adaptive mesh refinement (AMR) trade off communication for computational load balancing, to enable the focused computation of specific areas of interest. This class of application is particularly susceptible to the communication performance of the underlying architectures, and are inherently difficult to scale efficiently. In this paper we present a study of the effect of patch distribution strategies on the scalability of an AMR code. We demonstrate the significance of patch placement on communication overheads, and by balancing the computation and communication costs of patches, we develop a scheme to optimise performance of a specific, industry-strength, benchmark application
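The abstract above describes trading communication for computational load balance when placing AMR patches on ranks. The following is a minimal, hypothetical sketch of such a cost-based placement heuristic; the greedy strategy, the patch format, and the `comm_weight` penalty are all illustrative assumptions, not the benchmark's actual scheme.

```python
# Hypothetical greedy patch-distribution sketch: assign each AMR patch to
# the rank with the lowest combined cost, where cost is the rank's current
# computational load (cell count) plus a penalty for neighbouring patches
# that live on a different rank (communication). Weights are invented.

def distribute_patches(patches, n_ranks, comm_weight=0.2):
    """patches: list of (patch_id, n_cells, neighbour_ids)."""
    load = [0.0] * n_ranks
    placement = {}
    # Place large patches first so computational load stays balanced.
    for pid, cells, neighbours in sorted(patches, key=lambda p: -p[1]):
        best_rank, best_cost = None, None
        for r in range(n_ranks):
            remote = sum(1 for n in neighbours
                         if n in placement and placement[n] != r)
            cost = load[r] + cells + comm_weight * cells * remote
            if best_cost is None or cost < best_cost:
                best_rank, best_cost = r, cost
        placement[pid] = best_rank
        load[best_rank] += cells
    return placement

patches = [(0, 800, [1]), (1, 400, [0, 2]), (2, 400, [1]), (3, 800, [])]
print(distribute_patches(patches, 2))
```

In this toy run the heuristic co-locates neighbouring patches when load permits, illustrating the computation/communication trade-off the paper studies at scale.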
Counter-constrained finite state machines: modelling component protocols with resource-dependencies
This report deals with the specification of software component
protocols (i.e., the set of service call sequences). The
contribution of this report is twofold: (a) We discuss specific
requirements of real-world protocols, especially in the presence
of components which make use of limited resources. (b) We define
counter-constrained finite state machines (CC-FSMs), a novel
extension of finite state machines, specifically created to
model protocols having dependencies between services due to
their access to shared resources. We provide a theoretical
framework for reasoning about and analysing CC-FSMs. As opposed to finite
state machines and other approaches, CC-FSMs combine two
valuable properties: (a) CC-FSMs are powerful enough to model
realistic component protocols with resource allocation, usage,
and de-allocation dependencies between methods (as occurring in
common abstract datatypes such as stacks or queues) and (b)
CC-FSMs have decidable equivalence and inclusion problems, as
proved in this report by providing algorithms for efficiently
checking equivalence and inclusion. These algorithms directly
lead to efficient checks for component interoperability and
substitutability.
Keywords: software component protocols, finite state machine
extension, decidable inclusion check, interoperability,
substitutability
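The counter-constrained idea described above can be sketched very compactly: a state machine whose transitions additionally guard on a counter tracking resource allocation. The sketch below is my own illustration of that idea for a stack-like protocol, not the report's formalism; class and service names are invented.

```python
# Illustrative sketch of a counter-constrained state machine for a
# stack-like component protocol: `pop` is only legal while the element
# counter is positive -- a dependency a plain finite state machine
# cannot express, since the counter is unbounded.

class CCFSM:
    def __init__(self):
        self.state = "open"
        self.count = 0   # counter tracking allocated resources (elements)

    def call(self, service):
        if self.state != "open":
            raise RuntimeError("component closed")
        if service == "push":
            self.count += 1
        elif service == "pop":
            if self.count == 0:   # counter constraint on the transition
                raise RuntimeError("pop on empty stack violates protocol")
            self.count -= 1
        elif service == "close":
            self.state = "closed"
        else:
            raise RuntimeError("unknown service")

m = CCFSM()
for s in ["push", "push", "pop"]:
    m.call(s)
print(m.count)  # 1
```

A protocol checker built this way rejects the call sequence `pop` on an empty component, which is exactly the resource dependency (allocation before use) the abstract attributes to datatypes such as stacks and queues.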
Simulation of MPI applications with time-independent traces
Analyzing and understanding the performance behavior of parallel applications on parallel computing platforms is a long-standing concern in the High Performance Computing community. When the targeted platforms are not available, simulation is a reasonable approach to obtain objective performance indicators and explore various hypothetical scenarios. In the context of applications implemented with the Message Passing Interface, two simulation methods have been proposed, on-line simulation and off-line simulation, both with their own drawbacks and advantages. In this work we present an off-line simulation framework, i.e., one that simulates the execution of an application based on event traces obtained from an actual execution. The main novelty of this work, when compared to previously proposed off-line simulators, is that the traces that drive the simulation can be acquired on large, distributed, heterogeneous, and non-dedicated platforms. As a result the scalability of trace acquisition is increased, which is achieved by enforcing that traces contain no time-related information. Moreover, our framework is based on a state-of-the-art scalable, fast, and validated simulation kernel. We introduce the notion of performing off-line simulation from time-independent traces, propose and evaluate several trace acquisition strategies, describe our simulation framework, and assess its quality in terms of trace acquisition scalability, simulation accuracy, and simulation time
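The core idea above, traces that contain no time-related information, can be sketched in a few lines: record platform-neutral volumes (work computed, bytes sent) instead of timestamps, and let the simulator convert them to durations using the target platform's rates. The trace format and platform parameters below are invented for illustration and are not the framework's actual format.

```python
# Hedged sketch of a time-independent trace: events carry volumes,
# not wall-clock times, so the same trace can be replayed against
# different hypothetical platforms.

trace = [
    ("compute", 2e9),   # 2 GFlop of work; no timestamp stored
    ("send", 8e6),      # 8 MB message
    ("compute", 1e9),
]

def simulate(trace, flops_per_s, bytes_per_s, latency_s):
    """Convert volumes to durations using the target platform's rates."""
    t = 0.0
    for event, volume in trace:
        if event == "compute":
            t += volume / flops_per_s
        elif event == "send":
            t += latency_s + volume / bytes_per_s
    return t

# Same trace, two hypothetical platforms differing only in network speed:
print(simulate(trace, 1e10, 1e9, 1e-5))    # slower network
print(simulate(trace, 1e10, 1e10, 1e-6))   # faster network
```

Because acquisition records only counts and volumes, it can run on non-dedicated, heterogeneous machines without their clock speeds or contention polluting the trace, which is the scalability argument the abstract makes.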
SKaMPI: the special Karlsruher MPI-benchmark. User manual
SKaMPI is the Special Karlsruher MPI-Benchmark. SKaMPI measures
the performance of MPI implementations and, of course, of the
underlying hardware. It performs various measurements of several
MPI functions. SKaMPI's primary goal is to support software
developers. Knowledge of MPI functions' performance has several
benefits: the software developer knows the right way to implement
a program for a given machine, without (or with a shortened form
of) the tedious, time-costly tuning that usually has to take
place. The developer does not have to wait until the code is
written; performance issues can already be considered during the
design stage. Development for performance can even take place
when the considered target machine is not accessible.
MPI performance knowledge is especially important when developing
portable parallel programs, as the code can then be developed for
all considered target platforms in an optimal manner. This
achieves performance portability, which means that code runs
without time-consuming tuning after recompilation on a new
platform
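One mechanism named in these two abstracts, measurement with a controlled standard error, can be sketched as follows. This is a generic illustration of the statistical idea, not SKaMPI's implementation; the function name, thresholds, and the timed stand-in operation are all assumptions.

```python
# Sketch of "controlled standard error": repeat a timed operation until
# the standard error of the mean falls below a relative target, bounded
# by a maximum number of runs. The operation here is a CPU stand-in for
# an MPI call.
import time
from statistics import mean, stdev

def measure(op, max_rel_stderr=0.05, min_runs=5, max_runs=100):
    samples = []
    while len(samples) < max_runs:
        t0 = time.perf_counter()
        op()
        samples.append(time.perf_counter() - t0)
        if len(samples) >= min_runs:
            se = stdev(samples) / len(samples) ** 0.5
            if se <= max_rel_stderr * mean(samples):
                break   # result is precise enough; stop measuring
    return mean(samples), len(samples)

avg, runs = measure(lambda: sum(range(10000)))
print(avg, runs)
```

Stopping as soon as the estimate is statistically tight keeps total benchmark time low on quiet machines while automatically taking more samples on noisy ones.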
SKaLib: SKaMPI as a library
SKaLib is a library to support the development of benchmarks.
It originates from the SKaMPI project. SKaMPI is a benchmark to
measure the performance of MPI operations. Many mechanisms and
functions of the SKaMPI benchmark program are also useful when
benchmarking functions other than MPI's. The goal of SKaLib is to
offer the benchmarking mechanisms of SKaMPI to a broader range
of applications. These mechanisms are: time measurement with
adjustable precision, controlled standard error, automatic
parameter refinement, and merging of the results of several
benchmarking runs.
This document fulfills two purposes: on the one hand, it is a
manual for using the SKaLib library and explains how to
benchmark an operation. On the other hand, this report
complements the SKaMPI user manual. The latter report explains
the configuration and the output of SKaMPI, whereas this
report gives a detailed description of the internal data
structures and operations used in the SKaMPI benchmark.
There is also a scientific section which motivates and describes
the algorithms and underlying formulas used by SKaMPI
Multilevel Contracts for Trusted Components
This article contributes to the design and verification of trusted
components and services. The contracts are expressed at several levels to
cover different facets, such as component consistency, compatibility or
correctness. The article introduces multilevel contracts and a
design-and-verification process for handling and analysing these contracts in
component models. The approach is implemented with the COSTO platform, which
supports the Kmelia component model. A case study illustrates the overall
approach.
Comment: In Proceedings WCSI 2010, arXiv:1010.233
