Update Consistency for Wait-free Concurrent Objects
In large-scale systems such as the Internet, replicating data is an essential feature for providing availability and fault tolerance. Attiya and Welch proved that using strong consistency criteria such as atomicity is costly, as each operation may need an execution time linear in the latency of the communication network. Weaker consistency criteria like causal consistency and PRAM consistency do not ensure convergence: the different replicas are not guaranteed to converge towards a unique state. Eventual consistency guarantees that all replicas eventually converge when the participants stop updating; however, it fails to fully specify the semantics of the operations on shared objects and requires additional non-intuitive and error-prone distributed specification techniques. This paper introduces and formalizes a new consistency criterion, called update consistency, which requires the state of a replicated object to be consistent with a linearization of all the updates. In other words, whereas atomicity imposes a linearization of all of the operations, this criterion imposes it only on updates; consequently, some read operations may return outdated values. Update consistency is stronger than eventual consistency, so update-consistent objects can replace eventually consistent ones in any program. Finally, we prove that update consistency is universal, in the sense that any object can be implemented under this criterion in a distributed system where any number of nodes may crash.
Comment: appears in the International Parallel and Distributed Processing Symposium, May 2015, Hyderabad, India
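A minimal sketch of the update-consistency idea described above, not the paper's construction: all replicas order updates by a Lamport timestamp with the replica id as tie-breaker, so every replica converges to the state produced by the same linearization of updates, while reads return the current local state and may therefore be stale. The class and method names are illustrative.

```python
class UpdateConsistentRegister:
    """Illustrative update-consistent replicated register (last writer wins
    under a total order on updates); assumes reliable broadcast of updates."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.clock = 0           # Lamport clock
        self.log = []            # all updates seen so far

    def write(self, value):
        self.clock += 1
        update = (self.clock, self.replica_id, value)
        self.log.append(update)
        return update            # to be broadcast to the other replicas

    def deliver(self, update):
        ts, _, _ = update
        self.clock = max(self.clock, ts) + 1   # Lamport clock merge
        if update not in self.log:
            self.log.append(update)

    def read(self):
        if not self.log:
            return None          # may be stale: weaker than atomicity
        # Lexicographic max on (timestamp, replica_id, value) picks the
        # last update in the agreed linearization of all updates.
        return max(self.log)[2]
```

Once two replicas have delivered each other's updates, `read()` returns the same value on both, even if their writes were concurrent: convergence follows from the shared total order, not from any synchronization during the writes.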
Maintaining consistency in distributed systems
In systems designed as assemblies of independently developed components, concurrent access to data or data structures normally arises within individual programs and is controlled using mutual exclusion constructs, such as semaphores and monitors. Where data is persistent and/or sets of operations are related to one another, transactions or linearizability may be more appropriate. Systems that incorporate cooperative styles of distributed execution often replicate or distribute data within groups of components. In these cases, group-oriented consistency properties must be maintained, and tools based on the virtual synchrony execution model greatly simplify the task confronting an application developer. All three styles of distributed computing are likely to be seen in future systems, often within the same application. This leads us to propose an integrated approach that permits applications using virtual synchrony to interoperate with concurrent objects that respect a linearizability constraint, and vice versa. Transactional subsystems are treated as a special case of linearizability.
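A minimal sketch of the first style mentioned above: a concurrent object whose operations are made linearizable with a monitor-like mutual exclusion construct (here Python's `threading.Lock`; the class name is illustrative). Each operation appears to take effect atomically at a single point between its invocation and its response.

```python
import threading

class LinearizableCounter:
    """Counter whose operations are linearizable via mutual exclusion."""

    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:          # critical section: at most one thread inside
            self._value += 1
            return self._value    # linearization point is inside the lock

    def read(self):
        with self._lock:
            return self._value
```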
Complex Actions for Event Processing
Automatic reactions triggered by complex events have been deployed with great success in particular domains, among others in algorithmic trading, the automatic reaction to real-time analysis of market data. However, to date, reactions in complex event processing systems are often still limited to mere modifications of internal databases or are realized by means similar to remote procedure calls.
In this paper, we argue that expressive complex actions with support for composite workflows and integration of so-called external actions are desirable for a wide range of real-world applications, among others emergency management. This article investigates the particularities of external actions needed in emergency management, which are initiated inside the event processing system but actually executed by external actuators, and discusses the implications of these particularities on composite actions. Based on these observations, we propose versatile complex actions with temporal dependencies and a seamless integration of complex events and external actions. This article also investigates how the proposed integrated approach towards complex events and complex actions can be evaluated based on simple reactive rules. Finally, it is shown how complex actions can be deployed for a complex event processing system devoted to emergency management.
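A minimal sketch of a simple reactive rule combining an internal action with an external one, assuming a hypothetical engine API rather than the paper's system. The composite action runs its steps in order, expressing a temporal dependency: the external actuator (simulated here by a print) is triggered only after the internal database update has taken effect.

```python
class Rule:
    def __init__(self, event_type, condition, actions):
        self.event_type = event_type
        self.condition = condition
        self.actions = actions            # executed in order: temporal dependency

class Engine:
    """Hypothetical minimal event-condition-action engine."""

    def __init__(self):
        self.rules = []
        self.db = {"incidents": []}       # internal database

    def add_rule(self, rule):
        self.rules.append(rule)

    def publish(self, event):
        for rule in self.rules:
            if rule.event_type == event["type"] and rule.condition(event):
                for action in rule.actions:
                    action(self.db, event)   # each step sees prior effects

# Internal action: mere modification of the internal database.
def record_incident(db, event):
    db["incidents"].append(event["location"])

# External action: initiated inside the engine, executed by an actuator
# (simulated with a print here).
def dispatch_siren(db, event):
    print(f"siren activated near {event['location']}")

engine = Engine()
engine.add_rule(Rule("flood", lambda e: e["level"] > 3,
                     [record_incident, dispatch_siren]))
engine.publish({"type": "flood", "level": 5, "location": "sector 7"})
```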
Analytical/ML Mixed Approach for Concurrency Regulation in Software Transactional Memory
In this article we exploit a combination of analytical and Machine Learning (ML) techniques in order to build a performance model that allows us to dynamically tune the level of concurrency of applications based on Software Transactional Memory (STM). Our mixed approach has the advantage of reducing the training time of pure machine learning methods and of avoiding the approximation errors typically affecting pure analytical approaches. Hence it allows very fast construction of highly reliable performance models, which can be promptly and effectively exploited for optimizing actual application runs. We also present a real implementation of a concurrency regulation architecture, based on the mixed modeling approach, which has been integrated with the open source TinySTM package, together with experimental data, related to runs of applications taken from the STAMP benchmark suite, demonstrating the effectiveness of our proposal. © 2014 IEEE
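A minimal sketch of the mixed analytical/ML idea, greatly simplified relative to the paper's model: a closed-form analytical curve serves as a prior for throughput versus thread count, a few measured samples train a correction factor (the "ML" step reduced to its simplest form), and the tuner picks the concurrency level that maximizes the corrected prediction. All function names and constants are illustrative.

```python
def analytical_throughput(k, work_rate=100.0, contention=0.05):
    # Analytical prior: useful work grows with k threads, but wasted
    # (aborted) transactions grow with pairwise conflicts ~ k*(k-1).
    return k * work_rate / (1.0 + contention * k * (k - 1))

def fit_correction(samples):
    # Learn an average multiplicative residual between measurements and
    # the analytical prior; few samples suffice, keeping training cheap.
    ratios = [measured / analytical_throughput(k) for k, measured in samples]
    return sum(ratios) / len(ratios)

def best_concurrency(samples, max_threads=32):
    scale = fit_correction(samples)
    predictions = {k: scale * analytical_throughput(k)
                   for k in range(1, max_threads + 1)}
    return max(predictions, key=predictions.get)

# (thread count, measured throughput) pairs from short profiling runs.
samples = [(1, 90.0), (4, 310.0), (16, 520.0)]
print(best_concurrency(samples))
```

The division of labor mirrors the abstract's argument: the analytical part generalizes to unobserved thread counts, while the learned correction absorbs the modeling error at the observed ones.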
The AI Bus architecture for distributed knowledge-based systems
The AI Bus architecture is a layered, distributed, object-oriented framework developed to support the requirements of advanced technology programs for an order of magnitude improvement in software costs. The consequent need for highly autonomous computer systems, adaptable to new technology advances over a long lifespan, led to the design of an open architecture and toolbox for building large-scale, robust, production-quality systems. The AI Bus accommodates a mix of knowledge-based and conventional components, running in heterogeneous, distributed real-world and testbed environments. The concepts and design of the AI Bus architecture are described, along with its current implementation status as a Unix C++ library of reusable objects. Each high-level semiautonomous agent process consists of a number of knowledge sources together with inter-agent communication mechanisms based on shared blackboards and message-passing acquaintances. Standard interfaces and protocols are followed for combining and validating subsystems. Dynamic probes, or demons, provide an event-driven means of giving active objects shared access to resources, and to each other, without violating their security.
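A minimal sketch of the blackboard pattern underlying the agent design described above; the class names are illustrative, not the AI Bus API. Knowledge sources watch a shared blackboard and contribute partial results when their preconditions hold, and a simple control loop runs until no source has anything left to add.

```python
class Blackboard:
    """Shared data store that knowledge sources read from and write to."""
    def __init__(self):
        self.data = {}

class KnowledgeSource:
    def can_contribute(self, bb): ...
    def contribute(self, bb): ...

class Tokenizer(KnowledgeSource):
    def can_contribute(self, bb):
        return "text" in bb.data and "tokens" not in bb.data
    def contribute(self, bb):
        bb.data["tokens"] = bb.data["text"].split()

class Counter(KnowledgeSource):
    def can_contribute(self, bb):
        return "tokens" in bb.data and "count" not in bb.data
    def contribute(self, bb):
        bb.data["count"] = len(bb.data["tokens"])

def run(blackboard, sources):
    # Simplified stand-in for an event-driven control loop: keep cycling
    # while some knowledge source can still make progress.
    progress = True
    while progress:
        progress = False
        for ks in sources:
            if ks.can_contribute(blackboard):
                ks.contribute(blackboard)
                progress = True

bb = Blackboard()
bb.data["text"] = "distributed knowledge based systems"
run(bb, [Tokenizer(), Counter()])
print(bb.data["count"])   # -> 4
```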