Distributed interoperable workflow support for electronic commerce.
Abstract. This paper describes a flexible distributed transactional workflow environment based on an extensible object-oriented framework built around class libraries, application programming interfaces, and shared services. The environment supports a range of electronic-commerce business activities, including financial transactions and electronic contracts, and aims to provide key infrastructure services for mediating and monitoring electronic commerce.
HaTS: Hardware-Assisted Transaction Scheduler
In this paper we present HaTS, a Hardware-assisted Transaction Scheduler. HaTS improves the performance of concurrent applications by classifying the executions of their atomic blocks (or in-memory transactions) into scheduling queues, according to their so-called conflict indicators. The goal is to group conflicting transactions together while letting non-conflicting transactions proceed in parallel. Two core innovations characterize HaTS. First, HaTS does not assume the availability of precise information associated with incoming transactions in order to proceed with the classification. It relaxes this assumption by exploiting the inherent conflict resolution provided by Hardware Transactional Memory (HTM). Second, HaTS dynamically adjusts the number of scheduling queues in order to capture the actual application contention level. Performance results using the STAMP benchmark suite show up to 2x improvement over state-of-the-art HTM-based scheduling techniques.
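The queue-based grouping idea can be illustrated with a toy sketch. All names here are hypothetical: the real HaTS derives conflict indicators from HTM abort feedback and resizes the queue set dynamically, whereas this sketch uses a fixed number of queues and a plain hash.

```python
from collections import defaultdict

class ConflictQueueScheduler:
    """Toy conflict-indicator scheduler (illustrative only, not HaTS itself).

    Transactions whose conflict indicators hash to the same queue are
    serialized; distinct queues may run in parallel.
    """

    def __init__(self, num_queues=4):
        self.num_queues = num_queues
        self.queues = defaultdict(list)

    def schedule(self, tx_id, conflict_indicator):
        # Hash the indicator (e.g., a hot object identifier) to a queue
        # so that likely-conflicting transactions are grouped together.
        q = hash(conflict_indicator) % self.num_queues
        self.queues[q].append(tx_id)
        return q

sched = ConflictQueueScheduler(num_queues=2)
# Two transactions touching the same hot object land in the same queue
# and therefore run serially; other queues proceed in parallel.
qa = sched.schedule("tx1", "account:42")
qb = sched.schedule("tx2", "account:42")
```

The dynamic-resizing step described in the abstract would adjust `num_queues` based on observed contention, which this sketch omits.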
An Analysis of Service Ontologies
Services are increasingly shaping the world’s economic activity. Service provision and consumption have been profiting from advances in ICT, but the decentralization and heterogeneity of the involved service entities still pose engineering challenges. One of these challenges is to achieve semantic interoperability among these autonomous entities. Semantic web technology aims to address this challenge on a large scale and has matured over recent years. This is evident from the various efforts reported in the literature in which service knowledge is represented in terms of ontologies developed either in individual research projects or in standardization bodies. This paper analyzes the most relevant service ontologies available today for their suitability to cope with the service semantic interoperability challenge. We take the vision of the Internet of Services (IoS) as our motivation to identify the requirements for service ontologies. We adopt a formal approach to ontology design and evaluation in our analysis. We start by defining informal competency questions derived from a motivating scenario, and we identify relevant concepts and properties in service ontologies that match the formal ontological representation of these questions. We analyze the service ontologies with our concepts and questions, so that each ontology is positioned and evaluated according to its utility. The gaps we identify as the result of our analysis provide an indication of open challenges and future work.
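The competency-question method can be caricatured in a few lines. Everything below is an invented mini-example, not the paper's actual ontologies or questions: each ontology is reduced to the set of concepts it defines, each question to the concepts it needs, and coverage measures how well the ontology can answer the question.

```python
# Hypothetical ontologies, reduced to their concept sets.
ontologies = {
    "OntologyA": {"Service", "Provider", "Consumer", "Contract"},
    "OntologyB": {"Service", "Process"},
}

# Hypothetical competency questions, each mapped to the concepts
# an ontology must define in order to answer it.
questions = {
    "Who provides a given service?": {"Service", "Provider"},
    "What contract governs consumption?": {"Service", "Contract", "Consumer"},
}

def coverage(ontology_concepts, question_concepts):
    # Fraction of the question's required concepts the ontology defines.
    return len(ontology_concepts & question_concepts) / len(question_concepts)

# Average coverage over all questions positions each ontology.
scores = {
    name: sum(coverage(c, q) for q in questions.values()) / len(questions)
    for name, c in ontologies.items()
}
```

Real competency-question evaluation also checks properties and axioms, not just concept names; the set-intersection score is only the crudest approximation of that idea.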
Towards an aspect weaving BPEL engine
This position paper proposes the use of dynamic aspects and
the visitor design pattern to obtain a highly configurable and
extensible BPEL engine. Using these two techniques, the
core of this infrastructural software can be customised to
meet new requirements and add features such as debugging,
execution monitoring, or changing to another Web Service
selection policy. Additionally, it can easily be extended to
cope with customer-specific BPEL extensions. We propose
the use of dynamic aspects not only on the engine itself
but also on the workflow in order to tackle the problems of
Web Service hot deployment and hot fixes to long running
processes. In this way, composing a Web Service "on-the-fly"
means weaving its choreography interface into the workflow.
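The combination of the visitor pattern with dynamically woven aspects can be sketched as follows. The class and method names are invented for illustration; a real BPEL engine would visit many activity types and use a proper AOP framework rather than a list of advice functions.

```python
# A BPEL-like activity node that accepts a visitor (visitor pattern).
class Invoke:
    def __init__(self, service):
        self.service = service

    def accept(self, visitor):
        return visitor.visit_invoke(self)

# The engine core is a visitor; "aspects" are advice functions that can
# be woven in (and removed) at runtime without touching the core.
class Engine:
    def __init__(self):
        self.aspects = []  # dynamically pluggable advice

    def weave(self, advice):
        self.aspects.append(advice)

    def visit_invoke(self, node):
        for before in self.aspects:
            before(node)                  # e.g. debugging or monitoring
        return f"calling {node.service}"  # core engine behavior

log = []
engine = Engine()
# Weave an execution-monitoring aspect at runtime.
engine.weave(lambda node: log.append(node.service))
result = Invoke("paymentService").accept(engine)
```

Because the advice list can change while the engine runs, the same mechanism hints at how hot fixes could be applied to long-running processes, as the abstract proposes.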
Artful Good Faith: An Essay on Law, Custom, and Intermediaries in Art Markets
This Essay explores relationships between custom and law in the United States in the context of markets for art objects. The Essay argues that these relationships are dynamic, not static, and that law can prompt evolution in customary practice well beyond the law's formal requirements. Understanding these relationships in the context of art markets requires due attention to two components distinctive to art markets: the role of dealers and auction houses as transactional intermediaries as well as the role of museums as end-collectors. In the last decade, the business practices of major transactional intermediaries reflected a significant shift in customary practice, with attention newly focused on the provenance (ownership history) of objects consigned for sale and on long-standing concerns with an object's condition and authorship. During the same time major museums developed new policies and practices applicable to new acquisitions and objects already held in collections, focused in particular on archaeological objects and ancient art, as well as paintings present in European countries subject to the Nazi regime between 1932 and 1945. The Essay argues that, in both cases, law furnished the backdrop to significant shifts in customary practice, augmented by heightened public knowledge and concern. Custom evolved in response to salient episodes of enforcement of the law, which furnished further rallying points for newly broadened or awakened public interest and concern.
The relationships explored in this Essay are relevant to ongoing debate about the merits of the underlying law. In the United States, it has long been true that nemo dat quod non habet (no one can give what one does not have), with the consequence that a thief cannot convey good title. Subsequent transferees lack good title and are not insulated against claims by the rightful owner even when the transferees acted in good faith. To be sure, an elapsed statute of limitations may furnish a defense, as may the equitable doctrine of laches. Prior scholarship notes that the United States is unusual, but not unique, because it does not recognize any good-faith purchaser defense in this context and because it does not require that the rightful owner of a stolen object compensate the good-faith purchaser as a condition of obtaining the return of the object. However, this scholarship does not acknowledge (or does not emphasize) the significance of transactional intermediaries within art markets or the operation of customary practices of museums and transactional intermediaries. This Essay thus adds the context requisite to evaluating the merits of the relevant law.
MDCC: Multi-Data Center Consistency
Replicating data across multiple data centers not only allows moving the data
closer to the user and, thus, reduces latency for applications, but also
increases the availability in the event of a data center failure. Therefore, it
is not surprising that companies like Google, Yahoo, and Netflix already
replicate user data across geographically different regions.
However, replication across data centers is expensive. Inter-data center
network delays are in the hundreds of milliseconds and vary significantly.
Synchronous wide-area replication is therefore considered infeasible with
strong consistency, and current solutions either settle for asynchronous
replication, which risks losing data in the event of failures; restrict
consistency to small partitions; or give up consistency entirely. With
MDCC (Multi-Data Center Consistency), we describe the first optimistic commit
protocol that requires neither a master nor partitioning and is strongly
consistent at a cost similar to eventually consistent protocols. MDCC can
commit transactions in a single round-trip across data centers in the normal
operational case. We further propose a new programming model which empowers the
application developer to handle longer and unpredictable latencies caused by
inter-data center communication. Our evaluation using the TPC-W benchmark with
MDCC deployed across 5 geographically diverse data centers shows that MDCC is
able to achieve throughput and latency similar to eventually consistent quorum
protocols and that MDCC is able to sustain a data center outage without a
significant impact on response times, while guaranteeing strong consistency.
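The single-round-trip commit relies on gathering a quorum of acceptances across data centers in one exchange. The sketch below shows only that majority-quorum skeleton with invented names; it omits the Paxos-style option negotiation and fast/classic paths that make MDCC safe in the general case.

```python
# Toy majority-quorum commit (not the actual MDCC protocol): a
# transaction commits if a majority of data centers accept its proposal
# within a single round trip; otherwise the application-level code must
# handle the slower, unpredictable path.
def try_commit(acceptors, proposal):
    # One "round trip": ask every data center to vote on the proposal.
    votes = sum(1 for accept in acceptors if accept(proposal))
    # Classic majority quorum: more than half must accept.
    return votes > len(acceptors) // 2

# Five hypothetical data centers; one rejects (e.g., a conflicting write).
datacenters = [
    lambda p: True,
    lambda p: True,
    lambda p: False,
    lambda p: True,
    lambda p: True,
]
committed = try_commit(datacenters, {"tx": 1, "write": ("key", "value")})
```

A quorum of 3 out of 5 suffices here, which is also why a single data-center outage need not block commits, matching the availability claim in the abstract.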
Multiparty interactions in dependable distributed systems
PhD thesis.
With the expansion of computer networks, activities involving computer communication
are becoming more and more distributed. Such distribution can
include processing, control, data, network management, and security. Although
distribution can improve the reliability of a system by replicating
components, an increase in distribution can also introduce undesirable
faults. To reduce the risk of introducing faults, and to improve the chances
of removing and tolerating them when distributing applications, it is important
that distributed systems are implemented in an organized way.
As in sequential programming, complexity in distributed, in particular
parallel, program development can be managed by providing appropriate
programming language constructs. Language constructs can help both by
supporting encapsulation so as to prevent unwanted interactions between
program components and by providing higher-level abstractions that reduce
programmer effort by allowing compilers to handle mundane, error-prone
aspects of parallel program implementation.
A language construct that supports encapsulation of interactions between
multiple parties (objects or processes) is referred to in the literature as a
multiparty interaction. In a multiparty interaction, several parties "come
together" to produce an intermediate and temporary combined state, use this
state to execute some activity, and then leave the interaction and continue
their normal execution.
There has been considerable work on multiparty interaction in past years,
but most of it has been concerned with synchronisation, or handshaking,
between parties rather than the encapsulation of several activities executed
in parallel by the interaction participants. The programmer is therefore left
responsible for ensuring that the processes involved in a cooperative activity
do not interfere with, or suffer interference from, other processes not involved
in the activity.
Furthermore, none of this work has discussed the provision of features
that would facilitate the design of multiparty interactions that are expected
to cope with faults - whether in the environment that the computer system
has to deal with, in the operation of the underlying computer hardware or
software, or in the design of the processes that are involved in the interaction.
In this thesis the concept of multiparty interaction is integrated with
the concept of exception handling in concurrent activities. The final result
is a language in which the concept of multiparty interaction is extended
by providing it with a mechanism to handle concurrent exceptions. This
extended concept is called dependable multiparty interaction.
The features and requirements for multiparty interaction and exception
handling provided in a set of languages surveyed in this thesis are integrated
to describe the new dependable multiparty interaction construct. Additionally,
object-oriented architectures for dependable multiparty interactions are
described, and a full implementation of one of the architectures is provided.
This implementation is then applied to a set of case studies. The case studies
show how dependable multiparty interactions can be used to design and
implement a safety-critical system, a multiparty programming abstraction,
and a parallel computation model.
Brazilian Research Agency CNPq
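The dependable-multiparty-interaction idea (parties entering a joint activity whose concurrent exceptions are resolved together) can be sketched as below. The API is invented for illustration; the thesis's construct involves language-level support and a proper exception-resolution scheme rather than this simple re-raise.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy dependable multiparty interaction: all parties enter, operate on
# a shared temporary state, and exceptions raised concurrently are
# collected and resolved into one exception that every participant's
# handler would then see.
def interaction(roles, shared_state):
    exceptions = []
    with ThreadPoolExecutor(max_workers=len(roles)) as pool:
        futures = [pool.submit(role, shared_state) for role in roles]
        for future in futures:
            try:
                future.result()  # wait for each party to leave the interaction
            except Exception as exc:
                exceptions.append(exc)
    if exceptions:
        # Simplest possible concurrent-exception resolution: re-raise the
        # first; real schemes compute a single "covering" exception.
        raise exceptions[0]
    return shared_state

def deposit(state):
    state["balance"] = state.get("balance", 0) + 100

def audit(state):
    state["audited"] = True

final = interaction([deposit, audit], {})
```

The key property illustrated is encapsulation: the parties share state only inside the interaction, and a failure in any party surfaces to all of them rather than leaving the joint activity half-done.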
An Error Handling Framework for the ORBWork Workflow Enactment Service of METEOR
Workflow Management Systems (WFMSs) can be used to re-engineer, streamline, automate, and track organizational processes involving humans and automated information systems. However, the state-of-the-art in workflow technology suffers from a number of limitations that prevent it from being widely used in large-scale mission critical applications. Error handling is one such issue. What makes the task of error handling challenging is the need to deal with errors that appear in various components of a complex distributed application execution environment, including various WFMS components, workflow application tasks of different types, and the heterogeneous computing infrastructure.
In this paper, we discuss a top-down approach towards dealing with errors in the context of ORBWork, a CORBA-based fully distributed workflow enactment service for the METEOR2 WFMS. The paper discusses the types of errors that might occur, including those involving the infrastructure of the enactment environment and the system architecture of the workflow enactment service. In the context of the underlying workflow model for METEOR, we then present a three-level error model to provide a unified approach to specification, detection, and runtime recovery of errors in ORBWork. Implementation issues are also discussed. We expect the model and many of the techniques to be relevant and adaptable to other WFMS implementations.
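A layered error model of this kind can be sketched as a chain of handlers. The level names follow the three-level idea in the abstract, but the error names and recovery actions below are invented purely for illustration.

```python
# Hypothetical three-level error model: an error is first offered to a
# task-level handler, then escalated to the workflow level, and finally
# to the infrastructure level if nothing closer to the error handles it.
HANDLERS = {
    "task": lambda err: "retry" if err == "timeout" else None,
    "workflow": lambda err: "compensate" if err == "task_failed" else None,
    "infrastructure": lambda err: "restart_node",  # catch-all last resort
}

def recover(error, levels=("task", "workflow", "infrastructure")):
    # Walk the levels in order; the first level with a matching handler
    # wins, mirroring escalation in a layered error model.
    for level in levels:
        action = HANDLERS[level](error)
        if action:
            return level, action
    return None

outcome = recover("timeout")
```

Keeping recovery at the lowest level that understands the error is what lets each layer (task, workflow, infrastructure) stay unaware of the others' failure modes.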