Recovery within long running transactions
As computer systems continue to grow in complexity, the possibilities of failure increase. At the same time, the growing pervasiveness of computer systems in day-to-day activities has brought increased expectations of their reliability. This has led to the need for effective and automatic error recovery techniques to resolve failures. Transactions handle the propagation of failures across concurrent systems due to dependencies, restoring the system to the point before the failure occurred. However, in various settings, especially when interacting with the real world, reversal
is not possible. The notion of compensations has been long advocated as a way of addressing this
issue, through the specification of activities which can be executed to undo partial transactions.
Still, there is no accepted standard theory; the literature offers a plethora of distinct formalisms
and approaches.
In this survey, we review compensations from a theoretical point of view by: (i) giving a
historic account of the evolution of compensating transactions; (ii) delineating and describing a
number of design options involved; (iii) presenting a number of formalisms found in the literature,
exposing similarities and differences; (iv) comparing formal notions of compensation correctness;
(v) giving insights regarding the application of compensations in practice; and (vi) discussing
current and future research trends in the area.
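The core idea behind compensating transactions can be sketched as a saga-style runner: each step pairs a forward action with a compensation, and when a step fails the compensations of the already-completed steps are executed in reverse order to undo the partial transaction. The following is a minimal illustrative sketch, not any particular formalism from the survey; all names are hypothetical.

```python
class SagaError(Exception):
    """Raised after a step fails and compensations have been run."""

def run_saga(steps):
    """Execute (action, compensation) pairs; on failure, undo the
    completed steps by running their compensations in reverse order."""
    completed = []
    for action, compensation in steps:
        try:
            action()
        except Exception as exc:
            for _, undo in reversed(completed):
                undo()  # best-effort reversal of partial work
            raise SagaError(f"step failed: {exc}") from exc
        completed.append((action, compensation))

# Example: book a flight, then a hotel; the hotel booking fails,
# so the flight booking is compensated (cancelled).
log = []
def book_flight(): log.append("flight booked")
def cancel_flight(): log.append("flight cancelled")
def book_hotel(): raise RuntimeError("no rooms")
def cancel_hotel(): log.append("hotel cancelled")

try:
    run_saga([(book_flight, cancel_flight), (book_hotel, cancel_hotel)])
except SagaError:
    pass
print(log)  # ['flight booked', 'flight cancelled']
```

Note that the compensation of the failed step itself is never run, only those of steps that completed, which is the usual reading of "undoing partial transactions" in the compensation literature.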
Web Service Transaction Correctness
In our research we investigate the problem of providing consistency, availability and durability for Web Service transactions. First, we show that the popular lazy replica update propagation method is vulnerable to loss of transactional updates in the presence of hardware failures. We propose an extension to the lazy update propagation approach to reduce the risk of data loss. Our approach is based on the buddy system, requiring that updates are preserved synchronously in two replicas, called buddies. The rest of the replicas are updated using lazy update propagation protocols. Our method provides a balance between durability (i.e., effects of the transaction are preserved even if the server, executing the transaction, crashes before the update can be propagated to the other replicas) and efficiency (i.e., our approach requires a synchronous update between two replicas only, adding a minimal overhead to the lazy replication protocol). Moreover, we show that our method of selecting the buddies ensures correct execution and can be easily extended to balance workload, and reduce latency observable by the client.
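The buddy-system idea described above can be sketched as follows: an update is applied synchronously on the executing replica and one designated buddy, and only queued for lazy propagation to the remaining replicas. This is an illustrative sketch of the protocol's shape, not the authors' implementation; all names are hypothetical.

```python
import queue

class Replica:
    """An in-memory stand-in for a replicated data store node."""
    def __init__(self, name):
        self.name = name
        self.data = {}
    def apply(self, key, value):
        self.data[key] = value

def buddy_commit(primary, buddy, lazy_replicas, propagation_queue, key, value):
    """Apply an update synchronously on the primary and its buddy, then
    enqueue lazy propagation to the remaining replicas. The update is
    durable once both buddies hold it, so a crash of the executing
    server before propagation cannot lose the transaction."""
    primary.apply(key, value)   # synchronous write 1
    buddy.apply(key, value)     # synchronous write 2 (the "buddy")
    for r in lazy_replicas:     # deferred, asynchronous updates
        propagation_queue.put((r, key, value))

# Illustrative use: one primary, one buddy, two lazily-updated replicas.
nodes = [Replica(f"r{i}") for i in range(4)]
pending = queue.Queue()
buddy_commit(nodes[0], nodes[1], nodes[2:], pending, "balance", 100)
print(nodes[0].data, nodes[1].data, pending.qsize())
```

The balance the abstract describes is visible here: only one extra synchronous write is added on top of an otherwise lazy propagation scheme.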
Second, we consider Web Service transactions that consume anonymous and attribute-based resources. We show that the availability of the popular lazy replica update propagation method can be maintained while increasing its durability and consistency. Our system provides a new consistency constraint, the Capacity Constraint, which allows the system to guarantee that resources are not over-consumed and also allows for higher distribution of the consumption. Our method provides: 1.) increased availability through the distribution of element masters by using all available clusters, 2.) consistency by performing the complete transaction on a single set of clusters, and 3.) guaranteed durability by updating two clusters synchronously with the transaction.
Third, we consider each transaction as a black box. We model the corresponding metadata, i.e., transaction semantics, as UML specifications. We refer to these WS-transactions as coarse-grained WS-transactions. We propose an approach that guarantees the availability of the popular lazy replica update propagation method while increasing its durability and consistency. In this section we extend the Buddy System to handle coarse-grained WS-transactions, using UML stereotypes that allow scheduling semantics to be embedded into the design model. This design model is then exported and consumed by a service dispatcher to provide: 1.) high availability by distributing service requests across all available clusters, 2.) consistency by performing the complete transaction on a single set of clusters, and 3.) durability by updating two clusters synchronously.
Finally, we consider the enforcement of integrity constraints in a way that increases availability while guaranteeing the correctness specified in the constraint. We organize these integrity constraints into three categories: entity, domain and hierarchical constraints. Hierarchical constraints offer an opportunity for optimization because of the expensive aggregation calculation required in their enforcement. We propose an approach that guarantees that the constraint cannot be violated while also allowing the distribution of write operations among many clusters to increase availability. In our previous work, we proposed a replica update propagation method, called the Buddy System, which guaranteed durability and increased the availability of web services. In this section we extend the Buddy System to enforce the hierarchical data integrity constraints.
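The aggregation behind a hierarchical constraint can be illustrated with a small sketch: consuming a resource at a node is allowed only if the aggregate usage of that node and of every ancestor stays within its capacity. This is a generic illustration of the constraint shape, assuming a simple tree of capacities; it is not the authors' enforcement algorithm, and all names are hypothetical.

```python
def consume(capacity, parent, usage, node, amount):
    """Consume `amount` at `node` only if the aggregate usage of the
    node and every ancestor stays within its capacity. The walk up the
    hierarchy is the aggregation that makes enforcement expensive."""
    chain, n = [], node
    while n is not None:
        if usage[n] + amount > capacity[n]:
            return False          # some level's limit would be violated
        chain.append(n)
        n = parent.get(n)
    for n in chain:               # all checks passed: record the usage
        usage[n] += amount
    return True

# A two-level hierarchy: "root" caps the combined total of its children.
capacity = {"root": 10, "a": 8, "b": 8}
parent = {"a": "root", "b": "root"}
usage = {n: 0 for n in capacity}
print(consume(capacity, parent, usage, "a", 6))  # True
print(consume(capacity, parent, usage, "b", 6))  # False: root total would be 12
```

The second call fails even though "b" individually has capacity, which is exactly what distinguishes a hierarchical constraint from independent per-node limits.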
Spring framework in smart proxy transaction model
This paper explores the adoption of an open-source application framework, the Spring Framework, in the Smart Proxy (sProxy) Transaction model for transaction support. The Spring Framework is plugged into the sProxy Transaction model to support transactional properties, increasing transactional interoperability in the Web Services context. © 2009 IEEE
Recovery Management of Long Running eBusiness Transactions
eBusiness collaboration and an eBusiness process are introduced as the context of a long running eBusiness transaction. The nature of eBusiness collaboration sets requirements for long running transactions: the ACID properties of the classical database transaction must be relaxed for the eBusiness transaction. Many techniques have been developed to manage the execution of long running business transactions, such as the classical Saga and the business transaction model (BTM) of the business transaction framework. These classic techniques cannot adequately take into account the recovery needs of long running eBusiness transactions, and they need to be further improved and developed.
The expectations for a new service composition and recovery model are defined and described. The DeltaGrid service composition and recovery model (DGM) and the Constraint rules-based recovery mechanism (CM) are introduced as examples of the new model. The classic models and the new models are compared to each other, and it is analysed how the models meet the expectations.
Neither of the new models uses the unconventional classification of atomicity that the BTM includes. The new models improve the ability to take data and control dependencies into account during backward recovery, and they present two different strategies for recovering a failed service. The strategy of the CM increases flexibility and efficiency compared to the Saga or the BTF. The DGM defines characteristics that the CM does not have: a Delta-Enabled rollback, mechanisms for pre-commit and post-commit recoverability, and extensions of the concepts of shallow compensation and deep compensation. Their use guarantees that an eBusiness process always recovers to a consistent state, something the Saga, the BTM and the CM could not prove. The DGM also provides algorithms for the important mechanisms.
ACM Computing Classification System (CCS): C.2.4 [Distributed Systems]: Distributed application
Conceptual modelling of adaptive web services based on high-level petri nets
Service technology, driven by the SOA architectural style and realized through Web services, is rapidly gaining in maturity and acceptance. Consequently, most cross-organizational enterprises worldwide (private and corporate) are embracing this paradigm by publishing, requesting and composing their businesses and applications in the form of (web-)services. Nevertheless, to face harsh competition, such service-oriented cross-organizational applications are increasingly pressed to be highly composite, adaptive, knowledge-intensive and very reliable. In contrast, Web service standards such as WSDL, WSBPEL, WS-CDL and many others offer only static, manual, purely process-centric and ad-hoc techniques to deploy such services.
The main objective of this thesis is therefore to advance the development of service-driven applications towards greater reliability, dynamic adaptability and knowledge-intensiveness. The thesis puts forward an innovative framework based on distributed high-level Petri nets and event-driven business rules. More precisely, we developed a new variant of the high-level Petri net formalism, called Service-based Petri nets (CSrv-Nets), which exhibits the following characteristics. Firstly, the framework is supported by a stepwise methodology that starts with UML class diagrams and business rules and leads to dynamically adaptive service specifications. Secondly, the
framework soundly integrates behavioural event-driven business rules and stateful
services both at the type and instance level and with an inherent distribution.
Thirdly, the framework intrinsically permits validation through guided graphical
animation. Fourthly, the framework explicitly separates orchestrations, for modelling rule-intensive single services, from choreographies, for coordinating several services through their governing interactive business rules. Fifthly, the framework is based on a two-level conceptualization: (1) the modelling of any rule-centric service with CSrv-Nets; (2) the smooth upgrading of this service model with an adaptability level that allows for dynamically shifting any rule-centric behaviour of the running business activities up and down.
Distributed Handler Architecture
Thesis (PhD) - Indiana University, Computer Sciences, 2007
Over the last couple of decades, distributed systems have demonstrated an architectural evolution through models including client/server, multi-tier, distributed objects, messaging and peer-to-peer. One recent evolutionary step is Service Oriented Architecture (SOA), whose goal is to achieve loose coupling among interacting software applications for scalability and interoperability. The SOA model is embodied in Web Services, which provide software platforms to build applications as services and to create seamless and loosely-coupled interactions. Web Services utilize supportive functionalities such as security, reliability, monitoring, logging and so forth. These functionalities are typically provisioned as handlers, which incrementally add new capabilities to the services by building an execution chain. Even though handlers are very important to a service, the way they are utilized is crucial to attaining their potential benefits. Every attempt to support a service with an additional functionality increases the chance of an overwhelmingly crowded chain that makes the Web Service fat. Moreover, a handler may become a bottleneck because of a comparably higher processing time.
In this dissertation, we present the Distributed Handler Architecture (DHArch), an efficient, scalable and modular architecture to manage the execution of handlers. The system distributes the handlers by utilizing a Message Oriented Middleware and orchestrates their execution in an efficient fashion. We also present an empirical evaluation of the system to demonstrate the suitability of this architecture for coping with the issues that exist in conventional Web Service handler structures.
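The handler-chain idea the abstract builds on can be sketched as a message passed through an ordered sequence of handlers, each enriching or transforming it before the service proper sees it. This sketch shows only the sequential chain that DHArch sets out to improve; the distribution over message-oriented middleware is beyond a few lines, and all names here are hypothetical.

```python
def run_chain(handlers, message):
    """Pass a message through an ordered chain of handlers; each handler
    may enrich or transform it before the service processes it."""
    for handler in handlers:
        message = handler(message)
    return message

# Two illustrative handlers: one adds a (fake) signature, one marks
# the message as logged. Real chains might add security, reliability,
# monitoring, and so forth, as the abstract describes.
def security_handler(msg):
    return {**msg, "signature": "sig(" + msg["body"] + ")"}

def logging_handler(msg):
    return {**msg, "logged": True}

result = run_chain([security_handler, logging_handler], {"body": "order#42"})
print(result)
```

A long chain of such handlers executed inline is exactly the "fat" service the abstract warns about: each addition lengthens the sequential critical path, which motivates distributing the handlers instead.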
ACTAS: Adaptive Composition and Trading with Agents for Services
Mainly in business domains, the vision of flexible, adaptive service environments is based on the standardization and practical proliferation of (Semantic) Web Services, ontologies, and agents. Web Services and their Service-oriented Architectures (SOA) became the standard paradigm for software component integration. Dynamic changes and the permanently increasing number of available e-services from different domains pose a challenge for Service Discovery and Composition. Mediation between different approaches and expert knowledge is often necessary for the composition of services from different domains. Semantic enhancements, Autonomic Service Discovery, and research into more holistic concepts for the classification of e-services are current attempts to overcome this challenge, in order to reach the ultimate goal of Autonomic SOC.