Theory and Practice of Transactional Method Caching
Nowadays, tiered architectures are widely accepted for constructing large-scale
information systems. In this context, application servers often form the
bottleneck for a system's efficiency. An application server exposes an
object-oriented interface consisting of a set of methods which are accessed by
potentially remote clients. The idea of method caching is to store results of
read-only method invocations on the application server's interface
on the client side. If the client invokes the same method with the same
arguments again, the corresponding result can be taken from the cache without
contacting the server. It has been shown that this approach can considerably
improve a real-world system's efficiency.
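The basic idea can be sketched in a few lines of Python. This is an illustrative sketch only; the names `ServerStub` and `MethodCache`, and the idea of passing an explicit set of read-only methods, are assumptions for the example and not taken from the paper:

```python
class ServerStub:
    """Stands in for a (potentially remote) application server.
    Counts actual invocations so the caching effect is visible."""
    def __init__(self):
        self.calls = 0

    def invoke(self, method, *args):
        self.calls += 1
        return f"result of {method}{args}"


class MethodCache:
    """Client-side cache for read-only methods, keyed by (method, arguments)."""
    def __init__(self, server, read_only):
        self.server = server
        self.read_only = read_only   # methods declared safe to cache
        self.cache = {}

    def invoke(self, method, *args):
        key = (method, args)
        if method in self.read_only and key in self.cache:
            return self.cache[key]   # served locally, no server round trip
        result = self.server.invoke(method, *args)
        if method in self.read_only:
            self.cache[key] = result
        return result
```

A repeated read-only call with the same arguments is answered from the cache; non-read-only calls always reach the server. Once calls are wrapped in transactions, such a cache additionally needs the scheduling and recovery protocols the paper develops to preserve serializability.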
This paper extends the concept of method caching by addressing the case where
clients wrap related method invocations in ACID transactions. Demarcating
sequences of method calls in this way is supported by many important
application server standards. In this context, the paper presents an
architecture, a theory, and an efficient protocol for maintaining full
transactional consistency, and in particular serializability, when using a method
cache on the client side. In order to create a protocol for scheduling cached
method results, the paper extends a classical transaction formalism. Based on
this extension, a recovery protocol and an optimistic serializability protocol
are derived. The latter differs from traditional transactional cache
protocols in many essential ways. An efficiency experiment validates the
approach: using the cache, a system's performance and scalability are
considerably improved.
Detecting Ontological Conflicts in Protocols between Semantic Web Services
The task of verifying the compatibility between interacting web services has
traditionally been limited to checking the compatibility of the interaction
protocol in terms of message sequences and the types of data being exchanged.
Since web services are developed largely in an uncoordinated way, different
services often use independently developed ontologies for the same domain
instead of adhering to a single standard ontology. In this work, we
investigate approaches the server can take, given that the client's ontology is
published, to verify whether a state with semantically inconsistent results can
be reached during the execution of a protocol with a client.
Often, a database is used to store the actual data alongside the ontologies,
rather than storing the actual data as part of the ontology description. It is
important to observe that, given the current state of the database, the semantic
conflict state may not be reachable even if the verification done by the server
indicates the possibility of reaching a conflict state. A relational-algebra-based
decision procedure is therefore also developed to incorporate the current state of
the client and server databases into the overall verification procedure.
Implementation of Distributed Transactions in BPEL
The goal of this work is to implement support for distributed transactions in the RiftSaw project, so that web services can be invoked within distributed transactions by business processes, and only when a web service operation requires execution within a distributed transaction. Compared to already working implementations, the presented solution adds support for the WS-BusinessActivity specification and a different way of checking whether a business process should use distributed transactions for the web services it invokes.
Unification of Transactions and Replication in Three-Tier Architectures Based on CORBA
In this paper, we describe a software infrastructure that unifies transactions and replication in three-tier architectures and provides data consistency and high availability for enterprise applications. The infrastructure uses transactions based on the CORBA object transaction service to protect the application data in databases on stable storage, using a roll-backward recovery strategy, and replication based on the fault-tolerant CORBA standard to protect the middle-tier servers, using a roll-forward recovery strategy. The infrastructure replicates the middle-tier servers to protect the application business logic processing. In addition, it replicates the transaction coordinator, which renders the two-phase commit protocol nonblocking and, thus, avoids potentially long service disruptions caused by failure of the coordinator. The infrastructure handles the interactions between the replicated middle-tier servers and the database servers through replicated gateways that prevent duplicate requests from reaching the database servers. It implements automatic client-side failover mechanisms, which guarantee that clients know the outcome of the requests that they have made, and retries aborted transactions automatically on behalf of the clients.
Architecture for Provenance Systems
This document covers the logical and process architectures of provenance systems. The logical architecture identifies key roles and their interactions, whereas the process architecture discusses distribution and security. A fundamental aspect of our presentation is its technology-independent nature, which makes it reusable: the principles that are exposed in this document may be applied to different technologies.
Run-time Variability with First-class Contexts
Software must be regularly updated to keep up with changing requirements. Unfortunately, to install an update, the system must usually be restarted, which is inconvenient and costly. In this dissertation, we aim at overcoming the need for restart by enabling run-time changes at the programming language level. We argue that the best way to achieve this goal is to improve the support for encapsulation, information hiding and late binding by contextualizing behavior. In our approach, behavioral variations are encapsulated into context objects that alter the behavior of other objects locally. We present three contextual language features that demonstrate our approach. First, we present a feature to evolve software by scoping variations to threads. This way, arbitrary objects can be substituted over time without compromising safety. Second, we present a variant of dynamic proxies that operate by delegation instead of forwarding. The proxies can be used as building blocks to implement contextualization mechanisms from within the language. Third, we contextualize the behavior of objects to intercept exchanges of references between objects. This approach scales information hiding from objects to aggregates. The three language features are supported by formalizations and case studies, showing their soundness and practicality. With these three complementary language features, developers can easily design applications that can accommodate run-time changes.
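The core idea of scoping a behavioral variation with a context object can be sketched in plain Python. This is a minimal illustration, not the dissertation's language design: the `Context` class below scopes variations to a dynamic extent via a `with` block, whereas the actual work scopes them to threads and integrates them at the language level.

```python
class Context:
    """Context object that activates method variations on a target object
    locally, and restores the original behavior when deactivated."""
    def __init__(self, target, **variations):
        self.target = target
        self.variations = variations
        self._saved = {}

    def __enter__(self):
        for name, fn in self.variations.items():
            if name in vars(self.target):        # remember a shadowed attribute
                self._saved[name] = vars(self.target)[name]
            # install the variation as a bound method on this one instance
            setattr(self.target, name, fn.__get__(self.target))
        return self.target

    def __exit__(self, *exc):
        for name in self.variations:
            if name in self._saved:
                setattr(self.target, name, self._saved[name])
            else:
                delattr(self.target, name)       # class behavior shows through again


class Greeter:
    def greet(self):
        return "hello"


g = Greeter()
with Context(g, greet=lambda self: "bonjour"):
    assert g.greet() == "bonjour"   # variation active inside the context
assert g.greet() == "hello"         # original behavior restored outside
```

Note that the variation affects only the one target object and only inside the context, which is the "local alteration of behavior" the abstract describes.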
Making Data Storage Efficient in the Era of Cloud Computing
We entered the era of cloud computing in the last decade, and many paradigm shifts are happening in how people write and deploy applications. Despite the advancement of cloud computing, data storage abstractions have not evolved much, causing inefficiencies in performance, cost, and security.
This dissertation proposes a novel approach to make data storage efficient in the era of cloud computing by building new storage abstractions and systems that bridge the gap between cloud computing and data storage and simplify development. We build four systems to address four data inefficiencies in cloud computing.
The first system, Grandet, solves the data storage inefficiency caused by the paradigm shift from upfront provisioning to a variety of pay-as-you-go cloud services. Grandet is an extensible storage system that significantly reduces storage costs for web applications deployed in the cloud. Under the hood, it supports multiple heterogeneous stores and unifies them by placing each data object at the store deemed most economical. Our results show that Grandet reduces storage costs by an average of 42.4%, and it is fast, scalable, and easy to use.
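The placement decision can be sketched as a simple cost minimization. The store names, prices, and two-term cost model below are illustrative assumptions for the example, not Grandet's actual pricing model:

```python
# Per-store prices: (cost per GB-month stored, cost per GB transferred out).
# Hypothetical numbers for illustration only.
STORES = {
    "blob_store":    (0.023, 0.09),
    "archive_store": (0.004, 0.30),
}

def monthly_cost(store, size_gb, reads_per_month):
    """Estimated monthly cost of keeping one object at a given store."""
    storage, transfer = STORES[store]
    return size_gb * storage + size_gb * reads_per_month * transfer

def cheapest_store(size_gb, reads_per_month):
    """Place the object at the store deemed most economical."""
    return min(STORES, key=lambda s: monthly_cost(s, size_gb, reads_per_month))
```

Under this model, a rarely read object lands in the cheap-storage/expensive-transfer store, while a frequently read object lands in the store with cheap transfer, which is the essence of per-object economical placement.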
The second system, Unic, solves the data inefficiency caused by the paradigm shift from single-tenancy to multi-tenancy. Unic securely deduplicates general computations. It exports a cache service that allows cloud applications running on behalf of mutually distrusting users to memoize and reuse computation results, thereby improving performance. Unic achieves both integrity and secrecy through a novel use of code attestation, and it provides a simple yet expressive API that enables applications to deduplicate their own rich computations. Our results show that Unic is easy to use, speeds up applications by an average of 7.58x, with little storage overhead.
The third system, Lambdata, solves the data inefficiency caused by the paradigm shift to serverless computing, where developers only write core business logic, and cloud service providers maintain all the infrastructure. Lambdata is a novel serverless computing system that enables developers to declare a cloud function's data intents, including both data read and data written. Once data intents are made explicit, Lambdata performs a variety of optimizations to improve speed, including caching data locally and scheduling functions based on code and data locality. Our results show that Lambdata achieves an average speedup of 1.51x on the turnaround time of practical workloads and reduces monetary cost by 16.5%.
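Scheduling by code and data locality from declared intents can be sketched very simply. This is a hypothetical illustration of the idea, not Lambdata's API or scheduler:

```python
def pick_worker(intents, workers):
    """Pick the worker with the most of the function's declared data
    already cached locally.

    intents: set of data keys the function declared it will read or write
    workers: mapping from worker name to the set of keys it caches
    """
    return max(workers, key=lambda w: len(intents & workers[w]))
```

Because intents are declared up front, the scheduler can make this choice before the function runs, instead of discovering data accesses only during execution.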
The fourth system, CleanOS, solves the data inefficiency caused by the paradigm shift from desktop computers to smartphones always connected to the cloud. CleanOS is a new Android-based operating system that manages sensitive data rigorously and maintains a clean environment at all times. It identifies and tracks sensitive data, encrypts it with a key, and evicts that key to the cloud when the data is not in active use on the device. Our results show that CleanOS limits sensitive-data exposure drastically while incurring acceptable overheads on mobile networks.
Programming Language Abstractions for the Global Network
Increasing demand for Internet-based applications motivates the development of programming models that ease their implementation. With the research presented in this thesis, we aim to improve understanding of what is involved when programming applications for the global network, and in particular the Web. We are primarily concerned with the development of language-level programming abstractions that address issues arising from the failure and performance properties of the Web. Frequent failure and unpredictable performance are ever-present aspects of any Web computation, so we must bring the properties of the Web into the semantic domain of our program systems. Our primary goal is to enable concise and intuitive expression of failure semantics in the context of concurrency, which is necessary for efficient Web computation given the large overhead in every network access. The main scientific contribution of this thesis is the development of a Web programming model for which a major design goal is the integration of domain concepts, failure interpretation, concurrency, and a mechanism for flow of control after failure. Our model is the first to successfully achieve a clean integration. We develop a programming language called Focus, which incorporates two complementary abstractions. Persistent relative observables allow reasoning about the dynamic behaviour of computations in the context of past behaviours. Examples of observables are the rate, elapsed time, and success probability of HTTP fetches. The mechanics of our observables mechanism allows the generalisation of the observables concept to all computation, and not just Web fetches. This generalisation is key in our design approach to supervisors, which are abstractions over concurrency designed for the specification of failure semantics and concurrency for computations that contain Web fetches.
In essence, supervisors monitor and control the behaviour of arbitrary concurrent computations, which are passed as parameters, while retaining a strict separation of computational logic and control logic. In conjunction with observables, supervisors allow the writing of general control functions, parameterisable both by value and computation. Observables are abstract values that fluctuate dynamically, and all computations export the same set of observables. Observables allow genericity in supervisor control, since the mechanism constrains the value of observables within a pattern of fluctuation around a single number. Whatever the activity of a computation, information about its behaviour can be obtained within a range of values in the observables. This means that supervisors can be applied independently of knowledge of the program logic for supervised computations. Supervisors and observables are useful in the context of the Web due to the multiplicity of possible failure modes, many of which require interpretation, and the need for complex flow of control in the presence of concurrency.
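The separation of computational logic from control logic can be sketched as follows. This is an illustrative Python sketch, not Focus syntax: the computation is passed as a parameter, the supervisor maintains generic observables about its behaviour (here just attempt and failure counts), and a control policy is written purely in terms of those observables, without knowledge of the program logic it supervises:

```python
def supervise(computation, policy):
    """Run `computation`, retrying on failure for as long as the control
    `policy` allows, judging only from generic observables."""
    observables = {"attempts": 0, "failures": 0}
    while True:
        observables["attempts"] += 1
        try:
            return computation()           # computational logic
        except Exception:
            observables["failures"] += 1
            if not policy(observables):    # control logic, fully generic
                raise

def retry_up_to(n):
    """A control function parameterised by value: tolerate up to n failures."""
    return lambda obs: obs["failures"] < n
```

The same `retry_up_to` policy can supervise any computation, a Web fetch included, because it inspects only the shared observables, never the supervised code itself.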