Management of concurrency in a reliable object-oriented computing system
PhD Thesis
Modern computing systems support concurrency as a means of increasing
the performance of the system. However, the potential for increased performance
is not without its problems. For example, lost updates and inconsistent retrieval
are but two of the possible consequences of unconstrained concurrency. Many
concurrency control techniques have been designed to combat these problems;
this thesis considers the applicability of some of these techniques in the context of
a reliable object-oriented system supporting atomic actions.
The object-oriented programming paradigm is one approach to handling the
inherent complexity of modern computer programs. By modeling entities from
the real world as objects which have well-defined interfaces, the interactions in
the system can be carefully controlled. By structuring sequences of such
interactions as atomic actions, the consistency of the system is assured.
Objects are encapsulated entities such that their internal representation is not
externally visible. This thesis postulates that this encapsulation should also
include the capability for an object to be responsible for its own concurrency
control.
Given this latter assumption, this thesis explores the means by which the
property of type-inheritance possessed by object-oriented languages can be
exploited to allow programmers to explicitly control the level of concurrency an
object supports. In particular, an object-oriented concurrency controller based
upon the technique of two-phase locking is described and implemented using
type-inheritance. The thesis also shows how this inheritance-based approach is
highly flexible such that the basic concurrency control capabilities can be adopted
unchanged or overridden with more type-specific concurrency control if required.
UK Science and Engineering Research Council,
Serc/Alve
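The inheritance-based approach the thesis describes can be illustrated with a minimal sketch. The class and method names below (LockManager, Account, ReadMostlyAccount) are hypothetical stand-ins, not the thesis's actual API: a base class supplies two-phase locking, a user-defined type inherits it unchanged, and a subclass overrides it with type-specific concurrency control.

```python
import threading

class LockManager:
    """Base class providing two-phase locking: locks are acquired as
    operations run (the growing phase) and all released together at
    commit or abort (the shrinking phase)."""

    def __init__(self):
        self._lock = threading.RLock()
        self._held = False

    def acquire(self, mode="write"):
        # Growing phase: take the lock before the operation touches state.
        self._lock.acquire()
        self._held = True

    def release_all(self):
        # Shrinking phase: release everything at commit/abort.
        if self._held:
            self._held = False
            self._lock.release()

class Account(LockManager):
    """A user-defined type that simply inherits its concurrency control."""

    def __init__(self, balance):
        super().__init__()
        self.balance = balance

    def deposit(self, amount):
        self.acquire(mode="write")   # lock now; released later at commit
        self.balance += amount

class ReadMostlyAccount(Account):
    """Subclass overriding the inherited controller, e.g. to allow
    concurrent readers (type-specific concurrency control)."""

    def acquire(self, mode="write"):
        if mode == "read":
            return                   # hypothetical policy: reads need no lock
        super().acquire(mode)
```

A caller would bracket an atomic action with the inherited operations, e.g. `acct.deposit(50)` followed by `acct.release_all()` at commit.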
A comparative study of the performance of concurrency control algorithms in a centralised database
Abstract unavailable. Please refer to PDF
Extending Complex Event Processing for Advanced Applications
Recently numerous emerging applications, ranging from on-line financial transactions, RFID based supply chain management, traffic monitoring to real-time object monitoring, generate high-volume event streams. To meet the needs of processing event data streams in real-time, Complex Event Processing technology (CEP) has been developed with the focus on detecting occurrences of particular composite patterns of events. By analyzing and constructing several real-world CEP applications, we found that CEP needs to be extended with advanced services beyond detecting pattern queries. We summarize these emerging needs in three orthogonal directions. First, for applications which require access to both streaming and stored data, we need to provide a clear semantics and efficient schedulers in the face of concurrent access and failures. Second, when a CEP system is deployed in a sensitive environment such as health care, we wish to mitigate possible privacy leaks. Third, when input events do not carry the identification of the object being monitored, we need to infer the probabilistic identification of events before feeding them to a CEP engine. Therefore, this dissertation discusses the construction of a framework for extending CEP to support these critical services. First, existing CEP technology is limited in its capability of reacting to opportunities and risks detected by pattern queries. We propose to tackle this unsolved problem by embedding active rule support within the CEP engine. The main challenge is to handle interactions between queries and reactions to queries in the high-volume stream execution. We hence introduce a novel stream-oriented transactional model along with a family of stream transaction scheduling algorithms that ensure the correctness of concurrent stream execution.
We then demonstrate the proposed technology by applying it to a real-world healthcare system and evaluate the stream transaction scheduling algorithms extensively using real-world workloads. Second, we are the first to study the privacy implications of CEP systems. Specifically we consider how to suppress events on a stream to reduce the disclosure of sensitive patterns, while ensuring that nonsensitive patterns continue to be reported by the CEP engine. We formally define the problem of utility-maximizing event suppression for privacy preservation. We then design a suite of real-time solutions that eliminate private pattern matches while maximizing the overall utility. Our first solution optimally solves the problem at the event-type level. The second solution, at event-instance level, further optimizes the event-type level solution by exploiting runtime event distributions using advanced pattern match cardinality estimation techniques. Our experimental evaluation over both real-world and synthetic event streams shows that our algorithms are effective in maximizing utility yet still efficient enough to offer near real time system responsiveness. Third, we observe that in many real-world object monitoring applications where the CEP technology is adopted, not all sensed events carry the identification of the object whose action they report on, so-called "non-ID-ed" events. Such non-ID-ed events prevent us from performing object-based analytics, such as tracking, alerting and pattern matching. We propose a probabilistic inference framework to tackle this problem by inferring the missing object identification associated with an event. Specifically, as a foundation we design a time-varying graphical model to capture correspondences between sensed events and objects. Upon this model, we elaborate how to adapt the state-of-the-art Forward-backward inference algorithm to continuously infer probabilistic identifications for non-ID-ed events.
More importantly, we propose a suite of strategies for optimizing the performance of inference. Our experimental results, using large-volume streams of a real-world health care application, demonstrate the accuracy, efficiency, and scalability of the proposed technology
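The event-type-level suppression idea can be sketched in a few lines. This is a toy illustration, not the dissertation's algorithm: the pattern names ("A", "C") are hypothetical, the filter tracks the prefix of a single private sequence pattern, and it drops exactly the event that would complete it, so nonsensitive patterns over the surviving events still match.

```python
def filter_stream(events, private_pattern):
    """Toy event-type-level suppression: drop any event whose type would
    complete the private sequence pattern; pass everything else through.
    `private_pattern` is a tuple of event types, e.g. ("A", "C")."""
    seen = []   # prefix of the private pattern matched so far
    out = []
    for ev in events:
        if len(seen) < len(private_pattern) and ev == private_pattern[len(seen)]:
            if len(seen) == len(private_pattern) - 1:
                # This event would complete the private pattern: suppress it.
                continue
            seen.append(ev)
        out.append(ev)
    return out
```

On the stream A, B, C with private pattern SEQ(A, C), the C is suppressed, so SEQ(A, C) never matches, while the nonsensitive SEQ(A, B) is still reported.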
Design and implementation of a transaction manager for a relational database
Multi-user database management systems are in great demand because of the information
requirements of our modern industrial society. A clear requirement is that
database resources be shared by many users at the same time. Transaction management
aims to manage concurrent database access by multiple users while preserving
the consistency of the database.
In this thesis a single-user relational database management system, REQUIEM,
is used as a vehicle to investigate improved methods for achieving this. A module,
called the REQUIEM Transaction Manager (RTM), is built on top of the original
REQUIEM to achieve a multi-user database management system.
The design work of the present thesis is founded upon various techniques for
transaction management proposed in the published literature; these are critically
assessed, and a mechanism is developed which combines appealing features from the
existing methodologies.
The problems of transaction management considered in this thesis are:
1. concurrency control,
2. granularity control,
3. deadlock control, and
4. recovery control.
The RTM is also compared with the transaction management facilities of conventional
commercial systems such as DB2, INGRES and ORACLE
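Of the four problems listed above, deadlock control is the most self-contained to illustrate. A common technique (used here as a generic sketch, not necessarily the RTM's mechanism) is to maintain a wait-for graph of transactions and search it for cycles; any cycle is a deadlock, and one transaction on it is chosen as the victim.

```python
def find_deadlock(wait_for):
    """Detect a cycle in a wait-for graph given as {txn: set of txns it
    waits for}. A cycle means deadlock; returns one transaction on the
    cycle (a candidate victim), or None if the graph is acyclic."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on stack / done
    colour = {t: WHITE for t in wait_for}

    def visit(t):
        colour[t] = GREY
        for u in wait_for.get(t, ()):
            if colour.get(u, WHITE) == GREY:
                return u                   # back edge: cycle found
            if colour.get(u, WHITE) == WHITE:
                found = visit(u)
                if found is not None:
                    return found
        colour[t] = BLACK
        return None

    for t in list(wait_for):
        if colour.get(t, WHITE) == WHITE:
            found = visit(t)
            if found is not None:
                return found
    return None
```

The lock manager would run this check periodically (or on every blocked lock request) and abort the returned victim to break the cycle.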
The Design of Secure Mobile Databases: An Evaluation of Alternative Secure Access Models
This research considers how mobile databases can be designed to be both secure and usable. A mobile database is one that is accessed and manipulated via mobile information devices over a wireless medium. A prototype mobile database was designed and then tested against secure access control models to determine if and how these models performed in securing a mobile database. The methodology in this research consisted of five steps. Initially, a preliminary analysis was done to delineate the environment the prototypical mobile database would be used in. Requirements definitions were established to gain a detailed understanding of the users and function of the database system. Conceptual database design was then employed to produce a database design model. In the physical database design step, the database was denormalized in order to reflect some unique computing requirements of the mobile environment. Finally, this mobile database design was tested against three secure access control models and observations were made
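The abstract does not name the three access control models evaluated, so as a generic illustration only, here is a sketch of one classic mandatory-access-control rule pair ("no read up" / "no write down") of the kind such models enforce; the label names are hypothetical.

```python
# Hypothetical clearance levels, lowest to highest.
LEVELS = {"public": 0, "confidential": 1, "secret": 2}

def can_read(subject_level, object_level):
    """'No read up': a subject may read only objects classified at or
    below its own clearance."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    """'No write down': a subject may write only at or above its own
    level, preventing leaks to lower classifications."""
    return LEVELS[subject_level] <= LEVELS[object_level]
```

In a mobile setting, such checks would run on every query the device issues over the wireless link, before any data is returned or stored.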
Recommended from our members
Data Management Solutions for Tackling Big Data Variety
Variety is one of the three defining characteristics of Big Data; the others being Volume and Velocity. There are several aspects of this data variety: diversity in data formats (text, video, audio) and structure (relational, graph, etc.), variety in access methodologies (OLTP, OLAP), and distribution heterogeneity within the workloads (read-heavy, high contention). Data management solutions for modern-day applications need to tackle this variety. This dissertation provides an understanding of the challenges associated with the different elements of variety, and proposes several solutions for efficiently handling its various aspects. First, the dissertation studies the challenges related to variety in data structure and access methodologies, and the resultant heterogeneity at the data infrastructure level. Applications now employ several data-processing engines with different underlying representations, like row, column, and graph, to process their data. We propose Janus, which introduces a novel data-movement pipeline, which enables the use of different representations to support both high throughput of transactions and diverse analytics, while still ensuring consistent real-time analytics in a scale-out setting. Janus partitions the data at different representations, and allows distributed transactions and diverse partitioning strategies at the representations. Then, we propose Typhon and Cerberus, which define and enforce consistency semantics for application data spread across representations. Second, this dissertation proposes solutions for handling distribution heterogeneity within the workloads. Workloads can have skewed distribution in terms of operation-type, data access or temporal variation. We propose strongly-consistent quorum reads for Raft-like consensus protocols, which can be utilized to scale read-heavy workloads.
For supporting high contention transaction workloads, we integrate an existing dynamic timestamp allocation based concurrency control mechanism in a distributed OLTP setting, and analyze its performance. Third, we study IoT applications, which have to deal with both physical heterogeneity of the sensors, as well as diverse data-processing demands. We propose a multi-representation based architecture catering to IoT applications, and also present the initial design of M-stream, a computation framework for enabling integration and monitoring of uncertain data from multiple sensors. Through analysis, illustrative examples and extensive evaluation of the proposed protocols, this dissertation demonstrates that the proposed solutions can be employed for efficiently handling the different aspects of variety of data-intensive applications
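The quorum-read idea can be sketched in miniature. This is a simplified stand-in, not the dissertation's protocol (which adds the machinery needed for strong consistency in Raft): a reader contacts a majority of replicas and returns the value carrying the highest (term, index) pair among the responses, so it cannot miss a write that any earlier majority committed.

```python
def quorum_read(replies):
    """Toy majority-quorum read. `replies` stands in for RPC responses
    from a majority of replicas; each is a (term, index, value) triple.
    Returns the value with the highest (term, index), i.e. the most
    recent committed write visible in the quorum."""
    assert replies, "need responses from at least a majority"
    term, index, value = max(replies, key=lambda r: (r[0], r[1]))
    return value
```

Because any two majorities intersect, at least one replica in the read quorum holds the latest committed entry, and the (term, index) ordering selects it.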
Performance Optimizations for Software Transactional Memory
The transition from single-core processors to multi-core processors demands a change from sequential programming to concurrent programming for mainstream programmers. However, concurrent programming has long been widely recognized as being notoriously difficult. A major reason for its difficulty is that existing concurrent programming constructs provide low-level programming abstractions. Using these constructs forces programmers to consider many low level details. Locks, the dominant programming construct for mutual exclusion, suffer several well known problems, such as deadlock, priority inversion, and convoying, and are directly related to the difficulty of concurrent programming. The alternative to locks, i.e. non-blocking programming, is not only extremely error-prone, but also does not produce consistently good performance. Better programming constructs are critical to reduce the complexity of concurrent programming, increase productivity, and expose the computing power in multi-core processors. Transactional memory has emerged recently as one promising programming construct for supporting atomic operations on shared data. By eliminating the need to consider a huge number of possible interactions among concurrent transactions, transactional memory greatly reduces the complexity of concurrent programming and vastly improves programming productivity. Software transactional memory (STM) systems implement the transactional memory abstraction in software. Unfortunately, existing STM designs incur significant performance overhead that could potentially prevent STM from being widely used. Reducing STM's overhead will be critical for mainstream programmers to improve productivity while not suffering performance degradation. My thesis is that the performance of STM can be significantly improved by intelligently designing validation and commit protocols, by designing the time base, and by incorporating application-specific knowledge.
I present four novel techniques for improving performance of STM systems to support my thesis. First, I propose a time-based STM system based on a runtime tuning strategy that is able to deliver performance equal to or better than existing strategies. Second, I present several novel commit phase designs and evaluate their performance. Then I propose a new STM programming interface extension that enables transaction optimizations using fast shared memory reads while maintaining transaction composability. Next, I present a distributed time base design that outperforms existing time base designs for certain types of STM applications. Finally, I propose a novel programming construct to support multi-place isolation. Experimental results show that the techniques presented here can significantly improve STM performance. We expect these techniques to help STM be accepted by more programmers
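The time-based STM design space the dissertation explores builds on a global version clock with validation at read time and lock-based commit. The sketch below is a generic TL2-style illustration of that scheme, not the dissertation's tuned implementation; all names (TVar, atomically) are hypothetical.

```python
import threading

_global_clock = 0                 # global version clock (the "time base")
_clock_lock = threading.Lock()

class _Conflict(Exception):
    """Raised when a read observes a version newer than the snapshot."""

class TVar:
    """A transactional variable: a value plus the clock value at which
    it was last written, guarded by a per-variable commit lock."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self.lock = threading.Lock()

def atomically(fn):
    """Run fn(read, write) as a transaction, retrying on conflict.
    Reads are validated against a start-time snapshot; commit locks the
    write set, revalidates the read set, and stamps a new clock value."""
    global _global_clock
    while True:
        with _clock_lock:
            start = _global_clock
        reads, writes = {}, {}

        def read(tv):
            if tv in writes:
                return writes[tv]            # read-your-own-writes
            if tv.version > start:
                raise _Conflict()            # written after we started
            reads[tv] = tv.version
            return tv.value

        def write(tv, v):
            writes[tv] = v

        try:
            result = fn(read, write)
        except _Conflict:
            continue                          # retry the transaction
        locked = sorted(writes, key=id)       # fixed order avoids deadlock
        for tv in locked:
            tv.lock.acquire()
        try:
            if any(tv.version != v for tv, v in reads.items()):
                continue                      # validation failed: retry
            with _clock_lock:
                _global_clock += 1
                commit_time = _global_clock
            for tv, v in writes.items():      # publish atomically
                tv.value = v
                tv.version = commit_time
            return result
        finally:
            for tv in locked:
                tv.lock.release()
```

A transfer between two TVars, for instance, either commits both updates with the same timestamp or retries; the validation and commit phases are exactly where the dissertation's optimizations apply.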
Exploiting replication in distributed systems
Techniques are examined for replicating data and execution in directly distributed systems: systems in which multiple processes interact directly with one another while continuously respecting constraints on their joint behavior. Directly distributed systems are often required to solve difficult problems, ranging from management of replicated data to dynamic reconfiguration in response to failures. It is shown that these problems reduce to more primitive, order-based consistency problems, which can be solved using primitives such as reliable broadcast protocols. Moreover, given a system that implements reliable broadcast primitives, a flexible set of high-level tools can be provided for building a wide variety of directly distributed application programs
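The reliable broadcast primitive mentioned above guarantees that if any correct process delivers a message, every correct process it can reach delivers it too. A minimal sketch of the eager-relay version, simulated over an in-memory link map (the data shapes here are illustrative only):

```python
def reliable_broadcast(sender, message, links):
    """Toy eager reliable broadcast, simulated in-process. `links[p]` is
    the set of processes p can still reach. Each process relays the
    message the first time it receives it, so delivery spreads to every
    process reachable from the sender even if some relays then fail."""
    delivered = set()

    def receive(p, m):
        if p in delivered:
            return                     # already delivered: ignore duplicate
        delivered.add(p)               # deliver locally, then relay onward
        for q in links.get(p, ()):
            receive(q, m)

    receive(sender, message)
    return delivered
```

Order-based consistency tools (replicated data, reconfiguration) can then be layered on top by imposing a delivery order on such broadcasts.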