A comparative study of transaction management services in multidatabase heterogeneous systems
Multidatabases are an actively researched and relatively new area in which many aspects are not yet fully understood, and transaction management in multidatabase systems still has many unresolved problems. The problem areas this dissertation addresses are the classification of multidatabase systems, global concurrency control, correctness criteria in a multidatabase environment, global deadlock detection, atomic commitment and crash recovery. A core group of research addressing these problems was identified and studied. The dissertation contributes to multidatabase transaction management by introducing an alternative classification method for such multiple database systems; assessing existing research into transaction management schemes; and, based on this assessment, proposing a transaction processing model founded on the optimal properties of transaction management identified during the course of this research.

Computing. M.Sc. (Computer Science)
Transaction management in mobile multidatabases.
The Internet and advances in wireless communication technology have transformed many facets of the computing environment. Virtual connectivity through the Internet has led to a new genre of software systems, i.e., cooperating autonomous systems: systems that cooperate with each other to provide extended services to the user. Multidatabase systems, sets of databases that cooperate with each other in order to provide a single logical view of the underlying information, are an example of such systems. Advances in wireless communication technology dictate that the services available to the wired user also be made available to the mobile user.

This dissertation studies transaction management in the mobile multidatabase environment, that is, the management of transactions within the context of the mobile and multidatabase environments. Two new transaction management techniques for the mobile multidatabase environment, the PS and Semantic-PS techniques, are proposed. These techniques define two new states (Disconnected and Suspended) to address the disconnectivity of the mobile user. A new Partial Global Serialization Graph algorithm is introduced to verify the isolation property of global transactions: it verifies the serializability of a global transaction by constructing a partial global serialization graph, relying on the propagation of (serialization) information to ensure that the partial graph contains sufficient information for the verification. The unfair treatment of mobile transactions due to their prolonged execution time is minimized through pre-serialization, which allows mobile transactions to establish their serialization order prior to completing their execution.

Finally, analytical evaluation and simulation are carried out to study the performance of these techniques and to compare it to that of the Kangaroo [DHB97] technique. Although the PS and Semantic-PS techniques enforce the isolation property, the evaluation results establish that their service time is not significantly greater than that of the Kangaroo technique. In addition, the simulation establishes that pre-serialization effectively minimizes the unfair treatment of mobile transactions.
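The serialization-graph idea underlying this kind of verification can be sketched in a few lines. The sketch below is a generic illustration, not the dissertation's actual Partial Global Serialization Graph algorithm: transaction names and conflict edges are hypothetical, and the check is simply that a transaction may establish its serialization order only if adding its conflict edges keeps the graph acyclic.

```python
# Sketch: serializability check via cycle detection in a serialization graph.
# Node names and conflict edges are illustrative only.

def has_cycle(edges):
    """Detect a cycle in a directed graph given as {node: set(successors)}."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {n: WHITE for n in edges}
    def visit(n):
        color[n] = GREY
        for m in edges.get(n, ()):
            if color.get(m, WHITE) == GREY:        # back edge -> cycle found
                return True
            if color.get(m, WHITE) == WHITE and visit(m):
                return True
        color[n] = BLACK
        return False
    return any(color[n] == WHITE and visit(n) for n in list(edges))

def may_commit(graph, txn, conflict_edges):
    """Tentatively add txn's conflict edges; allow it only if the graph stays acyclic."""
    trial = {n: set(s) for n, s in graph.items()}
    trial.setdefault(txn, set())
    for a, b in conflict_edges:
        trial.setdefault(a, set()).add(b)
        trial.setdefault(b, set())
    return not has_cycle(trial)

g = {"T1": {"T2"}, "T2": set()}
print(may_commit(g, "T3", [("T2", "T3")]))                  # acyclic -> True
print(may_commit(g, "T3", [("T2", "T3"), ("T3", "T1")]))    # T1->T2->T3->T1 -> False
```

Pre-serialization fits this picture naturally: a mobile transaction runs such a check early, fixing its position in the graph before its (possibly long) execution completes.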
An evaluation of methodological issues in workflow management
Ankara: Department of Computer Engineering and Information Science and the Institute of Engineering and Science of Bilkent University, 1998. Thesis (Master's), Bilkent University, 1998. Includes bibliographical references, leaves 75-79. Sotnikova, Anastasia. M.S.
Performance assessment of real-time data management on wireless sensor networks
Technological advances in recent years have allowed the maturity of Wireless Sensor Networks (WSNs), which aim at performing environmental monitoring and data collection. This sort of network is composed of hundreds, thousands or possibly even millions of tiny smart computers known as wireless sensor nodes, which may be battery powered and equipped with sensors, a radio transceiver, a Central Processing Unit (CPU) and some memory. However, due to their small size and the requirement for low-cost nodes, sensor node resources such as processing power, storage and especially energy are very limited.
Once the sensors perform their measurements of the environment, the problem of data storage and querying arises. The sensors have restricted storage capacity, and the ongoing interaction between sensors and environment results in huge amounts of data. Techniques for data storage and querying in WSNs can be based on either external storage or local storage. External storage, called the warehousing approach, is a centralized scheme in which the data gathered by the sensors are periodically sent to a central database server where user queries are processed. Local storage, on the other hand, called the distributed approach, exploits the computation capabilities of the sensors, which act as local databases. The data is stored both in a central database server and in the devices themselves, enabling one to query both.
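The energy argument behind the two approaches can be illustrated with a toy in-memory model. This is only a sketch under simplifying assumptions (two hypothetical nodes, message cost counted as the number of values shipped, no real radio or WSN API): warehousing moves every sample to the server, while the distributed approach lets each node answer locally and ship only a partial result.

```python
# Sketch contrasting the two storage/query approaches for a MAX query.
# The node names and readings are illustrative, not an actual WSN dataset.

readings = {                       # per-node temperature samples
    "node1": [21.0, 21.5, 22.0],
    "node2": [19.5, 20.0, 20.5],
}

def warehousing_query(readings):
    """External storage: ship every sample to a central server, then query."""
    central_db = [r for samples in readings.values() for r in samples]
    return max(central_db), len(central_db)    # messages ~ number of samples

def distributed_query(readings):
    """Local storage: each node answers locally; only partial results travel."""
    partials = [max(samples) for samples in readings.values()]
    return max(partials), len(partials)        # messages ~ number of nodes

print(warehousing_query(readings))   # (22.0, 6)
print(distributed_query(readings))   # (22.0, 2)
```

Both paths return the same answer, but the distributed query moves far fewer values over the radio, which is where most of a node's energy budget goes.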
WSNs are used in a wide variety of applications, which may perform various operations on the collected sensor data. For certain applications, such as real-time applications, the sensor data must closely reflect the current state of the targeted environment. However, the environment changes constantly while the data is collected at discrete moments in time. As such, the collected data has a temporal validity: as time advances it becomes less accurate, until it no longer reflects the state of the environment. Thus, applications such as industrial automation, aviation or sensor-network monitoring must query and analyze the data within a bounded time in order to make decisions and react efficiently. In this context, the design of efficient real-time data management solutions is necessary to deal with both time constraints and energy consumption.
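The notion of temporal validity can be made concrete with a minimal sketch. The validity-interval idea is standard real-time-database practice rather than anything specific to this thesis, and the 5-second bound below is an illustrative assumption:

```python
import time

# Sketch: a sensor reading is usable only while its age stays within a
# validity interval. The 5-second bound is an illustrative choice.

VALIDITY_INTERVAL = 5.0   # seconds a reading is considered fresh

def is_valid(reading_ts, now=None):
    """A reading reflects the environment only while its age is bounded."""
    now = time.time() if now is None else now
    return (now - reading_ts) <= VALIDITY_INTERVAL

t0 = 1000.0
print(is_valid(t0, now=1003.0))   # age 3 s -> True
print(is_valid(t0, now=1007.0))   # age 7 s -> False
```

A real-time query processor would refuse to answer from (or would refresh) any reading for which this check fails, which is exactly why query latency must be bounded.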
This thesis studies real-time data management techniques for WSNs. In particular, it focuses on the challenges of handling real-time data storage and querying in WSNs and on efficient real-time data management solutions for WSNs.
First, the main requirements of real-time data management are identified and the real-time data management solutions for WSNs available in the literature are presented. Secondly, in order to provide an energy-efficient real-time data management solution, the techniques used to manage data and queries in WSNs based on the distributed paradigm are studied in depth. Indeed, many research works argue that the distributed approach, rather than warehousing, is the most energy-efficient way of managing data and queries in WSNs. In addition, this approach can provide quasi-real-time query processing, because the most current data is retrieved from the network.
Thirdly, based on these two studies and considering the complexity of developing, testing and debugging this kind of complex system, a model for a simulation framework for real-time database management on WSNs using the distributed approach, together with its implementation, is proposed. This helps to explore various real-time database techniques on WSNs before deployment, saving money and time. Moreover, one may extend the proposed model by adding the simulation of further protocols, or port part of this simulator to another available simulator. To validate the model, a case study considering real-time constraints as well as energy constraints is discussed.
Fourth, a new architecture combining statistical modeling techniques with the distributed approach, together with a query processing algorithm that optimizes real-time user query processing, is proposed. This combination enables a query processing algorithm based on admission control that uses the error tolerance and the probabilistic confidence interval as admission parameters. Experiments based on real-world data sets as well as synthetic data sets demonstrate that the proposed solution optimizes real-time query processing, saving energy while maintaining low latency.

Fundação para a Ciência e a Tecnologia
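The admission-control idea can be sketched as follows. This is a generic illustration of model-based query admission, not the thesis's actual algorithm: the numbers are made up, and the decision rule assumed here is simply that a query is answered from the statistical model whenever the model's confidence-interval half-width fits within the user's error tolerance, and is forwarded to the sensor network otherwise.

```python
# Sketch of model-based query admission control (illustrative numbers).
from statistics import NormalDist

def admit(model_std, error_tolerance, confidence=0.95):
    """Answer from the model iff its confidence-interval half-width
    fits inside the user's error tolerance; otherwise query the network."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)   # ~1.96 for 95%
    half_width = z * model_std
    return half_width <= error_tolerance

# Tight model: answer from the model, saving radio traffic and latency.
print(admit(model_std=0.2, error_tolerance=0.5))   # True
# Loose model: fall back to querying the sensors for fresh data.
print(admit(model_std=0.5, error_tolerance=0.5))   # False
```

The energy saving comes from the admitted case: no radio messages are needed at all, at the cost of a bounded, user-accepted error.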
Fault-tolerant software: dependability/performance trade-offs, concurrency and system support
PhD Thesis. As the use of computer systems becomes more and more widespread in applications that demand high levels of dependability, these applications themselves are growing in complexity at a rapid rate, especially in areas that require concurrent and distributed computing. Such complex systems are very prone to faults and errors. No matter how rigorously fault avoidance and fault removal techniques are applied, software design faults often remain in systems when they are delivered to the customers. In fact, residual software faults are becoming the significant underlying cause of system failures and lack of dependability. There is a tremendous need for systematic techniques for building dependable software, including fault tolerance techniques that enable software-based systems to operate dependably even when potential faults are present. However, although there has been a large amount of research in the area of fault-tolerant software, existing techniques are not yet sufficiently mature as a practical engineering discipline for realistic applications. In particular, they are often inadequate when applied to highly concurrent and distributed software.
This thesis develops new techniques for building fault-tolerant software, addresses the problem of achieving high levels of dependability in concurrent and distributed object systems, and studies system-level support for implementing dependable software. Two schemes are developed: the t/(n-1)-VP approach aims at increasing software reliability while controlling additional complexity, while the SCOP approach presents an adaptive way of dynamically adjusting the reliability and efficiency aspects of software. As a more general framework for constructing dependable concurrent and distributed software, the Coordinated Atomic (CA) Action scheme is examined thoroughly. Key properties of CA actions are formalized, a conceptual model and mechanisms for handling application-level exceptions are devised, and object-based diversity techniques are introduced to cope with potential software faults. These three schemes are evaluated analytically and validated by controlled experiments. System-level support is also addressed with a multi-level system architecture. An architectural pattern for implementing fault-tolerant objects is documented in detail to capture existing solutions and our previous experience. An industrial safety-critical application, the Fault-Tolerant Production Cell, is used as a case study to examine most of the concepts and techniques developed in this research.

ESPRIT
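The design-diversity idea behind version-programming schemes such as these can be illustrated with a minimal adjudicator. This is the classic N-version majority vote, not the thesis's t/(n-1)-VP adjudication specifically; the three "versions" below are toy stand-ins, one of them deliberately faulty:

```python
# Sketch: adjudication by majority vote over diverse versions.
# The three versions are toy stand-ins; v3 is deliberately faulty.
from collections import Counter

def sqrt_v1(x): return x ** 0.5
def sqrt_v2(x): return x ** 0.5
def sqrt_v3(x): return x * 0.5          # design fault: halves instead of rooting

def vote(versions, x, ndigits=9):
    """Run all versions, round to absorb float noise, take the majority."""
    results = Counter(round(v(x), ndigits) for v in versions)
    answer, count = results.most_common(1)[0]
    if count <= len(versions) // 2:
        raise RuntimeError("no majority: versions disagree")
    return answer

print(vote([sqrt_v1, sqrt_v2, sqrt_v3], 9.0))   # 3.0 (the faulty v3 is outvoted)
```

The point of such schemes is that independently designed versions are unlikely to share the same design fault, so the adjudicator masks a minority of faulty results; the complexity-control and adaptivity questions the thesis addresses arise on top of this basic mechanism.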
Joint project PoliFlow: final report
In recent years, groupware and workflow systems have received great attention from vendors, users and researchers. After initially uncritical euphoria, however, several weaknesses of these still-young technologies have been recognized, and their further development will be decisively influenced by how these weaknesses are remedied. The POLIKOM funding initiative investigated how these technologies can be deployed effectively and efficiently in the application domain of public administration. In the PoliFlow project, mechanisms and models were designed that remedy some fundamental shortcomings in the areas of security and flexibility. To this end, the description models for workflows were extended with specific aspects and the execution models with corresponding functionality. To allow this extended functionality to be integrated into various existing systems, reference architectures were designed that can be transferred to a large number of existing models and systems. Further successful concepts were developed for the integration of workflow and synchronous telecooperation as well as for the reliable execution of long-running processes. Another weakness of the technology was the lack of support for heterogeneous system and application environments. To achieve wide adoption of these strategic and highly integrated information systems, the people involved must be given appropriate access from different computers and networks. With the realization of the Stuttgart workflow and telecooperation system (SWATS), which takes recent intra-/Internet technologies (such as Java and CORBA) into account, these requirements could also be met. Moreover, the base system of SWATS formed the foundation for integrating the prototypes from the work areas mentioned above.
Supporting effective unexpected exception handling in workflow management systems within organizational contexts
Doctoral thesis in Informatics (Informatics Engineering), presented to the Universidade de Lisboa through the Faculdade de Ciências, 2008.

Workflow Management Systems (WfMS) support the execution of organizational processes within organizations. Processes are modelled using high-level languages specifying the sequence of tasks the organization has to perform. However, organizational processes do not always flow smoothly in conformance with any model that could be designed, and exceptions to the rule happen often. Organizations require flexibility to react to situations not predicted in the model, and this flexibility should be complemented with robustness to guarantee system reliability even in extreme situations. In our work, we have introduced the concept of WfMS resilience, which comprises these two facets: robustness and flexibility. The main objective of our work is to increase resilience in WfMSs. Among the events demanding WfMS resilience, we focus on ad hoc effective unexpected exceptions: those for which no previous knowledge exists in the organization from which to derive a handling procedure, and for which no plan can be established a priori. These exceptions usually require human intervention and problem-solving activities, since the concrete situation may not be entirely understood before humans start reacting to the event. After discussing existing approaches to increase WfMS resilience, we identify five levels of conformity. The fifth level, the most demanding one, requires unrestricted humanistic interventions in workflow execution. In this thesis, we propose a system to support unrestricted user interventions in the WfMS, and we characterize these interventions as unstructured activities. The system has two modes of operation: it usually works under model control, and it switches to unstructured-activities support when an exception is detected. The exception handling activities are carried out until the system is placed back into a coherent mode, where work may proceed under model execution control.