3 research outputs found
Prefetching techniques for client server object-oriented database systems
The performance of many object-oriented database applications suffers from page fetch latency, which is determined by the expense of disk access. In this work we suggest several prefetching techniques to avoid, or at least reduce, page fetch latency. In practice no prediction technique is perfect, and no prefetching technique can entirely eliminate the delay due to page fetch latency. We are therefore interested in the trade-off between the level of accuracy required to obtain good results in terms of elapsed-time reduction and the processing overhead needed to achieve that level of accuracy. If prefetching accuracy is high, the total elapsed time of an application can be reduced significantly; if prefetching accuracy is low, many incorrect pages are prefetched, and the extra load on the client, network, server and disks degrades overall system performance. Access patterns of object-oriented databases are often complex and usually hard to predict accurately. The ..
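The accuracy trade-off described in the abstract can be illustrated with a minimal sketch. The one-block-lookahead predictor, the numeric page trace, and the unbounded cache below are all invented for illustration; the paper's actual prefetching techniques are more sophisticated.

```python
# Hypothetical sketch: a "next page" prefetcher. On a sequential scan the
# guess is usually right (few demand fetches, little waste); on a random
# access pattern most prefetches would be wasted I/O.

def simulate(trace, prefetch=True):
    cache = set()
    prefetched = set()
    fetches = useful = 0
    for page in trace:
        if page not in cache:
            fetches += 1            # demand fetch: the client stalls here
            cache.add(page)
        if page in prefetched:
            useful += 1             # a prefetch guess paid off
            prefetched.discard(page)
        if prefetch:
            nxt = page + 1          # guess: sequential access pattern
            if nxt not in cache:
                cache.add(nxt)      # fetched in the background, no stall
                prefetched.add(nxt)
    wasted = len(prefetched)        # prefetched but never referenced
    return fetches, useful, wasted

# Sequential scan: one demand fetch, nine useful prefetches, one wasted.
print(simulate(list(range(10))))         # (1, 9, 1)
print(simulate(list(range(10)), False))  # (10, 0, 0) without prefetching
```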
Automatic Generation of Distributed Runtime Infrastructure for Internet of Things
Ph.D. Thesis. The Internet of Things (IoT) represents a network of connected devices that are able to
cooperate and interact with each other in order to reach a particular goal. To attain this,
the devices are equipped with identifying, sensing, networking and processing capabilities.
Cloud computing, on the other hand, is the delivery of on-demand computing services,
from applications to storage to processing power, typically over the internet. Clouds
bring a number of advantages to distributed computing because of their highly available
pools of virtualized computing resources. Due to the large number of connected devices, real-world
IoT use cases may generate overwhelmingly large amounts of data. This prompts the use
of cloud resources for the processing, storage and analysis of the data. Therefore, a typical
IoT system comprises a front-end (devices that collect and transmit data) and a back-end,
typically distributed Data Stream Management Systems (DSMSs) deployed on cloud
infrastructure, for data processing and analysis.
Increasingly, new IoT devices are being manufactured to provide a limited execution
environment on top of their data sensing and transmitting capabilities. This consequently
demands a change in the way data is processed in a typical IoT-cloud setup. The
traditional, centralised cloud-based data processing model, in which IoT devices are used
only for data collection, does not provide an efficient utilisation of all available resources.
In addition, the fundamental requirements of real-time data processing such as short
response time may not always be met. This prompts a new processing model which is
based on decentralising the data processing tasks. The new decentralised architectural
pattern allows some parts of a data streaming computation to be executed directly on edge
devices, closer to where the data is collected. Extending the processing capabilities to the
IoT devices increases the robustness of applications as well as reduces the communication
overhead between different components of an IoT system. However, this new pattern poses new challenges in the development, deployment and management of IoT applications.
Firstly, there exists a large resource gap between the two parts of a typical IoT system (i.e.
clouds and IoT devices); hence, prompting a new approach for IoT applications deployment
and management. Secondly, the new decentralised approach necessitates the deployment
of DSMS on distributed clusters of heterogeneous nodes resulting in unpredictable runtime
performance and complex fault characteristics. Lastly, the environment where DSMSs are
deployed is very dynamic due to user or device mobility, workload variation, and resource
availability.
In this thesis we present solutions to address the aforementioned challenges. We
investigate how a high-level description of a data streaming computation can be used
to automatically generate a distributed runtime infrastructure for Internet of Things.
Subsequently, we develop a deployment and management system capable of distributing
different operators of a data streaming computation onto different IoT gateway devices
and cloud infrastructure.
To address the other challenges, we propose a non-intrusive approach for performance
evaluation of DSMSs and present a protocol and a set of algorithms for dynamic migration
of stateful data stream operators. To improve our migration approach, we introduce an
optimisation technique that minimises application downtime and improves the
accuracy of a data stream computation.
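The pause-snapshot-restore pattern that underlies stateful operator migration can be sketched briefly. The class and function names below are invented for illustration; the thesis's actual protocol additionally handles in-flight tuples, routing updates, and failures.

```python
# Hypothetical sketch of migrating a stateful stream operator: pause the
# stream, snapshot the operator's state, restore it on the target node,
# replay tuples buffered during the pause, then resume.

class CountOperator:
    """A stateful operator: counts occurrences of each key."""
    def __init__(self, state=None):
        self.state = dict(state or {})
    def process(self, key):
        self.state[key] = self.state.get(key, 0) + 1
    def snapshot(self):
        return dict(self.state)          # serialisable copy of the state

def migrate(old_op, buffered):
    snap = old_op.snapshot()             # 1. snapshot state on source node
    new_op = CountOperator(snap)         # 2. restore on the target node
    for key in buffered:                 # 3. replay tuples buffered
        new_op.process(key)              #    while the stream was paused
    return new_op                        # 4. resume: reroute the stream

op = CountOperator()
for k in ["a", "b", "a"]:
    op.process(k)
moved = migrate(op, buffered=["b"])
print(moved.state)   # {'a': 2, 'b': 2}
```

Downtime in this scheme is dominated by steps 1-3, which motivates optimisations that shrink the snapshot or overlap it with normal processing.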
Anomalies and Adaptation in the Analysis and Development of Prepaging Policies
This work was supported in part by a gift from Novell, Inc. Current address: Dept. of Computer Science, University of Wisconsin at Madison.

if there is not enough memory. So, for example, an LRU ordering always evicts the Least Recently Used page; for a memory of size n, only the n most recently used pages are kept in memory. The priority ordering of pages is implemented as a list (or "queue") of page numbers, ordered by the recency of the latest touch to each page; a page's position in the queue signifies its priority. Memory references reorder the queue in a way that is independent of any particular memory size, but whose interpretation varies depending on what memory sizes are of interest.

1.1 Background: LRU simulation
The goal of trace-driven memory simulation is to simulate the behavior of memories of different sizes by processing a reference trace (i.e., a list of the memory reads and write
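The size-independent recency queue described above enables single-pass simulation of every memory size at once: a reference to the page at queue depth d hits in any LRU memory of size at least d. A minimal sketch, with invented names and a toy trace (the paper's prepaging policies build on this idea):

```python
# Hypothetical sketch of one-pass LRU stack simulation. The recency
# queue ("stack") holds the most recently used page at index 0; a
# reference's stack distance determines which memory sizes it hits in.

def lru_stack_sim(trace, sizes):
    stack = []                     # most recently used page first
    hits = {n: 0 for n in sizes}
    for page in trace:
        if page in stack:
            depth = stack.index(page) + 1   # 1-based stack distance
            for n in sizes:
                if n >= depth:              # hit in any memory >= depth
                    hits[n] += 1
            stack.remove(page)
        stack.insert(0, page)               # page becomes most recent
    return hits

# A cyclic trace over three pages: three frames always re-hit,
# two frames miss every time.
print(lru_stack_sim(["a", "b", "c", "a", "b", "c"], sizes=[2, 3]))
# {2: 0, 3: 3}
```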