56 research outputs found

    A Concurrency Control Method Based on Commitment Ordering in Mobile Databases

    Full text link
    Disconnection of mobile clients from the server, at an unpredictable time and for an unknown duration, caused by client mobility, is the most important challenge for concurrency control in mobile databases with a client-server model. Applying common pessimistic concurrency control methods (such as 2PL) in mobile databases leads to long blocking periods and increased transaction waiting times, while optimistic methods are unsuitable because of their high transaction abort rate. In this article, the OPCOT concurrency control algorithm is introduced, based on the optimistic concurrency control method. Reducing communication between mobile clients and the server, decreasing the blocking and deadlock rates of transactions, and increasing the degree of concurrency are the main motivations for using an optimistic method as the basis of OPCOT. To reduce the transaction abort rate, a timestamp is assigned to transaction operations at execution time; the server then uses the assigned timestamps at commit time to check the commitment ordering property of the scheduler. The serializability of the OPCOT scheduler is proved using a serializability graph. Simulation results show that OPCOT decreases the abort rate and waiting time of transactions compared to the 2PL and optimistic algorithms.
    Comment: 15 pages, 13 figures, Journal: International Journal of Database Management Systems (IJDMS)
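    The abstract does not give OPCOT's exact rules, but the general idea it describes (timestamps recorded when operations execute, checked at commit time so that commits respect the order of conflicting operations) can be sketched. The following is a minimal illustration under assumed simplifications, not OPCOT itself; all names are hypothetical:

```python
import threading

class CommitOrderValidator:
    """Server-side commit-time check (illustrative sketch, not OPCOT's exact rules).

    Each transaction records a timestamp per accessed item at operation time.
    At commit, the transaction may commit only if no conflicting transaction
    with a newer operation timestamp has already committed on any of its
    items, so the commit order agrees with the order of conflicting operations.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._committed_ts = {}          # item -> timestamp of last committed access

    def try_commit(self, accesses):
        """accesses: dict mapping item -> timestamp assigned at operation time."""
        with self._lock:
            for item, ts in accesses.items():
                if self._committed_ts.get(item, -1) > ts:
                    return False         # commitment ordering violated: abort
            for item, ts in accesses.items():
                self._committed_ts[item] = ts
            return True                  # commit is consistent with timestamp order

# Two transactions touching item "x": the one with the older operation
# timestamp must commit first, or it aborts.
v = CommitOrderValidator()
print(v.try_commit({"x": 2}))   # True
print(v.try_commit({"x": 1}))   # False: a newer access already committed
```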

    The network is the database

    Get PDF
    Master's thesis (Master of Science)

    Pervasive Data Access in Wireless and Mobile Computing Environments

    Get PDF
    The rapid advance of wireless and portable computing technology has brought considerable research interest and momentum to the area of mobile computing. One of the research focuses is pervasive data access: with wireless connections, users can access information at any place and at any time. However, constraints such as limited client capability, limited bandwidth, weak connectivity, and client mobility raise many challenging technical issues. In recent years, substantial research effort has been devoted to these issues, and a number of interesting results have been reported in the literature. This survey reviews important work in two dimensions of pervasive data access: data broadcast and client caching. In addition, data access techniques aimed at various application requirements (such as time, location, semantics, and reliability) are covered.
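    The survey's two dimensions, data broadcast and client caching, fit one simple shared model: the server repeats a broadcast cycle of items, and a client answers a request from its cache when possible, otherwise waiting for the item's next appearance on the channel. The sketch below is an illustrative toy model with assumed names and parameters, not a technique from the survey:

```python
from collections import OrderedDict

class BroadcastClient:
    """Client in a broadcast-based data dissemination model (illustrative).

    The server repeats a fixed broadcast cycle of items; the client keeps a
    small LRU cache so that cache hits avoid waiting for the broadcast.
    """

    def __init__(self, cycle, cache_size=4):
        self.cycle = cycle               # ordered item ids in one broadcast period
        self.cache = OrderedDict()       # item id -> value, in LRU order
        self.cache_size = cache_size

    def request(self, item, clock, values):
        """Return (value, ticks_waited). `values` maps item id -> value."""
        if item in self.cache:
            self.cache.move_to_end(item)     # cache hit: no tuning into the channel
            return self.cache[item], 0
        # Cache miss: wait until the item next appears on the broadcast cycle.
        pos = self.cycle.index(item)
        wait = (pos - clock) % len(self.cycle)
        self.cache[item] = values[item]
        if len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)   # evict the least recently used entry
        return values[item], wait

client = BroadcastClient(cycle=["a", "b", "c", "d"])
data = {"a": 1, "b": 2, "c": 3, "d": 4}
print(client.request("c", clock=0, values=data))  # miss: waits 2 slots
print(client.request("c", clock=3, values=data))  # hit: waits 0 slots
```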

    Consistent data aggregate retrieval for sensor network systems.

    Get PDF
    Lee Lok Hang. Thesis (M.Phil.), Chinese University of Hong Kong, 2005. Includes bibliographical references. Abstracts in English and Chinese. Contents:
    Chapter 1, Introduction: Sensors and Sensor Networks; Sensor Network Deployment; Motivations; Contributions; Thesis Organization.
    Chapter 2, Literature Review: Data Cube; Data Aggregation in Sensor Networks (Hierarchical, Gossip-based, and Hierarchical Gossip Aggregation); GAF Algorithm; Concurrency Control (Two-phase Locking, Timestamp Ordering).
    Chapter 3, Building Distributed Data Cubes in Sensor Network: Aggregation Operators; Distributed Prefix Sum (PS) Data Cube (notation, querying, building, time bounds, fast aggregate queries on multiple regions, simulation results); Distributed Local Prefix Sum (LPS) Data Cube (notation, querying, building, time bounds, fast aggregate queries on multiple regions, simulation results, PS versus LPS comparison).
    Chapter 4, Concurrency Control and Consistency in Sensor Networks: Data Inconsistency in Sensor Networks; Traditional Concurrency Control Protocols and Sensor Networks; Consistent Retrieval of Data from Distributed Data Cubes.
    Chapter 5, Conclusions. References; Appendix: Publications.
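    The thesis's central structure, the prefix sum (PS) data cube, is a standard technique: precompute cumulative sums so that any axis-aligned region sum needs only a constant number of cell lookups. A minimal 2-D sketch of that standard idea follows; the thesis's distributed and local (LPS) variants are not reproduced here:

```python
class PrefixSumCube2D:
    """Standard 2-D prefix sum structure (illustrative of the PS data cube idea).

    ps[i][j] stores the sum of all raw values in the rectangle (0,0)..(i,j),
    so any axis-aligned region sum needs only four lookups.
    """

    def __init__(self, grid):
        rows, cols = len(grid), len(grid[0])
        self.ps = [[0] * cols for _ in range(rows)]
        for i in range(rows):
            for j in range(cols):
                self.ps[i][j] = (grid[i][j]
                                 + (self.ps[i-1][j] if i else 0)
                                 + (self.ps[i][j-1] if j else 0)
                                 - (self.ps[i-1][j-1] if i and j else 0))

    def region_sum(self, r1, c1, r2, c2):
        """Inclusive rectangle sum over (r1,c1)..(r2,c2) in O(1)."""
        total = self.ps[r2][c2]
        if r1: total -= self.ps[r1-1][c2]
        if c1: total -= self.ps[r2][c1-1]
        if r1 and c1: total += self.ps[r1-1][c1-1]
        return total

cube = PrefixSumCube2D([[1, 2, 3],
                        [4, 5, 6],
                        [7, 8, 9]])
print(cube.region_sum(1, 1, 2, 2))   # 5 + 6 + 8 + 9 = 28
```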

    Blockchain in maritime cybersecurity

    Get PDF
    Blockchain technologies can be used for many purposes, from handling large amounts of data to providing better solutions for privacy protection, user authentication, and a tamper-proof ledger, which has led to growing interest across industries. Smart contracts, fog nodes, and different consensus methods create a scalable environment for securing multi-party connections with equal trust in participating nodes' identities. Different blockchains offer multiple methodologies for use in different environments. This thesis focuses on the Ethereum-based open-source solutions that best fit the remote pilotage environment. Autonomous vehicular networks and remotely operable devices have been a popular research topic in the last few years. Remote pilotage in the maritime environment is presumed to reach its full potential, with fully autonomous vessels, within ten years, which makes the topic interesting for researchers. However, cybersecurity in these environments is especially important, because incidents can lead to financial loss, reputational damage, loss of customer and industry trust, and environmental damage. These complex environments also have multiple attack vectors because of the systems' wireless nature. Denial-of-service (DoS), man-in-the-middle (MITM), message or executable code injection, authentication tampering, and GPS spoofing are among the most common attacks against large IoT systems. This is why blockchain can be used to create a tamper-proof environment with no single point of failure. After extensive research into the best-performing blockchain technologies, Ethereum appeared the most suitable for a decentralised maritime environment. In contrast to most blockchain technologies in 2021, which focus on financial industries and cryptocurrencies, Ethereum focuses on decentralising applications across many different industries. This thesis examines three Ethereum-based blockchain protocol solutions and one operating system for these protocols. All add different features to the base blockchain technology, but after extensive comparison two of these protocols perform better in terms of concurrency and privacy. Hyperledger Fabric and Quorum provide many ways of tackling privacy, concurrency, and parallel execution issues with consistently high throughput. However, Hyperledger Fabric has far better throughput and concurrency management. This makes the combination of the Firefly operating system with the Hyperledger Fabric blockchain protocol the most suitable solution for a complex remote pilotage fairway environment.

    High performance deferred update replication

    Get PDF
    Replication is a well-known approach to implementing storage systems that can tolerate failures. Replicated storage systems are designed such that the state of the system is kept at several replicas. A replication protocol ensures that the failure of a replica is masked by the rest of the system, in a way that is transparent to its users. Replicated storage systems are among the most important building blocks in the design of large-scale applications. Applications at scale are often deployed on top of commodity hardware, store a vast amount of data, and serve a large number of users. The larger the system, the higher its vulnerability to failures. The ability to tolerate failures is not the only desirable feature in a replicated system: storage systems need to be efficient in order to accommodate requests from a large user base while achieving low response times. In that respect, replication can leverage multiple replicas to parallelize the execution of user requests. This thesis focuses on Deferred Update Replication (DUR), a well-established database replication approach. It provides high availability in that every replica can execute client transactions. In terms of performance, it is better than other replication techniques in that only one replica executes a given transaction while the other replicas merely apply state changes. However, DUR suffers from the following drawback: each replica stores a full copy of the database, which has consequences for performance. The first consequence is that DUR cannot take advantage of the aggregated memory available to the replicas. Our first contribution is a distributed caching mechanism that addresses this problem: it makes efficient use of the main memory of an entire cluster of machines while guaranteeing strong consistency. The second consequence is that DUR cannot scale with the number of replicas, since the throughput of a fully replicated system is inherently limited by the number of transactions that a single replica can apply to its local storage. We propose a scalable version of the DUR approach in which the system state is partitioned into smaller replica sets, and transactions that access disjoint partitions are parallelized. The last part of the thesis focuses on latency. We show that the scalable DUR-based approach may have detrimental effects on response time, especially when replicas are geographically distributed. The thesis considers different deployments and their implications for latency, and proposes optimizations that provide substantial gains in geographically distributed environments.
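    In DUR, a transaction executes optimistically at one replica; at commit its readset and writeset are delivered to all replicas in a total order, and each replica deterministically certifies the transaction against previously committed ones, applying the writeset only if no read is stale. A minimal single-replica sketch of that certification step follows; the names and version scheme are assumptions for illustration, not the thesis's implementation:

```python
class DURReplica:
    """Certification step of deferred update replication, single-replica view.

    Transactions record (item, version) pairs they read plus the values they
    intend to write. Delivered in a total order, a transaction commits only
    if every item it read is still at the version it saw, so every replica
    reaches the same commit/abort decision independently.
    """

    def __init__(self):
        self.store = {}                  # item -> (value, version)

    def certify_and_apply(self, readset, writeset):
        """readset: dict item -> version read; writeset: dict item -> new value."""
        for item, version in readset.items():
            if self.store.get(item, (None, 0))[1] != version:
                return False             # stale read: abort (same outcome everywhere)
        for item, value in writeset.items():
            _, v = self.store.get(item, (None, 0))
            self.store[item] = (value, v + 1)
        return True

r = DURReplica()
r.store = {"x": (0, 1)}
print(r.certify_and_apply({"x": 1}, {"x": 42}))  # True: read was current
print(r.certify_and_apply({"x": 1}, {"x": 7}))   # False: version moved on
```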

    Improving network reliability by exploiting path diversity in ad hoc networks with bursty losses

    Get PDF
    In wireless mobile ad hoc networks, end-to-end connections are often subject to failures that do not make the connection non-operational indefinitely but interrupt communication for intermittent short periods of time. These intermittent failures usually arise from host mobility, the dynamics of the wireless medium, or energy-saving mechanisms, and cause bursty packet losses. Reliable communication in this kind of environment is becoming more important with the emerging use of ad hoc networks for carrying diverse multimedia applications such as voice, video, and data. In this thesis, we present a new path reliability model that captures the intermittent availability of paths, and we devise a routing strategy based on this model in order to improve network reliability. Our routing strategy takes advantage of path diversity in the network and uses a diversity coding scheme so as not to compromise efficiency. In a diversity coding scheme, if the original information is encoded using an (N,K) code, it is enough for the destination to correctly receive any K of the N bits to successfully decode the original information. In our scheme, the original information is divided into N subpackets, which are distributed among the available disjoint paths in the network. The distribution of subpackets among the diverse paths is a crucial decision: subpackets should be distributed intelligently so that the probability of successfully reconstructing the original information is maximized. Given the failure statistics of the paths and the code rate (N,K), our strategy determines the allocation of subpackets to each path in such a manner that the probability of reconstructing the original information at the destination is maximized. Simulation results confirm the accuracy and efficiency of our approach and show that our multipath routing strategy improves network reliability substantially compared to single-path routing. In wireless networks, a widely used strategy is to place nodes into a low-energy sleep mode in order to prolong battery life. In this study, we also consider cases where the intermittent availability of nodes is due to the sleep/awake cycles of wireless nodes, and we propose a sleep/awake scheduling strategy that minimizes packet latency while satisfying the energy-saving ratio specified by the energy-saving mechanism.
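    The allocation problem can be made concrete: given per-path reliability and an (N,K) code, choose how many of the N subpackets to send on each disjoint path so that the probability that at least K subpackets arrive is maximized. The brute-force sketch below uses a deliberately simplified abstraction (each path is independently up, delivering all of its subpackets, or down, delivering none), which is not the thesis's bursty-loss model; all parameters are illustrative:

```python
from itertools import product

def success_probability(alloc, path_up_probs, K):
    """P(at least K subpackets arrive) when each path independently delivers
    all of its allocated subpackets (up) or none of them (down)."""
    prob = 0.0
    for states in product([0, 1], repeat=len(alloc)):       # 1 = path is up
        delivered = sum(n for n, s in zip(alloc, states) if s)
        if delivered >= K:
            p = 1.0
            for s, up in zip(states, path_up_probs):
                p *= up if s else (1 - up)
            prob += p
    return prob

def best_allocation(N, K, path_up_probs):
    """Exhaustively search all ways to split N subpackets over the paths."""
    paths = len(path_up_probs)
    best, best_p = None, -1.0
    def rec(i, remaining, alloc):
        nonlocal best, best_p
        if i == paths - 1:
            cand = alloc + [remaining]
            p = success_probability(cand, path_up_probs, K)
            if p > best_p:
                best, best_p = cand, p
            return
        for n in range(remaining + 1):
            rec(i + 1, remaining - n, alloc + [n])
    rec(0, N, [])
    return best, best_p

# Example: 6 subpackets, any 4 reconstruct, three paths of varying reliability.
print(best_allocation(6, 4, [0.9, 0.8, 0.6]))
```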

    Intelligent Data Receiver Mechanism for Wireless Broadcasting System

    Full text link

    Transactional concurrency control for resource constrained applications

    Get PDF
    PhD thesis. Transactions have long been used as a mechanism for ensuring the consistency of databases. Databases, and associated transactional approaches, have always been an active area of research as different application domains and computing architectures have placed ever more elaborate requirements on shared data access. As transactions typically provide consistency at the expense of timeliness (abort/retry) and resources (duplicated shared data and locking), there have been substantial efforts to limit these two aspects of transactions while still satisfying application requirements. In environments where clients are geographically distant from a database, the consistency/performance trade-off becomes acute, as any retrieval of data over a network is not only expensive but also relatively slow compared to co-located client/database systems. Furthermore, for battery-powered clients the increased overhead of transactions can also be viewed as a significant power overhead. However, for all their drawbacks, transactions do provide the data consistency that many application types require. In this thesis we explore the solution space of timely transactional systems for remote clients and centralised databases, with a focus on providing a solution that, compared to others' work in this domain: (a) maintains consistency; (b) lowers latency; (c) improves throughput. To achieve this we revisit a technique first developed to decrease disk access times via local caching of state (for aborted transactions) in order to tackle the problems prevalent in real-time databases. We demonstrate that this technique (rerun) allows a significant change in the typical structure of a transaction, one never before considered, even in rerun systems. This change brings significant performance gains not only in the traditional rerun local database setting but also in the distributed setting. One can argue that our improvements also yield a "greener" solution, as less time coupled with improved throughput affords longer battery life for mobile devices.
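    The abstract names the rerun technique only in outline. As a rough sketch of the underlying idea, a remote client caches the values it fetched during an aborted optimistic transaction and reruns against that cache, re-fetching only items the server reports as stale, so reruns avoid most network round trips. The class names and protocol split below are assumptions for illustration, not the thesis's design:

```python
class Server:
    """Versioned key-value store with commit-time validation (minimal)."""

    def __init__(self, data):
        self.store = {k: (v, 1) for k, v in data.items()}   # item -> (value, version)

    def fetch(self, item):
        return self.store[item]

    def try_commit(self, reads, writes):
        """reads: item -> version read; writes: item -> new value."""
        stale = [i for i, ver in reads.items() if self.store[i][1] != ver]
        if stale:
            return False, stale          # abort, and tell the client what changed
        for i, v in writes.items():
            self.store[i] = (v, self.store.get(i, (None, 0))[1] + 1)
        return True, []

class RerunClient:
    """Optimistic remote transaction with a rerun cache (illustrative).

    On the first attempt every read crosses the network; on an abort the
    client reruns the transaction against its local cache, refreshing only
    the items the server flagged as stale.
    """

    def __init__(self, server):
        self.server = server
        self.cache = {}                  # item -> (value, version)

    def run(self, tx_logic, max_retries=5):
        for _ in range(max_retries):
            reads, writes = {}, {}
            def read(item):
                if item not in self.cache:                  # network round trip
                    self.cache[item] = self.server.fetch(item)
                value, version = self.cache[item]
                reads[item] = version
                return value
            tx_logic(read, writes.__setitem__)
            ok, stale = self.server.try_commit(reads, writes)
            if ok:
                return True
            for item in stale:                              # refresh only stale items
                self.cache[item] = self.server.fetch(item)
        return False

# Example: a transfer that reruns from cache if another client commits first.
srv = Server({"a": 100, "b": 0})
cli = RerunClient(srv)
def transfer(read, write):
    write("a", read("a") - 10)
    write("b", read("b") + 10)
print(cli.run(transfer))   # True
```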