    Data Replication for Improving Data Accessibility in Ad Hoc Networks

    In ad hoc networks, frequent network partitions make data accessibility lower than in conventional fixed networks. In this paper, we address this problem by replicating data items on mobile hosts. First, we propose three replica allocation methods under the assumption that data items are not updated. These three methods take into account both the access frequency from mobile hosts to each data item and the status of network connections. We then extend the proposed methods to handle aperiodic updates and to integrate user profiles consisting of mobile users' schedules, access behavior, and read/write patterns. We also present the results of simulation experiments evaluating the performance of the proposed methods.
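    As a concrete illustration of the frequency-based idea behind the first of these methods, the following minimal sketch allocates a host's limited replica space to the data items it accesses most often. The function name and data shapes are illustrative assumptions, not the paper's notation, and the connectivity-aware refinements of the other two methods are omitted.

    ```python
    def allocate_replicas(access_freq, capacity):
        """Keep replicas of the `capacity` items this host accesses most often.

        access_freq: dict mapping data item id -> this host's access frequency
        capacity:    number of replicas the host's memory can hold
        """
        ranked = sorted(access_freq, key=access_freq.get, reverse=True)
        return ranked[:capacity]

    # Example: a host with room for two replicas
    print(allocate_replicas({"d1": 0.5, "d2": 0.2, "d3": 0.8}, capacity=2))
    # -> ['d3', 'd1']
    ```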

    Cache Serializability: Reducing Inconsistency in Edge Transactions

    Read-only caches are widely used in cloud infrastructures to reduce access latency and load on backend databases. Operators view coherent caches as impractical at genuinely large scale, and many client-facing caches are updated asynchronously by best-effort pipelines. Existing solutions that support cache consistency are inapplicable in this setting, since they require a round trip to the database on every cache transaction. Existing incoherent cache technologies are oblivious to transactional data access, even when the backend database supports transactions. We propose T-Cache, a novel caching policy for read-only transactions in which inconsistency is tolerable (will not cause safety violations) but undesirable (has a cost). T-Cache improves cache consistency despite asynchronous and unreliable communication between the cache and the database. We define cache-serializability, a variant of serializability that is suitable for incoherent caches, and prove that with unbounded resources T-Cache implements this new specification. With limited resources, T-Cache allows the system manager to choose a trade-off between performance and consistency. Our evaluation shows that T-Cache detects many inconsistencies with only nominal overhead. We use synthetic workloads to demonstrate the efficacy of T-Cache when data accesses are clustered, and its adaptive reaction to workload changes. With workloads based on real-world topologies, T-Cache detects 43-70% of the inconsistencies and increases the rate of consistent transactions by 33-58%.

    Ittay Eyal, Ken Birman, and Robbert van Renesse, "Cache Serializability: Reducing Inconsistency in Edge Transactions," IEEE 35th International Conference on Distributed Computing Systems (ICDCS), June 29 - July 2, 2015.
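    To make the notion of detecting inconsistency in an incoherent cache concrete, here is a toy validation check of my own devising, not the paper's T-Cache algorithm: each cached entry records the versions of the keys its writer observed, and a read-only transaction is flagged inconsistent when its reads disagree with any such recorded dependency.

    ```python
    class VersionedCache:
        """Toy incoherent cache with per-entry dependency metadata (illustrative)."""

        def __init__(self):
            self.store = {}  # key -> (value, version, {dep_key: dep_version})

        def put(self, key, value, version, deps):
            self.store[key] = (value, version, dict(deps))

        def read_transaction(self, keys):
            """Return (values, consistent?) for a read-only transaction."""
            snapshot = {k: self.store[k][1] for k in keys if k in self.store}
            for k in snapshot:
                _, _, deps = self.store[k]
                for dep_key, dep_ver in deps.items():
                    # If we also read dep_key, its version must match what the
                    # writer of k observed, or our cached view is inconsistent.
                    if dep_key in snapshot and snapshot[dep_key] != dep_ver:
                        return None, False
            return {k: self.store[k][0] for k in snapshot}, True

    cache = VersionedCache()
    cache.put("x", 10, version=2, deps={})
    cache.put("y", 7, version=5, deps={"x": 1})  # y's writer saw x at version 1
    values, ok = cache.read_transaction(["x", "y"])
    print(ok)  # False: the cached x (version 2) postdates what y's writer saw
    ```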

    Transactional concurrency control for resource constrained applications

    PhD thesis. Transactions have long been used as a mechanism for ensuring the consistency of databases. Databases, and the associated transactional approaches, have always been an active area of research as different application domains and computing architectures have placed ever more elaborate requirements on shared data access. As transactions typically provide consistency at the expense of timeliness (abort/retry) and resources (duplicated shared data and locking), there have been substantial efforts to limit these two aspects of transactions while still satisfying application requirements. In environments where clients are geographically distant from a database, the consistency/performance trade-off becomes acute, as any retrieval of data over a network is not only expensive but also slow compared to co-located client/database systems. Furthermore, for battery-powered clients the increased overhead of transactions can be viewed as a significant power overhead. However, for all their drawbacks, transactions do provide the data consistency that many application types require. In this thesis we explore the solution space of timely transactional systems for remote clients and centralised databases, with a focus on providing a solution that, compared to other work in this domain, (a) maintains consistency, (b) lowers latency, and (c) improves throughput. To achieve this, we revisit a technique first developed to decrease disk access times via local caching of state (for aborted transactions) in order to tackle the problems prevalent in real-time databases. We demonstrate that this technique (rerun) allows a significant change in the typical structure of a transaction, one never before considered, even in rerun systems. This change brings significant performance gains not only in the traditional local-database rerun setting but also in the distributed setting. As a byproduct, our improvements arguably yield a "greener" solution, since shorter execution times coupled with improved throughput extend the battery life of mobile devices.
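    The rerun idea can be sketched as follows; this is a minimal illustration under assumed interfaces (`fetch`, `validate`, and the toy `store` are hypothetical stand-ins for the remote database), not the thesis's actual protocol. On a validation failure the transaction logic is rerun against locally cached reads, refetching only the items found stale, so reruns avoid repeating every slow network round trip.

    ```python
    store = {"a": (1, 0), "b": (2, 0)}   # toy backend: key -> (value, version)

    def fetch(key):                      # hypothetical remote read (expensive)
        return store[key]

    def validate(read_versions):         # hypothetical validation: stale keys
        return [k for k, v in read_versions.items() if store[k][1] != v]

    def run_with_rerun(txn_logic, max_attempts=3):
        cache = {}                       # reads retained locally across reruns
        for _ in range(max_attempts):
            def read(key):
                if key not in cache:
                    cache[key] = fetch(key)   # remote fetch, once per key
                return cache[key][0]          # reruns read the local copy
            result = txn_logic(read)
            stale = validate({k: ver for k, (_, ver) in cache.items()})
            if not stale:
                return result                 # read set still current: commit
            for key in stale:                 # refresh only the stale items
                cache[key] = fetch(key)
        raise RuntimeError("transaction aborted after repeated reruns")

    print(run_with_rerun(lambda read: read("a") + read("b")))  # -> 3
    ```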

    AN ENERGY-EFFICIENT CONCURRENCY CONTROL ALGORITHM FOR MOBILE AD-HOC NETWORK DATABASES

    With the rapid growth of wireless networking technology and mobile computing devices, there is an increasing demand for processing mobile database transactions in mission-critical applications, such as disaster rescue and military operations, that do not require a fixed infrastructure, so that mobile users can access and manipulate a database anytime and anywhere. A Mobile Ad-hoc Network (MANET) is a collection of mobile, wireless, battery-powered nodes without a fixed infrastructure, and therefore fits such applications well. However, when a node runs out of energy or has insufficient energy to function, communication may fail, disconnections may occur, the execution of transactions may be prolonged, and time-critical transactions may be aborted because they miss their deadlines. To guarantee timely and correct results for multiple concurrent transactions, energy-efficient database concurrency control (CC) techniques become critical. Due to the characteristics of MANET databases, existing CC algorithms cannot work effectively.

    In this dissertation, an energy-efficient CC algorithm, called Sequential Order with Dynamic Adjustment (SODA), is developed for mission-critical MANET databases in a clustered network architecture, where nodes are divided into clusters, each of which has a node, called a cluster head, responsible for the processing of all nodes in the cluster. The cluster structure is constructed using a novel weighted clustering algorithm, called MEW (Mobility, Energy, and Workload), that uses node mobility, remaining energy, and workload to group nodes into clusters and to select cluster heads; a sketch of this election step appears below. In SODA, cluster heads are elected to work as coordinating servers in order to conserve energy and balance energy consumption among servers, thereby prolonging the lifetime of the network. SODA is based on optimistic CC to offer high transaction concurrency and avoid unbounded blocking time. It uses the sequential order of committed transactions to simplify the validation process and dynamically adjusts that order to reduce transaction aborts and improve system throughput.

    Besides a correctness proof and theoretical analysis, comprehensive simulation experiments were conducted to study the performance of MEW and SODA. The simulation results confirm that MEW prolongs the lifetime of MANETs and has a lower cluster-head change rate and re-affiliation rate than the existing algorithm MOBIC. The simulation results also show the superiority of SODA over the existing techniques SESAMO and S2PL in terms of transaction abort rate, system throughput, total energy consumption by all servers, and the degree to which energy consumption is balanced among servers.
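    The clustering side of this design can be illustrated with a small sketch of MEW-style cluster-head election. The weights and the linear scoring below are assumptions based on the description above (a good head moves little, has much remaining energy, and carries little workload); they are not the dissertation's exact formula.

    ```python
    def mew_score(node, w_mob=0.4, w_energy=0.4, w_load=0.2):
        """Combined weight; lower is a better cluster-head candidate.

        node: dict with 'mobility', 'energy', 'workload' normalized to [0, 1].
        """
        return (w_mob * node["mobility"]
                + w_energy * (1.0 - node["energy"])  # more energy -> lower weight
                + w_load * node["workload"])

    def elect_cluster_head(nodes):
        """Return the id of the node with the lowest combined weight."""
        return min(nodes, key=lambda nid: mew_score(nodes[nid]))

    nodes = {
        "n1": {"mobility": 0.1, "energy": 0.9, "workload": 0.3},
        "n2": {"mobility": 0.6, "energy": 0.5, "workload": 0.2},
    }
    print(elect_cluster_head(nodes))  # -> n1: slow, well-charged, lightly loaded
    ```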

    Improving Transaction Acceptance of Incoherent Updates Using Dynamic Merging In a Relational Database

    Thesis (M.S.), School of Computing and Engineering, University of Missouri-Kansas City, 2015. Thesis advisor: Vijay Kumar. Includes bibliographical references (pages 205-206).

    Though long established, mobile computing continues to move to the forefront of technology and business, and this ever-expanding field holds no shortage of opportunity. Its benefits and demand are abundant, but it is not without challenges. Maintaining both data consistency and availability is one of the most difficult prospects in mobile computing. These difficulties are exacerbated by the ability of mobile platforms to disconnect for extended periods of time while continuing to function normally. Data collected and modified in such a state risks being abandoned, as no static algorithm exists to determine whether it is consistent when integrated back into the server. This thesis proposes a mechanism to improve transaction acceptance without sacrificing the consistency of the related data on both the client and the server. Particular consideration is given to honoring data that a client produces or modifies while disconnected. The underlying framework leverages merging strategies to resolve data conflicts using a custom tiered dynamic merge granularity; a simplified sketch follows below. The merge process is aided by a custom lock-promotion scheme applied in the application layer at the server. The improved incoherence-resolution process is then examined for its impact on the fate of such transactions and on the related bandwidth utilization.

    Contents: Introduction -- Related work -- Approach -- Implementation -- Evaluation -- Conclusion -- Appendix A. Client API documentation -- Appendix B. Client code -- Appendix C. Server code -- Appendix D. Property performance data
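    The tiered merge idea can be pictured with a three-way merge that tries row-level resolution first and falls back to field-level resolution; the tiers and rules below are my illustrative assumptions, not the thesis's exact granularity scheme.

    ```python
    def merge_row(server_row, client_row, base_row):
        """Three-way merge of one row; returns the merged row or raises."""
        if server_row == base_row:        # server unchanged: take the client row
            return dict(client_row)
        if client_row == base_row:        # client unchanged: keep the server row
            return dict(server_row)
        merged = {}                       # both changed: merge field by field
        for field in base_row:
            s, c, b = server_row[field], client_row[field], base_row[field]
            if s == c or c == b:          # agreement, or only the server changed
                merged[field] = s
            elif s == b:                  # only the client changed this field
                merged[field] = c
            else:                         # both changed it differently
                raise ValueError(f"unresolvable conflict on field {field!r}")
        return merged

    base   = {"qty": 5, "price": 10}
    server = {"qty": 5, "price": 12}      # server changed the price
    client = {"qty": 8, "price": 10}      # disconnected client changed the qty
    print(merge_row(server, client, base))  # -> {'qty': 8, 'price': 12}
    ```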

    Adaptive Caching of Distributed Components

    Locality of reference is an important property of distributed applications. Caching is typically employed during the development of such applications to exploit this property by locally storing queried remote data: subsequent accesses to the same data can be accelerated by serving them from the local store. Current middleware architectures, however, offer the application programmer little support for this non-functional aspect. This thesis therefore factors caching out into a separate, configurable middleware service. Integration into the software development process provides for early modeling and later reuse of caching-specific metadata. At runtime, the implemented system can additionally adapt to changing usage behavior with respect to the cacheability of data, healing misconfigurations and optimizing itself toward an appropriate configuration. Speculative prefetching of data likely to be queried in the immediate future complements the presented approach.
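    As a rough illustration of runtime adaptation to cacheability, the sketch below wraps a loader and stops caching any key whose observed invalidations outweigh its hits; the counters and threshold are illustrative assumptions, not the thesis's middleware design.

    ```python
    class AdaptiveCache:
        """Toy cache that learns which keys are worth caching (illustrative)."""

        def __init__(self, loader, max_invalidation_ratio=0.5):
            self.loader = loader
            self.max_ratio = max_invalidation_ratio
            self.data, self.hits, self.invalidations, self.cacheable = {}, {}, {}, {}

        def get(self, key):
            if not self.cacheable.get(key, True):
                return self.loader(key)   # caching switched off for this key
            if key in self.data:
                self.hits[key] = self.hits.get(key, 0) + 1
                return self.data[key]
            self.data[key] = self.loader(key)
            return self.data[key]

        def invalidate(self, key):
            self.data.pop(key, None)
            self.invalidations[key] = self.invalidations.get(key, 0) + 1
            total = self.hits.get(key, 0) + self.invalidations[key]
            # Adapt: stop caching keys invalidated more often than they hit.
            self.cacheable[key] = self.invalidations[key] / total <= self.max_ratio

    cache = AdaptiveCache(loader=str.upper)
    cache.get("a"); cache.get("a")        # one miss, then one hit
    cache.invalidate("a")                 # 1 hit vs 1 invalidation
    print(cache.cacheable["a"])           # -> True (ratio 0.5 is at threshold)
    ```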

    The Design of Secure Mobile Databases: An Evaluation of Alternative Secure Access Models

    This research considers how mobile databases can be designed to be both secure and usable. A mobile database is one that is accessed and manipulated via mobile information devices over a wireless medium. A prototype mobile database was designed and then tested against secure access control models to determine whether, and how well, these models perform in securing a mobile database. The methodology in this research consisted of five steps. Initially, a preliminary analysis was done to delineate the environment in which the prototype mobile database would be used. Requirements definitions were established to gain a detailed understanding of the users and the function of the database system. Conceptual database design was then employed to produce a database design model. In the physical database design step, the database was denormalized to reflect some unique computing requirements of the mobile environment. Finally, the mobile database design was tested against three secure access control models, and observations were made.
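    For flavor, a toy role-based access check of the kind such access control models formalize is shown below; the roles, tables, and policy entries are purely illustrative assumptions and are not the three models evaluated in this study.

    ```python
    # Hypothetical policy: role -> set of (table, action) permissions
    POLICY = {
        "field_medic": {("patients", "read"), ("patients", "update")},
        "dispatcher":  {("patients", "read")},
    }

    def authorized(role, table, action):
        """Allow the action only if the role's policy grants it."""
        return (table, action) in POLICY.get(role, set())

    print(authorized("dispatcher", "patients", "update"))  # -> False
    ```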