
    Transactional Agents for Pervasive Computing

    Pervasive computing enables the seamless integration of computing technology into everyday life, making up-to-date information and services proactively available to users based on their needs and behaviors. We aim to develop a transaction management scheme as a pertinent component for such environments, supported by either structured or ad hoc networks. We propose Transactional Agents for Pervasive COmputing (TAPCO), which utilizes a dynamic hierarchical metadata structure that captures the semantic contents of the underlying heterogeneous data sources. Mobile agents process transactions collaboratively to preserve the ACID properties without violating the local autonomy of the data sources. TAPCO is simulated and compared against the Decentralized Serialization Graph Testing (DSGT) protocol, and the results show that TAPCO outperforms DSGT in several ways. In contrast to DSGT, which does not consider local transactions, TAPCO supports both local and global transactions without violating local autonomy.
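
    The following sketch (my illustration, not code from the paper) shows the kind of dynamic hierarchical metadata lookup the abstract describes: interior nodes summarize the semantic domains of the heterogeneous sources beneath them, so an agent can route a transaction to only the relevant data sources. All names (MetaNode, route, the sample domains) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class MetaNode:
    domain: str                                    # semantic label, e.g. "weather"
    sources: list = field(default_factory=list)    # local data sources at this node
    children: list = field(default_factory=list)   # finer-grained sub-domains

    def route(self, needed_domain):
        """Collect the data sources whose subtree covers needed_domain."""
        if self.domain == needed_domain:
            return self._all_sources()
        hits = []
        for child in self.children:
            hits.extend(child.route(needed_domain))
        return hits

    def _all_sources(self):
        out = list(self.sources)
        for child in self.children:
            out.extend(child._all_sources())
        return out

root = MetaNode("environment", children=[
    MetaNode("weather", sources=["station_db_1", "station_db_2"]),
    MetaNode("traffic", sources=["city_feed"]),
])
print(root.route("weather"))   # ['station_db_1', 'station_db_2']
```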

    The Atomic Manifesto: a Story in Four Quarks

    This report summarizes the viewpoints and insights gathered in the Dagstuhl Seminar on Atomicity in System Design and Execution, which was attended by 32 people from four different scientific communities: database and transaction processing systems, fault tolerance and dependable systems, formal methods for system design and correctness reasoning, and hardware architecture and programming languages. Each community presents its position on interpreting the notion of atomicity and the existing state of the art, and each community identifies scientific challenges that should be addressed in future work. In addition, the report discusses common themes across communities and strategic research problems that require multiple communities to team up for a viable solution. The general theme of how to specify, implement, compose, and reason about extended and relaxed notions of atomicity is viewed as a key piece in coping with the pressing issue of building and maintaining highly dependable systems that comprise many components with complex interaction patterns.

    Optimistic replication

    Data replication is a key technology in distributed data sharing systems, enabling higher availability and performance. This paper surveys optimistic replication algorithms that allow replica contents to diverge in the short term, in order to support concurrent work practices and to tolerate failures in low-quality communication links. The importance of such techniques is increasing as collaboration through wide-area and mobile networks becomes popular. Optimistic replication techniques are different from traditional "pessimistic" ones. Instead of synchronous replica coordination, an optimistic algorithm propagates changes in the background, discovers conflicts after they happen, and reaches agreement on the final contents incrementally. We explore the solution space for optimistic replication algorithms. This paper identifies key challenges facing optimistic replication systems: ordering operations, detecting and resolving conflicts, propagating changes efficiently, and bounding replica divergence. It provides a comprehensive survey of techniques developed for addressing these challenges.
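
    As an illustration of the conflict-detection challenge the survey identifies (a generic sketch, not an algorithm taken from the paper), version vectors are one standard way an optimistic system decides, after background propagation, whether two replica states are causally ordered or concurrent and therefore in conflict:

```python
def compare(vv_a, vv_b):
    """Compare two version vectors (dicts: replica id -> update count)."""
    keys = set(vv_a) | set(vv_b)
    a_ahead = any(vv_a.get(k, 0) > vv_b.get(k, 0) for k in keys)
    b_ahead = any(vv_b.get(k, 0) > vv_a.get(k, 0) for k in keys)
    if a_ahead and b_ahead:
        return "conflict"      # concurrent updates: reconciliation needed
    if a_ahead:
        return "a dominates"   # b can simply fast-forward to a's state
    if b_ahead:
        return "b dominates"
    return "equal"

print(compare({"r1": 2, "r2": 1}, {"r1": 1, "r2": 3}))  # conflict
```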

    Transaction management in mobile multidatabases.

    This dissertation studies transaction management in the mobile Multidatabase environment, that is, the management of transactions within the context of both the mobile and Multidatabase environments. Two new transaction management techniques for the mobile Multidatabase environment, the PS and Semantic-PS techniques, are proposed. These techniques define two new states (Disconnected and Suspended) to address the disconnectivity of the mobile user. A new Partial Global Serialization Graph algorithm is introduced to verify the isolation property of global transactions. This algorithm verifies the serializability of a global transaction by constructing a partial global serialization graph, relying on the propagation of (serialization) information to ensure that the partial graph contains sufficient information to verify serializability. The unfair treatment of mobile transactions due to their prolonged execution time is minimized through pre-serialization, which allows mobile transactions to establish their serialization order prior to completing their execution. The Internet and advances in wireless communication technology have transformed many facets of the computer environment. Virtual connectivity through the Internet has led to a new genre of software systems, i.e., cooperating autonomous systems: systems that cooperate with each other to provide extended services to the user. Multidatabase systems, sets of databases that cooperate with each other in order to provide a single logical view of the underlying information, are an example of such systems. Advances in wireless communication technology dictate that the services available to the wired user be made available to the mobile user as well. Finally, analytical evaluation and simulation are carried out to study the performance of these techniques and to compare their performance to that of the Kangaroo [DHB97] technique. Although the PS and Semantic-PS techniques enforce the isolation property, the evaluation results establish that their service time is not significantly greater than that of the Kangaroo technique. In addition, the simulation establishes that pre-serialization effectively minimizes the unfair treatment of mobile transactions.
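
    A minimal sketch of the serializability test underlying graph-based algorithms of this kind (generic textbook logic, not the dissertation's exact Partial Global Serialization Graph algorithm): add an edge Ti -> Tj whenever Ti must precede Tj in any equivalent serial order, then accept the schedule only if the graph is acyclic. The example graph is hypothetical.

```python
def has_cycle(graph):
    """graph: dict mapping transaction id -> set of successor ids."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {t: WHITE for t in graph}

    def visit(t):
        color[t] = GRAY                    # t is on the current DFS path
        for succ in graph.get(t, ()):
            if color.get(succ, WHITE) == GRAY:
                return True                # back edge: cycle found
            if color.get(succ, WHITE) == WHITE and visit(succ):
                return True
        color[t] = BLACK                   # fully explored
        return False

    return any(color[t] == WHITE and visit(t) for t in graph)

g = {"T1": {"T2"}, "T2": {"T3"}, "T3": {"T1"}}   # T1 -> T2 -> T3 -> T1
print(has_cycle(g))   # True: this schedule is not serializable
```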

    Proceedings of the real-time database workshop, Eindhoven, 23 February 1995


    AN ENERGY-EFFICIENT CONCURRENCY CONTROL ALGORITHM FOR MOBILE AD-HOC NETWORK DATABASES

    With the rapid growth of wireless networking technology and mobile computing devices, there is an increasing demand for processing mobile database transactions in mission-critical applications, such as disaster rescue and military operations, that do not require a fixed infrastructure, so that mobile users can access and manipulate the database anytime and anywhere. A Mobile Ad-hoc Network (MANET) is a collection of mobile, wireless and battery-powered nodes without a fixed infrastructure; it therefore fits such applications well. However, when a node runs out of energy or has insufficient energy to function, communication may fail, disconnections may happen, execution of transactions may be prolonged, and time-critical transactions may be aborted if they miss their deadlines. In order to guarantee timely and correct results for multiple concurrent transactions, energy-efficient database concurrency control (CC) techniques become critical. Due to the characteristics of MANET databases, existing CC algorithms cannot work effectively.

    In this dissertation, an energy-efficient CC algorithm, called Sequential Order with Dynamic Adjustment (SODA), is developed for mission-critical MANET databases in a clustered network architecture, where nodes are divided into clusters, each of which has a node, called a cluster head, responsible for the processing of all nodes in the cluster. The cluster structure is constructed using a novel weighted clustering algorithm, called MEW (Mobility, Energy, and Workload), that uses node mobility, remaining energy and workload to group nodes into clusters and select cluster heads. In SODA, cluster heads are elected to work as coordinating servers in order to conserve energy and balance energy consumption among servers, prolonging the lifetime of the network. SODA is based on optimistic CC to offer high transaction concurrency and avoid unbounded blocking time. It utilizes the sequential order of committed transactions to simplify the validation process and dynamically adjusts that order to reduce transaction aborts and improve system throughput.

    Besides a correctness proof and theoretical analysis, comprehensive simulation experiments were conducted to study the performance of MEW and SODA. The simulation results confirm that MEW prolongs the lifetime of MANETs and has a lower cluster-head change rate and re-affiliation rate than the existing algorithm MOBIC. The simulation results also show the superiority of SODA over the existing techniques, SESAMO and S2PL, in terms of transaction abort rate, system throughput, total energy consumption by all servers, and degree of balancing energy consumption among servers.
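
    For context, here is a minimal sketch of the backward-validation step that optimistic CC schemes of this family build on (this is textbook OCC, not SODA itself, which additionally reorders committed transactions to avoid some of these aborts). The Txn class and sets are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Txn:
    read_set: set = field(default_factory=set)
    write_set: set = field(default_factory=set)

def validate(txn, committed_since_start):
    """Backward validation: reject txn if it read any item that a
    transaction committing after txn's start has since written."""
    return all(not (txn.read_set & other.write_set)
               for other in committed_since_start)

t = Txn(read_set={"x"}, write_set={"y"})
print(validate(t, [Txn(write_set={"x"})]))   # False: abort (or reorder)
```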

    Performance assessment of real-time data management on wireless sensor networks

    Technological advances in recent years have allowed the maturity of Wireless Sensor Networks (WSNs), which aim at performing environmental monitoring and data collection. This sort of network is composed of hundreds, thousands or possibly even millions of tiny smart computers known as wireless sensor nodes, which may be battery powered and equipped with sensors, a radio transceiver, a Central Processing Unit (CPU) and some memory. However, due to their small size and the requirement of low-cost nodes, sensor node resources such as processing power, storage and especially energy are very limited. Once the sensors perform their measurements of the environment, the problem of storing and querying the data arises. In fact, the sensors have restricted storage capacity, and the ongoing interaction between sensors and environment results in huge amounts of data. Techniques for data storage and querying in WSNs can be based on either external or local storage. External storage, called the warehousing approach, is a centralized system in which the data gathered by the sensors are periodically sent to a central database server where user queries are processed. Local storage, on the other hand, called the distributed approach, exploits the computational capabilities of the sensors, which act as local databases. The data is stored both in a central database server and in the devices themselves, enabling one to query both.

    WSNs are used in a wide variety of applications, which may perform certain operations on collected sensor data. For certain applications, such as real-time applications, the sensor data must closely reflect the current state of the targeted environment. However, the environment changes constantly while the data is collected at discrete moments in time. As such, the collected data has a temporal validity: as time advances it becomes less accurate, until it no longer reflects the state of the environment. Thus, applications such as industrial automation and aviation must query and analyze the data within a bounded time in order to make decisions and react efficiently. In this context, the design of efficient real-time data management solutions is necessary to deal with both time constraints and energy consumption.

    This thesis studies real-time data management techniques for WSNs. In particular, it focuses on the challenges in handling real-time data storage and querying for WSNs and on efficient real-time data management solutions for WSNs. First, the main specifications of real-time data management are identified and the real-time data management solutions for WSNs available in the literature are presented. Second, in order to provide an energy-efficient real-time data management solution, the techniques used to manage data and queries in WSNs based on the distributed paradigm are studied in depth. In fact, many research works argue that the distributed approach is the most energy-efficient way of managing data and queries in WSNs, rather than warehousing. In addition, this approach can provide quasi-real-time query processing, because the most current data will be retrieved from the network. Third, based on these two studies and considering the complexity of developing, testing, and debugging this kind of complex system, a model for a simulation framework of real-time database management on WSNs that uses the distributed approach, together with its implementation, is proposed. This helps in exploring various real-time database techniques for WSNs before deployment, saving money and time. Moreover, the proposed model can be improved by adding the simulation of protocols, or part of this simulator can be placed on another available simulator. To validate the model, a case study considering real-time constraints as well as energy constraints is discussed. Fourth, a new architecture that combines statistical modeling techniques with the distributed approach, together with a query processing algorithm to optimize real-time user query processing, is proposed. This combination makes it possible to perform query processing based on admission control that uses the error tolerance and a probabilistic confidence interval as admission parameters. Experiments based on real-world data sets as well as synthetic data sets demonstrate that the proposed solution optimizes real-time query processing to save energy while meeting low latency requirements.

    Fundação para a Ciência e a Tecnologia
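
    A hedged sketch of the admission-control idea described above: answer a query from a per-sensor statistical model when the model's confidence interval at the requested confidence level already fits within the user's error tolerance, and fall back to an energy-hungry in-network query otherwise. The Gaussian model and all names here are illustrative assumptions, not the thesis's exact algorithm.

```python
from statistics import NormalDist

def query_network():
    # Placeholder for an actual in-network query to the sensor node.
    return 21.7

def answer_query(model_mean, model_std, error_tol, confidence=0.95):
    """Return (value, served_from) under an assumed Gaussian sensor model."""
    z = NormalDist().inv_cdf((1 + confidence) / 2)   # ~1.96 for 95%
    half_width = z * model_std                       # CI half-width
    if half_width <= error_tol:
        return model_mean, "model"        # cheap: no radio traffic needed
    return query_network(), "network"     # expensive fallback to the WSN

print(answer_query(21.5, 0.2, error_tol=0.5))   # served from the model
```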

    Correctness and Progress Verification of Non-Blocking Programs

    The progression of multi-core processors has inspired the development of concurrency libraries that guarantee safety and liveness properties of multiprocessor applications. The difficulty of reasoning about safety and liveness properties in a concurrent environment has led to the development of tools that verify whether a concurrent data structure meets a correctness condition or progress guarantee. However, these tools possess shortcomings regarding the ability to verify a composition of data structure operations. Additionally, verification techniques for transactional memory evaluate correctness based on low-level read/write histories, which is not applicable to transactional data structures that use high-level semantic conflict detection. In my dissertation, I present tools for checking the correctness of multiprocessor programs that overcome the limitations of previous correctness verification techniques. Correctness Condition Specification (CCSpec) is the first tool that automatically checks the correctness of a composition of concurrent multi-container operations performed in a non-atomic manner. Transactional Correctness tool for Abstract Data Types (TxC-ADT) is the first tool that can check the correctness of transactional data structures. TxC-ADT elevates the standard definitions of transactional correctness to be in terms of an abstract data type, an essential aspect for checking the correctness of transactions that synchronize only on high-level semantic conflicts. Many practical concurrent data structures, transactional data structures, and algorithms that facilitate non-blocking programming incorporate helping schemes to ensure that an operation comprising multiple atomic steps is completed according to the progress guarantee. A helping scheme introduces additional interference by the active threads in the system to achieve the designed progress guarantee. Previous progress verification techniques do not accommodate loops whose termination depends on complex behaviors of the interfering threads, making those approaches unsuitable. My dissertation presents the first progress verification technique for non-blocking algorithms that depend on descriptor-based helping mechanisms.
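
    A toy illustration of the underlying question such correctness checkers answer (my sketch, not CCSpec or TxC-ADT): does a concurrent history admit some sequential order that is legal for the abstract data type? For brevity this checks only type-level legality over permutations and ignores the real-time precedence constraints a full checker would also enforce.

```python
from itertools import permutations

def legal(seq):
    """Replay a sequence of counter ops; each op is (name, returned_value)."""
    value = 0
    for op, ret in seq:
        if op == "inc":
            value += 1
        elif op == "read" and ret != value:
            return False        # a read must return the current value
    return True

history = [("inc", None), ("read", 1), ("inc", None), ("read", 2)]
print(any(legal(p) for p in permutations(history)))   # True: a legal order exists
```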