
    Canonical queries as a query answering device (Information Science)

    Issued as Annual reports [nos. 1-2], and Final report, Project no. G-36-60

    Proceedings of the real-time database workshop, Eindhoven, 23 February 1995


    Performance assessment of real-time data management on wireless sensor networks

    Technological advances in recent years have allowed Wireless Sensor Networks (WSNs), which aim at environmental monitoring and data collection, to reach maturity. This sort of network is composed of hundreds, thousands, or even millions of tiny smart computers known as wireless sensor nodes, which may be battery powered and equipped with sensors, a radio transceiver, a Central Processing Unit (CPU), and some memory. However, due to the small size and low cost required of the nodes, sensor node resources such as processing power, storage, and especially energy are very limited. Once the sensors perform their measurements of the environment, the problem of storing and querying the data arises. The sensors have restricted storage capacity, and the ongoing interaction between sensors and environment results in huge amounts of data. Techniques for data storage and querying in WSNs can be based on either external or local storage. External storage, called the warehousing approach, is a centralized scheme in which the data gathered by the sensors are periodically sent to a central database server where user queries are processed. Local storage, on the other hand, called the distributed approach, exploits the computational capabilities of the sensors, which act as local databases; data may then be held both in a central database server and in the devices themselves, enabling queries on both.

    WSNs are used in a wide variety of applications, which may perform various operations on the collected sensor data. For certain applications, such as real-time applications, the sensor data must closely reflect the current state of the targeted environment. The environment, however, changes constantly, while the data is collected at discrete moments in time. The collected data therefore has a temporal validity: as time advances it becomes less accurate, until it no longer reflects the state of the environment. Applications such as industrial automation, aviation, and sensor networks must thus query and analyze the data within a bounded time in order to make decisions and react efficiently. In this context, efficient real-time data management solutions are needed that deal with both time constraints and energy consumption.

    This thesis studies real-time data management techniques for WSNs. In particular, it focuses on the challenges of handling real-time data storage and querying in WSNs and on efficient real-time data management solutions for them. First, the main requirements of real-time data management are identified and the real-time data management solutions for WSNs available in the literature are surveyed. Second, in order to provide an energy-efficient real-time data management solution, the techniques used to manage data and queries in WSNs based on the distributed paradigm are studied in depth. Many research works argue that the distributed approach is the most energy-efficient way of managing data and queries in WSNs, rather than warehousing; in addition, this approach can provide quasi-real-time query processing, because the most current data is retrieved from the network itself. Third, based on these two studies, and considering the complexity of developing, testing, and debugging this kind of complex system, a model of a simulation framework for real-time database management on WSNs using the distributed approach, together with its implementation, is proposed. This makes it possible to explore various real-time database techniques for WSNs before deployment, saving money and time; the model can also be extended by adding the simulation of protocols, or part of the simulator can be ported onto another available simulator. To validate the model, a case study considering both real-time and energy constraints is discussed. Fourth, a new architecture that combines statistical modeling techniques with the distributed approach, together with a query processing algorithm that optimizes real-time user query processing, is proposed. This combination permits a query processing algorithm based on admission control, which uses the error tolerance and a probabilistic confidence interval as admission parameters. Experiments on both real-world and synthetic data sets demonstrate that the proposed solution optimizes real-time query processing, saving energy while keeping latency low.

    Fundação para a Ciência e a Tecnologia
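    A minimal sketch of the kind of model-based admission control the abstract describes, assuming a simple Gaussian model of each sensor's readings and a z-based confidence interval; the names SensorModel and answer_query are illustrative assumptions, not the thesis's actual design:

        import math

        class SensorModel:
            """Running Gaussian model of one sensor's readings
            (Welford's online algorithm)."""
            def __init__(self):
                self.n, self.mean, self.m2 = 0, 0.0, 0.0

            def update(self, x):
                self.n += 1
                delta = x - self.mean
                self.mean += delta / self.n
                self.m2 += delta * (x - self.mean)

            def interval(self, z=1.96):  # ~95% confidence, assumed normal
                if self.n < 2:
                    return float("-inf"), float("inf")
                std = math.sqrt(self.m2 / (self.n - 1))
                return self.mean - z * std, self.mean + z * std

        def answer_query(model, error_tolerance):
            """Admission control: answer from the model when its confidence
            interval fits within the user's error tolerance; otherwise admit
            the query into the network to read the sensors directly."""
            lo, hi = model.interval()
            if hi - lo <= 2 * error_tolerance:
                return model.mean   # answered at the sink, no radio traffic
            return None             # forward the query into the WSN

    Answering at the sink avoids radio traffic, which typically dominates a node's energy budget; that is the saving the experiments above measure.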

    Maintaining consistency in distributed systems

    In systems designed as assemblies of independently developed components, concurrent access to data or data structures normally arises within individual programs and is controlled using mutual exclusion constructs such as semaphores and monitors. Where data is persistent and/or sets of operations are related to one another, transactions or linearizability may be more appropriate. Systems that incorporate cooperative styles of distributed execution often replicate or distribute data within groups of components. In these cases, group-oriented consistency properties must be maintained, and tools based on the virtual synchrony execution model greatly simplify the task confronting an application developer. All three styles of distributed computing are likely to be seen in future systems, often within the same application. This leads us to propose an integrated approach that permits applications using virtual synchrony to work with concurrent objects that respect a linearizability constraint, and vice versa. Transactional subsystems are treated as a special case of linearizability.
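    As a small illustration of the first style mentioned above, a monitor-like counter whose lock makes each operation appear atomic to concurrent threads, and hence linearizable (a generic sketch, not code from the paper):

        import threading

        class LinearizableCounter:
            """Monitor-style shared counter: the lock serializes operations,
            so every increment appears to take effect atomically."""
            def __init__(self):
                self._lock = threading.Lock()
                self._value = 0

            def increment(self):
                with self._lock:   # mutual exclusion, as in a monitor
                    self._value += 1
                    return self._value

            def read(self):
                with self._lock:
                    return self._value

        counter = LinearizableCounter()
        threads = [threading.Thread(target=counter.increment) for _ in range(8)]
        for t in threads: t.start()
        for t in threads: t.join()
        assert counter.read() == 8   # no lost updates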

    Specification and verification of network algorithms using temporal logic

    In software engineering, formal methods are mathematically based techniques used in the specification, development, and verification of algorithms and programs in order to provide reliability and robustness of systems. One of the most difficult challenges for software engineering is to tackle the complexity of the algorithms and software found in concurrent systems. Networked systems have come to prominence in many aspects of modern life, and software engineering techniques for treating concurrency in such systems have therefore acquired particular importance. Algorithms in the software of concurrent systems are used to accomplish tasks which must comply with the properties required of the system as a whole. These properties can be broadly subdivided into 'safety properties', where the requirement is that 'nothing bad will happen', and 'liveness properties', where the requirement is that 'something good will happen'. Specifying network algorithms and their safety and liveness properties through formal methods is thus the aim of the research presented in this thesis. Since temporal logic has proved to be a successful formal-methods technique, with many practical applications thanks to the availability of powerful model-checking tools such as the NuSMV model checker, we investigate the specification and verification of network algorithms using temporal logic and model checking. In the first part of the thesis, we specify and verify safety properties of network algorithms: we use temporal logic to prove the safety property of data consistency, or serializability, for a model of the execution of an unbounded number of concurrent transactions over time, which could represent software schedulers for an unknown number of transactions present in a network. In the second part of the thesis, we specify and verify liveness properties of networked flooding algorithms.

    Considering the above in more detail, the first part of this thesis specifies a model of the execution of an unbounded number of concurrent transactions over time in propositional Linear Temporal Logic (LTL) in order to prove serializability. This is made possible by assuming that data items are ordered and that the transactions accessing these data items respect this order, since there is then a bound on the number of transactions that need to be considered to prove serializability. In particular, we make use of recent work which places such bounds on the number of transactions needed when data items are accessed in order but do not have to be accessed contiguously, i.e., there may be 'gaps' in the data items accessed by individual transactions. Our aim is to specify the concurrent modification of data held on routers in a network as a transactional model; the correctness of the routing protocol, and ensuring safety and reliability, then corresponds to the serializability of the transactions. We specify an example of routing in a network and the corresponding serializability condition in LTL; this is then coded up in the NuSMV model checker and proofs are performed. The novelty of this part is that no previous research has given a method for detecting serializability and cycles for an unbounded number of transactions accessing the data on routers where the transactions' accesses to the data items may have gaps. In addition, linear temporal logic had not previously been used in this scenario to prove the correctness of the network system.
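    The LTL encoding itself is too long to excerpt, but the safety property being checked, serializability as acyclicity of the transactions' conflict graph, can be illustrated in a few lines (an illustrative Python sketch, not the thesis's NuSMV model; the schedule format is an assumption):

        def conflict_graph(schedule):
            """Edges T1 -> T2 for conflicting operations (same item, at
            least one write) where T1's operation comes first.
            schedule: list of (txn, op, item) with op in {'r', 'w'}."""
            edges = set()
            for i, (t1, op1, x1) in enumerate(schedule):
                for t2, op2, x2 in schedule[i + 1:]:
                    if t1 != t2 and x1 == x2 and "w" in (op1, op2):
                        edges.add((t1, t2))
            return edges

        def has_cycle(edges):
            """Conflict-serializable iff the graph is acyclic (DFS check)."""
            graph = {}
            for a, b in edges:
                graph.setdefault(a, set()).add(b)
                graph.setdefault(b, set())
            WHITE, GREY, BLACK = 0, 1, 2
            colour = {v: WHITE for v in graph}

            def visit(v):
                colour[v] = GREY
                for w in graph[v]:
                    if colour[w] == GREY or (colour[w] == WHITE and visit(w)):
                        return True
                colour[v] = BLACK
                return False

            return any(colour[v] == WHITE and visit(v) for v in graph)

        # A classic non-serializable interleaving: T1 and T2 each read x,
        # then each writes x, producing a cycle T1 <-> T2.
        s = [("T1", "r", "x"), ("T2", "r", "x"),
             ("T1", "w", "x"), ("T2", "w", "x")]
        print(has_cycle(conflict_graph(s)))  # True: not serializable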
    The first part of the work is especially helpful for network administration protocols, where it is critical to maintain the correctness of the system. This safety property can be maintained using the presented work, in which cycles among the transactions accessing the data items can be detected by checking only a limited number of cycles rather than all possible cycles that the network transactions could cause.

    The second part of the thesis offers two contributions. Firstly, we specify the basic synchronous network flooding algorithm, for any fixed size of network, in LTL. The specification can be customized to any single network topology or class of topologies. A specification of the termination problem is formulated and used to compare different topologies with regard to earlier termination. We give a worked example of one topology resulting in earlier termination than another, for which we perform a formal verification using the NuSMV model checker. The novelty of this part lies in using linear temporal logic and the NuSMV model checker to specify and verify the liveness property of the flooding algorithm. The presented work addresses a very difficult scenario in which the network nodes are memoryless; this makes detecting the termination of network flooding very complicated, especially for networks with complex topologies. In the literature, researchers have focused on using testing and simulation to detect flooding termination; in this work, we use a rigorous method to specify and verify the synchronous flooding algorithm and its termination, and we show that linear temporal logic and the NuSMV model checker can be used to compare the termination of synchronous flooding between topologies. Secondly, in addition to the synchronous form of the network flooding algorithm, we provide a formal model of bounded asynchronous network flooding by extending the synchronous model so that a sent message, non-deterministically, is either received instantaneously or enters a transit phase prior to being received. A generalization of 'rounds' from synchronous flooding to the asynchronous case is used as a unit of time, providing a measure of the time to termination, as the number of rounds taken, for a run of an asynchronous system. The model is encoded into temporal logic, and a proof obligation is given for comparing the termination times of asynchronous and synchronous systems. Worked examples are formally verified using the NuSMV model checker. This work offers a constraint-based methodology for the verification of liveness properties of software algorithms distributed across the nodes of a network.
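    For intuition, the round-based behaviour of memoryless flooding can be simulated in a few lines, assuming the rule that each node forwards the message to every neighbour it did not receive it from in the current round, with flooding terminating once no messages remain in transit. This is an illustrative Python sketch under that assumed rule, not the thesis's NuSMV specification; it counts rounds to termination so that topologies can be compared:

        from collections import defaultdict

        def flood_rounds(adj, initiator, max_rounds=1000):
            """Synchronous, memoryless flooding: count rounds until no
            message is in transit (None if max_rounds is exceeded).
            adj: dict mapping each node to its set of neighbours."""
            in_transit = {(initiator, nbr) for nbr in adj[initiator]}
            rounds = 0
            while in_transit and rounds < max_rounds:
                rounds += 1
                received_from = defaultdict(set)
                for sender, receiver in in_transit:
                    received_from[receiver].add(sender)
                # Memoryless rule: forward to every neighbour the message
                # was NOT received from in this round.
                in_transit = {(node, nbr)
                              for node, senders in received_from.items()
                              for nbr in adj[node] - senders}
            return rounds if not in_transit else None

        line = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}        # path topology
        ring = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}  # 4-cycle
        print(flood_rounds(line, 0), flood_rounds(ring, 0))  # 3 vs. 2 rounds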

    Partial Computation in Real-Time Database Systems: A Research Plan

    State-of-the-art database management systems are inappropriate for real-time applications due to their lack of speed and predictability of response. To combat these problems, the scheduler needs to be able to take advantage of the vast quantity of semantic and timing information that is typically available in such systems. Furthermore, to improve predictability of response, the system should be capable of providing a partial, but correct, response in a timely manner. We therefore propose to develop a semantics for real-time database systems that incorporates temporal knowledge of data objects, their validity, and computation using their values. This temporal knowledge should include not just historical information but also future knowledge of when to expect values to appear. This semantics will be used to develop a notion of approximate or partial computation, and to develop schedulers appropriate for real-time transactions.
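    A minimal sketch of the temporal-validity notion proposed above, assuming each data object carries an absolute validity interval; TemporalDatum and avi are hypothetical names, not the plan's actual semantics:

        import time
        from dataclasses import dataclass

        @dataclass
        class TemporalDatum:
            value: float
            timestamp: float   # when the value was sampled
            avi: float         # absolute validity interval, in seconds

            def is_valid(self, now=None):
                """The reading reflects the environment only while its
                age is within the validity interval."""
                now = time.time() if now is None else now
                return now - self.timestamp <= self.avi

        reading = TemporalDatum(value=21.5, timestamp=time.time(), avi=5.0)
        print(reading.is_valid())  # True while the sample is under 5 s old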

    Serializable Isolation for Snapshot Databases

    Many popular database management systems implement a multiversion concurrency control algorithm called snapshot isolation rather than providing full serializability based on locking. There are well-known anomalies permitted by snapshot isolation that can lead to violations of data consistency by interleaving transactions that would maintain consistency if run serially. Until now, the only way to prevent these anomalies was to modify the applications by introducing explicit locking or artificial update conflicts, following careful analysis of the conflicts between all pairs of transactions. This thesis describes a modification to the concurrency control algorithm of a database management system that automatically detects and prevents snapshot isolation anomalies at runtime for arbitrary applications, thus providing serializable isolation. The new algorithm preserves the properties that make snapshot isolation attractive, including that readers do not block writers and vice versa. An implementation of the algorithm in a relational database management system is described, along with a benchmark and performance study, showing that the throughput approaches that of snapshot isolation in most cases.
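    The runtime detection is commonly described as tracking read-write antidependencies and aborting a transaction that acquires both an incoming and an outgoing one, since every snapshot isolation anomaly contains such a 'pivot'. A schematic sketch of that rule (illustrative Python, not the thesis's actual implementation):

        class Txn:
            def __init__(self, name):
                self.name = name
                self.in_conflict = False    # some txn rw-depends on this one
                self.out_conflict = False   # this txn rw-depends on some txn
                self.aborted = False

        def record_rw_dependency(reader, writer):
            """reader read a version that writer concurrently overwrote,
            giving the antidependency reader --rw--> writer."""
            reader.out_conflict = True
            writer.in_conflict = True
            for t in (reader, writer):
                if t.in_conflict and t.out_conflict and not t.aborted:
                    t.aborted = True   # break the dangerous structure early

        t1, t2, t3 = Txn("T1"), Txn("T2"), Txn("T3")
        record_rw_dependency(t1, t2)   # T1 --rw--> T2
        record_rw_dependency(t2, t3)   # T2 --rw--> T3: T2 is now a pivot
        assert t2.aborted

    Because the check is local to two boolean flags per transaction, readers and writers still never block each other, which is why the throughput stays close to plain snapshot isolation.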