    On Mobility Management in Multi-Sink Sensor Networks for Geocasting of Queries

    In order to deal efficiently with location-dependent messages in multi-sink wireless sensor networks (WSNs), it is key that the network informs sinks which geographical area is covered by which sink. The sinks are then able to efficiently route messages that are only valid in particular regions of the deployment. In our previous work (see the 5th and 6th cited documents), we proposed a combined coverage area reporting and geographical routing protocol for location-dependent messages, for example, queries that are injected by sinks. In this paper, we study the case where we have static sinks and mobile sensor nodes in the network. To provide up-to-date coverage areas to sinks, we focus on handling node mobility in the network. We discuss which is the better method for updating the routing structure (i.e., routing trees and coverage areas) to handle mobility efficiently: periodic global updates initiated from sinks, or local updates triggered by mobile sensors. Simulation results show that local updating performs very well in terms of query delivery ratio and scales better with increasing network size. It is also more energy efficient than our previously proposed global updating approach in networks with a medium mobility rate and speed.
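
    As a rough, purely illustrative sketch of the trade-off described above, the toy Python snippet below compares the message cost of the two update strategies; the functions, parameters and numbers are hypothetical and are not taken from the paper.

        # Toy comparison of routing-structure update costs (illustrative only).
        # Global updating: every sink periodically re-floods the whole network.
        # Local updating: only nodes that detect movement repair their branch
        # of the routing tree and report the coverage change towards a sink.

        def global_update_cost(num_nodes, num_sinks, sim_time, period):
            # One flood reaches every node once per sink, repeated each period.
            updates = int(sim_time // period)
            return updates * num_sinks * num_nodes

        def local_update_cost(num_moves, avg_hops_to_sink):
            # Each detected move triggers a local re-attach (one message)
            # plus a coverage report travelling towards the sink.
            return num_moves * (1 + avg_hops_to_sink)

        print("global:", global_update_cost(num_nodes=200, num_sinks=4,
                                            sim_time=600, period=60))
        print("local :", local_update_cost(num_moves=150, avg_hops_to_sink=6))

    Under these made-up numbers the locally triggered strategy sends far fewer messages; the paper's simulations evaluate the trade-off properly in terms of delivery ratio, scalability and energy.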

    OpenKnowledge at work: exploring centralized and decentralized information gathering in emergency contexts

    Real-world experience teaches us that efficient crisis-response coordination is crucial to managing emergencies; ICT infrastructures can effectively support the people involved in such contexts by enabling effective ways of interaction. They should also provide innovative means of communication and information management. At present, centralized architectures are mostly used for this purpose; however, alternative infrastructures based on the use of distributed information sources are currently being explored, studied and analyzed. This paper aims at investigating the capability of a novel approach (developed within the European project OpenKnowledge) to support centralized as well as decentralized architectures for information gathering. For this purpose we developed an agent-based e-Response simulation environment, fully integrated with the OpenKnowledge infrastructure, through which existing emergency plans are modelled and simulated. Preliminary results show that OpenKnowledge is capable of supporting the two aforementioned architectures and, under ideal assumptions, delivers comparable performance in both cases.

    Security Verification of Secure MANET Routing Protocols

    Secure mobile ad hoc network (MANET) routing protocols are not tested thoroughly against their security properties. Previous research focuses on verifying secure, reactive, accumulation-based routing protocols. An improved methodology and framework for secure MANET routing protocol verification is proposed which includes table-based and proactive protocols. The model checker SPIN is selected as the core of the secure MANET verification framework. Security is defined in terms of both accuracy and availability: the protocol forms accurate routes, and those routes remain accurate. The framework enables exhaustive verification of protocols and produces a counterexample if the protocol is deemed insecure. The framework is applied to models of the Optimized Link-State Routing (OLSR) and Secure OLSR protocols against five attack vectors based on known attacks against each protocol. Vulnerabilities consistent with published findings are automatically revealed. No unknown attacks were found; however, future attack vectors may lead to new attacks. The new framework for verifying secure MANET protocols extends verification capabilities to table-based and proactive protocols.
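
    For orientation only (this is not the paper's exact formulation), the route-accuracy requirement described above is a safety property and could be phrased in temporal logic roughly as follows, where the predicate names are hypothetical:

        \[
          \mathbf{G}\,\bigl(\mathit{route\_established}(s,d) \rightarrow \mathit{valid\_path}(s,d)\bigr)
        \]
        % Read: at every point of every execution, any route the protocol holds
        % from s to d corresponds to a path that actually exists in the current
        % topology; a model checker such as SPIN searches exhaustively for a
        % reachable state violating this and reports it as a counterexample.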

    A Process Calculus for Dynamic Networks

    In this paper we propose a process calculus framework for dynamic networks in which the network topology may change as computation proceeds. The proposed calculus allows one to abstract away from neighborhood-discovery computations, and it contains features for broadcasting at multiple transmission ranges and for viewing networks at different levels of abstraction. We develop a theory of confluence for the calculus and use the machinery developed towards the verification of a leader-election algorithm for mobile ad hoc networks.

    Enabling Information Gathering Patterns for Emergency Response with the OpenKnowledge System

    Today's information systems must operate effectively within open and dynamic environments, a requirement that is especially pressing for crisis management systems. In emergency contexts, a large number of actors need to collaborate and coordinate at the disaster scene by exchanging and reporting information with each other and with the people in the control room. In such open settings, coordination technologies play a crucial role in supporting mobile agents located in areas prone to sudden changes with adaptive and flexible interaction patterns. Research efforts in different areas are converging to devise suitable mechanisms for process coordination: in particular, current results on service-oriented computing and multi-agent systems are being integrated to enable dynamic interaction among autonomous components in large, open systems. This work focuses on the exploitation and evaluation of the OpenKnowledge framework to support different information-gathering patterns in emergency contexts. The OpenKnowledge (OK) system has been adopted to model and simulate possible emergency plans. The Lightweight Coordination Calculus (LCC) is used to specify interaction models, which are published, discovered and executed by the OK distributed infrastructure in order to simulate peer interactions. A simulation environment fully integrated with the OK system has been developed to: (1) evaluate whether such an infrastructure is able to support different models of information sharing, e.g., centralized and decentralized patterns of interaction; and (2) investigate under which conditions the OK paradigm, exploited in its decentralized nature, can improve the performance of more conventional centralized approaches. Preliminary results show that the OK system is capable of supporting the two aforementioned patterns and, under ideal assumptions, delivers comparable performance in both cases.

    Training of Crisis Mappers and Map Production from Multi-sensor Data: Vernazza Case Study (Cinque Terre National Park, Italy)

    The aim of this paper is to present the development of a multidisciplinary project carried out in cooperation between Politecnico di Torino and ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action). The goal of the project was to train students of the Architecture and Engineering courses in geospatial data acquisition and processing, in order to start up a team of "volunteer mappers". The project aims to document the environmental and built heritage subject to disaster; the purpose is to improve the capabilities of the actors involved in the activities connected with geospatial data collection, integration and sharing. The proposed area for testing the training activities is the Cinque Terre National Park, on the World Heritage List since 1997; the area was affected by a flood on the 25th of October 2011. In line with other international experiences, the group is expected to be active after emergencies in order to update maps, using data acquired by typical geomatic methods and techniques such as terrestrial and aerial LiDAR, close-range and aerial photogrammetry, and topographic and GNSS instruments, or by non-conventional systems and instruments such as UAVs and mobile mapping. The ultimate goal is to implement a WebGIS platform to share all the collected data with local authorities and the Civil Protection.

    Specification and verification of network algorithms using temporal logic

    In software engineering, formal methods are mathematically based techniques used in the specification, development and verification of algorithms and programs in order to provide reliability and robustness of systems. One of the most difficult challenges for software engineering is to tackle the complexity of algorithms and software found in concurrent systems. Networked systems have come to prominence in many aspects of modern life, and therefore software engineering techniques for treating concurrency in such systems have acquired a particular importance. Algorithms in the software of concurrent systems are used to accomplish certain tasks which need to comply with the properties required of the system as a whole. These properties can be broadly subdivided into `safety properties', where the requirement is that `nothing bad will happen', and `liveness properties', where the requirement is that `something good will happen'. As such, specifying network algorithms and their safety and liveness properties through formal methods is the aim of the research presented in this thesis. Since temporal logic has proved to be a successful technique in formal methods, with various practical applications due to the availability of powerful model-checking tools such as the NuSMV model checker, we investigate the specification and verification of network algorithms using temporal logic and model checking. In the first part of the thesis, we specify and verify safety properties for network algorithms: we use temporal logic to prove the safety property of data consistency, or serializability, for a model of the execution of an unbounded number of concurrent transactions over time, which could represent software schedulers for an unknown number of transactions present in a network. In the second part of the thesis, we specify and verify liveness properties of networked flooding algorithms.

    Considering the above in more detail, the first part of this thesis specifies a model of the execution of an unbounded number of concurrent transactions over time in propositional Linear Temporal Logic (LTL) in order to prove serializability. This is made possible by assuming that data items are ordered and that the transactions accessing these data items respect this order, as there is then a bound on the number of transactions that need to be considered to prove serializability. In particular, we make use of recent work which places such bounds on the number of transactions needed when data items are accessed in order but do not have to be accessed contiguously, i.e., there may be `gaps' in the data items accessed by individual transactions. Our aim is to specify the concurrent modification of data held on routers in a network as a transactional model; the correctness of the routing protocol, and hence safety and reliability, then corresponds to the serializability of the transactions. We specify an example of routing in a network and the corresponding serializability condition in LTL, which is then coded up in the NuSMV model checker and the proofs are performed. The novelty of this part is that no previous research has used a method for detecting serializability and cycles for an unlimited number of transactions accessing the data on routers, where the transactions may access the data items with gaps; in addition, linear temporal logic has not been used in this scenario to prove correctness of the network system.
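
    For reference, safety and liveness requirements of the kind described above are conventionally written in LTL in the following generic shapes (standard textbook forms, not the thesis's exact specifications):

        \[
          \underbrace{\mathbf{G}\,\neg\,\mathit{bad}}_{\text{safety: nothing bad ever happens}}
          \qquad\qquad
          \underbrace{\mathbf{F}\,\mathit{good}}_{\text{liveness: something good eventually happens}}
        \]
        % For instance, serializability can be cast as a safety property (no
        % conflict cycle is ever formed), while flooding termination is a
        % liveness property (eventually no message remains in transit, e.g.
        % \mathbf{F}\,\mathbf{G}\,\neg\mathit{in\_transit}).
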
    The first part is particularly relevant to network administration protocols, where maintaining the correctness of the system is critical. The safety property can be maintained using the presented work, in which cycles among the transactions accessing the data items are detected by checking only a limited number of cycles rather than all possible cycles that the network transactions could cause.

    The second part of the thesis offers two contributions. Firstly, we specify the basic synchronous network flooding algorithm, for any fixed size of network, in LTL. The specification can be customized to any single network topology or class of topologies. A specification of the termination problem is formulated and used to compare different topologies with regard to earlier termination. We give a worked example of one topology resulting in earlier termination than another, for which we perform a formal verification using the NuSMV model checker. The novelty of the second part lies in using linear temporal logic and the NuSMV model checker to specify and verify the liveness property of the flooding algorithm. The presented work addresses a particularly difficult scenario in which the network nodes are memoryless, which makes detecting the termination of network flooding complicated, especially for networks with complex topologies. In the literature, researchers have focused on using testing and simulation to detect flooding termination; in this work, we use a rigorous method to specify and verify the synchronous flooding algorithm and its termination. We also show that linear temporal logic and the NuSMV model checker can be used to compare synchronous flooding termination between topologies. In addition to the synchronous form of the network flooding algorithm, we provide a formal model of bounded asynchronous network flooding by extending the synchronous model so that a sent message may, non-deterministically, either be received instantaneously or enter a transit phase prior to being received. A generalization of `rounds' from synchronous flooding to the asynchronous case is used as a unit of time, providing a measure of time to termination, as the number of rounds taken, for a run of an asynchronous system. The model is encoded into temporal logic and a proof obligation is given for comparing the termination times of asynchronous and synchronous systems; worked examples are formally verified using the NuSMV model checker. This work offers a constraint-based methodology for the verification of liveness properties of software algorithms distributed across the nodes of a network.
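
    To make the memoryless synchronous flooding behaviour concrete, the short Python simulation below follows one common formulation in which a node that receives the message in a round forwards it in the next round to every neighbour except those it received it from, keeping no other state. It is an illustrative sketch under that assumption, not the thesis's NuSMV model, and the example topologies are made up.

        def flood_rounds(adjacency, initiator):
            """Number of synchronous rounds until no message is left in transit."""
            # Messages in transit are (sender, receiver) pairs delivered this round.
            in_transit = {(initiator, nbr) for nbr in adjacency[initiator]}
            rounds = 0
            while in_transit:
                rounds += 1
                received_from = {}          # receiver -> senders heard this round
                for sender, receiver in in_transit:
                    received_from.setdefault(receiver, set()).add(sender)
                # Memoryless forwarding: send to all neighbours except those heard from.
                in_transit = {(node, nbr)
                              for node, senders in received_from.items()
                              for nbr in adjacency[node]
                              if nbr not in senders}
            return rounds

        # Comparing termination times on two small, hypothetical topologies.
        cycle_4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
        path_4  = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
        print(flood_rounds(cycle_4, 0))   # terminates after 2 rounds
        print(flood_rounds(path_4, 0))    # terminates after 3 rounds

    An asynchronous variant, as in the thesis, would additionally allow each message to linger in a transit phase for a bounded, non-deterministic number of steps before delivery.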