70 research outputs found

    Randomized and efficient time synchronization in dynamic wireless sensor networks: a gossip-consensus-based approach

    Get PDF
    This paper proposes novel randomized gossip-consensus-based synchronization (RGCS) algorithms to realize efficient time correction in dynamic wireless sensor networks (WSNs). First, unreliable links are described by stochastic connections, reflecting the changing connectivity characteristic of dynamic WSNs. Second, based on mutual drift estimation, each pair of activated nodes fully adjusts clock rate and offset to achieve network-wide time synchronization using the gossip consensus approach. A converge-to-max criterion is introduced to achieve much faster convergence. Theoretical results on the probabilistic synchronization performance of RGCS are presented. Third, a Revised-RGCS is developed to counteract the negative impact of bounded delays, because uncertain delays are always present in practice and would otherwise severely degrade performance. Finally, extensive simulations are performed on the MATLAB and OMNeT++ platforms for performance evaluation. Simulation results demonstrate that the proposed algorithms not only handle the synchronization issues raised by dynamic topology changes efficiently but also outperform existing protocols in terms of convergence speed, collision rate, and robustness to delay.
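    As a rough illustration of the pairwise mechanism described above, the sketch below simulates logical clocks corrected by a converge-to-max rule whenever a randomly activated pair gossips. It is a minimal sketch under simplifying assumptions (a linear clock model, direct access to relative rates instead of a real mutual drift estimator, no topology or delays); all names and constants are illustrative, not taken from the paper.

```python
import random

# Minimal sketch (not the paper's RGCS): each node i has a hardware clock
# h_i(t) = a_i * t + b_i and keeps software parameters (p_i, q_i) so that its
# logical clock reads L_i(t) = p_i * h_i(t) + q_i.  When a randomly activated
# pair gossips, both nodes adopt the faster logical clock (converge-to-max),
# so every logical clock chases the fastest one and the spread shrinks.

N = 20
random.seed(1)
a = [1.0 + random.uniform(-1e-4, 1e-4) for _ in range(N)]   # hardware drift rates
b = [random.uniform(0.0, 0.01) for _ in range(N)]           # hardware offsets
p = [1.0] * N                                               # rate corrections
q = [0.0] * N                                               # offset corrections

def hw(i, t):
    return a[i] * t + b[i]

def logical(i, t):
    return p[i] * hw(i, t) + q[i]

def gossip(i, j, t):
    # In practice the relative rate a_fast / a_slow is obtained by mutual
    # drift estimation from timestamp exchanges; the simulation reads it off.
    fast, slow = (i, j) if p[i] * a[i] >= p[j] * a[j] else (j, i)
    p[slow] = p[fast] * (a[fast] / a[slow])              # match the faster logical rate
    q[slow] = logical(fast, t) - p[slow] * hw(slow, t)   # agree on the value now

for step in range(400):
    t = 0.1 * step
    i, j = random.sample(range(N), 2)                    # randomly activated pair
    gossip(i, j, t)
    if step % 100 == 0:
        vals = [logical(n, t) for n in range(N)]
        print(f"step {step:3d}: logical clock spread = {max(vals) - min(vals):.6f}")
```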

    Cooperative Convex Optimization in Networked Systems: Augmented Lagrangian Algorithms with Directed Gossip Communication

    Full text link
    We study distributed optimization in networked systems, where nodes cooperate to find the optimal quantity of common interest, x = x^\star. The objective function of the corresponding optimization problem is the sum of private convex objectives (each known only to its own node), and each node imposes a private convex constraint on the allowed values of x. We solve this problem for generic connected network topologies with asymmetric random link failures using a novel distributed, decentralized algorithm. We refer to this algorithm as AL-G (augmented Lagrangian gossiping), and to its variants as AL-MG (augmented Lagrangian multi-neighbor gossiping) and AL-BG (augmented Lagrangian broadcast gossiping). The AL-G algorithm is based on the augmented Lagrangian dual function. Dual variables are updated by the standard method of multipliers, at a slow time scale. To update the primal variables, we propose a novel Gauss-Seidel-type randomized algorithm operating at a fast time scale. AL-G uses unidirectional gossip communication, only between immediate neighbors in the network, and is resilient to random link failures. For networks with reliable communication (i.e., no failures), the simplified AL-BG algorithm reduces communication, computation and data storage cost. We prove convergence for all proposed algorithms and demonstrate their effectiveness by simulations on two applications: l_1-regularized logistic regression for classification and cooperative spectrum sensing for cognitive radio networks.
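    The two-time-scale structure (slow multiplier updates, fast randomized node-wise primal updates) can be sketched on a toy consensus-constrained least-squares problem. This is only an assumed, simplified setup in the spirit of the abstract, not the AL-G algorithm itself: the graph, step sizes and update schedule are illustrative.

```python
import random

# Toy sketch of the two-time-scale structure only (assumed setup, not AL-G
# itself): nodes minimize  sum_i (x_i - a_i)^2  subject to x_i = x_j on every
# edge of a connected graph, whose solution is the network-wide average of the
# a_i.  Dual variables are updated slowly (method of multipliers); in between,
# primal variables are refined by fast, randomized node-by-node minimizations
# of the augmented Lagrangian (a Gauss-Seidel-flavoured scheme).

random.seed(0)
N = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]    # a small connected ring
a = [random.uniform(0.0, 10.0) for _ in range(N)]   # private data
x = a[:]                                            # primal variables
lam = {e: 0.0 for e in edges}                       # one multiplier per edge
rho = 1.0                                           # augmentation parameter

neigh = {i: [] for i in range(N)}
for u, v in edges:
    neigh[u].append(v)
    neigh[v].append(u)

def primal_step(i):
    """Exact minimization of the augmented Lagrangian over x_i alone."""
    lin = 0.0
    for u, v in edges:                  # sign of the multiplier term at node i
        if u == i:
            lin += lam[(u, v)]
        elif v == i:
            lin -= lam[(u, v)]
    x[i] = (2 * a[i] - lin + rho * sum(x[j] for j in neigh[i])) / (2 + rho * len(neigh[i]))

for outer in range(50):                 # slow time scale: dual ascent
    for _ in range(20 * N):             # fast time scale: randomized primal updates
        primal_step(random.randrange(N))
    for u, v in edges:                  # method-of-multipliers step
        lam[(u, v)] += rho * (x[u] - x[v])

print("network average :", round(sum(a) / N, 4))
print("node estimates  :", [round(v, 4) for v in x])
```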

    Security in Wireless Sensor Networks Employing MACGSP6

    Get PDF
    Wireless Sensor Networks (WSNs) have unique characteristics which constrain them, including small energy stores, limited computation, and short-range communication capability. Most traditional security algorithms use cryptographic primitives such as public-key cryptography and are not optimized for energy usage, so employing them to secure WSNs is often impractical. At the same time, the need for security in WSNs is unavoidable. Applications such as military, medical care, structural monitoring, and surveillance systems require information security in the network. As current security mechanisms for WSNs are not sufficient, development of new security schemes for WSNs is necessary. New security schemes may be able to take advantage of the unique properties of WSNs, such as the large number of nodes typical in these networks, to mitigate the need for cryptographic algorithms and for key distribution and management. However, taking advantage of these properties must be done in an energy-efficient manner. The research examines how the redundancy in WSNs can provide some security elements. First, the research shows how multiple random delivery paths (MRDPs) can provide data integrity for WSNs. Second, the research employs multiple sinks to increase the total number of duplicate packets received by sinks, allowing sink voting to mitigate the packet discard rate issue of a WSN with a single sink. Third, the research examines the effectiveness of using multiple random paths in maintaining data confidentiality in WSNs. Last, the research examines the use of a rate limit to cope with packet flooding attacks in WSNs.
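    One ingredient of the scheme, sink-side voting over the duplicate copies that arrive via multiple random delivery paths, might look roughly as follows. The packet representation and the voting threshold are hypothetical choices for illustration only.

```python
from collections import Counter

# Hypothetical sketch of sink-side voting: copies of the same packet arrive
# over multiple random delivery paths, and the payload carried by the majority
# of copies is accepted.  A copy altered in transit is outvoted as long as
# most paths deliver the original payload.

def vote(copies, min_copies=3):
    """copies: list of payloads received for one (source, sequence) pair."""
    if len(copies) < min_copies:
        return None                      # too few duplicates to trust a decision
    payload, count = Counter(copies).most_common(1)[0]
    return payload if count > len(copies) // 2 else None

received = [b"T=21.5", b"T=21.5", b"T=99.9", b"T=21.5"]   # one copy tampered
print(vote(received))    # b'T=21.5'
```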

    Distributed consensus algorithms for wireless sensor networks: convergence analysis and optimization

    Get PDF
    Wireless sensor networks are deployed to monitor areas of interest with the purpose of estimating physical parameters and/or detecting emergency events in a variety of military and civil applications. A wireless sensor network can be seen as a distributed computer, where spatially deployed sensor nodes are in charge of gathering measurements from the environment to compute a given function. The research areas for wireless sensor networks extend from the design of small, reliable hardware to low-complexity algorithms and energy-saving communication protocols. Distributed consensus algorithms are low-complexity iterative schemes in which neighboring nodes communicate locally to compute the average of an initial set of measurements; they have received increasing attention in different fields due to their wide range of applications. Energy is a scarce resource in wireless sensor networks and therefore the convergence of consensus algorithms, characterized by the total number of iterations until reaching a steady-state value, is an important topic of study. This PhD thesis addresses the convergence and optimization of distributed consensus algorithms for the estimation of parameters in wireless sensor networks. The impact of quantization noise on convergence is studied in networks with fixed topologies and symmetric communication links. In particular, a new scheme including quantization is proposed whose mean square error with respect to the average consensus converges. The limit of the mean square error admits a closed-form expression, and an upper bound for this limit depending on general network parameters is also derived. The convergence of consensus algorithms in networks with random topology is studied, focusing particularly on convergence in expectation, mean square convergence and almost sure convergence. Closed-form expressions useful to minimize the convergence time of the algorithm are derived from the analysis. Regarding random networks with asymmetric links, closed-form expressions are provided for the mean square error of the state assuming equal connection probabilities and uniform link weights, and mean square convergence to the statistical mean of the initial measurements is shown. Moreover, an upper bound on the mean square error is derived for the case of different link connection probabilities, and a practical scheme with randomized transmission power is proposed that improves energy consumption with respect to a fixed network with the same average consumption. The mean square error expressions derived provide a means to characterize the deviation of the state vector with respect to the initial average when the instantaneous links are asymmetric. A useful criterion to minimize the convergence time in random networks with spatially correlated links is considered, establishing a sufficient condition for almost sure convergence to the consensus space. This criterion, valid also for topologies with spatially independent links, is based on the spectral radius of a positive semidefinite matrix for which closed-form expressions are derived assuming uniform link weights. The minimization of this spectral radius is a convex optimization problem and therefore the optimum link weights minimizing the convergence time can be computed efficiently. The expressions derived are general and apply not only to random networks with instantaneous directed topologies but also to random networks with instantaneous undirected topologies.
Furthermore, the general expressions can be particularized to obtain known protocols found in the literature, showing that they can be seen as particular cases of the expressions derived in this thesis.
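    The central object of study here is the linear consensus iteration x(k+1) = W x(k). A minimal sketch on a fixed symmetric topology with Metropolis weights (an assumption used only for illustration; the thesis analyses random and asymmetric topologies) looks like this:

```python
import numpy as np

# Minimal illustration of distributed average consensus on a fixed, symmetric
# topology: x(k+1) = W x(k) with Metropolis weights, which drives every node's
# state to the average of the initial measurements.

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
N = 5
deg = np.zeros(N)
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

W = np.zeros((N, N))
for u, v in edges:                       # Metropolis rule for symmetric links
    W[u, v] = W[v, u] = 1.0 / (1.0 + max(deg[u], deg[v]))
W += np.diag(1.0 - W.sum(axis=1))        # self-weights so each row sums to 1

x = np.array([3.0, 7.0, 1.0, 9.0, 5.0])  # initial measurements
target = x.mean()
for k in range(60):
    x = W @ x                            # one synchronous consensus iteration
print("average:", target, " states:", np.round(x, 4))
```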

    Gossip Algorithms for Distributed Signal Processing

    Full text link
    Gossip algorithms are attractive for in-network processing in sensor networks because they do not require any specialized routing, there is no bottleneck or single point of failure, and they are robust to unreliable wireless network conditions. Recently, there has been a surge of activity in the computer science, control, signal processing, and information theory communities, developing faster and more robust gossip algorithms and deriving theoretical performance guarantees. This article presents an overview of recent work in the area. We describe convergence rate results, which are related to the number of transmitted messages and thus the amount of energy consumed in the network for gossiping. We discuss issues related to gossiping over wireless links, including the effects of quantization and noise, and we illustrate the use of gossip algorithms for canonical signal processing tasks including distributed estimation, source localization, and compression.
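    The canonical randomized pairwise gossip update for distributed averaging, the basic primitive surveyed here, can be sketched as follows; the ring topology and activation model are simplifying assumptions.

```python
import random

# Classic pairwise gossip averaging: at each tick one node wakes up, picks a
# random neighbor, and the two replace their values by their pairwise average.
# The network's sum (and hence the average) is preserved at every step.

random.seed(2)
neighbors = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}  # ring
x = {i: random.uniform(0.0, 100.0) for i in neighbors}
avg = sum(x.values()) / len(x)

for k in range(300):
    i = random.choice(list(neighbors))
    j = random.choice(neighbors[i])
    x[i] = x[j] = (x[i] + x[j]) / 2.0
    if k % 100 == 0:
        spread = max(x.values()) - min(x.values())
        print(f"step {k}: spread {spread:.4f} (true average {avg:.4f})")
```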

    A survey of flooding, gossip routing, and related schemes for wireless multi-hop networks

    Get PDF
    Flooding is an essential and critical service in computer networks that is used by many routing protocols to send packets from a source to all nodes in the network. As the packets are forwarded once by each receiving node, many copies of the same packet traverse the network, which leads to high redundancy and unnecessary usage of the scarce capacity of the transmission medium. Gossip routing is a well-known approach to improve on flooding in wireless multi-hop networks. Each node has a forwarding probability p that is either statically pre-configured or determined from information available at runtime, e.g., the node degree. When a packet is received, the node draws a random number r. If r is below p, the packet is forwarded; otherwise, in the simplest gossip routing protocol, it is dropped. With this approach the redundancy can be reduced while the reachability is preserved, provided the value of the parameter p (and others) is chosen with consideration of the network topology. This technical report gives an overview of the relevant publications in the research domain of gossip routing and gives insight into the improvements that can be achieved. We discuss the simulation setups and results of gossip routing protocols as well as further improved flooding schemes. The three most important metrics in this application domain are elaborated: reachability, redundancy, and management overhead. The published studies used simulation environments for their research, and thus the assumptions, models, and parameters of the simulations are discussed and the feasibility of applying them to real-world wireless networks is highlighted. Wireless mesh networks based on IEEE 802.11 are the focus of this survey, but publications about other network types and technologies are also included. As percolation theory, epidemiological models, and delay-tolerant networks are often referred to as foundations, inspirations, or applications of gossip routing in wireless networks, a brief introduction to each research domain is included and the applicability of the particular models to gossip routing is discussed.
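    The forwarding rule described above is tiny in code. The sketch below compares plain flooding (p = 1.0) with gossip forwarding on a small synthetic topology; the topology generator and all parameter values are purely illustrative.

```python
import random

# The gossip routing decision itself: forward a received packet iff a uniform
# random draw r falls below the forwarding probability p.
def should_forward(p):
    return random.random() < p

# Small illustrative comparison of flooding (p = 1.0) and gossip forwarding:
# nodes reached vs. transmissions spent on a synthetic sparse topology.
def propagate(adj, source, p):
    reached, transmissions, frontier = {source}, 0, [source]
    while frontier:
        nxt = []
        for u in frontier:
            if u == source or should_forward(p):    # the source always sends
                transmissions += 1
                for v in adj[u]:
                    if v not in reached:
                        reached.add(v)
                        nxt.append(v)
        frontier = nxt
    return len(reached), transmissions

random.seed(3)
N = 200
adj = {i: set() for i in range(N)}
for i in range(N):                                  # sparse random links
    for j in random.sample(range(N), 6):
        if i != j:
            adj[i].add(j)
            adj[j].add(i)

for p in (1.0, 0.7, 0.5):
    reached, tx = propagate(adj, 0, p)
    print(f"p = {p}: reached {reached}/{N} nodes using {tx} transmissions")
```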

    Cross-layer energy optimisation of routing protocols in wireless sensor networks

    Get PDF
    Recent technological developments in embedded systems have led to the emergence of a new class of networks, known as Wireless Sensor Networks (WSNs), where individual nodes cooperate wirelessly with each other with the goal of sensing and interacting with the environment. Many routing protocols have been developed to meet the unique and challenging characteristics of WSNs (notably very limited power resources to sustain an expected lifetime of perhaps years, and the restricted computation, storage and communication capabilities of nodes that are nonetheless required to support large networks and diverse applications). No standards for routing have been developed yet for WSNs, nor has any protocol gained a dominant position among the research community. Routing has a significant influence on the overall WSN lifetime, and providing an energy-efficient routing protocol remains an open problem. This thesis addresses the issue of designing WSN routing methods that feature energy efficiency. A common time reference across nodes is required in most WSN applications. It is needed, for example, to time-stamp sensor samples and for duty cycling of nodes. Also, many routing protocols require that nodes communicate according to some predefined schedule. However, independent distribution of the time information, without considering the routing algorithm schedule or network topology, may lead to a failure of the synchronisation protocol. This was confirmed empirically and was shown to result in loss of connectivity. This can be avoided by integrating the synchronisation service into the network layer with a so-called cross-layer approach. This approach introduces interactions between the layers of a conventional layered network stack, so that the routing layer may share information with other layers. I explore whether energy efficiency can be enhanced through the use of cross-layer optimisations and present three novel cross-layer routing algorithms. The first protocol, designed for hierarchical, cluster-based networks and called CLEAR (Cross Layer Efficient Architecture for Routing), uses the routing algorithm to distribute time information which can be used for efficient duty cycling of nodes. The second method, called RISS (Routing Integrated Synchronization Service), integrates time synchronization into the network layer and is designed to work well in flat, non-hierarchical network topologies. The third method, called SCALE (Smart Clustering Adapted LEACH), addresses the influence of the intra-cluster topology on the energy dissipation of nodes. I also investigate the impact of the hop distance on network lifetime and propose a method of determining the optimal location of the relay node (the node through which data is routed in a two-hop network). I also address the problem of predicting the transition region (the zone separating the region where all packets can be received from that where no data can be received) and describe a way of preventing the forwarding of packets through relays lying in this transition region. I implemented and tested the performance of these solutions in simulations and also deployed these routing techniques on sensor nodes using TinyOS. I compared the average power consumption of the nodes and the precision of time synchronization with the corresponding parameters of a number of existing algorithms. All proposed schemes extend the network lifetime and, due to their lightweight architecture, are very efficient on WSN nodes with constrained resources. Hence it is recommended that a cross-layer approach should be a feature of any routing algorithm for WSNs.
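    The relay-placement question mentioned in the abstract can be illustrated with the commonly used first-order radio energy model: a short scan over candidate relay positions on a collinear two-hop link. The model and constants are textbook assumptions chosen for illustration, not results or parameters from the thesis.

```python
# Illustrative only: standard first-order radio model with typical textbook
# constants (not taken from the thesis).  Transmitting k bits over distance d
# costs E_elec*k + eps_amp*k*d**n joules; receiving costs E_elec*k.  For a
# collinear source -> relay -> destination link of length D, we scan the relay
# position and pick the one minimizing total radio energy.

E_ELEC = 50e-9          # J/bit, electronics
EPS_AMP = 100e-12       # J/bit/m^2, amplifier
PATH_LOSS_EXP = 2
K = 2000                # bits per packet
D = 80.0                # source-to-destination distance in metres

def tx_energy(d, k=K):
    return E_ELEC * k + EPS_AMP * k * d ** PATH_LOSS_EXP

def rx_energy(k=K):
    return E_ELEC * k

def two_hop_energy(d1):
    d2 = D - d1
    # source transmits, relay receives and retransmits, destination receives
    return tx_energy(d1) + rx_energy() + tx_energy(d2) + rx_energy()

best_energy, best_pos = min((two_hop_energy(d), d) for d in range(1, int(D)))
direct = tx_energy(D) + rx_energy()
print(f"best relay position: {best_pos} m from source, energy {best_energy:.2e} J")
print(f"direct single-hop energy: {direct:.2e} J")
```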

    Quality-of-service provisioning for dynamic heterogeneous wireless sensor networks

    Get PDF
    A Wireless Sensor Network (WSN) consists of a large collection of spatially distributed autonomous devices with sensors to monitor physical or environmental conditions, such as air pollution, temperature and traffic flow. By cooperatively processing and communicating information to central locations, appropriate actions can be performed in response. WSNs perform a large variety of applications, such as the monitoring of elderly persons or conditions in a greenhouse. To correctly and efficiently perform a task, the behaviour of the WSN should be such that sufficient Quality-of-Service (QoS) is provided. QoS is defined by constraints and objectives on network quality metrics, such as a maximum end-to-end packet loss or minimum network lifetime. After defining the application we want the WSN to perform, many steps are involved in designing the WSN such that sufficient QoS is provided. First, a (heterogeneous) set of sensor nodes and protocols needs to be selected. Furthermore, a suitable deployment has to be found and the network should be configured for its first use. This configuration involves setting all controllable parameters that influence its behaviour, such as selecting the neighbouring node(s) to communicate with and setting the transmission power of the radio, to ensure that the WSN provides the required QoS. Configuring the network is a complex task, as the number of parameters and their possible values are large and trade-offs between multiple quality metrics exist. High transmission power may result in a low packet loss to a neighbouring node, but also in a high power consumption and low lifetime. Heterogeneity in the network causes the impact of parameters to differ between nodes, requiring parameters of nodes to be set individually. Moreover, a static configuration is typically not sufficient to make the most efficient trade-off between the quality metrics at all times in a dynamic environment. Run-time mechanisms are needed to maintain the required level of QoS under changing circumstances, such as changing external interference, mobility of nodes or fluctuating traffic load. This thesis deals with run-time reconfiguration of dynamic heterogeneous wireless sensor networks to maintain a required QoS, given a deployed network with selected communication protocols and their controllable parameters. The main contribution of this thesis is an efficient QoS provisioning strategy. It consists of three parts: a re-active reconfiguration method, a generic distributed service to estimate network metrics, and a pro-active reconfiguration method. In the re-active method, nodes collaboratively respond to discrepancies between the current and required QoS. Nodes use feedback control which, at a given speed, adapts parameters of the node to continuously reduce any error between the locally estimated network QoS and the QoS requirements. A dynamic predictive model is used and updated at run-time to predict how different parameter adaptations influence the QoS. Setting the speed of adaptation allows us to influence the trade-off between responsiveness and overhead of the approach, and to tune it to the characteristics of the application scenario. Simulations and experiments with an actual deployment show the successful integration in practical scenarios. Compared to existing configuration strategies, we are able to extend network lifetime significantly, while maintaining required packet delivery ratios.
To solve the non-trivial problem of efficiently estimating network quality metrics, we introduce a generic distributed service to compute various network metrics. This service takes into account the possible presence of links with asymmetric quality that may vary over time, by repeated forwarding of information over multiple hops combined with explicit information validity management. The generic service is instantiated from the definition of a recursive local update function that converges to a fixed point representing the desired metric. We show the convergence and stability of various instantiations. Parameters can be set in accordance with the characteristics of the deployment and influence the trade-off between accuracy and overhead. Simulations and experiments show a significant increase in estimation accuracy, and in the efficiency of a protocol using the estimates, compared to current approaches. This service is integrated in various protocol stacks providing different kinds of network metric estimates. The pro-active reconfiguration method reconfigures in response to predefined run-time detectable events that may cause the network QoS to change significantly. While the re-active method is generally applicable and independent of the application scenario, the complementary pro-active method exploits any a priori knowledge of the application scenario to adapt more efficiently. A simple example is that as soon as a person with a body sensor node starts walking, we know that several aspects, including the network topology, will change. To avoid degradation of network QoS, we pro-actively adapt parameters, in this case, for instance, the frequency of updating the set of neighbouring nodes, as soon as we observe that a person starts to walk. At design time, different modes of operation are selected to be distinguished at run-time. Analysis techniques, such as simulations, are used to determine a suitable configuration for each of these modes. At run time, the approach ensures that nodes can detect the mode in which they should operate. We describe the integration of the pro-active method for two practical monitoring applications. Simulations and experiments show the feasibility of an implementation on resource-constrained nodes. The pro-active reconfiguration allows for an efficient QoS provisioning in combination with the re-active approach.
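    A "recursive local update function converging to a fixed point" can be made concrete with a simple hypothetical instantiation: distributed path-cost estimation towards a sink over possibly asymmetric links, where each node repeatedly combines neighbor estimates with link costs. This instantiation is our own example, not necessarily one used in the thesis.

```python
# Hypothetical instantiation of a recursive local update that converges to a
# fixed point: each node estimates its cheapest path cost towards a sink by
# repeatedly combining neighbor estimates with (possibly asymmetric) link
# costs.  The fixed point is the shortest-path cost (a Bellman-Ford-style
# computation expressed as purely local updates, repeated over multiple hops).

INF = float("inf")
# cost[(u, v)] is the cost of the directed link u -> v; asymmetry is allowed.
cost = {(1, 0): 1.0, (2, 1): 2.0, (3, 1): 4.0, (3, 2): 1.0, (2, 0): 5.0,
        (0, 1): 1.5, (1, 2): 2.5, (1, 3): 4.5, (2, 3): 1.0, (0, 2): 5.5}
nodes = {0, 1, 2, 3}
sink = 0
est = {n: (0.0 if n == sink else INF) for n in nodes}

def local_update(n):
    """Local rule: my cost = min over out-links of (link cost + neighbor's estimate)."""
    if n == sink:
        return 0.0
    options = [cost[(n, m)] + est[m] for m in nodes if (n, m) in cost]
    return min(options) if options else INF

for sweep in range(len(nodes)):          # repeated forwarding over multiple hops
    est = {n: local_update(n) for n in nodes}

print(est)   # fixed point: cheapest cost from every node towards the sink
```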

    Weight Optimization for Consensus Algorithms with Correlated Switching Topology

    Full text link
    We design the weights in consensus algorithms with spatially correlated random topologies. These arise with: 1) networks with spatially correlated random link failures and 2) networks with randomized averaging protocols. We show that the weight optimization problem is convex for both symmetric and asymmetric random graphs. With symmetric random networks, we choose the consensus mean squared error (MSE) convergence rate as the optimization criterion and explicitly express this rate as a function of the link formation probabilities, the link formation spatial correlations, and the consensus weights. We prove that the MSE convergence rate is a convex, nonsmooth function of the weights, enabling global optimization of the weights for arbitrary link formation probabilities and link correlation structures. We extend our results to the case of asymmetric random links. We adopt as optimization criterion the mean squared deviation (MSdev) of the nodes' states from the current average state. We prove that MSdev is a convex function of the weights. Simulations show that a significant performance gain is achieved with our weight design method when compared with methods available in the literature.
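    To give a feel for the kind of quantity being optimized, the sketch below estimates, by Monte Carlo, the per-step MSE contraction factor lambda_max(E[W(k)^2] - J) for symmetric random links with a single scalar weight alpha, and scans alpha. Independent links and the scalar-weight parameterization are simplifying assumptions; the paper optimizes general weights under spatial correlation.

```python
import numpy as np

# Hedged sketch (independent links, one scalar weight): estimate the per-step
# mean-squared contraction factor lambda_max(E[W(k)^2] - J) by Monte Carlo,
# where W(k) = I - alpha * L(k) for the instantaneous graph Laplacian L(k) and
# J = (1/n) * ones.  Scanning alpha then mimics, very crudely, the weight
# design problem; the paper instead optimizes general correlated weights.

rng = np.random.default_rng(0)
n = 8
p_link = 0.4                            # link formation probability (could differ per link)
J = np.ones((n, n)) / n

def sample_laplacian():
    up = np.triu(rng.random((n, n)) < p_link, 1)    # independent symmetric links
    A = (up | up.T).astype(float)
    return np.diag(A.sum(axis=1)) - A

def contraction(alpha, samples=2000):
    EW2 = np.zeros((n, n))
    for _ in range(samples):
        W = np.eye(n) - alpha * sample_laplacian()
        EW2 += W @ W
    EW2 /= samples
    return np.max(np.linalg.eigvalsh(EW2 - J))

for alpha in (0.05, 0.15, 0.25, 0.35, 0.45):
    print(f"alpha={alpha:.2f}: estimated MSE contraction factor {contraction(alpha):.4f}")
```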