    A Framework for Secure and Survivable Wireless Sensor Networks

    Wireless sensor networks (WSNs) are increasingly viable solutions to many challenging problems and will be deployed in ever more areas in the future. A WSN is vulnerable to security attacks because of its insecure communication channels, the limited computational and communication capabilities, energy resources, and memory of sensor nodes, and the unattended nature of deployed devices. The security and survivability of these systems are therefore receiving increasing attention, particularly for critical infrastructure protection, so a framework is needed that provides both security and survivability for WSNs. To meet these goals, we propose a framework for secure and survivable WSNs and present a key management scheme as a case study to prevent a sensor network from being compromised by an adversary. This paper also considers survivability strategies for the sensor network against a variety of threats that can lead to the failure of the base station, which represents a central point of failure. Keywords: key management scheme, security, survivability, WSN
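
    The abstract does not detail the key management scheme itself. As a purely illustrative sketch of the general key pre-distribution idea often used in WSNs (an assumption on our part, not necessarily the scheme proposed in this paper), each node could be pre-loaded with a random subset of a global key pool and establish a secure link with a neighbour only if their subsets intersect:

```python
import random

# Illustrative random key pre-distribution sketch (Eschenauer-Gligor style);
# the pool and ring sizes below are hypothetical, and this is not necessarily
# the scheme proposed in the paper.
POOL_SIZE = 1000   # size of the global key pool (assumption)
RING_SIZE = 50     # keys pre-loaded on each sensor node (assumption)

key_pool = {key_id: random.getrandbits(128) for key_id in range(POOL_SIZE)}

def assign_key_ring(pool):
    """Pre-load a node with a random subset of key identifiers before deployment."""
    return set(random.sample(sorted(pool), RING_SIZE))

def shared_keys(ring_a, ring_b):
    """Two neighbouring nodes can set up a secure link only if they share a key id."""
    return ring_a & ring_b

node_a, node_b = assign_key_ring(key_pool), assign_key_ring(key_pool)
print("secure link possible:", bool(shared_keys(node_a, node_b)))
```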

    Regenerator Location Problem and survivable extensions: A hub covering location perspective

    In a telecommunications network, the reach of an optical signal is the maximum distance it can traverse before its quality degrades. Regenerators are devices that extend the optical reach. The regenerator placement problem seeks to place the minimum number of regenerators in an optical network so as to enable the communication of a signal between any node pair. In this study, the Regenerator Location Problem is revisited from the hub location perspective, directing our focus to applications arising in transportation settings. Two new dimensions involving the challenges of survivability are introduced to the problem. Under partial survivability, our designs hedge against failures in the regeneration equipment only, whereas under full survivability failures of any of the network nodes are accounted for by the use of extra regeneration equipment. All three variations of the problem are studied in a unifying framework involving the introduction of individual flow-based compact formulations as well as cut formulations, and the implementation of branch-and-cut algorithms based on the cut formulations. Extensive computational experiments are conducted to evaluate the performance of the proposed solution methodologies and to gain insights from realistic instances.
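
    As a rough illustration of the underlying combinatorial structure (an interpretation on our part, not the compact ILP or branch-and-cut methods of the paper), the problem can be viewed on a "reach graph" that connects two nodes whenever a signal can travel between them without regeneration: any regenerator set forming a connected dominating set of that graph is feasible. The toy brute-force search below sketches this view for tiny instances only:

```python
import itertools
import networkx as nx

def reach_graph(g, reach, weight="length"):
    """Connect two nodes iff their shortest optical distance is within reach."""
    dist = dict(nx.all_pairs_dijkstra_path_length(g, weight=weight))
    b = nx.Graph()
    b.add_nodes_from(g.nodes)
    for u in g.nodes:
        for v in g.nodes:
            if u != v and dist[u].get(v, float("inf")) <= reach:
                b.add_edge(u, v)
    return b

def is_connected_dominating(b, regenerators):
    """A connected dominating set of the reach graph is always a feasible
    regenerator placement: every node sees a regenerator within reach, and
    regenerators can relay signals to one another."""
    sub = b.subgraph(regenerators)
    if sub.number_of_nodes() == 0 or not nx.is_connected(sub):
        return False
    covered = set(regenerators)
    for r in regenerators:
        covered |= set(b.neighbors(r))
    return covered == set(b.nodes)

def smallest_feasible_placement(g, reach):
    """Exhaustive search, usable on tiny instances only; the paper relies on
    compact formulations and branch-and-cut instead."""
    b = reach_graph(g, reach)
    nodes = list(b.nodes)
    for k in range(1, len(nodes) + 1):
        for subset in itertools.combinations(nodes, k):
            if is_connected_dominating(b, subset):
                return set(subset)
    return set(nodes)
```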

    Joint dimensioning of server and network infrastructure for resilient optical grids/clouds

    We address the dimensioning of infrastructure, comprising both network and server resources, for large-scale decentralized distributed systems such as grids or clouds. We design the resulting grid/cloud to be resilient against network link or server failures. To this end, we exploit relocation: under failure conditions, a grid job or cloud virtual machine may be served at an alternate destination (i.e., different from the one used under failure-free conditions). We thus consider grid/cloud requests to have a known origin, but assume a degree of freedom as to where they end up being served, which is the case for grid applications of the bag-of-tasks (BoT) type or hosted virtual machines in the cloud case. We present a generic methodology based on integer linear programming (ILP) that: 1) chooses a given number of sites in a given network topology where server infrastructure is installed; and 2) determines the amount of both network and server capacity needed to cater for the failure-free scenario as well as failures of links or nodes. For the latter, we consider either failure-independent (FID) or failure-dependent (FD) recovery. Case studies on European-scale networks show that relocation allows a considerable reduction of the total amount of network and server resources, especially in sparse topologies and for higher numbers of server sites. Adopting a failure-dependent backup routing strategy does lead to lower resource dimensions, but only when we adopt relocation (especially for a high number of server sites): without exploiting relocation, the potential savings of FD versus FID are not meaningful.
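
    The following is a deliberately simplified sketch of the site-selection and relocation idea using PuLP, with hypothetical demand and site data; the actual ILP in the paper additionally dimensions network capacity, models link/node failures, and distinguishes FID from FD recovery. Here each request gets a primary and a distinct backup (relocation) site, and each open site is conservatively dimensioned for its primary plus backup load:

```python
import pulp

# Hypothetical data: server units required per request, and candidate sites.
demands = {"d1": 4, "d2": 2, "d3": 3}
sites = ["s1", "s2", "s3", "s4"]
MAX_SITES = 2

requests = list(demands)
prob = pulp.LpProblem("joint_dimensioning_sketch", pulp.LpMinimize)
open_site = pulp.LpVariable.dicts("open", sites, cat="Binary")
primary = pulp.LpVariable.dicts("primary", (requests, sites), cat="Binary")
backup = pulp.LpVariable.dicts("backup", (requests, sites), cat="Binary")
capacity = pulp.LpVariable.dicts("capacity", sites, lowBound=0)

# Objective: minimise total installed server capacity.
prob += pulp.lpSum(capacity[s] for s in sites)

# Open at most MAX_SITES server sites.
prob += pulp.lpSum(open_site[s] for s in sites) <= MAX_SITES

for d in requests:
    prob += pulp.lpSum(primary[d][s] for s in sites) == 1
    prob += pulp.lpSum(backup[d][s] for s in sites) == 1
    for s in sites:
        prob += primary[d][s] <= open_site[s]   # only open sites may serve
        prob += backup[d][s] <= open_site[s]
        # relocation: the backup site must differ from the primary site
        prob += primary[d][s] + backup[d][s] <= 1

# Conservative dimensioning: each site covers its primary plus backup load,
# which suffices to survive any single site failure (the paper's model is
# tighter, counting only the load actually relocated per failure scenario).
for s in sites:
    prob += capacity[s] >= pulp.lpSum(
        demands[d] * (primary[d][s] + backup[d][s]) for d in requests)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({s: pulp.value(capacity[s]) for s in sites})
```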

    Protection and restoration algorithms for WDM optical networks

    Wavelength Division Multiplexing (WDM) optical networks currently play a major role in supporting the surge in demand for high-bandwidth networks driven by the Internet. A single optical fiber cut can be a catastrophe for millions of users if the logical topology is designed without any protection or restorative mechanism. Protection and restoration algorithms are therefore needed to prevent damage and to reroute and/or reconfigure the network when such damage occurs. In the past few years, many works dealing with these issues have been reported. These algorithms can be implemented in many ways with several different objective functions, such as minimizing protection path lengths, minimizing restoration times, or maximizing restored bandwidth. This thesis investigates, analyzes, and compares algorithms that are mainly aimed at guaranteeing or maximizing the amount of bandwidth that remains available over a damaged network. The parameters considered in this thesis are the routing computation and implementation mechanism, routing characteristics, recovery computation timing, network capacity assignment, and implementation layer. Performance analysis in terms of restoration efficiency, hop length, percentage of bandwidth guaranteed, network capacity utilization, and blocking probability is conducted and evaluated.
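
    As a small, generic illustration of dedicated path protection (not a specific algorithm from the thesis), one can route a working lightpath and then search for a link-disjoint backup path; a sketch with networkx follows. Note that this simple two-step search can fail on "trap" topologies where a joint computation (e.g. Suurballe's or Bhandari's algorithm) would still find a disjoint pair:

```python
import networkx as nx

def working_and_backup(g, src, dst, weight="length"):
    """Route a working path, then look for a link-disjoint backup path."""
    working = nx.shortest_path(g, src, dst, weight=weight)
    spare = g.copy()
    spare.remove_edges_from(zip(working, working[1:]))
    try:
        backup = nx.shortest_path(spare, src, dst, weight=weight)
    except nx.NetworkXNoPath:
        backup = None   # no link-disjoint backup found by this greedy method
    return working, backup

# Hypothetical toy topology with fiber lengths as edge weights.
g = nx.Graph()
g.add_weighted_edges_from(
    [("A", "B", 1), ("B", "C", 1), ("A", "D", 2), ("D", "C", 2), ("B", "D", 1)],
    weight="length")
print(working_and_backup(g, "A", "C"))
```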

    Robust capacitated trees and networks with uniform demands

    We are interested in the design of robust (or resilient) capacitated rooted Steiner networks in the case of terminals with uniform demands. Formally, we are given a graph, capacity and cost functions on the edges, a root, a subset of nodes called terminals, and a bound k on the number of edge failures. We first study the problem where k = 1 and the network to be designed must be a tree covering the root and the terminals: we give complexity results and propose models to optimize both the cost of the tree and the number of terminals disconnected from the root in the worst case of an edge failure, while respecting the capacity constraints on the edges. Second, we consider the problem of computing a minimum-cost survivable network, i.e., a network that covers the root and terminals even after the removal of any k edges, while still respecting the capacity constraints on the edges. We also consider the possibility of protecting a given number of edges. We propose three different formulations: a cut-set-based formulation, a flow-based one, and a bilevel one (with an attacker and a defender). We propose algorithms to solve each formulation and compare their efficiency.
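
    To give a flavour of the cut-set view (a schematic, capacity-free sketch only; the paper's formulations also handle edge capacities, protected edges, and the bilevel attacker-defender setting), selecting edges x_e so that the root r stays connected to every terminal after any k edge failures amounts to requiring at least k + 1 selected edges across every root-terminal cut:

```latex
\min \sum_{e \in E} c_e\, x_e
\quad \text{s.t.} \quad
\sum_{e \in \delta(S)} x_e \;\ge\; k + 1
\qquad \forall\, S \subseteq V \setminus \{r\},\ S \cap T \neq \emptyset,
\qquad x_e \in \{0,1\} \ \forall e \in E.
```

    Here $\delta(S)$ denotes the edges with exactly one endpoint in $S$ and $T$ the set of terminals; since there are exponentially many such cuts, these constraints are typically separated on the fly within a branch-and-cut algorithm.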

    Scalable algorithms for QoS-aware virtual network mapping for cloud services

    Both business and consumer applications increasingly depend on cloud solutions. Yet many organizations are still reluctant to move to cloud-based solutions, mainly due to concerns about service quality and reliability. Since cloud platforms depend both on IT resources (located in data centers, DCs) and on the network infrastructure connecting to them, both QoS and resilience should be offered with end-to-end guarantees up to and including the server resources. The latter is currently largely impeded by the fact that the network and cloud DC domains are typically operated by disjoint entities. Network virtualization, together with combined control of network and IT resources, can solve that problem. Here, we formally state the combined network and IT provisioning problem for a set of virtual networks, incorporating resilience as well as QoS in the physical and virtual layers. We provide a scalable column generation model to address real-world network sizes. We analyze the latter in extensive case studies to answer the question of at which layer to provision QoS and resilience in virtual networks for cloud services.
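
    As a sketch of the decomposition idea behind such a column generation model (generic, with placeholder callbacks, not the paper's exact master and pricing problems), the solver alternates between a restricted master problem over the known candidate mappings and a pricing step that searches for a new mapping with negative reduced cost:

```python
def column_generation(initial_columns, solve_restricted_master, price_out_column,
                      tolerance=1e-6, max_iterations=1000):
    """Generic column-generation loop. solve_restricted_master and
    price_out_column are hypothetical callbacks the reader would implement
    for a concrete virtual-network mapping formulation:
      - solve_restricted_master(columns) -> (solution, duals)
      - price_out_column(duals) -> (new_column or None, reduced_cost)
    """
    columns = list(initial_columns)
    solution = None
    for _ in range(max_iterations):
        # Solve the LP restricted to the columns (candidate mappings) found so far.
        solution, duals = solve_restricted_master(columns)
        # Price out: look for a new mapping with negative reduced cost.
        new_column, reduced_cost = price_out_column(duals)
        if new_column is None or reduced_cost >= -tolerance:
            return solution, columns   # no improving column: restricted LP is optimal
        columns.append(new_column)
    return solution, columns
```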

    Resilient optical multicasting utilizing cycles in WDM optical networks

    Today's high-capacity telecommunications is possible only because of optical networks. At the heart of an optical network is the optical fiber, whose data-carrying capabilities are unparalleled. Multicasting is a form of communication in wavelength division multiplexed (WDM) networks that involves one source and multiple destinations. Light trees, which employ light splitting at various nodes, are used to deliver data to multiple destinations. TEN, a pan-European carrier network, has estimated that a fiber cut occurs, on average, once every four days. This thesis presents algorithms to make multicast sessions survivable against component failures; we consider multiple link failures and node failures in this work. The two algorithms presented in this thesis use a hybrid approach that combines proactive and reactive recovery from failures. We introduce the novel concept of minimal-hop cycles to tolerate simultaneous multiple link failures in a multicast session. While the first algorithm deals only with multiple link failures, the second algorithm considers the case of a node failure together with a link failure. Two different versions of the first algorithm have been implemented to thoroughly understand its behavior. Both algorithms were studied through simulations on two different networks, the USA Longhaul network and the NSF network. The input multicast sessions to all our algorithms were generated by power-efficient multicast algorithms that ensure the power at the receiving nodes is at acceptable levels. The parameters used to evaluate the performance of our algorithms include computation times, network usage, and power efficiency. Two new parameters, namely recovery times and recovery success probability, are introduced in this work. To our knowledge, this work is the first to introduce the concept of minimal-hop cycles to recover from simultaneous multiple link failures in a multicast session in optical networks.
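
    One way to read the minimal-hop cycle idea for a single tree link (an interpretation on our part; the thesis's algorithms also coordinate protection across multiple simultaneous failures) is the cycle formed by the protected link itself plus the fewest-hop alternative path between its endpoints:

```python
import networkx as nx

def minimal_hop_cycle(g, u, v):
    """Return the node cycle protecting link (u, v), or None if the link is a
    bridge and no protection cycle exists."""
    spare = g.copy()
    spare.remove_edge(u, v)
    try:
        detour = nx.shortest_path(spare, u, v)   # unweighted = fewest hops
    except nx.NetworkXNoPath:
        return None
    return detour + [u]   # close the cycle back over the protected link

# Hypothetical toy topology.
g = nx.Graph([("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")])
print(minimal_hop_cycle(g, "A", "B"))   # ['A', 'C', 'B', 'A']
print(minimal_hop_cycle(g, "C", "D"))   # None: (C, D) is a bridge
```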

    Technology-related disasters: a survey towards disaster-resilient software defined networks

    Resilience against disaster scenarios is essential to network operators, not only because of the potential economic impact of a disaster but also because communication networks form the basis of crisis management. COST RECODIS aims at studying measures, rules, techniques, and prediction mechanisms for different disaster scenarios. This paper gives a general overview of solutions in the context of technology-related disasters and then focuses on resilient Software Defined Networks.