    Resilient optical multicasting utilizing cycles in WDM optical networks

    Today's high-capacity telecommunications are possible only because of optical networks. At the heart of an optical network is the optical fiber, whose data-carrying capacity is unparalleled. Multicasting is a form of communication in wavelength division multiplexed (WDM) networks that involves one source and multiple destinations. Light trees, which employ light splitting at various nodes, are used to deliver data to multiple destinations. TEN, a pan-European carrier network, has estimated that a fiber cut occurs, on average, once every four days. This thesis presents algorithms to make multicast sessions survivable against component failures; we consider both multiple link failures and node failures. The two algorithms presented in this thesis use a hybrid approach, combining proactive and reactive techniques, to recover from failures. We introduce the novel concept of minimal-hop cycles to tolerate simultaneous multiple link failures in a multicast session. While the first algorithm deals only with multiple link failures, the second algorithm considers the combined case of a node failure and a link failure. Two versions of the first algorithm have been implemented to thoroughly understand its behavior. Both algorithms were studied through simulation on two networks, the USA Longhaul network and the NSF network. The input multicast sessions to all our algorithms were generated by power-efficient multicast algorithms that ensure the power at the receiving nodes is at acceptable levels. The parameters used to evaluate the performance of our algorithms include computation times, network usage, and power efficiency. Two new parameters, namely recovery times and recovery success probability, are introduced in this work. To our knowledge, this work is the first to introduce the concept of minimal-hop cycles to recover from simultaneous multiple link failures in a multicast session in optical networks.
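
    The notion of a minimal-hop cycle lends itself to a compact sketch. The following is an illustrative reading, not code from the thesis: protecting a light-tree link (u, v) with the shortest cycle through it, i.e., the link itself plus the fewest-hop alternate path between its endpoints. All names here are hypothetical, and networkx stands in for the thesis simulators.

        import networkx as nx

        def minimal_hop_cycle(graph: nx.Graph, u, v):
            """Return the minimal-hop cycle covering edge (u, v), or None."""
            residual = graph.copy()
            residual.remove_edge(u, v)                  # force the detour around the protected link
            try:
                detour = nx.shortest_path(residual, u, v)   # fewest-hop alternate path
            except nx.NetworkXNoPath:
                return None                             # the link is a bridge: no protecting cycle exists
            return detour + [u]                         # the detour plus the link closes the cycle

        G = nx.cycle_graph(6)                           # toy 6-node ring
        G.add_edge(0, 3)                                # a chord we want to protect
        print(minimal_hop_cycle(G, 0, 3))               # e.g., [0, 1, 2, 3, 0]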

    Resource-aware Video Multicasting via Access Gateways in Wireless Mesh Networks

    This paper studies video multicasting in large-scale areas using wireless mesh networks. The focus is on the use of Internet access gateways that allow a choice of alternative routes to avoid potentially lengthy and low-capacity multihop wireless paths. A set of heuristic-based algorithms is described that together aim to maximize reliable network capacity: the two-tier integrated architecture algorithm, the weighted gateway uploading algorithm, the link-controlled routing tree algorithm, and the dynamic group management algorithm. These algorithms use different approaches to arrange the nodes involved in video multicasting into a clustered, two-tier integrated architecture in which network protocols can make use of multiple gateways to improve system throughput. Simulation results are presented, showing that our multicasting algorithms can achieve up to 40 percent more throughput than previously published approaches.
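
    As a rough illustration of the gateway-selection idea (an assumed sketch; the weighting and names below are not from the paper), a receiver cluster can be steered to the gateway with the best weighted score of wireless hop distance and current uplink load:

        from dataclasses import dataclass

        @dataclass
        class Gateway:
            name: str
            load: float = 0.0                           # normalized uplink utilization

        def pick_gateway(gateways, hops_to, alpha=0.3):
            """Lower score wins; alpha trades hop count against gateway load."""
            return min(gateways, key=lambda g: alpha * hops_to[g.name] + (1 - alpha) * g.load)

        gws = [Gateway("gw1", load=0.8), Gateway("gw2", load=0.2)]
        hops = {"gw1": 2, "gw2": 3}                     # multihop wireless distance to each gateway
        best = pick_gateway(gws, hops)                  # gw2: the nearer gw1 loses on load
        best.load += 0.1                                # account for the newly admitted stream
        print(best.name)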

    Resource allocation mechanisms and reliability in next-generation core networks

    Definitions and basic concepts -- Problem statement -- Research objectives -- Main contributions -- Literature review -- Service models -- Quality-of-Service routing -- Traffic engineering -- Admission control with Quality of Service -- Network reliability -- A novel admission control mechanism in GMPLS-based IP over optical networks -- Problem statement -- Numerical results -- Joint routing and admission control problem under statistical delay and jitter constraints in MPLS networks -- Simulation results -- A survivable multicast routing mechanism in WDM optical networks -- Survivable routing under SRLG constraints -- GR-SMRS: a greedy heuristic for survivable multicast routing under SRLG constraints -- Simulation results
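
    Since the table of contents highlights survivable routing under SRLG constraints, a minimal sketch of the underlying disjointness test may help (a generic illustration, not the GR-SMRS heuristic itself): a backup route protects a primary route only if the two share no Shared Risk Link Group, so that a single physical failure cannot take both down.

        def srlg_disjoint(primary, backup, link_srlgs):
            """Each path is a list of links; link_srlgs maps a link to its set of SRLG ids."""
            risks = set()
            for link in primary:
                risks |= link_srlgs.get(link, set())
            return all(risks.isdisjoint(link_srlgs.get(link, set())) for link in backup)

        srlgs = {("a", "b"): {1}, ("b", "c"): {2}, ("a", "d"): {3}, ("d", "c"): {1}}
        # False: the backup's last link shares SRLG 1 with the primary's first link
        print(srlg_disjoint([("a", "b"), ("b", "c")], [("a", "d"), ("d", "c")], srlgs))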

    Making Networks Robust to Component Failures

    In this thesis, we consider instances of component failure in the Internet and in networked cyber-physical systems, such as the communication network used by the modern electric power grid (termed the smart grid). We design algorithms that make these networks more robust to various component failures, including failed routers, failures of links connecting routers, and failed sensors. This thesis is divided into three parts: recovery from malicious or misconfigured nodes injecting false information into a distributed system (e.g., the Internet), placing smart grid sensors to provide measurement error detection, and fast recovery from link failures in a smart grid communication network. First, we consider the problem of malicious or misconfigured nodes that inject and spread incorrect state throughout a distributed system. Such false state can degrade the performance of a distributed system or render it unusable. For example, in the case of network routing algorithms, false state corresponding to a node incorrectly declaring a cost of 0 to all destinations (maliciously or due to misconfiguration) can quickly spread through the network. This causes other nodes to (incorrectly) route via the misconfigured node, resulting in suboptimal routing and network congestion. We propose three algorithms for efficient recovery in such scenarios and evaluate their efficacy. The last two parts of this thesis consider robustness in the context of the electric power grid. We study the use and placement of a sensor, called a Phasor Measurement Unit (PMU), currently being deployed in electric power grids worldwide. PMUs provide voltage and current measurements at a sampling rate orders of magnitude higher than the status quo. As a result, PMUs can both drastically improve existing power grid operations and enable an entirely new set of applications, such as the reliable integration of renewable energy resources. However, PMU applications require correct (addressed in thesis part 2) and timely (covered in thesis part 3) PMU data. Without these guarantees, smart grid operators and applications may make incorrect decisions and take corresponding (incorrect) actions. The second part of this thesis addresses PMU measurement errors, which have been observed in practice. We formulate a set of PMU placement problems that aim to satisfy two constraints: place PMUs near each other to allow for measurement error detection, and use the minimal number of PMUs to infer the state of the maximum number of system buses and transmission lines. For each PMU placement problem, we prove it is NP-complete, propose a simple greedy approximation algorithm, and evaluate our greedy solutions. In the last part of this thesis, we design algorithms for fast recovery from link failures in a smart grid communication network. We propose, design, and evaluate solutions to all three aspects of link failure recovery: (a) link failure detection, (b) algorithms for pre-computing backup multicast trees, and (c) fast backup tree installation. To address (a), we design link-failure detection and reporting mechanisms that use OpenFlow to detect link failures when and where they occur inside the network. OpenFlow is an open-source framework that cleanly separates the control and data planes for use in network management and control. For part (b), we formulate a new problem, Multicast Recycling, that pre-computes backup multicast trees aiming to minimize control plane signaling overhead. We prove Multicast Recycling is at least NP-hard and present a corresponding approximation algorithm. Lastly, two control plane algorithms are proposed that signal data plane switches to install pre-computed backup trees. An optimized version of each installation algorithm finds a near-minimum set of forwarding rules by sharing forwarding rules across multicast groups, reducing backup tree installation time and the associated control state. We implement these algorithms using the POX open-source OpenFlow controller and evaluate them using the Mininet emulator, quantifying control plane signaling and installation time.
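
    The greedy placement idea can be sketched compactly under a standard simplification (a PMU at a bus observes that bus and its neighbors); the thesis's actual problems add error-detection adjacency constraints, so treat this as an assumed illustration rather than its algorithm:

        def greedy_pmu_placement(adjacency):
            """Greedy set-cover style: place each PMU where it newly observes the most buses."""
            unobserved = set(adjacency)
            placement = []
            while unobserved:
                best = max(adjacency, key=lambda b: len(({b} | adjacency[b]) & unobserved))
                placement.append(best)
                unobserved -= {best} | adjacency[best]
            return placement

        grid = {1: {2}, 2: {1, 3, 4}, 3: {2}, 4: {2, 5}, 5: {4}}
        print(greedy_pmu_placement(grid))               # [2, 4] observes all five buses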

    High-availability architecture for databases in the cloud

    Master's dissertation in Computer Science. With the constant expansion of computational systems, the amount of data that requires durability increases exponentially, so persistent data must be replicated to provide high availability and fault tolerance appropriate to the target application or use case. Currently, there are numerous replication approaches and protocols supporting different use cases. Two prominent families exist: generic replication protocols, applicable to any service, and database-specific ones. The two main techniques associated with generic replication are active and passive replication. Although these techniques are fully mature and widely used, they have inherent problems, namely performance bottlenecks at the primary replica in passive replication and the determinism required by active replication. Some of these disadvantages are mitigated by database-specific replication protocols (e.g., using multi-master), but those protocols do not allow a separation between replication logic and data and cannot be decoupled from the database engine. Moreover, recent strategies consider highly scalable and fault-tolerant distributed logging mechanisms, allowing newer designs based purely on logs to power replication. To mitigate the shortcomings found in both the active and passive replication mechanisms, as well as in their variants, this dissertation presents a hybrid replication middleware, SQLware. The cornerstone of the approach lies in the decoupling between the logical replication layer and the data store, together with the use of a highly scalable distributed log that provides fault tolerance and high availability. We validated the prototype through a benchmarking campaign evaluating overall system performance on two distinct infrastructures, namely a private medium-class server and a private high-performance computing cluster. Throughout the evaluation we used the TPC-C benchmark, an industry standard widely used to evaluate online transaction processing (OLTP) database systems. Results show that SQLware achieved 150 times more throughput than the native replication mechanism of the underlying data store used as the baseline, PostgreSQL. This work was partially funded by FCT - Fundação para a Ciência e a Tecnologia, I.P. (Portuguese Foundation for Science and Technology) within project UID/EEA/50014/201
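
    The log-centric design admits a compact conceptual sketch (every class below is an illustrative assumption, not the SQLware API): writers append transactions to a shared ordered log, and each replica replays the log deterministically to converge on the same state.

        import threading

        class DistributedLog:
            """Stand-in for a scalable log service (e.g., a Kafka-like system)."""
            def __init__(self):
                self._entries, self._lock = [], threading.Lock()
            def append(self, txn):
                with self._lock:
                    self._entries.append(txn)
                    return len(self._entries) - 1       # the offset doubles as the commit order
            def read_from(self, offset):
                return self._entries[offset:]

        class Replica:
            def __init__(self, log):
                self.log, self.offset, self.state = log, 0, {}
            def catch_up(self):
                for txn in self.log.read_from(self.offset):
                    txn(self.state)                     # deterministic replay
                    self.offset += 1

        log = DistributedLog()
        log.append(lambda s: s.__setitem__("x", 1))
        replica = Replica(log)
        replica.catch_up()
        print(replica.state)                            # {'x': 1}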

    Reliable Multicast in Mobile Ad Hoc Wireless Networks

    A mobile wireless ad hoc network (MANET) consists of a group of mobile nodes communicating wirelessly with no fixed infrastructure. Each node acts as a source or receiver, and all play a role in path discovery and packet routing. MANETs are growing in popularity due to their multiple usage models, ease of deployment, and recent advances in the hardware with which to implement them. MANETs are a natural environment for multicasting, or group communication, where one source transmits data packets through the network to multiple receivers. Proposed applications for MANET group communication range from personal network apps, impromptu small-scale business meetings, and gatherings to conference, academic, or sports-complex presentations for large crowds, reflecting the wide range of conditions such a protocol must handle. Other applications, such as covert military operations, search and rescue, disaster recovery, and emergency response operations, reflect the mission-critical nature of many ad hoc applications. Reliable data delivery is important for all of these categories, but vital for the last one; it is a feature that a MANET group communication protocol must provide. Routing protocols for MANETs are challenged with establishing and maintaining data routes through the network in the face of mobility, bandwidth constraints, and power limitations, and multicast communication presents additional challenges. In this dissertation we study reliability in multicast MANET routing protocols. Several on-demand multicast protocols are discussed and their performance compared. A new reliability protocol, R-ODMRP, is then presented that runs on top of ODMRP, a well-documented best-effort protocol with high reliability. This protocol is evaluated against ODMRP in a standard network simulator, ns-2. Next, reliable multicast MANET protocols are discussed and compared, and we present a second new protocol, Reyes, also a reliable on-demand multicast communication protocol. Reyes is implemented in the ns-2 simulator and compared against the current standards for reliability, flooding and ODMRP, with R-ODMRP as an additional comparison point. Performance results are comprehensively described for latency, bandwidth, and reliable data delivery. The simulations show Reyes to greatly outperform the other protocols in terms of reliability, while also outperforming R-ODMRP in terms of latency and bandwidth overhead.
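
    For intuition about what a reliability layer adds on top of a best-effort multicast protocol, here is a generic sequence-number-and-NACK sketch (an exposition aid only; it is not the actual R-ODMRP or Reyes design):

        class ReliableReceiver:
            def __init__(self):
                self.expected = 0                       # next in-order sequence number
                self.buffer = {}                        # out-of-order packets awaiting gap repair

            def on_packet(self, seq, payload, send_nack):
                if seq > self.expected:                 # gap detected
                    send_nack(range(self.expected, seq))    # ask upstream to retransmit
                self.buffer[seq] = payload
                delivered = []
                while self.expected in self.buffer:     # deliver contiguously, in order
                    delivered.append(self.buffer.pop(self.expected))
                    self.expected += 1
                return delivered

        rx = ReliableReceiver()
        print(rx.on_packet(0, "a", print))              # ['a']
        print(rx.on_packet(2, "c", print))              # NACKs seq 1, delivers nothing yet
        print(rx.on_packet(1, "b", print))              # ['b', 'c']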

    A Framework for Controlling Quality of Sessions in Multimedia Systems

    Collaborative multimedia systems demand overall session quality control beyond the level of quality of service (QoS) pertaining to individual connections in isolation. At every instant, the quality of a session depends on the actual QoS offered by the system to each of the application streams, as well as on the relative priorities of these streams according to the application semantics. We introduce a framework for achieving quality-of-session (QoSess) control and address the architectural issues involved in designing a QoSess control layer that realizes the proposed framework. In addition, we detail our contributions to two main components of the QoSess control layer. The first component is a scalable and robust feedback protocol that determines the worst-case state among a group of receivers of a stream; this mechanism is used to control the transmission rates of multimedia sources for both layered and single-rate multicast streams. The second component is a set of inter-stream adaptation algorithms that dynamically control the bandwidth shares of the streams belonging to a session. Additionally, to ensure stability and responsiveness in the inter-stream adaptation process, several measures are taken, including a domain rate control protocol. The performance of the proposed mechanisms is analyzed, and their advantages are demonstrated by simulation and experimental results.
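
    The inter-stream adaptation idea can be illustrated with a minimal priority-weighted allocation (an assumed sketch, not the dissertation's algorithms): the session's bandwidth is split among streams in proportion to their application-level priorities and recomputed whenever priorities shift.

        def allocate_shares(session_bw_kbps, priorities):
            """Divide session bandwidth proportionally to per-stream priorities."""
            total = sum(priorities.values())
            return {s: session_bw_kbps * p / total for s, p in priorities.items()}

        prio = {"audio": 3, "video": 2, "whiteboard": 1}
        print(allocate_shares(600, prio))               # audio 300.0, video 200.0, whiteboard 100.0
        prio["video"] = 4                               # application semantics now favor video
        print(allocate_shares(600, prio))               # audio 225.0, video 300.0, whiteboard 75.0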

    Support Efficient, Scalable, and Online Social Spam Detection in System

    The broad success of online social networks (OSNs) has created fertile soil for the emergence and rapid spread of social spam. Fake news, malicious URL links, fraudulent advertisements, fake reviews, and biased propaganda have serious consequences for both virtual social networks and human life in the real world. Effectively detecting social spam is a hot topic in both academia and industry. However, traditional social spam detection techniques are limited to centralized processing on top of one specific data source and ignore the social spam correlations across distributed data sources. Moreover, while a few research efforts have integrated stream systems (e.g., Storm, Spark) with large-scale social spam detection, they typically ignore the specifics of managing and recovering interim state during social stream data processing. We observed that social spammers who aim to advertise their products or post malicious links spread their posts most heavily during very short periods of time, and they are adept at adapting to old models that were trained on historical records. This raises a question: how can we uncover and defend against these online spam activities in an online and scalable manner? In this dissertation, we present three systems that support scalable and online social spam detection from streaming social data: (1) the first part introduces Oases, a scalable system that supports large-scale online social spam detection; (2) the second part introduces SpamHunter, a novel system that supports efficient, scalable online spam detection in social networks and offers insights into guaranteeing the efficiency of modern stream applications by leveraging spam correlations at scale; and (3) the third part addresses state recovery during social spam detection, introducing a customizable state recovery framework that provides fast and scalable recovery mechanisms for protecting large distributed states in social spam detection applications.
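
    The state-recovery concern can be made concrete with a small sketch (an assumed illustration, not the Oases or SpamHunter implementation): a streaming scorer keeps per-source interim state and checkpoints it, so a restarted worker recovers that state instead of rescanning the stream.

        import json, os, tempfile

        class SpamScorer:
            def __init__(self, checkpoint_path, burst_threshold=5):
                self.path = checkpoint_path
                self.threshold = burst_threshold        # posts per window that count as a burst
                self.counts = self._load()              # interim state: per-user post counts

            def _load(self):
                if os.path.exists(self.path):
                    with open(self.path) as f:
                        return json.load(f)
                return {}

            def checkpoint(self):
                with open(self.path, "w") as f:
                    json.dump(self.counts, f)

            def on_post(self, user):
                self.counts[user] = self.counts.get(user, 0) + 1
                return self.counts[user] >= self.threshold   # flag bursty spreaders

        path = os.path.join(tempfile.gettempdir(), "spam_state.json")
        if os.path.exists(path):
            os.remove(path)                             # start the demo from a clean state
        scorer = SpamScorer(path)
        flagged = [scorer.on_post("bot42") for _ in range(5)]
        scorer.checkpoint()
        restarted = SpamScorer(path)                    # recovers interim state from the checkpoint
        print(flagged[-1], restarted.counts["bot42"])   # True 5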