6 research outputs found

    Virtual Broking Coding for Reliable In-Network Storage on WSANs

    The emerging Internet of Things (IoT) paradigm makes Wireless Sensor and Actuator Networks (WSANs) a central element for data production and consumption. In this realm, where data are produced and consumed within the network, WSANs face the challenge of performing in-network data storage despite their scarce resources. In this paper, we propose Virtual Broking Coding (VBC), a data storage scheme compliant with WSAN constraints. VBC ensures reliable data storage and provides an efficient mechanism for data retrieval. To evaluate the proposed solution, we present a theoretical analysis as well as a simulation study. Using both, we show that VBC reduces the cost incurred by coding techniques and increases the delivery ratio of the requested data. These results suggest VBC as a new direction for applying network-coding-based schemes to the WSAN in-network storage problem.
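
    As a rough illustration of the underlying idea (not the paper's actual VBC algorithm), the sketch below stores random linear combinations of data chunks over GF(2), so that the original data can be rebuilt from any sufficiently independent subset of coded chunks. All names and parameters here (encode, decode, k, n, the systematic-code choice) are ours.

```python
import random

def encode(data: bytes, k: int, n: int):
    """Split data into k source chunks and emit n coded chunks.

    The first k coded chunks are the sources themselves (systematic part);
    the rest are random XOR combinations over GF(2).
    """
    size = -(-len(data) // k)  # ceiling division: bytes per chunk
    chunks = [data[i*size:(i+1)*size].ljust(size, b'\0') for i in range(k)]
    coded = [([int(i == j) for j in range(k)], chunks[i]) for i in range(k)]
    while len(coded) < n:
        coeffs = [random.randint(0, 1) for _ in range(k)]
        if not any(coeffs):
            continue  # the all-zero combination carries no information
        payload = bytearray(size)
        for c, chunk in zip(coeffs, chunks):
            if c:
                payload = bytearray(a ^ b for a, b in zip(payload, chunk))
        coded.append((coeffs, bytes(payload)))
    return coded

def decode(coded, k: int) -> bytes:
    """Recover the source chunks via Gaussian elimination over GF(2)."""
    rows = [[list(c), bytearray(p)] for c, p in coded]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            raise ValueError("chunks not independent; fetch more")
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r][0] = [a ^ b for a, b in zip(rows[r][0], rows[col][0])]
                rows[r][1] = bytearray(x ^ y for x, y in zip(rows[r][1], rows[col][1]))
    return b''.join(bytes(rows[i][1]) for i in range(k))

msg = b"temperature readings from field node 7"
coded = encode(msg, k=4, n=8)   # spread 8 coded chunks over 8 storage nodes
random.shuffle(coded)
try:
    assert decode(coded[:6], k=4).rstrip(b'\0') == msg  # two nodes lost
except ValueError:
    pass  # unlucky draw: over GF(2) a few extra chunks may be needed
```

    With a larger field such as GF(2^8), any k coded chunks decode with high probability, which is why practical network-coding stores trade the cheap XOR arithmetic used here for per-byte field multiplications.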

    Information Resilience through User-Assisted Caching in Disruptive Content-Centric Networks

    We investigate an information-resilience scheme in the context of Content-Centric Networks (CCN) for the retrieval of content in disruptive, fragmented network scenarios. To resolve and fetch content when the origin is not available due to fragmentation, we exploit content cached both in in-network caches and in end-users’ devices. Initially, we present the modifications required in the CCN architecture to support the proposed resilience scheme. We also present the family of policies that enable the retrieval of cached content, and we derive an analytical expression (a lower bound) for the probability that an information item will disappear from the network (be absorbed), as well as for the time to absorption, when the origin of the item is not reachable. Extensive simulations indicate that the proposed resilience scheme is a valid tool for the retrieval of cached content in disruptive scenarios, since it allows the retrieval of content for a long period after the fragmentation of the network and the “disappearance” of the content origin.
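
    The absorption notion invites a quick back-of-the-envelope simulation. The Monte Carlo sketch below is not the paper's analytical model; n_caches, p_evict, and p_hit are invented parameters. It estimates the mean time until the last cached copy of an item is evicted once the origin is unreachable, with served requests occasionally re-populating a cache on the delivery path.

```python
import random

def time_to_absorption(n_caches=5, p_hit=0.2, p_evict=0.1,
                       trials=1000, max_steps=50_000):
    """Estimate the mean number of steps until no cache holds the item."""
    total = 0
    for _ in range(trials):
        holders = set(range(n_caches))  # every cache starts with a copy
        steps = 0
        while holders and steps < max_steps:
            steps += 1
            # competing traffic may evict the copy from each holder
            holders = {c for c in holders if random.random() >= p_evict}
            # a satisfied user request re-populates one cache on the path
            if holders and random.random() < p_hit:
                holders.add(random.randrange(n_caches))
        total += steps
    return total / trials

print(time_to_absorption())  # grows as caches or the hit rate increase
```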

    A content-based publish/subscribe framework for large-scale content delivery

    The publish/subscribe communication paradigm has become an important architectural style for designing distributed systems and has recently been considered one of the most promising future network architectures for solving many challenges of content delivery in the current Internet. This work is concerned with scaling decentralized content-based publish/subscribe (CBPS) networks for large-scale content distribution. A fundamental step for CBPS networks to reach large scale is to move from the current exhaustive filtering service model, where a subscription selects every relevant publication, to a service model capturing the quantitative and qualitative heterogeneity of information consumers' requirements. Moreover, the proposed work leverages caching to increase the communication efficiency of CBPS networks operating at large scale, which is characterized by widely dispersed information consumers with heterogeneous requirements, a large number of publications, and scarce end-to-end bandwidth. We propose and design a service model that addresses consumers' requirements for content-based information retrieval and describe the protocols necessary to implement such a service. We evaluate the proposed approach using realistic workload scenarios, comparing different content and interest forwarding strategies as well as caching policies in terms of resource efficiency and user-perceived QoS metrics.
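
    To make the service model concrete, here is a minimal, hypothetical sketch of a single content-based broker that combines predicate-based filtering with a bounded cache of recent publications; the Broker class, its methods, and the replay-on-subscribe behavior are our illustration, not the paper's protocol.

```python
from collections import deque

class Broker:
    """A single content-based broker with a bounded publication cache."""

    def __init__(self, cache_size=100):
        self.subs = []                          # (predicate, callback) pairs
        self.cache = deque(maxlen=cache_size)   # most recent publications

    def subscribe(self, predicate, callback, replay=True):
        self.subs.append((predicate, callback))
        if replay:  # serve matching cached publications to the late joiner
            for event in self.cache:
                if predicate(event):
                    callback(event)

    def publish(self, event: dict):
        self.cache.append(event)
        for predicate, callback in self.subs:
            if predicate(event):                # content-based filtering
                callback(event)

broker = Broker()
broker.publish({"topic": "news", "rank": 7})    # published before any subscriber
broker.subscribe(lambda e: e.get("topic") == "news" and e.get("rank", 0) > 5,
                 lambda e: print("delivered:", e))  # replayed from the cache
```

    Replaying cached publications to late subscribers is one simple way caching can raise communication efficiency: the broker answers locally instead of forwarding the interest upstream.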

    Efficient Methods on Reducing Data Redundancy in the Internet

    The transformation of the Internet from a client-server paradigm to a content-based one has left many of the fundamental network designs outdated. The increase in user-generated content, instant sharing, flash popularity, and similar phenomena brings forward the need to design an Internet that is ready for these demands and can serve small-scale content providers. The Internet of today carries and stores a large amount of duplicate, redundant data, primarily due to a lack of duplication-detection mechanisms and caching principles. This redundancy costs the network in several ways: it consumes energy in the network elements that must process the extra data; it fills network caches with duplicate data, causing the tail of the data distribution to be swapped out of the caches; and it increases the load on content servers, which must always serve the less popular content. In this dissertation, we analyze the aforementioned phenomena and propose several methods to reduce the redundancy of the network at low cost. The proposals take different approaches, including chunk-level redundancy detection and elimination, rerouting-based caching mechanisms in information-centric networks, and energy-aware content distribution techniques. Using these approaches, we demonstrate how redundancy elimination can be performed with low overhead and low processing power. We also demonstrate that local or global cooperation can increase the storage efficiency of existing caches many-fold. In addition, this work shows that collaborative content download mechanisms can remove a sizable amount of traffic from the core network while simultaneously reducing client devices' energy consumption.
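
    Chunk-level redundancy detection, the first approach listed above, can be sketched in a few lines. The DedupStore below uses fixed-size chunks and SHA-256 fingerprints for brevity; real systems typically use content-defined (rolling-hash) chunk boundaries, and all names here are illustrative rather than taken from the dissertation.

```python
import hashlib

class DedupStore:
    """Stores data as fingerprinted chunks; redundant chunks are kept once."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}                       # fingerprint -> chunk bytes

    def put(self, data: bytes):
        """Return the recipe (fingerprint list) and bytes actually stored."""
        recipe, new_bytes = [], 0
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            fp = hashlib.sha256(chunk).hexdigest()
            if fp not in self.chunks:          # previously unseen chunk
                self.chunks[fp] = chunk
                new_bytes += len(chunk)
            recipe.append(fp)
        return recipe, new_bytes

    def get(self, recipe):
        """Reassemble the original data from its recipe."""
        return b''.join(self.chunks[fp] for fp in recipe)

store = DedupStore()
recipe, stored = store.put(b"A" * 8192 + b"B" * 4096)   # stores 8192 bytes
_, stored_again = store.put(b"A" * 8192)                # fully redundant: 0
assert store.get(recipe) == b"A" * 8192 + b"B" * 4096
```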