    A P2P Sensor Data Stream Delivery System to Accommodate Heterogeneous Cycles Using Skip Graphs

    3PGCIC 2015: 10th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing, Nov 4-6, 2015, Krakow, Poland. In this paper, we propose a method that uses skip graphs to deliver sensor data streams with heterogeneous delivery cycles. Skip graphs are structured overlay networks that construct links among nodes according to a specific rule. The proposed method sorts nodes by their delivery cycles and constructs delivery paths based on skip graphs. We confirmed in simulation that the proposed method can deliver sensor data with heterogeneous cycles while distributing the load of the source node.
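    As a rough illustration of the ordering idea (not the paper's actual path-construction rule), the sketch below builds a toy skip graph in Python: nodes are sorted by their delivery cycle and partitioned into per-level lists by a random membership vector. The Node and build_skip_graph names, the number of levels, and the cycle values are assumptions for the example.

        import random

        class Node:
            """A sensor-stream subscriber keyed by its delivery cycle (seconds)."""
            def __init__(self, name, cycle, levels=3):
                self.name = name
                self.cycle = cycle  # ordering key in the skip graph
                # Random membership vector, one bit per level, as in skip graphs.
                self.mv = [random.randint(0, 1) for _ in range(levels)]

        def build_skip_graph(nodes, levels=3):
            """Return {level: {mv-prefix: members sorted by cycle}}."""
            ordered = sorted(nodes, key=lambda n: n.cycle)
            graph = {0: {(): ordered}}  # level 0 holds every node, sorted
            for lvl in range(1, levels + 1):
                graph[lvl] = {}
                for prefix, members in graph[lvl - 1].items():
                    for n in members:  # iteration preserves the sorted order
                        key = prefix + (n.mv[lvl - 1],)
                        graph[lvl].setdefault(key, []).append(n)
            return graph

        nodes = [Node("s%d" % i, cycle=random.choice([1, 2, 5, 10])) for i in range(8)]
        levels = build_skip_graph(nodes)  # each per-level list is a candidate delivery path

    Because nodes are sorted by cycle, each per-level list groups subscribers with similar delivery cycles, which is what lets forwarding be handed off along the overlay instead of the source serving every subscriber itself.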

    Design and Evaluation of Distributed Algorithms for Placement of Network Services

    Network services play an important role in the Internet today. They serve as data caches for websites, servers for multiplayer games and relay nodes for Voice over IP (VoIP) conversations. While much research has focused on the design of such services, little attention has been paid to their actual placement. This placement can impact the quality of the service, especially if low latency is a requirement. These services can be located on nodes in the network itself, making these nodes supernodes. Typically supernodes are selected in either a proprietary or ad hoc fashion, where a study of this placement is either unavailable or unnecessary. Previous research dealt with only pieces of the problem, such as finding the location of caches for a static topology, or selecting better routes for relays in VoIP. However, a comprehensive solution is needed for dynamic applications such as multiplayer games or P2P VoIP services. These applications adapt quickly and need solutions based on the immediate demands of the network. In this thesis we develop distributed algorithms to assign nodes the role of a supernode. This research first builds on prior work by modifying an existing assignment algorithm and implementing it in a distributed system called Supernode Placement in Overlay Topologies (SPOT). New algorithms are developed to assign nodes the supernode role. These algorithms are then evaluated in SPOT to demonstrate improved supernode assignment and scalability. Through a series of simulations, emulations, and experiments, insight is gained into the critical issues associated with allocating resources to perform the role of supernodes. Our contributions include distributed algorithms to assign nodes as supernodes, an open source fully functional distributed supernode allocation system, an evaluation of the system in diverse networking environments, and a simulator called SPOTsim which demonstrates the scalability of the system to thousands of nodes. An example of an application deploying such a system is also presented along with the empirical results.
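    For intuition about the placement objective (a sketch only: SPOT's algorithms are distributed, whereas this baseline is centralized), a greedy set-cover heuristic promotes supernodes until every node sits within a latency bound of one. The function name, the latency matrix, and max_latency are assumptions for the example.

        def elect_supernodes(latency, max_latency):
            """Greedy baseline: repeatedly promote the node that covers the
            most still-uncovered nodes, until every node is within
            max_latency of some supernode. latency[i][j] is the measured
            pairwise latency between nodes i and j."""
            n = len(latency)
            uncovered = set(range(n))
            supernodes = []
            while uncovered:
                best = max(range(n), key=lambda i: sum(
                    1 for j in uncovered if latency[i][j] <= max_latency))
                supernodes.append(best)
                uncovered -= {j for j in uncovered
                              if latency[best][j] <= max_latency}
            return supernodes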

    Application of overlay techniques to network monitoring

    Measurement and monitoring are important for correct and efficient operation of a network, since these activities provide reliable information and accurate analysis for characterizing and troubleshooting a network's performance. The focus of network measurement is to measure the volume and types of traffic on a particular network and to record the raw measurement results. The focus of network monitoring is to initiate measurement tasks, collect raw measurement results, and report aggregated outcomes. Network systems are continuously evolving: besides incremental change to accommodate new devices, more drastic changes occur to accommodate new applications, such as overlay-based content delivery networks. As a consequence, a network can experience significant increases in size and significant levels of long-range, coordinated, distributed activity; furthermore, heterogeneous network technologies, services and applications coexist and interact. Reliance upon traditional, point-to-point, ad hoc measurements to manage such networks is becoming increasingly tenuous. In particular, correlated, simultaneous 1-way measurements are needed, as is the ability to access measurement information stored throughout the network of interest. To address these new challenges, this dissertation proposes OverMon, a new paradigm for edge-to-edge network monitoring systems through the application of overlay techniques. Of particular interest, the problem of significant network overheads caused by normal overlay network techniques has been addressed by constructing overlay networks with topology awareness - the network topology information is derived from interior gateway protocol (IGP) traffic, i.e. OSPF traffic, thus eliminating all overlay maintenance network overhead. Through a prototype that uses overlays to initiate measurement tasks and to retrieve measurement results, systematic evaluation has been conducted to demonstrate the feasibility and functionality of OverMon. The measurement results show that OverMon achieves good performance in scalability, flexibility and extensibility, which are important in addressing the new challenges arising from network system evolution. This work, therefore, contributes an innovative approach of applying overlay techniques to solve realistic network monitoring problems, and provides valuable first-hand experience in building and evaluating such a distributed system.
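    A minimal sketch of the topology-awareness idea, under assumptions not in the abstract: suppose the OSPF advertisements a monitor passively observes have already been reduced to (router, neighbor, cost) tuples; an overlay can then choose neighbors along the IGP topology rather than at random. The sketch uses the networkx library; topology_from_lsas and overlay_neighbors are hypothetical names, and real LSA parsing is considerably more involved.

        import networkx as nx

        def topology_from_lsas(lsa_edges):
            """Build a topology graph from passively observed OSPF router-LSA
            adjacencies given as (router_id, neighbor_id, cost) tuples."""
            g = nx.Graph()
            for rid, nid, cost in lsa_edges:
                g.add_edge(rid, nid, weight=cost)
            return g

        def overlay_neighbors(g, node, k=3):
            """Pick the k topologically closest routers as overlay neighbors,
            so overlay links follow the underlying IGP paths and no separate
            overlay-maintenance probing is needed."""
            dist = nx.single_source_dijkstra_path_length(g, node)
            return [n for _, n in sorted((d, n) for n, d in dist.items()
                                         if n != node)[:k]]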

    Efficient Passive Clustering and Gateway Selection in MANETs

    Passive clustering does not employ control packets to collect topological information in ad hoc networks. In our proposal, we avoid making frequent changes in the cluster architecture due to repeated election and re-election of cluster heads and gateways. Our primary objective has been to make Passive Clustering more practical by employing an optimal number of gateways and reducing the number of rebroadcast packets.

    Markovian Model for Data-Driven P2P Video Streaming Applications

    The purpose of this study is to propose a Markovian model to evaluate general P2P streaming applications, assuming a chunk-delivery approach similar to BitTorrent file sharing applications. The state of the system was defined as the number of useful pieces in a peer's buffer. The model was numerically solved to find the probability distribution of the number of useful pieces. The central theme of this study revolved around answering the question: what is the probability that a peer can play the stream continuously? This is one of the most important metrics for evaluating the performance of a streaming application. By finding the numerical solution of the Markov chain, we found that increasing the number of neighbours enhances the continuity up to a certain threshold, after which the continuity improvement is marginal, which complies with empirical results obtained with DONet, a data-driven overlay network for media streaming. We also found that increasing the buffer length increases the continuity, but there is a trade-off: peers exchange information about the buffer map, so increasing the buffer length increases the overhead. We discussed the continuity for both homogeneous and heterogeneous peers with respect to uploading bandwidth. Then we discussed the case when the first chunk is downloaded but not played out because the playtime deadline was missed. We suggested a general approach for freezing and skipping the playback pointer that can be used to take advantage of the available delay tolerance. Finally, given a specific configuration, we measured the probability of the sliding action, which could be used to initiate peers' adaptation process.
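    To make the modelling approach concrete, the sketch below numerically solves a far simpler chain than the paper's for its stationary distribution and reads off a continuity proxy. Treating piece arrival as a single probability p_fetch (standing in for the combined effect of neighbour count and bandwidth) is an assumption of this sketch, not the paper's model.

        import numpy as np

        def continuity_probability(buffer_len, p_fetch):
            """Toy birth-death chain: states 0..buffer_len count useful pieces
            in the buffer. Each slot the buffer gains one piece with
            probability p_fetch and otherwise loses one to playback
            (reflecting at both ends). Returns the stationary probability
            that playback finds the buffer non-empty."""
            n = buffer_len + 1
            P = np.zeros((n, n))
            for s in range(n):
                P[s, min(s + 1, buffer_len)] += p_fetch
                P[s, max(s - 1, 0)] += 1 - p_fetch
            # Solve pi = pi P subject to sum(pi) = 1 via least squares.
            A = np.vstack([P.T - np.eye(n), np.ones(n)])
            b = np.concatenate([np.zeros(n), [1.0]])
            pi = np.linalg.lstsq(A, b, rcond=None)[0]
            return 1.0 - pi[0]

        print(continuity_probability(buffer_len=20, p_fetch=0.6))

    Even this toy chain shows qualitatively similar diminishing returns: raising p_fetch or buffer_len improves continuity, but with shrinking marginal gains.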

    Flexi-WVSNP-DASH: A Wireless Video Sensor Network Platform for the Internet of Things

    Video capture, storage, and distribution in wireless video sensor networks (WVSNs) critically depends on the resources of the nodes forming the sensor networks. In the era of big data, the Internet of Things (IoT), and distributed demand and solutions, there is a need for multi-dimensional data to be part of the sensor network data that is easily accessible and consumable by humanity as well as machinery. Images and video are expected to become as ubiquitous as scalar data is in traditional sensor networks. The inception of video streaming over the Internet heralded relentless research into effective ways of distributing video in a scalable and cost-effective way. There have been novel implementation attempts across several network layers. Due to the inherent complications of backward compatibility and the need for standardization across network layers, attention has refocused on addressing video distribution at the application layer. As a result, a few video streaming solutions over the Hypertext Transfer Protocol (HTTP) have been proposed. Most notable are Apple's HTTP Live Streaming (HLS) and the Moving Picture Experts Group's Dynamic Adaptive Streaming over HTTP (MPEG-DASH). These frameworks do not address the typical and future WVSN use cases. A highly flexible Wireless Video Sensor Network Platform and compatible DASH (WVSNP-DASH) are introduced. The platform's goal is to usher in video as a data element that can be integrated into traditional and non-Internet networks. A low-cost, scalable node is built from the ground up to be fully compatible with the Internet of Things Machine-to-Machine (M2M) concept, as well as to be easily re-targeted to new applications in a short time. The Flexi-WVSNP design includes a multi-radio node, middleware for sensor operation and communication, a cross-platform client-facing data retriever/player framework, scalable security, and a cohesive but decoupled hardware and software design.
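    For readers unfamiliar with the rate-adaptation step that DASH-family players perform, here is a minimal sketch. It is not WVSNP-DASH's actual player logic; the function name, the 0.8 headroom factor, and the (bitrate, url) representation pairs are assumptions for the example.

        def pick_representation(throughput_bps, representations):
            """Choose the highest-bitrate representation that fits under the
            measured throughput with some headroom, falling back to the
            lowest. representations: list of (bitrate_bps, media_url) pairs,
            the kind of alternatives a DASH MPD manifest advertises."""
            budget = 0.8 * throughput_bps  # margin against rate variance
            viable = [r for r in representations if r[0] <= budget]
            if not viable:
                return min(representations, key=lambda r: r[0])
            return max(viable, key=lambda r: r[0])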

    Pervasive service discovery in low-power and lossy networks

    Pervasive Service Discovery (SD) in Low-power and Lossy Networks (LLNs) is expected to play a major role in realising the Internet of Things (IoT) vision. Such a vision aims to expand the current Internet to interconnect billions of miniature smart objects that sense and act on our surroundings in a way that will revolutionise the future. The pervasiveness and heterogeneity of such low-power devices require robust, automatic, interoperable and scalable deployment and operability solutions. At the same time, the limitations of such constrained devices impose strict challenges regarding complexity, energy consumption, time-efficiency and mobility. This research contributes new lightweight solutions to facilitate automatic deployment and operability of LLNs. It mainly tackles the aforementioned challenges through the proposition of novel component-based, automatic and efficient SD solutions that ensure extensibility and adaptability to various LLN environments. Building upon such an architecture, a first fully-distributed, hybrid push-pull SD solution dubbed EADP (Extensible Adaptable Discovery Protocol) is proposed based on the well-known Trickle algorithm. Motivated by EADP's achievements, new methods to optimise Trickle are introduced. Such methods allow Trickle to encompass a wide range of algorithms and extend its usage to new application domains. One of the new applications is concretized in the TrickleSD protocol, which aims to build automatic, reliable, scalable, and time-efficient SD. To optimise the energy efficiency of TrickleSD, two mechanisms improving broadcast communication in LLNs are proposed. Finally, interoperable standards-based SD in the IoT is demonstrated, and methods combining zero-configuration operations with infrastructure-based solutions are proposed. Experimental evaluations of the above contributions reveal that it is possible to achieve automatic, cost-effective, time-efficient, lightweight, and interoperable SD in LLNs. These achievements open novel perspectives for zero-configuration capabilities in the IoT and promise to bring the 'things' to all people everywhere.
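    Since both EADP and TrickleSD build on Trickle, a sketch of the core Trickle timer (RFC 6206) may help. The event-driven timer wiring is omitted, and the class and method names are illustrative rather than taken from the thesis.

        import random

        class Trickle:
            """Core state machine of the Trickle algorithm (RFC 6206)."""
            def __init__(self, imin, imax, k, transmit):
                self.imin, self.imax, self.k = imin, imax, k
                self.transmit = transmit  # callback: advertise our state
                self.reset()

            def reset(self):
                """On hearing an inconsistency: shrink the interval to
                react quickly again."""
                self.i = self.imin
                self._new_interval()

            def _new_interval(self):
                self.c = 0  # consistent messages heard this interval
                self.t = random.uniform(self.i / 2, self.i)  # fire point

            def hear_consistent(self):
                self.c += 1

            def fire(self):
                """At time t: suppress if enough neighbours already spoke."""
                if self.c < self.k:
                    self.transmit()

            def interval_expired(self):
                """Exponential back-off up to imax, then a fresh interval."""
                self.i = min(2 * self.i, self.imax)
                self._new_interval()

    The suppression rule (transmit only if fewer than k consistent messages were heard) is what keeps energy and channel use low in dense LLNs, the property the thesis's optimisations build on.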

    Joint Proceedings of the Workshops on Quantum Information Technologies and Edge Computing (QuaInT+doors 2021). Zhytomyr, Ukraine, April 11, 2021

    Joint Proceedings of the Workshops on Quantum Information Technologies and Edge Computing (QuaInT+doors 2021). Zhytomyr, Ukraine, April 11, 2021.

    Live Streaming with Gossip

    Peer-to-peer (P2P) architectures have emerged as a popular paradigm to support the dynamic and scalable nature of distributed systems. This is particularly relevant today, given the tremendous increase in the intensity of information exchanged over the Internet. A P2P system is typically composed of participants that are willing to contribute resources, such as memory or bandwidth, to the execution of a collaborative task providing a benefit to all participants. File sharing is probably the most widely used collaborative task, where each participant wants to receive an individual copy of some file. Users collaborate by sending fragments of the file they have already downloaded to other participants. Sharing files containing multimedia content, files that typically reach hundreds of megabytes to gigabytes, introduces a number of challenges. Given typical participant bandwidths of hundreds of kilobits per second to a couple of megabits per second, it is unacceptable to wait until completion of the download before actually being able to use the file, as the download takes a non-negligible time. From the point of view of the participant, getting the (entire) file as fast as possible is typically not good enough. As one example, Video on Demand (VoD) is a scenario where a participant would like to start previewing the multimedia content (the stream), offered by a source, even though only a fraction of it has been received, and then continue the viewing while the rest of the content is being received. Following the same line of reasoning, new applications have emerged that rely on live streaming: the source does not own a file that it wants to share with others, but shares content as soon as it is produced. In other words, the content to distribute is live, not pre-recorded and stored. Typical examples include the broadcasting of live sports events, conferences or interviews. The gossip paradigm is a type of data dissemination that relies on random communication between participants in a P2P system, sharing similarities with the epidemic dissemination of diseases. An epidemic starts to spread when the source randomly chooses a set of communication partners, of size fanout, and infects them, i.e., it shares a rumor with them. This set of participants, in turn, randomly picks fanout communication partners each and infects them, i.e., shares with them the same rumor. This paradigm has many advantages, including fast propagation of rumors, a probabilistic guarantee that each rumor reaches all participants, high resilience to churn (i.e., participants that join and leave) and high scalability. Gossip therefore constitutes a candidate of choice for live streaming in large-scale systems. These advantages, however, come at a price. While disseminating data, gossip creates many duplicates of the same rumor and participants usually receive multiple copies of the same rumor. While this is obviously a feature when it comes to guaranteeing good dissemination of the rumor when churn is high, it is a clear disadvantage when spreading large amounts of multimedia data (i.e., ordered and time-critical) to participants with limited resources, namely upload bandwidth in the case of high-bandwidth content dissemination.
    This thesis therefore investigates if and how the gossip paradigm can be used as a highly efficient communication system for live streaming under the following specific scenarios: (i) where participants can only contribute limited resources, (ii) when these limited resources are heterogeneously distributed among nodes, and (iii) where only a fraction of participants are contributing their fair share of work while others are freeriding. To meet these challenges, this thesis proposes (i) gossip++: a gossip-based protocol especially tailored for live streaming that separates the dissemination of metadata, i.e., the location of the data, from the dissemination of the data itself. By first spreading the location of the content to interested participants, the protocol avoids wasting bandwidth on sending and receiving duplicates of the payload, (ii) HEAP: a fanout adaptation mechanism that enables gossip to adapt participants' contribution to their resources while still preserving its reliability, and (iii) LiFT: a protocol to secure high-bandwidth gossip-based dissemination protocols against freeriders.
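    A minimal sketch of the two mechanisms named above, under stated assumptions: one synchronous gossip round with a uniform fanout, and a HEAP-style fanout scaled by a node's bandwidth relative to the average. The actual HEAP mechanism involves more machinery; the function names, the list-based peer set, and the bandwidth model are assumptions for the example.

        import random

        def gossip_round(infected, peers, fanout):
            """One synchronous round: every participant holding the rumor
            forwards it to fanout peers picked uniformly at random from the
            list peers. Duplicates are possible, which is exactly the
            overhead the thesis targets for large payloads."""
            newly = set()
            for node in infected:
                for target in random.sample(peers, min(fanout, len(peers))):
                    newly.add(target)
            return infected | newly

        def heap_fanout(base_fanout, my_bw, avg_bw):
            """HEAP-style idea: contribute in proportion to capability while
            keeping the system-wide average fanout, and hence gossip's
            probabilistic reliability, roughly unchanged."""
            return max(1, round(base_fanout * my_bw / avg_bw))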

    Adaptive trust and reputation system as a security service in group communications

    Group communications have been facilitating many emerging applications which require packet delivery from one or more senders to multiple receivers. Owing to their multicasting and broadcasting nature, group communications are susceptible to various kinds of attacks. Though a number of proposals have been reported to secure group communications, provisioning security in group communications remains a critical and challenging issue. This work first presents a survey on recent advances in security requirements and services in group communications in wireless and wired networks, and discusses challenges in designing secure group communications in these networks. Effective security services to secure group communications are then proposed. This dissertation also introduces a taxonomy of security services which can be applied to secure group communications, and evaluates existing secure group communications schemes. This dissertation work analyzes a number of vulnerabilities against trust and reputation systems, and proposes a threat model to predict attack behaviors. This work also considers scenarios in which multiple attacking agents actively and collaboratively attack the whole network as well as a specific individual node. The behaviors may be related to both performance issues and security issues. Finally, this work extensively examines and substantiates the security of the proposed trust and reputation system. This work next discusses the proposed trust and reputation system for an anonymous network, referred to as the Adaptive Trust-based Anonymous Network (ATAN). The distributed and decentralized network management in ATAN does not require a central authority, so ATAN alleviates the problem of a single point of failure. In ATAN, the trust and reputation system aims to enhance anonymity by establishing a trust and reputation relationship between the source and the forwarding members. The trust and reputation relationship of any two nodes is adaptive to new information learned by these two nodes or recommended by other trusted nodes. Therefore, packets are anonymously routed from the 'trusted' source to the destination through 'trusted' intermediate nodes, thereby improving the anonymity of communications. In the performance analysis, the ratio of the ATAN header to the data payload is around 0.1, which is relatively small. This dissertation offers analysis of security services for group communications. It illustrates that these security services need to work in concert with each other for group communications to be secure. Furthermore, the adaptive trust and reputation system is proposed to integrate the concept of trust and reputation into communications. Although deploying the trust and reputation system incurs some overhead in terms of storage space, bandwidth and computation cycles, it shows very promising performance that enhances users' confidence in using group communications, and the work concludes that the trust and reputation system should be deployed as another layer of security services to protect group communications against malicious adversaries and attacks.
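    As a sketch of how a relationship "adaptive to new information learned ... or recommended" might look (ATAN's actual update rule is not given in the abstract; the weights and function names below are assumptions), trust can blend first-hand observations with recommendations and decay toward recent evidence.

        def blend_trust(direct, recommended, alpha=0.7):
            """Weight first-hand evidence over recommendations received from
            other trusted nodes; both scores lie in [0, 1]."""
            return alpha * direct + (1 - alpha) * recommended

        def adapt_trust(old, observation, beta=0.3):
            """Exponentially weighted update so that recent behaviour
            gradually outweighs history."""
            return (1 - beta) * old + beta * observation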