
    The Beginnings and Prospective Ending of “End-to-End”: An Evolutionary Perspective On the Internet’s Architecture

    The technology of “the Internet” is not static. Although its “end-to-end” architecture has made this “connection-less” communications system readily “extensible” and highly conducive to innovation in both hardware and software applications, there are strong pressures for engineering changes. Some of these are intended to support novel transport services (e.g. voice telephony, real-time video); others would address drawbacks that appeared with the opening of the Internet to public and commercial traffic - e.g., the difficulties of blocking delivery of offensive content, suppressing malicious actions (e.g. “denial of service” attacks), and pricing bandwidth usage to reduce congestion. The expected gains from making “improvements” in the core of the network should be weighed against the loss of the social and economic benefits that derive from the “end-to-end” architectural design. Even where technological “fixes” can be placed at the networks’ edges, the option remains to search for alternative, institutional mechanisms of governing conduct in cyberspace.

    Distributed Rate Allocation Policies for Multi-Homed Video Streaming over Heterogeneous Access Networks

    We consider the problem of rate allocation among multiple simultaneous video streams sharing multiple heterogeneous access networks. We develop and evaluate an analytical framework for optimal rate allocation based on the observed available bit rate (ABR) and round-trip time (RTT) over each access network and the video distortion-rate (DR) characteristics. The rate allocation is formulated as a convex optimization problem that minimizes the total expected distortion of all video streams. We present a distributed approximation of its solution and compare its performance against H-infinity optimal control and two heuristic schemes based on TCP-style additive-increase multiplicative-decrease (AIMD) principles. The various rate allocation schemes are evaluated in simulations of multiple high-definition (HD) video streams sharing multiple access networks. Our results demonstrate that, in comparison with heuristic AIMD-based schemes, both media-aware allocation and H-infinity optimal control benefit from proactive congestion avoidance and reduce the average packet loss rate from 45% to below 2%. The improvement in average received video quality ranges between 1.5 and 10.7 dB in PSNR for various background traffic loads and video playout deadlines. Media-aware allocation further exploits its knowledge of the video DR characteristics to achieve a more balanced video quality among all streams. Comment: 12 pages, 22 figures
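
    For concreteness, the sketch below sets up a media-aware rate allocation of the kind described above as a convex program, assuming a standard parametric distortion-rate model D(R) = D0 + theta / (R - R0) and per-network capacities taken from the observed ABR. The model, parameter values, and solver choice are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of media-aware rate allocation (not the authors' code).
# Assumes a parametric distortion-rate model D(R) = D0 + theta / (R - R0) per
# stream and allocates rates across access networks so that the total expected
# distortion is minimized, subject to per-network capacity limits taken from
# the observed available bit rate (ABR).
import numpy as np
from scipy.optimize import minimize

# Hypothetical DR parameters for 3 streams: (D0, theta, R0) with rates in kbps
dr_params = [(1.0, 3000.0, 50.0), (1.5, 4500.0, 80.0), (0.8, 2500.0, 40.0)]
abr = np.array([1500.0, 2500.0])  # observed ABR per access network, kbps

n_streams, n_nets = len(dr_params), len(abr)

def total_distortion(x):
    # x[i, j] = rate of stream i sent over network j
    rates = x.reshape(n_streams, n_nets).sum(axis=1)  # aggregate rate per stream
    return sum(D0 + theta / (r - R0)
               for (D0, theta, R0), r in zip(dr_params, rates))

# Capacity constraints: total rate carried by each network must not exceed its ABR
constraints = [{"type": "ineq",
                "fun": lambda x, j=j: abr[j] - x.reshape(n_streams, n_nets)[:, j].sum()}
               for j in range(n_nets)]

# Lower bounds keep each aggregate stream rate above its R0 so D(R) stays finite
x0 = np.full(n_streams * n_nets, 200.0)
bounds = [(60.0, None)] * (n_streams * n_nets)

res = minimize(total_distortion, x0, bounds=bounds, constraints=constraints)
print(res.x.reshape(n_streams, n_nets))  # per-stream, per-network rate allocation
```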

    Merlin: A Language for Provisioning Network Resources

    This paper presents Merlin, a new framework for managing resources in software-defined networks. With Merlin, administrators express high-level policies using programs in a declarative language. The language includes logical predicates to identify sets of packets, regular expressions to encode forwarding paths, and arithmetic formulas to specify bandwidth constraints. The Merlin compiler uses a combination of advanced techniques to translate these policies into code that can be executed on network elements, including a constraint solver that allocates bandwidth using parameterizable heuristics. To facilitate dynamic adaptation, Merlin provides mechanisms for delegating control of sub-policies and for verifying that modifications made to sub-policies do not violate global constraints. Experiments demonstrate the expressiveness and scalability of Merlin on real-world topologies and applications. Overall, Merlin simplifies network administration by providing high-level abstractions for specifying network policies and a scalable infrastructure for enforcing them.
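
    The following is a hypothetical illustration of the three policy ingredients the abstract names (a packet predicate, a path regular expression, and a bandwidth constraint), modeled as a plain data structure. This is not Merlin's actual DSL syntax or compiler API; the names and example values are invented for exposition.

```python
# Hypothetical model of a Merlin-style policy statement (illustrative only;
# not Merlin's actual syntax or API).
from dataclasses import dataclass
from typing import Optional

@dataclass
class PolicyStatement:
    predicate: str                        # logical predicate selecting packets
    path_regex: str                       # regular expression over path elements
    min_bw_mbps: Optional[float] = None   # guaranteed bandwidth, if any
    max_bw_mbps: Optional[float] = None   # bandwidth cap, if any

# Example: HTTP traffic must traverse a DPI middlebox and is capped at 100 Mb/s,
# while bulk backup traffic gets at least 50 Mb/s on any path.
policies = [
    PolicyStatement(predicate="ip.proto = tcp and tcp.dst = 80",
                    path_regex=".* dpi .*", max_bw_mbps=100.0),
    PolicyStatement(predicate="ip.proto = tcp and tcp.dst = 873",
                    path_regex=".*", min_bw_mbps=50.0),
]

# A compiler in the style the abstract describes would map each statement to
# forwarding rules plus a bandwidth-allocation problem for a constraint solver.
for p in policies:
    print(p)
```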

    Assessing and augmenting SCADA cyber security: a survey of techniques

    SCADA systems monitor and control critical infrastructures of national importance such as power generation and distribution, water supply, transportation networks, and manufacturing facilities. The pervasiveness, miniaturisation, and declining cost of internet connectivity have transformed these systems from strictly isolated to highly interconnected networks. The connectivity provides immense benefits such as reliability, scalability and remote connectivity, but at the same time exposes an otherwise isolated and secure system to global cyber security threats. This inevitable transformation to highly connected systems thus necessitates effective security safeguards to be in place, as any compromise or downtime of SCADA systems can have severe economic, safety and security ramifications. One way to ensure vital asset protection is to adopt a viewpoint similar to an attacker's to determine weaknesses and loopholes in defences. Such a mindset helps to identify and fix potential breaches before their exploitation. This paper surveys tools and techniques to uncover SCADA system vulnerabilities. A comprehensive review of the selected approaches is provided along with their applicability.

    Wireless Sensor Network transport protocol: A critical review

    Transport protocols for Wireless Sensor Networks (WSNs) play a vital role in achieving high performance together with longevity of the network. Researchers are continuously contributing new transport layer protocols based on different principles and architectures, enabling different combinations of technical features. The uniqueness of each new protocol more or less lies in these functional features, which can be commonly classified based on their proficiency in fulfilling congestion control, reliability support, and prioritization. The performance of these protocols has been evaluated using dissimilar sets of experimental/simulation parameters, so there is no well-defined benchmark for experimental/simulation settings. Researchers working in this area have to compare the performance of a new protocol with existing protocols to show that the new protocol is better. However, one of the major challenges they face is investigating the performance of all the existing protocols, which have been tested in different simulation environments. This underscores the significance of a well-defined benchmark for experimental/simulation settings: if future researchers simulate their protocols according to a standard set of simulation/experimental settings, the performance of those protocols can be directly compared using only the published simulation results. This article offers a twofold contribution to support researchers working in the area of WSN transport protocol design. First, we extensively review the technical features of existing transport protocols and suggest a generic framework for a WSN transport protocol, which offers a strong groundwork for new researchers to identify open research issues. Second, we analyse the experimental settings, focused application areas and the addressed performance criteria of existing protocols, and thus suggest a benchmark of experimental/simulation settings for evaluating prospective transport protocols.

    Layering as Optimization Decomposition: Questions and Answers

    Network protocols in layered architectures have historically been obtained on an ad-hoc basis, and many of the recent cross-layer designs are conducted through piecemeal approaches. Network protocols may instead be holistically analyzed and systematically designed as distributed solutions to global optimization problems in the form of generalized Network Utility Maximization (NUM), providing insight into what they optimize and into the structures of network protocol stacks. In the form of 10 Questions and Answers, this paper presents a short survey of recent efforts towards a systematic understanding of "layering" as "optimization decomposition". The overall communication network is modeled by a generalized NUM problem, each layer corresponds to a decomposed subproblem, and the interfaces among layers are quantified as functions of the optimization variables coordinating the subproblems. Furthermore, there are many alternative decompositions, each leading to a different layering architecture. Industry adoption of this unifying framework has also started. Here we summarize the current status of horizontal decomposition into distributed computation and vertical decomposition into functional modules such as congestion control, routing, scheduling, random access, power control, and coding. We also discuss under-explored future research directions in this area. More importantly than proposing any particular cross-layer design, this framework works towards a mathematical foundation of network architectures and the design process of modularization.
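
    For concreteness, the basic NUM problem underlying this framework and its dual decomposition can be written as follows. This is the standard textbook formulation, not a result specific to this paper.

```latex
% Basic NUM: sources s with rates x_s and utilities U_s; links l with
% capacities c_l; routing matrix R with R_{ls} = 1 if source s uses link l.
\[
  \max_{x \ge 0} \; \sum_{s} U_s(x_s)
  \quad \text{subject to} \quad \sum_{s} R_{ls}\, x_s \le c_l \quad \forall l .
\]
% Dual decomposition: with link prices \lambda_l \ge 0, the Lagrangian
% separates per source. Each source solves its own subproblem,
\[
  x_s(\lambda) \;=\; \arg\max_{x_s \ge 0} \Big( U_s(x_s) - x_s \sum_{l} R_{ls}\,\lambda_l \Big),
\]
% while each link updates its price by a projected subgradient step,
\[
  \lambda_l \;\leftarrow\; \Big[ \lambda_l + \alpha \Big( \sum_{s} R_{ls}\, x_s(\lambda) - c_l \Big) \Big]^{+} .
\]
% This is the congestion-control reading of "layering as optimization
% decomposition": the source subproblem plays the role of the transport layer,
% and the link price update corresponds to congestion feedback.
```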

    Congestion Prediction in Internet of Things Network using Temporal Convolutional Network: A Centralized Approach

    The unprecedented ballooning of network traffic flow, specifically Internet of Things (IoT) network traffic, has placed significant congestion stress on today's Internet. Non-recurring network traffic flows may be caused by temporary disruptions such as packet drops, poor quality of service, delay, etc. Hence, network traffic flow estimation is important in IoT networks to predict congestion. Because the data in IoT networks is collected from a large number of diverse devices with differing data formats and complex correlations, the generated data is heterogeneous and nonlinear in nature. Conventional machine learning approaches are unable to deal with nonlinear datasets and suffer from misclassification of real network traffic due to overfitting. It is therefore difficult for conventional machine learning tools such as shallow neural networks to predict congestion accurately. The accuracy of congestion prediction algorithms plays an important role in controlling congestion by regulating the sending rate of the source. Various deep learning methods (LSTM, CNN, GRU, etc.) have been considered for designing network traffic flow predictors and have shown promising results. In this work, we propose a novel congestion predictor for IoT that uses a Temporal Convolutional Network (TCN). Furthermore, we use the Taguchi method to optimize the TCN model, which reduces the number of experimental runs. We compare the TCN with four other deep learning-based models in terms of Mean Absolute Error (MAE) and Mean Relative Error (MRE). The experimental results show that the TCN-based deep learning framework achieves improved performance, with 95.52% accuracy in predicting network congestion. Further, we design a home IoT network testbed to capture real network traffic flows, as no standard dataset is available.
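
    As a minimal sketch of the dilated causal convolution stack that a TCN-based traffic-flow predictor uses, the code below maps a window of past traffic measurements to a prediction of the next value. It illustrates the general technique only; the architecture, layer sizes, and dilations are illustrative assumptions, not the authors' model or hyperparameters.

```python
# Minimal TCN-style predictor sketch in PyTorch (illustrative; not the paper's model).
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """1D convolution with left padding so outputs depend only on past inputs."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                        # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))  # pad on the left only
        return self.conv(x)

class TinyTCN(nn.Module):
    def __init__(self, channels=32, levels=3, kernel_size=3):
        super().__init__()
        layers, in_ch = [], 1
        for i in range(levels):                  # dilations 1, 2, 4, ...
            layers += [CausalConv1d(in_ch, channels, kernel_size, dilation=2 ** i),
                       nn.ReLU()]
            in_ch = channels
        self.tcn = nn.Sequential(*layers)
        self.head = nn.Linear(channels, 1)       # predict the next traffic value

    def forward(self, x):                        # x: (batch, 1, window_length)
        h = self.tcn(x)
        return self.head(h[:, :, -1])            # use the last time step's features

# Example: predict the next flow value from a window of 64 past measurements.
model = TinyTCN()
window = torch.randn(8, 1, 64)                   # batch of 8 hypothetical traffic windows
print(model(window).shape)                       # -> torch.Size([8, 1])
```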

    A Routing Delay Prediction Based on Packet Loss and Explicit Delay Acknowledgement for Congestion Control in MANET

    In Mobile Ad hoc Networks (MANETs), congestion control and prevention are demanding because of node mobility and dynamic topology. Congestion occurs primarily under large traffic volumes, when the rate of incoming data traffic exceeds the rate at which a node can forward packets. The resulting fluctuations in sending rate cause routing delays and low throughput. Rate control is a significant concern in streaming applications, especially in wireless networks. The TCP-friendly rate control method is widely recognized as a rate control mechanism for wired networks and is effective in minimizing packet loss (PL) in the event of congestion. In this paper, we propose a routing delay prediction mechanism based on PL and Explicit Delay Acknowledgement (EDA) for data rate and congestion control in MANETs, which regulates the sending rate to minimize packet loss and improve throughput. The experiment is performed over a reactive routing protocol to reduce packet loss and jitter and to improve throughput.
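
    As background for the TCP-friendly rate control baseline the abstract mentions, the sketch below evaluates the standard TFRC throughput equation from RFC 5348, which gives the allowed sending rate from the observed loss event rate and RTT. This is only the generic baseline; the paper's PL/EDA-based delay prediction is not shown, and the parameter values are illustrative.

```python
# TCP-friendly rate control (TFRC) throughput equation (RFC 5348) used as a
# reference sending rate. Illustrative sketch, not the paper's mechanism.
import math

def tfrc_rate(segment_size_bytes, rtt_s, loss_event_rate, b=1, t_rto_s=None):
    """Allowed sending rate in bytes/second given RTT and loss event rate."""
    if loss_event_rate <= 0:
        return float("inf")   # no observed loss: the equation does not bound the rate
    if t_rto_s is None:
        t_rto_s = 4 * rtt_s   # common simplification recommended in RFC 5348
    p, r, s = loss_event_rate, rtt_s, segment_size_bytes
    denom = (r * math.sqrt(2 * b * p / 3)
             + t_rto_s * 3 * math.sqrt(3 * b * p / 8) * p * (1 + 32 * p ** 2))
    return s / denom

# Example: 1200-byte packets, 80 ms RTT, 1% loss event rate.
print(f"{tfrc_rate(1200, 0.08, 0.01) / 1000:.1f} kB/s")
```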