
    Game theory for collaboration in future networks

    Cooperative strategies have great potential to improve network performance and spectrum utilization in future networking environments. This new network-management paradigm, however, requires a novel design and analysis framework targeting a highly flexible networking solution with a distributed architecture. Game Theory is well suited to this task, since it is a comprehensive mathematical tool for modeling the highly complex interactions among distributed and intelligent decision makers. In this way, the most suitable management policies for the diverse players (e.g. content providers, cloud providers, home providers, brokers, network providers or users) can be found to optimize the performance of the overall network infrastructure. In this chapter, the authors discuss several Game Theory models/concepts that are highly relevant for enabling collaboration among the diverse players, using different ways to incentivize it, namely through pricing or reputation. In addition, the authors highlight several related open problems, such as the lack of proper models for dynamic and incomplete-information games in this area.
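    The reputation-based incentive the chapter surveys can be illustrated with a toy repeated game. The sketch below is not taken from the chapter; the payoff matrix and the reputation bonus are hypothetical values chosen only to show how a reputation-weighted payoff can make sustained cooperation more profitable than defection:

```python
# Illustrative sketch: a repeated Prisoner's-Dilemma-style game in which a
# reputation mechanism rewards players with a history of cooperation.
# All payoff values and the bonus weight are hypothetical.

PAYOFF = {  # (my_action, their_action) -> my per-round payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def total_payoff(my_actions, their_actions, reputation_bonus=2):
    """Sum per-round payoffs, adding a bonus each round scaled by the
    fraction of my past rounds in which I cooperated (my reputation)."""
    total, cooperations = 0, 0
    for i, (mine, theirs) in enumerate(zip(my_actions, their_actions)):
        reputation = cooperations / i if i else 0.0
        total += PAYOFF[(mine, theirs)] + reputation_bonus * reputation
        cooperations += mine == "C"
    return total

# Against a tit-for-tat partner, always cooperating keeps both the mutual
# (C, C) payoff and the reputation bonus; defecting forfeits both.
rounds = 10
coop = total_payoff(["C"] * rounds, ["C"] * rounds)
defect = total_payoff(["D"] * rounds, ["C"] + ["D"] * (rounds - 1))
```

    Here cooperation strictly dominates once the bonus is large enough, which is the qualitative effect a reputation mechanism is meant to achieve.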

    JANUS: A Framework for Distributed Management of Wireless Mesh Networks

    Abstract — Wireless Mesh Networks (WMNs) are emerging as a potentially attractive access architecture for metropolitan-scale networks. While research on WMNs has to a large extent been confined to the study of efficient routing protocols, there is a clear need to envision new network management tools able to fully exploit the peculiarities of WMNs. In particular, a new generation of middleware tools for network monitoring and profiling must be introduced in order to speed up development and testing of novel protocol architectures. Currently, management functionalities are developed using conventional centralized approaches. The distributed and self-organizing nature of WMNs suggests a transition from network monitoring to network sensing. In this work, we propose JANUS, a novel framework for distributed monitoring of WMNs. We describe the JANUS architecture, present a possible implementation based on open-source software and report some experimental measurements carried out on a small-scale testbed. Index Terms — wireless mesh networks, network management, distributed hash table, overlay networks, publish-subscribe systems
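    The combination of a distributed hash table and publish-subscribe that the abstract lists can be sketched roughly as follows. The class, topic names and ring-style hashing below are illustrative assumptions, not JANUS's actual design:

```python
# A minimal sketch of "network sensing": monitoring data is published to
# topics, and a DHT-style hash maps each topic to the mesh node responsible
# for it, so no central monitoring server is needed.
import hashlib

class MiniDHTPubSub:
    def __init__(self, node_ids):
        self.node_ids = sorted(node_ids)
        self.subscribers = {}   # topic -> list of callbacks
        self.store = {}         # (node_id, topic) -> list of measurements

    def responsible_node(self, topic):
        # Hash the topic onto a 32-bit ring and pick the first node id
        # at or after it, wrapping around like consistent hashing.
        h = int(hashlib.sha1(topic.encode()).hexdigest(), 16) % (2 ** 32)
        for nid in self.node_ids:
            if nid >= h:
                return nid
        return self.node_ids[0]  # wrap around the ring

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, measurement):
        node = self.responsible_node(topic)
        self.store.setdefault((node, topic), []).append(measurement)
        for callback in self.subscribers.get(topic, []):
            callback(measurement)
        return node
```

    The key property is that every publisher deterministically reaches the same responsible node for a topic, which is what lets monitoring state stay distributed yet findable.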

    Foggy clouds and cloudy fogs: a real need for coordinated management of fog-to-cloud computing systems

    The recent advances in cloud services technology are fueling a plethora of information technology innovation, including networking, storage, and computing. Today, various flavors of IoT, cloud computing, and so-called fog computing have evolved, the last a concept referring to capabilities of edge devices and users' clients to compute, store, and exchange data among each other and with the cloud. Although the rapid pace of this evolution was not easily foreseeable, today each piece of it facilitates and enables the deployment of what we commonly refer to as a smart scenario, including smart cities, smart transportation, and smart homes. As most current cloud, fog, and network services run simultaneously in each scenario, we observe that we are at the dawn of what may be the next big step in the cloud computing and networking evolution, whereby services might be executed at the network edge, both in parallel and in a coordinated fashion, as well as supported by the unstoppable technology evolution. As edge devices become richer in functionality and smarter, embedding capacities such as storage or processing, as well as new functionalities, such as decision making, data collection, forwarding, and sharing, a real need is emerging for coordinated management of fog-to-cloud (F2C) computing systems. This article introduces a layered F2C architecture, its benefits and strengths, as well as the arising open and research challenges, making the case for the real need for their coordinated management. Our architecture, the illustrative use case presented, and a comparative performance analysis, albeit conceptual, all clearly show the way forward toward a new IoT scenario with a set of existing and unforeseen services provided on highly distributed and dynamic compute, storage, and networking resources, bringing together heterogeneous and commodity edge devices, emerging fogs, as well as conventional clouds.
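    One way to picture the coordinated F2C management the article argues for is a simple layered placement policy: each service is assigned to the lowest layer (closest to the edge) that satisfies its latency bound and still has capacity. The layer parameters, service names and greedy rule below are purely hypothetical:

```python
# Conceptual sketch of layered fog-to-cloud (F2C) service placement.
# Latency and capacity figures are made up for illustration.

LAYERS = [  # ordered edge-first
    {"name": "fog", "latency_ms": 5, "capacity": 2},
    {"name": "cloud", "latency_ms": 80, "capacity": 1000},
]

def place(services, layers=LAYERS):
    """Assign each service (tightest latency bound first) to the first
    layer that meets its bound and still has a free slot."""
    free = {layer["name"]: layer["capacity"] for layer in layers}
    assignment = {}
    for name, max_latency_ms in sorted(services, key=lambda s: s[1]):
        for layer in layers:
            if layer["latency_ms"] <= max_latency_ms and free[layer["name"]] > 0:
                free[layer["name"]] -= 1
                assignment[name] = layer["name"]
                break
        else:
            assignment[name] = None  # no layer satisfies the bound
    return assignment

# Hypothetical services as (name, maximum tolerable latency in ms):
services = [("braking", 10), ("video", 100), ("sensing", 10), ("lidar", 10)]
assignment = place(services)
```

    Even this toy policy shows why coordination matters: with uncoordinated placement, a latency-tolerant service could occupy scarce fog capacity that a latency-critical one needs.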

    Network Optimizations for Distributed Storage Networks

    Distributed file systems enable the reliable storage of exabytes of information on thousands of servers distributed throughout a network. These systems achieve reliability and performance by storing three or more copies of data in different locations across the network. The management of these copies of data is commonly handled by intermediate servers that track and coordinate the placement of data in the network. This introduces potential network bottlenecks, as multiple transfers to fast storage nodes can saturate the network links connecting intermediate servers to the storage. The advent of open Network Operating Systems presents an opportunity to alleviate this bottleneck, as it is now possible to treat network elements as intermediate nodes in this distributed file system and have them perform the task of replicating data across storage nodes. In this thesis, we propose a new design paradigm for distributed file systems, driven by a new fundamental component of the system which runs on network elements such as switches or routers. We describe the component’s architecture and how it can be integrated into existing distributed file systems to increase their performance. To measure this performance increase over current approaches, we emulate a distributed file system by creating a block-level storage array distributed across multiple iSCSI targets presented in a network. Furthermore, we emulate more complicated redundancy schemes likely to be used in distributed file systems in the future to determine what effect this approach may have on those systems and what benefits it offers. We find that this new component offers a decrease in request latency proportional to the number of storage nodes involved in the request. We also find that the benefits of this approach are limited by the ability of switch hardware to process incoming data from the request, but that these limitations can be surmounted through the proposed design paradigm.
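    The intuition behind the reported latency decrease can be sketched with back-of-the-envelope arithmetic: with client- or server-driven replication the bottleneck uplink carries one copy per replica, while a replicating switch receives the data only once and fans it out. The block size and link speed below are illustrative assumptions, not measurements from the thesis:

```python
# Rough model of write time over the bottleneck uplink for one data block.
# block_mb and link_mbps are hypothetical values for illustration only.

def uplink_seconds(copies_over_uplink, block_mb=64, link_mbps=1000):
    """Time to serialize `copies_over_uplink` copies of one block
    over a single uplink, in seconds (MB -> megabits via * 8)."""
    return copies_over_uplink * block_mb * 8 / link_mbps

# Triple replication, matching the text's "three or more copies":
client_side = uplink_seconds(3)  # the writer pushes every copy itself
in_network = uplink_seconds(1)   # a replicating switch fans the copy out
```

    The saving scales with the replica count, which is consistent with the abstract's finding that the latency decrease is proportional to the number of storage nodes involved in the request.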

    Cognition-Based Networks: A New Perspective on Network Optimization Using Learning and Distributed Intelligence

    IEEE Access, vol. 3, 2015, article no. 7217798, pp. 1512-1530 (Open Access). Authors: Zorzi, M., Zanella, A., Testolin, A., De Filippo De Grazia, M., and Zorzi, M. (Department of Information Engineering and Department of General Psychology, University of Padua, Padua, Italy; IRCCS San Camillo Foundation, Venice-Lido, Italy). Abstract: In response to the new challenges in the design and operation of communication networks, and taking inspiration from how living beings deal with complexity and scalability, in this paper we introduce an innovative system concept called COgnition-BAsed NETworkS (COBANETS). The proposed approach develops around the systematic application of advanced machine learning techniques and, in particular, unsupervised deep learning and probabilistic generative models for system-wide learning, modeling, optimization, and data representation. Moreover, in COBANETS, we propose to combine this learning architecture with the emerging network virtualization paradigms, which make it possible to actuate automatic optimization and reconfiguration strategies at the system level, thus fully unleashing the potential of the learning approach. Compared with the past and current research efforts in this area, the technical approach outlined in this paper is deeply interdisciplinary and more comprehensive, calling for the synergic combination of expertise of computer scientists, communications and networking engineers, and cognitive scientists, with the ultimate aim of breaking new ground through a profound rethinking of how the modern understanding of cognition can be used in the management and optimization of telecommunication networks.

    Webchain: Verifiable Citations and References for the World Wide Web

    Readers’ capability to consider and assess sources is imperative. Digital preservation efforts, however, have mostly neglected citation provenance, which is a necessity for transparent source verification. We therefore present Webchain, a new system enabling verifiable citations and references on the World Wide Web. Its architecture combines a distributed ledger with secure timestamping to ensure the history of creation, ownership, and referential integrity of online resources. With Webchain, readers can independently detect content manipulation by verifying authenticity, integrity, and time consistency. At the same time, authors gain a proof of existence for referenced articles. Webchain extends a well-known distributed timestamping scheme to handle an open and dynamic network topology by providing a solution for membership management. We examine the security of our approach, particularly regarding forging attacks. Our results show that we are able to render such attacks infeasible, even in the face of a powerful attacker.
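    The hash-chain idea underlying verifiable citations can be sketched as follows. The record fields and chaining rule are an illustration of the general technique, not Webchain's actual ledger format:

```python
# Minimal hash-chained citation ledger: each record commits to the cited
# content's hash, a timestamp, and the previous record's hash, so tampering
# with a cited resource or with the chain's history is detectable.
import hashlib, json

def record_hash(record):
    return hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()

def append_citation(chain, url, content, timestamp):
    prev = record_hash(chain[-1]) if chain else "0" * 64
    record = {
        "url": url,
        "content_hash": hashlib.sha256(content).hexdigest(),
        "timestamp": timestamp,
        "prev": prev,
    }
    chain.append(record)
    return record

def verify(chain, url, content):
    """Check that `content` matches what was recorded for `url` and that
    every record still links to its predecessor (history integrity)."""
    digest = hashlib.sha256(content).hexdigest()
    prev, ok = "0" * 64, False
    for record in chain:
        if record["prev"] != prev:
            return False  # chain history was rewritten
        if record["url"] == url and record["content_hash"] == digest:
            ok = True
        prev = record_hash(record)
    return ok
```

    A real system additionally needs distributed timestamping and membership management, which is exactly the part of the problem the paper addresses.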

    A distributed intelligent network based on CORBA and SCTP

    The telecommunications services marketplace is undergoing radical change due to the rapid convergence and evolution of telecommunications and computing technologies. Traditionally, telecommunications service providers have delivered network services through Intelligent Network (IN) platforms. The IN may be characterised as envisioning centralised processing of distributed service requests from a limited number of quasi-proprietary nodes with inflexible connections to the network management system and third party networks. The nodes are inter-linked by the operator’s highly reliable but expensive SS7 network. To leverage this technology as the core of new multi-media services, several key technical challenges must be overcome. These include: integration of the IN with new technologies for service delivery, enhanced integration with network management services, enabling third party service providers and reducing operating costs by using more general-purpose computing and networking equipment. In this thesis we present a general architecture that defines the framework and techniques required to realise an open, flexible, middleware (CORBA)-based distributed intelligent network (DIN). This extensible architecture naturally encapsulates the full range of traditional service network technologies, for example IN (fixed network), GSM-MAP and CAMEL. Fundamental to this architecture are mechanisms for inter-working with the existing IN infrastructure, to enable gradual migration within a domain and inter-working between IN and DIN domains. The DIN architecture complements current research on third party service provision, service management and integration of Internet-based servers. Given the dependence of such a distributed service platform on the transport network that links computational nodes, this thesis also includes a detailed study of the emergent IP-based telecommunications transport protocol of choice, Stream Control Transmission Protocol (SCTP).
In order to comply with the rigorous performance constraints of this domain, prototyping, simulation and analytic modelling of the DIN based on SCTP have been carried out. This includes the first detailed analysis of the operation of SCTP congestion controls under a variety of network conditions, leading to a number of suggested improvements in the operation of the protocol. Finally, we describe a new analytic framework for dimensioning networks with competing multi-homed SCTP flows in a DIN. This framework can be used for any multi-homed SCTP network, e.g. one transporting SIP or HTTP.
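The congestion-control behaviour analysed in the thesis follows TCP-like rules (RFC 4960 specifies comparable mechanisms for SCTP). A deliberately simplified model, with growth applied per event rather than per acknowledged byte and window sizes in MTU-sized units, might look like:

```python
# Toy congestion-window model: exponential growth in slow start, linear
# growth in congestion avoidance, multiplicative decrease on loss.
# This is a pedagogical simplification, not the RFC 4960 state machine.

def evolve_cwnd(events, cwnd=1, ssthresh=64):
    """Track cwnd (in MTU-sized units) over a list of 'ack'/'loss' events."""
    history = []
    for event in events:
        if event == "loss":
            ssthresh = max(cwnd // 2, 2)   # multiplicative decrease
            cwnd = ssthresh                # fast-recovery-style restart
        elif cwnd < ssthresh:
            cwnd *= 2                      # slow start: exponential growth
        else:
            cwnd += 1                      # congestion avoidance: linear
        history.append(cwnd)
    return history
```

    In the multi-homed case the thesis studies, one such window evolves per destination address, which is what makes dimensioning networks with competing multi-homed flows analytically interesting.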

    A novel anomaly detection mechanism for Open radio access networks with Peer-to-Peer Federated Learning

    Abstract. The open radio access network (O-RAN) has been recognized as a revolutionary architecture to support the different classes of wireless services needed in fifth-generation (5G) and beyond-5G networks, which have various reliability, bandwidth, and latency requirements. It provides significant advantages based on the disaggregation and cloudification of the components, the standardized open interfaces, and the introduction of intelligence. However, these new features, including the openness and the distributed nature of the O-RAN architecture, have created new threat surfaces compared with the conventional RAN architecture and require complex anomaly detection mechanisms. With the introduction of RAN intelligent controllers (RICs) in the O-RAN architecture, it is possible to utilize advanced artificial intelligence (AI) and machine learning (ML) algorithms based on closed control loops to perform automated security management in a data-driven manner, including detecting anomalies. In this thesis, the use of Federated Learning (FL) for anomaly detection in the O-RAN architecture is investigated, as FL can further preserve data privacy in a sensitive data-processing system such as the RAN. A Peer-to-Peer (P2P) FL-based anomaly detection mechanism is proposed for the O-RAN architecture, and a comprehensive analysis of four variants of P2P FL techniques is provided. Three of the models are based on secure multiparty average computing, and the other is a homomorphic averaging-based model that provides protection against semi-honest local trainers. Moreover, the proposed models are simulated using the UNSW-NB15 dataset in a Python environment, and their performance is tested using the same dataset. The simulation results indicate that all the proposed models achieve improved accuracy and F1-score values.
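    The secure multiparty average computation mentioned for three of the four variants can be illustrated with additive secret sharing over scalars. The modulus and update values below are simplifying assumptions, and a real FL system averages whole parameter vectors rather than single numbers:

```python
# Sketch of secure multiparty averaging via additive secret sharing: each
# peer splits its local model update into random shares, one per peer, so
# no peer ever sees another peer's raw update, yet the exact average of
# all updates can still be reconstructed.
import random

def make_shares(secret, n_peers, modulus=2 ** 31):
    """Split `secret` into n additive shares that sum to it mod `modulus`."""
    shares = [random.randrange(modulus) for _ in range(n_peers - 1)]
    shares.append((secret - sum(shares)) % modulus)
    return shares

def secure_average(local_updates, modulus=2 ** 31):
    n = len(local_updates)
    # Each peer i produces n shares; peer j only ever receives share [i][j].
    all_shares = [make_shares(u, n, modulus) for u in local_updates]
    # Each peer sums the shares it received; the partial sums are public.
    partial = [sum(all_shares[i][j] for i in range(n)) % modulus
               for j in range(n)]
    total = sum(partial) % modulus  # equals the sum of all secrets
    return total / n

updates = [12, 20, 4]            # hypothetical scalar model updates
avg = secure_average(updates)    # matches the plain average, 12.0
```

    The homomorphic-averaging variant reaches the same goal by summing encrypted updates instead of shares; the privacy argument against semi-honest trainers is analogous.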

    New information model that allows logical distribution of the control plane for software-defined networking: the distributed active information model (DAIM) can enable an effective distributed control plane for SDN with OpenFlow as the standard protocol

    University of Technology Sydney, Faculty of Engineering and Information Technology. In recent years, technological innovations in communication networks, computing applications and information modelling have been increasing significantly in complexity and functionality, driven by the needs of the modern world. As large-scale networks become more complex and difficult to manage, traditional network management paradigms struggle to cope with traffic bottlenecks of traditional switch- and routing-based networking deployments. Recently, there has been a growing movement led by both industry and academia aiming to develop mechanisms for a management paradigm that separates the control plane from the data plane. A new emerging network management paradigm called Software-Defined Networking (SDN) is an attempt to overcome the bottlenecks of traditional data networks. SDN offers great potential to ease network management, and the OpenFlow protocol in particular is often referred to as a radical new idea in networking. SDN adopts the concept of programmable networks, which separate control decisions from the forwarding hardware, thus enabling the creation of a standardised programming interface. Flow computation is managed by a centralised controller, with the switches only performing simple forwarding functions. This allows researchers to implement their protocols and algorithms to control data packets without impacting the production network. Therefore, the emerging OpenFlow technology provides more flexible control of the network infrastructure through cost-effective, open and programmable components of the network architecture. SDN is very efficient at moving the computational load away from the forwarding plane and into a centralised controller, but a physically centralised controller can represent a single point of failure for the entire network.
This centralisation approach brings optimality; however, it creates additional problems of its own, including single-domain restriction, scalability, robustness and the ability for switches to adapt well to changes in local environments. This research aims at developing a new distributed active information model (DAIM) to allow programmability of network elements and local decision-making processes that will essentially contribute to complex distributed networks. DAIM offers adaptation algorithms embedded with intelligent information objects to be applied to such complex systems. By applying the DAIM model and these adaptation algorithms, managing complex systems in any distributed network environment can become scalable, adaptable and robust. The DAIM model is integrated into the SDN architecture at the level of switches to provide a logically distributed control plane that can manage the flow setups. The proposal moves the computational load to the switches, which allows them to adapt dynamically according to real-time demands and needs. The DAIM model can enhance information objects and network devices to make their local decisions through its active performance, and thus significantly reduce the workload of a centralised SDN/OpenFlow controller. In addition to the introduction (Chapter 1) and the comprehensive literature reviews (Chapter 2), the first part of this dissertation (Chapter 3) presents the theoretical foundation for the rest of the dissertation. This foundation comprises the logically distributed control plane for SDN networks, an efficient DAIM model framework inspired by the O:MIB and hybrid O:XML semantics, as well as the necessary architecture to aggregate the distribution of network information. The details of the DAIM model including design, structure and packet forwarding process are also described. The DAIM software specification and its implementation are demonstrated in the second part of the thesis (Chapter 4).
The DAIM model is developed in the C++ programming language using the free and open-source NetBeans IDE. In more detail, the three core modules that construct the DAIM ecosystem are discussed with some sample code reviews and flowchart diagrams of the implemented algorithms. To show DAIM’s feasibility, a small-size OpenFlow lab based on Raspberry Pis has been physically set up to check the compliance of the system with its purpose and functions. Various tasks and scenarios are demonstrated to verify the functionalities of DAIM, such as executing a ping command, streaming media and transferring files between hosts. These scenarios are created based on OpenVswitch in a virtualised network using Mininet. The third part (Chapter 5) presents the performance evaluation of the DAIM model, which is defined by four characteristics: round-trip time (RTT), throughput, latency and bandwidth. The ping command is used to measure the mean RTT between two IP hosts. The flow setup throughput and latency of the DAIM controller are measured by using Cbench. Also, Iperf is used to measure the available bandwidth of the network. The performance of the distributed DAIM model has been tested, and good results are reported when compared with current OpenFlow controllers including NOX, POX and NOX-MT. The comparisons reveal that DAIM can outperform both NOX and POX controllers. The DAIM’s performance in a physical OpenFlow test lab and other parameters that can affect the performance evaluation are also discussed. Because decentralisation is an essential element of autonomic systems, building a distributed computing environment with DAIM can consequently enable the development of autonomic management strategies. The experiment results show the DAIM model can be one of the architectural approaches to creating autonomic service management for SDN. The DAIM model can be utilised to investigate the functionalities required by autonomic networking within the ACNs community.
This efficient DAIM model can be further applied to enable adaptability and autonomy in other distributed networks such as WSNs, P2P and ad-hoc sensor networks.
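The switch-local flow setup that DAIM advocates can be caricatured as an OpenFlow-style match/action table whose misses are resolved by a local decision process rather than a round trip to a central controller. The class and the simple L2-learning rule below are illustrative assumptions, not DAIM's implementation:

```python
# Sketch of a switch-resident agent: table hits take the fast path, and a
# table miss triggers a local decision (learned MAC-to-port state) instead
# of a packet-in message to a centralised SDN controller.

class LocalFlowAgent:
    def __init__(self):
        self.flow_table = {}   # (dst,) match -> output port
        self.mac_to_port = {}  # locally learned topology information

    def packet_in(self, src, dst, in_port):
        self.mac_to_port[src] = in_port        # learn like an L2 switch
        match = (dst,)
        if match in self.flow_table:
            return self.flow_table[match]      # fast path: table hit
        # Table miss: decide locally, no controller round trip needed.
        out_port = self.mac_to_port.get(dst, "FLOOD")
        if out_port != "FLOOD":
            self.flow_table[match] = out_port  # install the flow locally
        return out_port
```

    The workload reduction claimed for the centralised controller comes precisely from absorbing these miss-time decisions at the switch.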
