
    Traffic measurement and analysis of Wide Area Network (WAN) link usage: a case study of UiTM Perlis campus network / Abidah Mat Taib, Rafiza Ruslan and Abdul Hadi Kamel Abdullah

    Network users will normally start complaining when the response time for network access becomes intolerable or their access is intermittently interrupted for unclear reasons. Knowing how the network bandwidth is being utilized by the users gives the administrator some idea of the possible causes of network access problems and how to alleviate them. Some users may use the network only for non-bandwidth-intensive applications such as email and telnet, while others may run bandwidth-intensive applications such as video streaming or downloading large files via FTP. Many users may also visit popular websites that are unrelated to their work during office hours. Such information about bandwidth utilization on a network link can be acquired through measurement and analysis of the network traffic that passes through the link. This paper discusses real-time passive measurement of the network traffic on the WAN link that connects the UiTM Perlis campus network to the main UiTM campus in Shah Alam. The analysis of the traffic revealed several aspects of the WAN link utilization, including the average bandwidth utilization, the most popular application-layer protocols and websites among different user segments, and some anomalies found in the captured trace.
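    As a rough illustration of the kind of passive measurement described above, the sketch below computes average link utilization and a per-port byte breakdown from an offline packet trace. It is a minimal sketch, not the authors' tool: the trace file name is hypothetical and the use of Scapy is an assumption.

    ```python
    # Minimal sketch: offline analysis of a packet trace captured on a WAN link.
    # Assumes Scapy is installed; "wan_link.pcap" is a hypothetical capture file.
    from collections import Counter
    from scapy.all import rdpcap, TCP, UDP

    packets = rdpcap("wan_link.pcap")
    if packets:
        duration = float(packets[-1].time) - float(packets[0].time)
        total_bytes = sum(len(p) for p in packets)
        # Average utilization in bits per second over the trace duration.
        avg_bps = total_bytes * 8 / duration if duration > 0 else 0.0

        # Per-destination-port byte counts as a rough proxy for
        # application-layer protocol popularity.
        by_port = Counter()
        for p in packets:
            if TCP in p:
                by_port[("tcp", p[TCP].dport)] += len(p)
            elif UDP in p:
                by_port[("udp", p[UDP].dport)] += len(p)

        print(f"average utilization: {avg_bps / 1e6:.2f} Mbit/s")
        for (proto, port), nbytes in by_port.most_common(5):
            print(f"{proto}/{port}: {nbytes} bytes")
    ```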

    Estimating point-to-point and point-to-multipoint traffic matrices: An information-theoretic approach

    © 2005 IEEE. Traffic matrices are required inputs for many IP network management tasks, such as capacity planning, traffic engineering, and network reliability analysis. However, it is difficult to measure these matrices directly in large operational IP networks, so there has been recent interest in inferring traffic matrices from link measurements and other more easily measured data. Typically, this inference problem is ill-posed, as it involves significantly more unknowns than data. Experience in many scientific and engineering fields has shown that it is essential to approach such ill-posed problems via "regularization". This paper presents a new approach to traffic matrix estimation using a regularization based on "entropy penalization". Our solution chooses the traffic matrix consistent with the measured data that is information-theoretically closest to a model in which source/destination pairs are stochastically independent. It applies to both point-to-point and point-to-multipoint traffic matrix estimation. We use fast algorithms based on modern convex optimization theory to solve for our traffic matrices. We evaluate our algorithm with real backbone traffic and routing data, and demonstrate that it is fast, accurate, robust, and flexible.
    Yin Zhang, Matthew Roughan, Carsten Lund, and David L. Donoho
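    A compact sketch of the regularization idea in this abstract: choose the traffic matrix that fits the measured link loads while staying information-theoretically close to a gravity prior, the model in which source/destination pairs are independent. The formulation below (least-squares fit plus a KL-divergence penalty, solved with a generic bounded solver) is an assumed simplification; the paper itself uses specialized convex-optimization algorithms.

    ```python
    # Sketch: entropy-penalized traffic matrix estimation (assumed formulation):
    # minimize ||A x - y||^2 + lam * KL(x || gravity prior), subject to x >= 0.
    import numpy as np
    from scipy.optimize import minimize

    def estimate_traffic_matrix(A, y, row_sums, col_sums, lam=1.0):
        """A: (links x OD pairs) routing matrix; y: measured link loads."""
        # Gravity prior: OD demand proportional to the product of endpoint
        # totals, i.e. the traffic matrix under source/destination independence.
        g = np.outer(row_sums, col_sums).ravel() / row_sums.sum()

        def objective(x):
            resid = A @ x - y
            x_safe = np.maximum(x, 1e-12)
            kl = np.sum(x_safe * np.log(x_safe / np.maximum(g, 1e-12)))
            return resid @ resid + lam * kl

        res = minimize(objective, x0=g, bounds=[(0, None)] * g.size,
                       method="L-BFGS-B")
        return res.x.reshape(len(row_sums), len(col_sums))
    ```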

    Management, Optimization and Evolution of the LHCb Online Network

    The LHCb experiment is one of the four large particle detectors running at the Large Hadron Collider (LHC) at CERN. It is a forward single-arm spectrometer dedicated to testing the Standard Model through precision measurements of Charge-Parity (CP) violation and rare decays in the b quark sector. The LHCb experiment will operate at a luminosity of 2x10^32 cm^-2 s^-1, with a proton-proton bunch crossing rate of approximately 10 MHz. To select the interesting events, a two-level trigger scheme is applied: the first level trigger (L0) and the high level trigger (HLT). The L0 trigger is implemented in custom hardware, while the HLT is implemented in software running on the CPUs of the Event Filter Farm (EFF). The L0 trigger rate is defined at about 1 MHz, and the size of each event is about 35 kByte, so handling the resulting data rate (35 GByte/s) is a serious challenge. The Online system is a key part of the LHCb experiment, providing all the IT services. It consists of three major components: the Data Acquisition (DAQ) system, the Timing and Fast Control (TFC) system and the Experiment Control System (ECS). To provide these services, two large dedicated networks based on Gigabit Ethernet are deployed, one for DAQ and another for ECS, together referred to as the Online network. A large network needs sophisticated monitoring for its successful operation. Commercial network management systems are quite expensive and difficult to integrate into the LHCb ECS. A custom network monitoring system has therefore been implemented based on a Supervisory Control And Data Acquisition (SCADA) system called PVSS, which is also used by the LHCb ECS, making the monitoring system a homogeneous part of the ECS. In this thesis, it is demonstrated how a large scale network can be monitored and managed using tools originally made for industrial supervisory control. The thesis is organized as follows: Chapter 1 gives a brief introduction to the LHC and B physics at the LHC, then describes all sub-detectors and the trigger and DAQ system of LHCb from structure to performance. Chapter 2 first introduces the LHCb Online system and the dataflow, then focuses on the Online network design and its optimization. In Chapter 3, the SCADA system PVSS is introduced briefly; then the architecture and implementation of the network monitoring system are described in detail, including the front-end processes, the data communication and the supervisory layer. Chapter 4 first discusses packet sampling theory and one of the packet sampling mechanisms, sFlow, then demonstrates the applications of sFlow for network troubleshooting, traffic monitoring and anomaly detection. In Chapter 5, the upgrade of the LHC and LHCb is introduced, the possible architecture of the DAQ is discussed, and two candidate internetworking technologies (high speed Ethernet and InfiniBand) are compared in different aspects for the DAQ. Three schemes based on 10 Gigabit Ethernet are presented and studied. Chapter 6 is a general summary of the thesis.
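    The packet-sampling discussion in Chapter 4 rests on a simple statistical idea: with 1-in-N sampling, each sampled packet stands for roughly N packets on the wire. A minimal sketch of that scaling, using hypothetical sample values:

    ```python
    # Minimal sketch of sFlow-style rate estimation from sampled packets.
    # With 1-in-N sampling, each sample represents ~N packets on the wire.
    def estimate_rate(sample_sizes, sampling_rate, interval_s):
        """sample_sizes: byte lengths of sampled packets (hypothetical data)."""
        est_bytes = sum(sample_sizes) * sampling_rate
        est_packets = len(sample_sizes) * sampling_rate
        return est_bytes * 8 / interval_s, est_packets / interval_s

    # Example: 1200 samples of ~900 bytes at 1-in-2048 sampling over 60 s.
    bps, pps = estimate_rate([900] * 1200, 2048, 60.0)
    print(f"~{bps / 1e9:.2f} Gbit/s, ~{pps:.0f} packets/s")
    ```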

    Towards Automated Network Configuration Management

    Modern networks are designed to satisfy a wide variety of competing goals related to network operation requirements such as reachability, security, performance, reliability and availability. These high-level goals are realized through a complex chain of low-level configuration commands performed on network devices. As networks become larger, more complex and more heterogeneous, human errors become the most significant threat to network operation and the main cause of network outages. In addition, the gap between high-level requirements and low-level configuration data is continuously increasing and difficult to close. Although many solutions have been introduced to reduce the complexity of configuration management, network changes, in most cases, are still performed manually via low-level command line interfaces (CLIs). The Internet Engineering Task Force (IETF) has introduced the NETwork CONFiguration (NETCONF) protocol along with its associated data-modeling language, YANG, which significantly reduce network configuration complexity. However, NETCONF is limited to the interaction between managers and agents, and it has weak support for compliance with high-level management functionalities. We design and develop a network configuration management system called AutoConf that addresses the aforementioned problems. AutoConf is a distributed system that manages, validates, and automates the configuration of IP networks. We propose a new framework to augment the NETCONF/YANG framework. This framework includes a Configuration Semantic Model (CSM), which provides a formal representation of the domain knowledge needed to deploy a successful management system. Along with CSM, we develop a domain-specific language called Structured Configuration Language (SCL) to specify configuration tasks as well as high-level requirements. CSM/SCL together with NETCONF/YANG makes a powerful management system that supports network-wide configuration. AutoConf supports two levels of verification: consistency verification and behavioral verification. We apply a set of logical formalizations to verify the consistency and dependency of configuration parameters. In behavioral verification, we present a set of formal models and algorithms based on Binary Decision Diagrams (BDDs) to capture the behavior of forwarding control lists that are deployed in firewalls, routers, and NAT devices. We also adopt an enhanced version of the Dyna-Q algorithm to support dynamic adaptation of network configuration in response to changes that occur during network operation. This adaptation approach maintains a coherent relationship between high-level requirements and low-level device configuration. We evaluate AutoConf by running several configuration scenarios such as interface configuration, RIP configuration, OSPF configuration and MPLS configuration, and by running several simulation models to demonstrate the effectiveness and scalability of handling large-scale networks.
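    As context for the NETCONF/YANG discussion above, here is a minimal sketch of a NETCONF edit-config operation using the ncclient Python library. The host, credentials, interface name, and the device's support for the standard ietf-interfaces YANG model are all assumptions; this illustrates the protocol, not AutoConf itself.

    ```python
    # Sketch: pushing a configuration change over NETCONF with ncclient.
    # Device address, credentials and YANG-model support are assumptions.
    from ncclient import manager

    CONFIG = """
    <config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
      <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
        <interface>
          <name>GigabitEthernet0/1</name>
          <description>uplink (example value)</description>
          <enabled>true</enabled>
        </interface>
      </interfaces>
    </config>
    """

    with manager.connect(host="192.0.2.1", port=830, username="admin",
                         password="secret", hostkey_verify=False) as m:
        # edit-config against the running datastore; a device with a
        # candidate datastore would target "candidate" and then commit.
        reply = m.edit_config(target="running", config=CONFIG)
        print(reply.ok)
    ```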

    Deliverable JRA1.1: Evaluation of current network control and management planes for multi-domain network infrastructure

    This deliverable includes a compilation and evaluation of available control and management architectures and protocols applicable to a multilayer infrastructure in a multi-domain Virtual Network environment. The scope of this deliverable is mainly focused on the virtualisation of the resources within a network and at processing nodes. The virtualisation of the FEDERICA infrastructure allows the provisioning of its available resources to users by means of FEDERICA slices. A slice is seen by the user as a real physical network under his/her domain; however, it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit, to the highest degree, all the principles applicable to a physical network (isolation, reproducibility, manageability, ...). Currently, there are no standard definitions available for network virtualisation or its associated architectures. Therefore, this deliverable proposes the Virtual Network layer architecture and evaluates a set of management and control planes that can be used for the partitioning and virtualisation of the FEDERICA network resources. This evaluation has been performed taking into account an initial set of FEDERICA requirements; a possible extension of the selected tools will be evaluated in future deliverables. The studies described in this deliverable define the virtual architecture of the FEDERICA infrastructure. During this activity, the need was recognised to establish a new set of basic definitions (a taxonomy) for the building blocks that compose the so-called slice, i.e. the virtual network instantiation (which is virtual with regard to the abstracted view made of the building blocks of the FEDERICA infrastructure) and its architectural plane representation. These definitions will be established as a common nomenclature for the FEDERICA project. Another important aspect when defining a new architecture is the set of user requirements: it is crucial that the resulting architecture fits the demands that users may have. Since this deliverable has been produced at the same time as the contact process with users, carried out by the project activities related to the Use Case definitions, JRA1 has proposed a set of basic Use Cases to be considered as a starting point for its internal studies. When researchers want to experiment with their developments, they need not only network resources on their slices, but also a slice of the processing resources. These processing slice resources are understood as virtual machine instances that users can make behave as software routers or end nodes, onto which they can download the software protocols or applications they have produced and want to assess in a realistic environment. Hence, this deliverable also studies the APIs of several virtual machine management software products in order to identify which best suits FEDERICA's needs.
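    To make the slice notion concrete, below is a minimal sketch of a slice as a mapping of virtual resources onto the physical substrate. All class and field names are hypothetical illustrations, not FEDERICA's actual data model.

    ```python
    # Hypothetical sketch of a "slice": a virtual network mapped onto a
    # logical partition of the physical infrastructure.
    from dataclasses import dataclass, field

    @dataclass
    class VirtualNode:
        name: str
        physical_host: str   # substrate machine hosting the VM instance
        role: str            # e.g. "software-router" or "end-node"

    @dataclass
    class VirtualLink:
        endpoints: tuple     # pair of virtual node names
        bandwidth_mbps: int  # capacity carved out of the physical link

    @dataclass
    class Slice:
        owner: str
        nodes: list = field(default_factory=list)
        links: list = field(default_factory=list)

    # The user sees a private network of software routers and end nodes.
    s = Slice(owner="researcher-a")
    s.nodes.append(VirtualNode("r1", "substrate-host-3", "software-router"))
    s.nodes.append(VirtualNode("h1", "substrate-host-7", "end-node"))
    s.links.append(VirtualLink(("r1", "h1"), 100))
    ```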

    ANALYSIS OF DATA & COMPUTER NETWORKS IN STUDENTS' RESIDENTIAL AREA IN UNIVERSITI TEKNOLOGI PETRONAS

    In Universiti Teknologi Petronas (UTP), most students depend on the Internet and the campus computer network to obtain academic information and share educational resources. Even though Internet connections and computer networks are provided, the service frequently experiences interruptions, such as slow Internet access, virus and worm propagation, and network abuse by irresponsible students. Since the UTP organization keeps expanding, the need for a better service in UTP increases. Several approaches were put into practice to address the problems. Research on data and computer networks was performed to understand the network technology applied in UTP. Questionnaire forms were distributed among the students to obtain feedback and statistical data about UTP's network in the Students' Residential Area. The study concentrates only on the Students' Residential Area as it is where most of the users reside. From the survey, it can be observed that 99% of the students access the network almost 24 hours a day. In 2005, the 2 Mbps allocated bandwidth was utilized at 100% almost continuously, but in 2006 the Internet access bottleneck was reduced significantly after the allocated bandwidth was increased to 8 Mbps. Server degradation due to irresponsible acts by users also adds burden to the main server. In general, if the proposal to the ITMS (Information Technology & Media Services) Department to improve its Quality of Service (QoS) and establish a UTP Computer Emergency Response Team (UCert) is implemented, most of the issues addressed in this report can be solved.
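    The bandwidth figures above reduce to a simple utilization calculation. The sketch below derives utilization from two readings of an interface byte counter, as would be obtained by periodic polling (the polling method is an assumption; the report does not specify one).

    ```python
    # Minimal sketch: link utilization from two octet-counter readings.
    def link_utilization(bytes_t0, bytes_t1, interval_s, capacity_bps):
        """Fraction of link capacity used between two counter readings."""
        bits_transferred = (bytes_t1 - bytes_t0) * 8
        return bits_transferred / (interval_s * capacity_bps)

    # Example: a 2 Mbps link that carried 15 MB in one minute is saturated.
    u = link_utilization(0, 15_000_000, 60, 2_000_000)
    print(f"utilization: {min(u, 1.0):.0%}")  # -> 100%
    ```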

    Building Programmable Wireless Networks: An Architectural Survey

    In recent times, there have been many efforts to improve the ossified Internet architecture in a bid to sustain continued growth and innovation. A major reason for the perceived architectural ossification is the lack of ability to program the network as a system. This situation has resulted partly from historical decisions in the original Internet design which emphasized decentralized network operations through co-located data and control planes on each network device. The situation for wireless networks is no different, resulting in a lot of complexity and a plethora of largely incompatible wireless technologies. The emergence of "programmable wireless networks", which allow greater flexibility, ease of management and configurability, is a step in the right direction to overcome these shortcomings. In this paper, we provide a broad overview of the architectures proposed in the literature for building programmable wireless networks, focusing primarily on three popular techniques: software-defined networks, cognitive radio networks, and virtualized networks. This survey is a self-contained tutorial on these techniques and their applications. We also discuss the opportunities and challenges in building next-generation programmable wireless networks and identify open research issues and future research directions.
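    The defining idea behind the software-defined networking technique surveyed above is the separation of control and data planes: a logically centralized controller computes match/action rules and installs them into switch flow tables. A toy sketch of that split follows; all names are hypothetical, and a real deployment would use a protocol such as OpenFlow between the two planes.

    ```python
    # Toy sketch of the SDN control/data-plane split (all names hypothetical).
    class FlowTable:
        """Data plane: matches packet fields against installed rules."""
        def __init__(self):
            self.rules = []  # (match_dict, action) pairs in priority order

        def forward(self, packet):
            for match, action in self.rules:
                if all(packet.get(k) == v for k, v in match.items()):
                    return action
            return "send_to_controller"  # table miss

    class Controller:
        """Control plane: decides policy and programs the switches."""
        def install(self, switch, match, action):
            switch.rules.append((match, action))

    sw = FlowTable()
    ctl = Controller()
    ctl.install(sw, {"dst_ip": "10.0.0.2"}, "output:port2")
    print(sw.forward({"dst_ip": "10.0.0.2"}))  # -> output:port2
    print(sw.forward({"dst_ip": "10.0.0.9"}))  # -> send_to_controller
    ```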

    DISco: a Distributed Information Store for network Challenges and their Outcome

    We present DISco, a storage and communication middleware designed to enable distributed and task-centric autonomic control of networks. DISco is designed to enable multi-agent identification of anomalous situations (so-called "challenges") and to assist coordinated remediation that maintains a degraded but acceptable service level, while keeping track of each challenge's evolution in order to enable human-assisted diagnosis of flaws in the network. We propose to use state-of-the-art peer-to-peer publish/subscribe and distributed storage as core building blocks for the DISco service.
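    As an illustration of the publish/subscribe building block mentioned above, here is a minimal in-process, topic-based pub/sub sketch. All names are hypothetical; DISco itself builds on distributed peer-to-peer implementations rather than a single-process bus.

    ```python
    # Minimal topic-based publish/subscribe sketch (in-process, not distributed).
    from collections import defaultdict

    class PubSub:
        def __init__(self):
            self.subscribers = defaultdict(list)  # topic -> callbacks

        def subscribe(self, topic, callback):
            self.subscribers[topic].append(callback)

        def publish(self, topic, event):
            for cb in self.subscribers[topic]:
                cb(event)

    bus = PubSub()
    # An agent tracks anomalous situations ("challenges") for later diagnosis.
    challenge_log = []
    bus.subscribe("challenge/link-failure", challenge_log.append)
    bus.publish("challenge/link-failure", {"link": "r1-r2", "state": "degraded"})
    print(challenge_log)
    ```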