
    Dynamic Interactions for Network Visualization and Simulation

    Most network visualization suites neither interact with a simulator as it executes nor provide an effective user interface that includes multiple visualization functions. The subject of this research is to improve the network visualization presented in previous research [5] by adding these capabilities to the framework. The previous network visualization could not alter specific visualization characteristics, especially when detailed observations needed to be made on a small part of a large network. Searching for a network event in such a topology could cause large delays, degrading the quality of the user interface. In addition to these shortfalls in handling complex network events, [5] did not support dynamic user interaction, since it had no real-time connection to a simulator. These shortfalls motivate the development of a new network visualization framework design that provides a more robust user interface, network observation tools, and interaction with the simulator. Our research presents the design, development, and implementation of this new network visualization framework to enhance network scenarios and provide interaction with NS-2 as it executes. From the interface design perspective, this research presents a prototype design to ease the implementation of the framework. Visualization functions such as clustering, filtering, labeling, and color coding help users access network objects and events, organized into four tabs of buttons, menus, and sliders. The new framework design handles the inherent complexity of large networks, allowing the user to interact with the current display, alter visualization parameters, and control the network through the visualization.
In our application, multiple visualizations are linked to NS-2 to build execution scenarios that allow testing the clustering, filtering, and labeling functionalities on separate visualization screens as NS-2 progresses.
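A minimal sketch of the filtering and color-coding functions described above: restricting a large event stream to a user-selected subset of nodes keeps detailed observation of a small part of a big topology responsive. All names and the event layout are illustrative assumptions, not the framework's actual API.

```python
# Hypothetical event filtering and color coding for a network visualization.
# Event records and field names are illustrative only.

def filter_events(events, selected_nodes):
    """Keep only events whose source or destination is in the selected subset."""
    selected = set(selected_nodes)
    return [e for e in events if e["src"] in selected or e["dst"] in selected]

def color_code(event):
    """Map an event type to a display color, as a labeling/color-coding aid."""
    palette = {"send": "blue", "recv": "green", "drop": "red"}
    return palette.get(event["type"], "gray")

events = [
    {"src": 1, "dst": 2, "type": "send"},
    {"src": 3, "dst": 4, "type": "drop"},
    {"src": 2, "dst": 1, "type": "recv"},
]
visible = filter_events(events, selected_nodes=[1, 2])
print([(e["type"], color_code(e)) for e in visible])
```

In an interactive setting such a filter would run on each display refresh, so only the selected cluster's events are drawn while the rest of the topology stays cheap to render.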

    Engineering Enterprise Networks with SDN

    Today’s networks are growing in terms of bandwidth, number of devices, variety of applications, and various front-end and back-end technologies. Current network architecture is not sufficient for scaling, managing, and monitoring them. In this thesis, we explore SDN to address scalability and monitoring issues in growing networks such as the IITH campus network. The SDN architecture separates the control plane and data plane of a networking device. SDN provides a single control plane (a centralized way) to configure, manage, and monitor devices more effectively. The scalability of Ethernet is a known issue, where communication is disturbed by a large number of nodes in a single broadcast domain. This thesis proposes the Extensible Transparent Filter (ETF) for Ethernet using SDN. ETF suppresses broadcast traffic in a broadcast domain by forwarding a broadcast packet only to the selected port of a switch through which the target host of that packet is reachable. ETF maintains both consistent functionality and backward compatibility with existing protocols that rely on packet broadcast. Nowadays, flow-level details of network traffic are a major requirement of many network monitoring applications, such as anomaly detection and traffic accounting. Packet-sampling-based solutions (such as NetFlow) provide flow-level details of network traffic; however, they are inadequate for several monitoring applications. This thesis proposes Network Monitor (NetMon) for OpenFlow networks, which includes the implementation of a few flow-based metrics to determine the state of the network, along with a Device Logger. NetMon uses a push-based approach to achieve its goals with complete flow-level details. NetMon determines the fraction of useful flows for each host in the network. It calculates the out-degree and in-degree, based on IP address, for each host in the network. NetMon classifies a host as a client, server, or peer-to-peer node based on the number of source ports and active flows.
Device Logger records the device (MAC address and IP address) and its location (switch DPID and port number). Device Logger helps identify the owners (devices) of an IP address within a particular time period. This thesis also discusses the practical deployment and operation of SDN. A small SDN network has been deployed on the IIT Hyderabad campus. Both ETF and NetMon are functional in this SDN network, and both were developed using Floodlight, an open-source SDN controller. ETF and NetMon improve the scalability and monitoring of enterprise networks as an enhancement to existing networks using SDN.
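The core ETF idea, suppressing broadcasts by forwarding only toward the target host's known location, can be sketched as follows. This is an illustration of the technique, not the thesis's Floodlight implementation; resolving the broadcast's target (e.g. from an ARP request's target IP) is abstracted here as a simple MAC lookup, and all names are illustrative.

```python
# Sketch of broadcast suppression: learn which switch port each host sits
# behind, then send a broadcast out only that port instead of flooding.

class ExtensibleTransparentFilter:
    def __init__(self):
        self.location = {}  # MAC address -> switch port where the host was seen

    def learn(self, src_mac, in_port):
        """Record the port through which a source host was observed."""
        self.location[src_mac] = in_port

    def out_ports(self, target_mac, all_ports, in_port):
        """For a broadcast, forward only to the target's known port;
        fall back to flooding (all ports except the ingress) when unknown."""
        if target_mac in self.location:
            return [self.location[target_mac]]
        return [p for p in all_ports if p != in_port]

etf = ExtensibleTransparentFilter()
etf.learn("aa:aa", in_port=1)
etf.learn("bb:bb", in_port=3)
# Broadcast (e.g. an ARP request) whose resolved target is bb:bb:
print(etf.out_ports("bb:bb", all_ports=[1, 2, 3, 4], in_port=1))
```

The fallback to flooding is what preserves backward compatibility with protocols that genuinely depend on broadcast reaching unknown hosts.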

    Deliverable JRA1.1: Evaluation of current network control and management planes for multi-domain network infrastructure

    This deliverable includes a compilation and evaluation of available control and management architectures and protocols applicable to a multilayer infrastructure in a multi-domain Virtual Network environment. The scope of this deliverable is mainly focused on the virtualisation of resources within a network and at processing nodes. The virtualisation of the FEDERICA infrastructure allows the provisioning of its available resources to users by means of FEDERICA slices. A slice is seen by the user as a real physical network under his or her domain, although it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit, to the highest degree, all the principles applicable to a physical network (isolation, reproducibility, manageability, ...). Currently, there are no standard definitions available for network virtualisation or its associated architectures. Therefore, this deliverable proposes the Virtual Network layer architecture and evaluates a set of management and control planes that can be used for the partitioning and virtualisation of the FEDERICA network resources. This evaluation has been performed taking into account an initial set of FEDERICA requirements; a possible extension of the selected tools will be evaluated in future deliverables. The studies described in this deliverable define the virtual architecture of the FEDERICA infrastructure. During this activity, the need was recognised to establish a new set of basic definitions (a taxonomy) for the building blocks that compose the so-called slice, i.e. the virtual network instantiation (virtual with regard to the abstracted view of the building blocks of the FEDERICA infrastructure) and its architectural plane representation. These definitions will be established as a common nomenclature for the FEDERICA project. Another important aspect when defining a new architecture is the set of user requirements.
It is crucial that the resulting architecture fits the demands that users may have. Since this deliverable was produced at the same time as the process of contacting users, carried out by the project activities related to the Use Case definitions, JRA1 has proposed a set of basic Use Cases to be considered as a starting point for its internal studies. When researchers want to experiment with their developments, they need not only network resources on their slices, but also a slice of the processing resources. These processing slice resources are understood as virtual machine instances that users can make behave as software routers or end nodes, onto which they can download the software protocols or applications they have produced and want to assess in a realistic environment. Hence, this deliverable also studies the APIs of several virtual machine management software products in order to identify which best suits FEDERICA's needs.
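The slice concept above, a user-visible virtual network that maps onto a logical partition of physical resources, can be illustrated with a small data model. The field names and structure here are assumptions for illustration, not FEDERICA's actual taxonomy.

```python
# Illustrative data model for a slice: virtual nodes (VM instances acting as
# software routers or end nodes) mapped to physical hosts, plus virtual links.

from dataclasses import dataclass, field

@dataclass
class VirtualNode:
    name: str
    physical_host: str   # the physical resource this virtual instance maps to
    role: str            # e.g. "software-router" or "end-node"

@dataclass
class Slice:
    owner: str
    nodes: list = field(default_factory=list)
    links: list = field(default_factory=list)  # pairs of virtual node names

    def add_node(self, node):
        self.nodes.append(node)

s = Slice(owner="researcher-1")
s.add_node(VirtualNode("vr1", physical_host="host-a", role="software-router"))
s.add_node(VirtualNode("vm1", physical_host="host-b", role="end-node"))
s.links.append(("vr1", "vm1"))
print(len(s.nodes), s.nodes[0].role)
```

The point of the mapping field is the isolation principle: the user sees only the virtual topology, while the provider retains the binding to physical resources.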

    System analysis of a Peer-to-Peer Video-on-Demand architecture: Kangaroo

    Architectural design and deployment of Peer-to-Peer Video-on-Demand (P2P-VoD) systems that support VCR functionalities is attracting the interest of an increasing number of research groups within the scientific community, especially due to the intrinsic characteristics of such systems and the benefits that peers could provide in reducing the server load. This work focuses on the performance analysis of a P2P-VoD system considering user behaviors obtained from real traces together with other synthetic user patterns. The experiments performed show that it is feasible to achieve a performance close to the best possible. Future work will consider monitoring the physical characteristics of the network in order to improve the design of different aspects of a VoD system.

    dCAMP: Distributed Common API for Measuring Performance

    Although the nearing end of Moore’s Law has been predicted numerous times in the past, it will eventually come to pass. In anticipation of this, many modern computing systems have become increasingly complex, distributed, and parallel. As software is developed on and for these complex systems, a common API is necessary for gathering vital performance-related metrics while remaining transparent to the user, both in terms of system impact and ease of use. Several distributed performance monitoring and testing systems have been proposed and implemented by both research and commercial institutions. However, most of these systems fail to meet several fundamental criteria for a truly useful distributed performance monitoring system: 1) variable data delivery models, 2) security, 3) scalability, 4) transparency, 5) completeness, 6) validity, and 7) portability. This work presents dCAMP: Distributed Common API for Measuring Performance, a distributed performance framework built on top of Mark Gabel and Michael Haungs’ work with CAMP. This work also presents an updated and extended set of criteria for evaluating distributed performance frameworks, and uses these criteria to evaluate dCAMP and several related works.

    A Hierarchical Filtering-Based Monitoring Architecture for Large-scale Distributed Systems

    On-line monitoring is essential for observing and improving the reliability and performance of large-scale distributed (LSD) systems. In an LSD environment, large numbers of events are generated by system components during their execution and interaction with external objects (e.g. users or processes). These events must be monitored to accurately determine the run-time behavior of an LSD system and to obtain status information that is required for debugging and steering applications. However, the manner in which events are generated in an LSD system is complex and presents a number of challenges for an on-line monitoring system. Correlated events are generated concurrently and can occur at multiple locations distributed throughout the environment. This makes monitoring an intricate task and complicates the management decision process. Furthermore, the large number of entities and the geographical distribution inherent in LSD systems increase the difficulty of addressing traditional issues such as performance bottlenecks, scalability, and application perturbation. This dissertation proposes a scalable, high-performance, dynamic, flexible, and non-intrusive monitoring architecture for LSD systems. The resulting architecture detects and classifies interesting primitive and composite events and performs either a corrective or a steering action. When appropriate, information is disseminated to management applications, such as reactive control and debugging tools. The monitoring architecture employs a novel hierarchical event filtering approach that distributes the monitoring load and limits event propagation. This significantly improves scalability and performance while minimizing monitoring intrusiveness.
The architecture provides dynamic monitoring capabilities through: subscription policies that enable application developers to add, delete, and modify monitoring demands on the fly; an adaptable configuration that accommodates environmental changes; and a programmable environment that facilitates the development of self-directed monitoring tasks. Increased flexibility is achieved through a declarative and comprehensive monitoring language, a simple code instrumentation process, and automated monitoring administration. These elements substantially relieve the burden imposed by using on-line distributed monitoring systems. In addition, the monitoring system provides techniques to manage the trade-offs between various monitoring objectives. The proposed solution improves on related work by presenting a comprehensive architecture that considers the requirements and implied objectives of monitoring large-scale distributed systems. This architecture is referred to as the HiFi monitoring system. To demonstrate its effectiveness at debugging and steering LSD systems, the HiFi monitoring system has been implemented at Old Dominion University for monitoring the Interactive Remote Instruction (IRI) system. The results from this case study validate that the HiFi system achieves the objectives outlined in this thesis.
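The hierarchical filtering idea above can be sketched in a few lines: each monitoring agent filters locally and forwards only matching events to its parent, so event propagation (and thus monitoring load) shrinks at every level. This is a toy illustration of the technique, not HiFi's actual subscription language or implementation.

```python
# Toy hierarchical event filter: a leaf agent forwards only events matching
# its local subscription; the parent applies its own, stricter filter.

class FilterNode:
    def __init__(self, predicate, parent=None):
        self.predicate = predicate   # the subscription: which events interest us
        self.parent = parent
        self.received = []

    def observe(self, event):
        """Apply the local filter; propagate upward only on a match."""
        if self.predicate(event):
            self.received.append(event)
            if self.parent is not None:
                self.parent.observe(event)

root = FilterNode(lambda e: e["severity"] >= 2)        # manager: serious events only
leaf = FilterNode(lambda e: e["host"] == "n1", root)   # local agent for host n1

for ev in [{"host": "n1", "severity": 1},
           {"host": "n1", "severity": 3},
           {"host": "n2", "severity": 3}]:
    leaf.observe(ev)

print(len(leaf.received), len(root.received))
```

Of three generated events, the leaf keeps two and only one reaches the root, which is the load-limiting effect the architecture relies on.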

    Wide-area IP multicast traffic characterization


    WallMon: Interactive distributed monitoring of process-level resource usage on display and compute clusters

    To achieve low overhead, traditional cluster monitoring systems sample data at low frequencies and with coarse granularity. However, interactive monitoring requires frequent (up to 60 Hz) sampling of fine-grained data and visualization tools that can explore and display data in near real time. This makes traditional cluster monitoring systems unsuited for interactive monitoring of distributed cluster applications: they fail to capture short-duration events, making it difficult to understand the performance relationship between processes on the same or different nodes. To address this issue, WallMon was developed, a tool for interactive visual exploration of performance behaviors in distributed systems. For data gathering, WallMon is centered on an abstraction of collectors and handlers: collectors gather data of interest, such as CPU and memory usage, and forward it to handlers in a push-based fashion, while handlers take action upon the data. WallMon captures and visualizes data for every process on every node, as well as overall node statistics. Data is visualized using a technique inspired by the concept of information flocking. WallMon's design is based on the client-server model, and it is extensible through a module system that encapsulates functionality specific to monitoring (collectors) and visualization (handlers). A set of experiments was carried out on a cluster of 29 nodes with 180 processes per node. Performance results show 7% (of 100%) CPU usage at a 64 Hz sampling rate when performing process-level monitoring with WallMon. Using WallMon's interactive visualization, we have observed interesting patterns in different parallel and distributed systems, such as an unexpected ratio of user- and kernel-level execution among the processes of a particular distributed system.
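The collector/handler abstraction described above can be sketched as a small push-based pipeline: collectors sample per-process metrics and push them to registered handlers, which act on the data. This illustrates the pattern only; it is not WallMon's actual module API, and all names are assumptions.

```python
# Push-based collector/handler sketch: the collector forwards each sample to
# every registered handler; one example handler tracks the busiest process.

class Handler:
    def handle(self, sample):
        raise NotImplementedError

class MaxCpuHandler(Handler):
    """Track the most CPU-hungry process seen so far."""
    def __init__(self):
        self.top = None

    def handle(self, sample):
        if self.top is None or sample["cpu"] > self.top["cpu"]:
            self.top = sample

class Collector:
    """Pushes each gathered sample to every registered handler."""
    def __init__(self, handlers):
        self.handlers = handlers

    def push(self, sample):
        for h in self.handlers:
            h.handle(sample)

handler = MaxCpuHandler()
collector = Collector([handler])
for s in [{"pid": 10, "cpu": 5.0}, {"pid": 11, "cpu": 42.0}, {"pid": 12, "cpu": 9.5}]:
    collector.push(s)
print(handler.top["pid"])
```

In a real deployment the collector would sample at the target frequency (e.g. 64 Hz) on each node and push over the network, while handlers on the server side drive the visualization.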