
    Passive available bandwidth: Applying self-induced congestion analysis of application-generated traffic

    Monitoring end-to-end available bandwidth is critical in helping applications and users use network resources efficiently. Because the performance of distributed systems is intrinsically linked to the performance of the network, applications that know the available bandwidth can adapt to changing network conditions and optimize their performance. A well-designed available bandwidth tool should be easily deployable and non-intrusive. While several tools have been created to actively measure the end-to-end available bandwidth of a network path, they require instrumentation at both ends of the path, and the traffic injected by these tools may affect the performance of other applications on the path.

    We propose a new passive monitoring system that accurately measures available bandwidth by applying self-induced congestion analysis to traces of application-generated traffic. The Watching Resources from the Edge of the Network (Wren) system transparently provides available bandwidth information to applications, without modifying the applications to make the measurements and with negligible impact on their performance. Wren produces a series of real-time available bandwidth measurements that applications can use to adapt their runtime behavior to optimize performance, or that can be sent to a central monitoring system for use by other or future applications.

    Most active bandwidth tools rely on adjustments to the sending rate of packets to infer the available bandwidth. The major obstacle with using passive kernel-level traces of TCP traffic is that we have no control over the traffic pattern. We demonstrate that there is enough natural variability in the sending rates of TCP traffic that techniques used by active tools can be applied to traces of application-generated traffic to yield accurate available bandwidth measurements.

    Wren uses kernel-level instrumentation to collect traces of application traffic and analyzes the traces at the user level to achieve the necessary accuracy and avoid intrusiveness. We introduce new passive bandwidth algorithms based on the principles of the active tools, investigate the effectiveness of these algorithms, implement a real-time system capable of efficiently monitoring available bandwidth, and demonstrate that applications can use Wren measurements to adapt their runtime decisions.
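    The abstract does not spell out Wren's algorithm, but the self-induced congestion principle it borrows from active tools can be sketched: group a passive TCP trace into short intervals, note each interval's sending rate and whether packet delays trend upward, and estimate available bandwidth from the fastest rate the path absorbed without queuing. The following Python sketch is illustrative only; all names and the threshold logic are assumptions, not Wren's implementation.

```python
# Illustrative sketch (not Wren's actual code): estimate available bandwidth
# from a passive TCP trace using the self-induced-congestion principle --
# sending rates that exceed the available bandwidth show increasing delays.

from dataclasses import dataclass

@dataclass
class Sample:
    send_rate_mbps: float   # sending rate observed in one trace interval
    delay_trend: float      # slope of packet delays in that interval (>0 = queuing)

def estimate_available_bandwidth(samples, trend_threshold=0.0):
    """Return a conservative estimate: the highest observed sending rate whose
    interval shows no increasing delay trend, capped by any congested rate."""
    uncongested = [s.send_rate_mbps for s in samples if s.delay_trend <= trend_threshold]
    congested = [s.send_rate_mbps for s in samples if s.delay_trend > trend_threshold]
    if not uncongested:
        return None  # every interval showed congestion; no estimate possible
    # The true value lies between the fastest uncongested rate and the
    # slowest congested rate (if any interval saturated the path).
    low = max(uncongested)
    return min([low] + congested) if congested else low

trace = [Sample(20, -0.1), Sample(45, 0.0), Sample(60, 0.4), Sample(80, 1.2)]
print(estimate_available_bandwidth(trace))  # prints 45: fastest uncongested rate
```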

    Designing a High-Quality Network: An Application-Oriented Approach

    As new computer network technologies emerge, application designers and users expect an increasing level of quality of service from them. Hence, it is common practice in the newer technologies to provide more Quality of Service (QoS) components. Until now, these QoS solutions have been both network-technology-specific and network-oriented. In this thesis, we present an application-oriented approach to designing a high-quality network which is independent of the underlying communication technology. We propose a QoS architecture to provide predictable performance to end-to-end application users in a high-quality networking environment. In our architecture, QUANTA (Quality of Service Architecture for Native TCP/IP over ATM networks), we integrate different application requirements and different existing native QoS architectures into a single end-to-end architecture. Through experimentation we identify the architectural issues and the different QoS components required. We propose solutions which include isolation of the applications and managing the knowledge of the applications. The issue of isolating an application is subdivided into classification and identification of the applications. In addressing these issues we propose a ripple-through classification mechanism and a Generic Soft State (GSS) identification mechanism. To manage the knowledge of the applications we propose different QoS components, such as a GSS negotiation mechanism, a GSS communication mechanism and a GSS monitoring mechanism (such as GSS Relays and GSS Agents). QUANTA's overhead is measured by running applications with different lifetimes and QoS requirements with and without QUANTA. For transaction-oriented applications, the overhead induced by QUANTA is larger than its benefit. For high-data-rate and long-lived applications, QUANTA achieves predictable performance with very low overhead. We demonstrate that QUANTA can manage and maintain Quality of Service for different classes of applications under varying host and network load conditions, transparently to the application user. Using QUANTA, we can reach nearly 80% channel utilization under loaded conditions, whereas without QUANTA, the load on the network can reduce channel utilization to 40%. QUANTA reduces a 350 msec delay under loaded conditions to less than 10 msec. With the exception of short transition periods, QUANTA can sustain throughput to within the bounds of the user's specification. We identify the limitations of QUANTA as currently proposed and discuss possible enhancements to it and other such architectures to remedy these limitations.
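    The abstract leaves the Generic Soft State mechanism unspecified; the sketch below only illustrates the general soft-state idea such identification and monitoring components typically rely on: per-flow state expires unless refreshed, so stale QoS entries clean themselves up. Names and lifetimes are assumptions, not QUANTA's actual design.

```python
# Generic soft-state sketch (illustrative; QUANTA's actual GSS design is not
# specified in the abstract): state installed for an application flow expires
# unless periodically refreshed.

import time

class SoftStateTable:
    def __init__(self, lifetime_s=30.0):
        self.lifetime = lifetime_s
        self.entries = {}  # flow_id -> (qos_params, expiry_time)

    def refresh(self, flow_id, qos_params):
        """Install or refresh state; each refresh extends the lifetime."""
        self.entries[flow_id] = (qos_params, time.monotonic() + self.lifetime)

    def lookup(self, flow_id):
        """Return QoS parameters if the entry is still alive, else None."""
        entry = self.entries.get(flow_id)
        if entry is None:
            return None
        params, expiry = entry
        if time.monotonic() > expiry:
            del self.entries[flow_id]  # expired: reclaim silently
            return None
        return params

table = SoftStateTable(lifetime_s=30.0)
table.refresh(("10.0.0.1", 5004), {"class": "video", "rate_mbps": 4})
print(table.lookup(("10.0.0.1", 5004)))
```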

    Landslide monitoring using mobile device and cloud-based photogrammetry

    Landslides are one of the most commonly occurring natural disasters and can pose a serious threat to human life and society, in addition to significant economic loss. Investigation and monitoring of landslides are important tasks in geotechnical engineering in order to mitigate the hazards created by such phenomena. However, current geomatics approaches used for precise landslide monitoring are largely inappropriate for initial assessment by an engineer over small areas, due to the labour-intensive and costly methods often adopted. Therefore, the development of a cost-effective landslide monitoring system for real-time, on-site investigation is essential to aid initial geotechnical interpretation and assessment. In this research, close-range photogrammetric techniques using imagery from a mobile device camera (e.g. a modern smartphone) were investigated as a low-cost, non-contact monitoring approach to on-site landslide investigation. The developed system was implemented on a mobile platform with cloud computing technology to enable the potential for real-time processing. The system comprised the front-end service of a mobile application controlled by the operator and a back-end service employed for photogrammetric measurement and landslide monitoring analysis. In terms of the back-end service, Structure-from-Motion (SfM) photogrammetry was implemented to provide fully automated processing and offer user-friendliness to non-experts. This was integrated with developed functions that were used to enhance the processing performance and deliver appropriate photogrammetric results for assessing landslide deformations. In order to implement this system with a real-time response, the cloud-based system required data transfer using Internet services via a modern 4G/5G network. Furthermore, the relationship between the number of images and image size was investigated to optimize data processing. The potential of the developed system for monitoring landslides was investigated at two different real-world UK sites, comprising a natural earth-flow landslide and coastal cliff erosion. These investigations demonstrated that the cloud-based photogrammetric measurement system was capable of providing three-dimensional results to sub-decimetre-level accuracy. The results of the initial assessments for on-site investigation could be effectively presented on the mobile device through visualisation and/or statistical quantification of the landslide changes at a local scale.
    The author thanks the Royal Thai Government and Naresuan University for the scholarship and financial support.
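    As a rough illustration of the "statistical quantification of landslide changes" mentioned above, the following Python sketch differences two co-registered digital elevation models and reports erosion and deposition volumes. It assumes gridded, aligned surfaces and a sub-decimetre accuracy threshold; it is not the thesis's actual processing chain.

```python
# Illustrative sketch: quantify landslide surface change between two epochs
# by differencing co-registered digital elevation models (DEMs).
# Assumes both surveys were gridded onto the same cells; names are illustrative.

import numpy as np

def dem_change(dem_t0, dem_t1, cell_area_m2, threshold_m=0.1):
    """Per-cell elevation change and simple volumetric statistics.
    threshold_m masks changes below the survey accuracy (e.g. sub-decimetre)."""
    diff = dem_t1 - dem_t0
    significant = np.abs(diff) >= threshold_m
    erosion = -diff[(diff < 0) & significant].sum() * cell_area_m2
    deposition = diff[(diff > 0) & significant].sum() * cell_area_m2
    return diff, {"erosion_m3": float(erosion), "deposition_m3": float(deposition)}

rng = np.random.default_rng(0)
t0 = rng.normal(100.0, 0.02, (50, 50))   # epoch 1 surface
t1 = t0.copy()
t1[10:20, 10:20] -= 0.5                  # simulated slip scar at epoch 2
_, stats = dem_change(t0, t1, cell_area_m2=0.25)
print(stats)                             # ~12.5 m^3 of erosion, no deposition
```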

    User-Centric Traffic Engineering in Software Defined Networks

    Software defined networking (SDN) is a relatively new paradigm that decouples individual network elements from the control logic, offering real-time network programmability by translating high-level policy abstractions into low-level device configurations. The framework comprises the data (forwarding) plane, incorporating network devices, while the control logic and network services reside in the control and application planes, respectively. Operators can optimize the network fabric to yield performance gains for individual applications and services by utilizing flow metering and application awareness, the default traffic management methods in SDN. Existing approaches to traffic optimization, however, do not explicitly consider user application trends. Recent SDN traffic engineering designs either offer improvements for typical time-critical applications or focus on devising monitoring solutions aimed at measuring performance metrics of the respective services. The performance cost of isolated service differentiation for end users may be substantial, considering the growth in Internet and network applications on offer and the resulting diversity in user activities. Application-level flow metering schemes therefore fall short of fully exploiting the real-time network provisioning capability offered by SDN, relying instead on the rather static traffic control primitives common in legacy networking. For individual users, SDN may lead to substantial improvements if the framework allows operators to allocate resources while accounting for a user-centric mix of applications.

    This thesis explores user application traffic trends in different network environments and proposes a novel user traffic profiling framework to aid the SDN control plane (controller) in accurately configuring network elements for a broad spectrum of users without impeding specific application requirements. The thesis starts with a critical review of existing traffic engineering solutions in SDN and highlights recent and ongoing work in network optimization studies. The predominant segregated, application-policy-based controls in SDN do not consider the cost of isolated application gains to parallel SDN services and the resulting consequences for users with varying application usage. Therefore, attention is given to investigating techniques which may capture user behaviour for possible integration in SDN traffic controls. To this end, profiling of user application traffic trends is identified as a technique which may offer insight into the inherent diversity in user activities and allow possible incorporation in SDN-based traffic engineering.

    A series of user traffic profiling studies is then carried out, employing network flow statistics collected from residential and enterprise network environments. Utilizing machine learning techniques, including the prominent unsupervised k-means cluster analysis, user-generated traffic flows are cluster-analysed and the derived profiles in each networking environment are benchmarked for stability before integration in SDN control solutions. In parallel, a novel flow-based traffic classifier is designed to yield high accuracy in identifying user application flows, and the traffic profiling mechanism is automated. The core functions of the novel user-centric traffic engineering solution are validated by the implementation of traffic-profiling-based SDN network control applications in residential, data center and campus-based SDN environments.
    A series of simulations highlighting varying traffic conditions and profile-based policy controls is designed and evaluated in each network setting, using the traffic profiles derived from realistic environments, to demonstrate the effectiveness of the traffic management solution. The overall network performance metrics per profile show substantial gains, proportional to operator-defined user-profile prioritization policies, despite high traffic load conditions. The proposed user-centric SDN traffic engineering framework therefore dynamically provisions data plane resources among different user traffic classes (profiles), capturing user behaviour to define and implement network policy controls and going beyond isolated application management.
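    As a minimal illustration of the profiling step, the sketch below clusters per-user flow statistics with the unsupervised k-means algorithm mentioned above, using scikit-learn. The feature set, number of clusters and values are assumptions for demonstration, not the thesis's actual inputs.

```python
# Illustrative sketch of the profiling step: cluster per-user flow statistics
# with k-means to derive traffic profiles. Feature names and k are assumptions.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# One row per user: e.g. [mean flow size (KB), flows/hour, share of video traffic]
features = np.array([
    [900.0, 120.0, 0.70],   # heavy streaming user
    [850.0, 140.0, 0.65],
    [ 12.0, 600.0, 0.05],   # chatty, many small flows
    [ 15.0, 550.0, 0.08],
    [300.0,  60.0, 0.30],   # mixed usage
])

X = StandardScaler().fit_transform(features)   # scale so no feature dominates
profiles = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(profiles)  # cluster label per user; labels can map to SDN policy classes
```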

    An LLVM Instrumentation Plug-in for Score-P

    Reducing application runtime, scaling parallel applications to higher numbers of processes/threads, and porting applications to new hardware architectures are necessary tasks in the software development process. Therefore, developers have to investigate and understand application runtime behavior. Tools such as monitoring infrastructures that capture performance-relevant data during application execution assist in this task. The measured data forms the basis for identifying bottlenecks and optimizing the code. Monitoring infrastructures need mechanisms to record application activities in order to conduct measurements. Automatic instrumentation of the source code is the preferred method in most application scenarios. We introduce a plug-in for the LLVM infrastructure that enables automatic source code instrumentation at compile time. In contrast to the available instrumentation mechanisms in LLVM/Clang, our plug-in can selectively include/exclude individual application functions. This enables developers to fine-tune the measurement to the required level of detail while avoiding large runtime overheads due to excessive instrumentation.
    Comment: 8 pages
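    The plug-in itself works at LLVM compile time, but the effect it produces, enter/exit events for a selectable subset of functions, can be illustrated with a runtime analogue in Python using sys.setprofile. This is a conceptual sketch only; the INCLUDE filter stands in for the plug-in's include/exclude mechanism.

```python
# Conceptual analogue in Python (the actual plug-in instruments at LLVM
# compile time): record function enter/exit events, filtered by an
# include list so only selected functions are measured.

import sys

INCLUDE = {"compute"}        # functions to instrument; all others are skipped

def profiler(frame, event, arg):
    name = frame.f_code.co_name
    if name not in INCLUDE:
        return               # excluded function: no measurement event
    if event == "call":
        print(f"enter {name}")
    elif event == "return":
        print(f"exit  {name}")

def helper():                # excluded: generates no events
    return 1

def compute():               # included: enter/exit recorded
    return helper() + 1

sys.setprofile(profiler)
compute()
sys.setprofile(None)
```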

    Deliverable JRA1.1: Evaluation of current network control and management planes for multi-domain network infrastructure

    This deliverable includes a compilation and evaluation of available control and management architectures and protocols applicable to a multilayer infrastructure in a multi-domain virtual network environment. The scope of this deliverable is mainly focused on the virtualisation of resources within a network and at processing nodes. The virtualisation of the FEDERICA infrastructure allows its available resources to be provisioned to users by means of FEDERICA slices. A slice is seen by the user as a real physical network under his/her domain; however, it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit, to the highest degree, all the principles applicable to a physical network (isolation, reproducibility, manageability, ...).

    Currently, there are no standard definitions available for network virtualisation or its associated architectures. Therefore, this deliverable proposes the Virtual Network layer architecture and evaluates a set of management and control planes that can be used for the partitioning and virtualisation of the FEDERICA network resources. This evaluation has been performed taking into account an initial set of FEDERICA requirements; a possible extension of the selected tools will be evaluated in future deliverables. The studies described in this deliverable define the virtual architecture of the FEDERICA infrastructure. During this activity, the need was recognised to establish a new set of basic definitions (a taxonomy) for the building blocks that compose the so-called slice, i.e. the virtual network instantiation (virtual with regard to the abstracted view of the building blocks of the FEDERICA infrastructure) and its architectural plane representation. These definitions will be established as a common nomenclature for the FEDERICA project.

    Other important aspects when defining a new architecture are the user requirements; it is crucial that the resulting architecture fits the demands that users may have. Since this deliverable has been produced in parallel with the user contact process, carried out by the project activities related to the Use Case definitions, JRA1 has proposed a set of basic Use Cases to be considered as a starting point for its internal studies. When researchers want to experiment with their developments, they need not only network resources on their slices, but also a slice of the processing resources. These processing slice resources are understood as virtual machine instances that users can make behave as software routers or end nodes, onto which they can download the software protocols or applications they have produced and want to assess in a realistic environment. Hence, this deliverable also studies the APIs of several virtual machine management software products in order to identify which best suits FEDERICA's needs.
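    As an illustration of the slice concept described above, a minimal data model might map virtual nodes and links onto partitions of physical resources. The schema below is assumed for illustration; it is not FEDERICA's actual model.

```python
# Illustrative data model (assumed, not FEDERICA's actual schema): a slice is
# a logical partition of physical resources, presented to the user as if it
# were a real physical network of nodes and links.

from dataclasses import dataclass, field

@dataclass
class VirtualNode:
    name: str
    host: str           # physical node the VM instance runs on
    vcpus: int

@dataclass
class VirtualLink:
    endpoints: tuple    # (node_name, node_name)
    capacity_mbps: int  # share of the physical link reserved for this slice

@dataclass
class Slice:
    owner: str
    nodes: list = field(default_factory=list)
    links: list = field(default_factory=list)

slice_ = Slice(owner="researcher-a")
slice_.nodes.append(VirtualNode("router-1", host="phys-node-3", vcpus=2))
slice_.nodes.append(VirtualNode("end-host", host="phys-node-7", vcpus=1))
slice_.links.append(VirtualLink(("router-1", "end-host"), capacity_mbps=100))
print(f"{slice_.owner}: {len(slice_.nodes)} nodes, {len(slice_.links)} links")
```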

    BonFIRE: A multi-cloud test facility for internet of services experimentation

    BonFIRE offers a Future Internet, multi-site cloud testbed, targeted at the Internet of Services community, that supports large-scale testing of applications, services and systems over multiple, geographically distributed, heterogeneous cloud testbeds. The aim of BonFIRE is to provide an infrastructure that gives experimenters the ability to control and monitor the execution of their experiments to a degree not found in traditional cloud facilities. The BonFIRE architecture has been designed to support key functionalities such as resource management; monitoring of virtual and physical infrastructure metrics; elasticity; single-document experiment descriptions; and scheduling. As of January 2012, BonFIRE release 2 is operational, supporting seven pilot experiments. Future releases will enhance the offering, including interconnection with networking facilities to provide access to routers, switches and bandwidth-on-demand systems. BonFIRE will be open for general use in late 2012.
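    The "single-document experiment descriptions" functionality can be pictured with a hypothetical descriptor: one document requesting resources across sites, plus monitoring. The structure below is illustrative only and is not BonFIRE's actual format or API.

```python
# Hypothetical single-document experiment description (illustrative only;
# this is not BonFIRE's actual descriptor format or API).

experiment = {
    "name": "elasticity-test",
    "duration_hours": 2,
    "resources": [
        {"type": "compute", "site": "site-a", "instances": 3, "image": "app-server"},
        {"type": "compute", "site": "site-b", "instances": 1, "image": "load-generator"},
        {"type": "storage", "site": "site-a", "size_gb": 20},
    ],
    "monitoring": {"metrics": ["cpu", "memory", "net_io"], "interval_s": 10},
}

# A client would submit this whole document once; the testbed then provisions
# resources across sites and wires up monitoring as described.
print(f"{experiment['name']}: {len(experiment['resources'])} resource requests")
```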

    LIKWID Monitoring Stack: A flexible framework enabling job specific performance monitoring for the masses

    System monitoring is an established tool to measure the utilization and health of HPC systems. Usually, system monitoring infrastructures make no connection to job information and do not utilize hardware performance monitoring (HPM) data. To increase the efficient use of HPC systems, automatic and continuous performance monitoring of jobs is an essential component. It can help to identify pathological cases, provides instant performance feedback to users, offers initial data for judging the optimization potential of applications, and helps to build a statistical foundation about application-specific system usage. The LIKWID monitoring stack is a modular framework built on top of the LIKWID tools library. It aims at enabling job-specific performance monitoring using HPM data, system metrics and application-level data for small to medium-sized commodity clusters. Moreover, it is designed to integrate into existing monitoring infrastructures to speed up the change from pure system monitoring to job-aware monitoring.
    Comment: 4 pages, 4 figures. Accepted for HPCMASPA 2017, the Workshop on Monitoring and Analysis for High Performance Computing Systems Plus Applications, held in conjunction with IEEE Cluster 2017, Honolulu, HI, September 5, 2017
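    A job-aware measurement of the kind described might, in the simplest case, wrap an application run with the likwid-perfctr tool and tag the output with job metadata. The sketch below is illustrative and is not the monitoring stack's code; the -C (cores), -g (event group) and -O (CSV output) flags are taken from recent LIKWID versions and should be checked against your installation.

```python
# Illustrative sketch of job-aware sampling (not the LIKWID monitoring stack's
# actual code): wrap likwid-perfctr around an application run and tag the
# measurement with job metadata, so HPM data can be correlated per job.

import json
import subprocess
import time

def measure_job(job_id, cores, group, command):
    start = time.time()
    result = subprocess.run(
        ["likwid-perfctr", "-C", cores, "-g", group, "-O", *command],
        capture_output=True, text=True,
    )
    return {
        "job_id": job_id,           # links HPM data to the batch job
        "group": group,             # e.g. FLOPS_DP, MEM
        "wall_s": time.time() - start,
        "raw_csv": result.stdout,   # CSV counter output for later parsing
    }

record = measure_job("4711", "0-3", "MEM", ["./a.out"])
print(json.dumps({k: record[k] for k in ("job_id", "group", "wall_s")}))
```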