
    Development of Emulation Network Analyzer Tool for Computer Network Planning

    Full text link
    This paper describes the development of an Emulation Network Analyzer (ENA) for heterogeneous services in a campus environment. The purpose of this paper is to show that the ENA is able to plan and predict network performance. To this end, our ENA development differs from other systems such as application and hardware network analyzers. This study focuses on the design of the emulation network analyzer, its user interface design, characteristics, model description, implementation, and evaluation. The ENA can provide useful network architectural solutions and optimization of network resources during the preparation, proposal, and planning phases. Finally, the ENA tool is a good emulation analyzer that can be used in small to medium-sized networks for campus environments at minimum cost.

    Computing server power modeling in a data center: survey, taxonomy and performance evaluation

    Full text link
    Data centers are large-scale, energy-hungry infrastructures serving the increasing computational demands of a world that is becoming more connected through smart cities. The emergence of advanced technologies such as cloud-based services, the internet of things (IoT), and big data analytics has augmented the growth of global data centers, leading to high energy consumption. This upsurge in the energy consumption of data centers not only incurs surging costs (operational and maintenance) but also has an adverse effect on the environment. Dynamic power management in a data center environment requires cognizance of the correlation between system- and hardware-level performance counters and power consumption. Power consumption modeling exhibits this correlation and is crucial in designing energy-efficient optimization strategies based on resource utilization. Several power models have been proposed and used in the literature. However, these power models have been evaluated using different benchmarking applications, power measurement techniques, and error calculation formulas on different machines. In this work, we present a taxonomy and evaluation of 24 software-based power models using a unified environment, benchmarking application, power measurement technique, and error formula, with the aim of achieving an objective comparison. We use different server architectures to assess the impact of heterogeneity on the models' comparison. The performance analysis of these models is elaborated in the paper.
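    The survey does not reproduce its 24 models here; as a minimal sketch of the kind of software-based power model being evaluated, the classic linear utilization model interpolates between idle and peak power, and an error formula such as MAPE compares its predictions against measured power. All names and wattage values below are illustrative, not taken from the paper.

```python
def linear_power_model(cpu_util, p_idle=100.0, p_peak=250.0):
    """Estimate server power draw (watts) from CPU utilization in [0, 1].

    A common baseline: power scales linearly between idle and peak.
    The idle/peak wattages here are hypothetical placeholders.
    """
    if not 0.0 <= cpu_util <= 1.0:
        raise ValueError("cpu_util must be in [0, 1]")
    return p_idle + (p_peak - p_idle) * cpu_util


def mape(predicted, measured):
    """Mean absolute percentage error, one typical error formula used
    when comparing power models against a hardware power meter."""
    return 100.0 * sum(abs(p - m) / m
                       for p, m in zip(predicted, measured)) / len(measured)
```

    More sophisticated models in such surveys replace the single CPU-utilization input with multiple performance counters (memory, disk, network), but the fit-then-score structure is the same.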

    Transfer Learning for Improving Model Predictions in Highly Configurable Software

    Full text link
    Modern software systems are built to be used in dynamic environments, using configuration capabilities to adapt to changes and external uncertainties. In a self-adaptation context, we are often interested in reasoning about the performance of a system under different configurations. Usually, we learn a black-box model based on real measurements to predict the performance of the system given a specific configuration. However, as modern systems become more complex, there are many configuration parameters that may interact, and we end up learning over an exponentially large configuration space. Naturally, this does not scale when relying on real measurements in the actual changing environment. We propose a different solution: instead of taking the measurements from the real system, we learn the model using samples from other sources, such as simulators that approximate the performance of the real system at low cost. We define a cost model that transforms the traditional view of model learning into a multi-objective problem that takes into account not only model accuracy but also measurement effort. We evaluate our cost-aware transfer learning solution using real-world configurable software, including (i) a robotic system, (ii) three different stream processing applications, and (iii) a NoSQL database system. The experimental results demonstrate that our approach can achieve (a) a high prediction accuracy, as well as (b) a high model reliability. Comment: To be published in the proceedings of the 12th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS'17).
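    The abstract does not spell out its cost model; a toy sketch of the general idea follows, combining prediction error with measurement effort into one objective, where cheap source samples (e.g. from a simulator) and expensive target samples (real measurements) carry different per-sample costs. Every name and weight here (`c_source`, `c_target`, `lam`) is purely illustrative.

```python
def combined_cost(prediction_error, n_source, n_target,
                  c_source=0.1, c_target=1.0, lam=0.01):
    """Cost-aware objective for model learning (hypothetical weights).

    prediction_error: model error on held-out configurations.
    n_source/n_target: samples taken from the cheap source (simulator)
    and the expensive target (real system).
    lam trades off accuracy against total measurement effort.
    """
    measurement_cost = n_source * c_source + n_target * c_target
    return prediction_error + lam * measurement_cost
```

    Under such an objective, a learner may prefer many cheap simulator samples plus a few real measurements over an all-real training set, which is the intuition behind cost-aware transfer learning.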

    Application-level optimization of end-to-end data transfer throughput

    Get PDF
    For large-scale distributed applications, effective use of available network throughput and optimization of data transfer speed are crucial for end-to-end application performance. Today, many regional and national optical networking initiatives such as LONI, ESnet, and TeraGrid provide high-speed network connectivity to their users. However, the majority of users fail to obtain even a fraction of the theoretical speeds promised by these networks due to issues such as sub-optimal protocol tuning, disk bottlenecks on the sending and/or receiving ends, and processor limitations. This implies that having high-speed networks in place is important but not sufficient for improving end-to-end data transfer throughput; being able to use these high-speed networks effectively is becoming more and more important. Optimization of the underlying protocol parameters at the application layer (i.e., opening multiple parallel TCP streams, tuning the TCP buffer size and I/O block size) is one way of improving network transfer throughput. On the other hand, the end-to-end data transfer throughput bottleneck on high-performance networking systems occurs mostly at the participating storage systems rather than in the network. The performance of a storage system heavily depends on the speed of its disk and CPU subsystems. Thus, it is critical to estimate the storage system's bandwidth at both endpoints in addition to the network bandwidth. Disk bottlenecks can be eliminated by the use of multiple disks (data striping), and CPU bottlenecks can be eliminated by the use of multiple processors (parallelism).
In this dissertation, we develop application-level models to predict the best combination of protocol parameters for optimal network performance, including the number of parallel data streams and the protocol buffer size, and we integrate disk and CPU speed parameters into the performance model to predict the optimal degree of disk and CPU striping for the best end-to-end data throughput. These models will be made available to the community for use in data transfer tools, schedulers, and high-level planners.
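    The dissertation's own models are not given in this abstract; a hedged sketch of the reasoning can use the well-known Mathis approximation for single-stream TCP throughput (MSS/RTT divided by the square root of the loss rate) and cap the aggregate at the storage system's bandwidth, which is exactly the disk-bottleneck interaction described above. All parameter values below are illustrative.

```python
import math


def mathis_throughput(mss_bytes, rtt_s, loss_rate):
    """Approximate single TCP stream throughput in bytes/sec
    (Mathis model: MSS/RTT * 1/sqrt(p))."""
    return (mss_bytes / rtt_s) / math.sqrt(loss_rate)


def optimal_streams(mss_bytes, rtt_s, loss_rate, disk_bw, max_n=64):
    """Smallest number of parallel streams whose aggregate network
    throughput reaches the disk bandwidth, past which adding streams
    no longer helps end-to-end (the disk is the bottleneck)."""
    per_stream = mathis_throughput(mss_bytes, rtt_s, loss_rate)
    for n in range(1, max_n + 1):
        if n * per_stream >= disk_bw:
            return n
    return max_n
```

    For example, with a 1460-byte MSS, 50 ms RTT, and a 10^-4 loss rate, each stream yields about 2.92 MB/s, so roughly four streams saturate a 10 MB/s disk; opening more would only add overhead. Real models account for loss growing with the stream count, but the bottleneck-matching logic is the same.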

    Reducing Transport Latency for Short Flows with Multipath TCP

    Get PDF
    Multipath TCP (MPTCP) is an emerging transport protocol that provides network resilience to failures and improves throughput by splitting a data stream into multiple subflows across all available paths. While MPTCP is generally beneficial for throughput-sensitive large flows with a large number of subflows, it may be harmful for latency-sensitive small flows. MPTCP assigns each subflow a congestion window, making short flows susceptible to timeouts when a flow contains only a few packets. This condition becomes even worse when the paths have heterogeneous characteristics, as packet reordering occurs and slow paths may be used by MPTCP, increasing end-to-end delay and lowering application goodput. Thus, it is important to choose appropriate subflows for each MPTCP connection to achieve good performance. However, the subflows in MPTCP are determined before a connection is established, and they usually remain unchanged during the lifetime of that connection. To address this issue, we propose DMPTCP, which dynamically adjusts the subflows according to application workloads. Specifically, DMPTCP first utilizes the idea of TCP modeling to estimate the latency on the path under scheduling and the amount of data sent on the other paths simultaneously, and then periodically decides the set of subflows to be used for a given application, with the goal of reducing completion time for short flows and achieving higher throughput for long flows. We implement DMPTCP in a Linux server and conduct extensive experiments both in NS3 and in a Linux testbed to validate its effectiveness. Our evaluation shows that DMPTCP decreases completion time by over 46.55% compared to conventional MPTCP for short flows, while increasing goodput by up to 21.3% for long-lived flows.
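    The abstract does not give DMPTCP's estimator, but the intuition (short flows should avoid slow subflows, long flows benefit from more of them) can be sketched with a crude completion-time model: stripe the flow evenly over a candidate subflow set and let the slowest path gate completion, mimicking head-of-line blocking from reordering. Everything below is a hypothetical illustration, not the paper's algorithm.

```python
def flow_completion_time(flow_bytes, paths):
    """Rough completion-time estimate when flow_bytes are split evenly
    across paths, each given as a (bandwidth_Bps, rtt_s) tuple.
    The slowest path gates completion (reordering / head-of-line)."""
    share = flow_bytes / len(paths)
    return max(rtt + share / bw for bw, rtt in paths)


def choose_subflows(flow_bytes, candidate_paths):
    """Pick the subflow set (k fastest paths) minimizing estimated
    completion time for this flow size."""
    paths = sorted(candidate_paths, key=lambda p: p[1] + 1 / p[0])
    return min((paths[:k] for k in range(1, len(paths) + 1)),
               key=lambda subset: flow_completion_time(flow_bytes, subset))
```

    Under this toy model, a 1460-byte flow over one fast and one much slower path selects a single subflow, while a 100 MB flow over two comparable paths selects both, which matches the short-flow/long-flow behavior the abstract describes.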

    Making broadband access networks transparent to researchers, developers, and users

    Get PDF
    Broadband networks are used by hundreds of millions of users to connect to the Internet today. However, most ISPs are hesitant to reveal details about their network deployments, and as a result the characteristics of broadband networks are often not known to users, developers, and researchers. In this thesis, we make progress towards mitigating this lack of transparency in broadband access networks in two ways. First, using novel measurement tools, we performed the first large-scale study of the characteristics of broadband networks. We found that broadband networks have very different characteristics than academic networks. We also developed Glasnost, a system that enables users to test their Internet access links for traffic differentiation. Glasnost has been used by more than 350,000 users worldwide and allowed us to study ISPs' traffic management practices. We found that ISPs increasingly throttle or even block traffic from popular applications such as BitTorrent. Our studies also showed that existing approaches for evaluating Internet systems do not account for the typical characteristics of broadband networks. Second, we developed two new approaches to enable realistic evaluation of networked systems in broadband networks. We developed Monarch, a tool that enables researchers to study and compare the performance of new and existing transport protocols at large scale in broadband environments. Furthermore, we designed SatelliteLab, a novel testbed that can easily add arbitrary end nodes, including broadband nodes and even smartphones, to existing testbeds like PlanetLab.