
    Methodology for modeling high performance distributed and parallel systems

    Performance modeling of distributed and parallel systems is of considerable importance to the high performance computing community. To achieve high performance, proper task or process assignment and data or file allocation among processing sites is essential. This dissertation describes an elegant approach to modeling distributed and parallel systems, which combines optimal static solutions for data allocation with dynamic policies for task assignment. A performance-efficient system model is developed using analytical tools and techniques. The system model is accomplished in three steps. First, the basic client-server model, which allows only data transfer, is evaluated. A prediction and evaluation method is developed to examine the system behavior and estimate performance measures. The method is based on known product-form queueing networks. The next step extends the model so that each site of the system behaves as both client and server. A data-allocation strategy is designed at this stage which optimally assigns the data to the processing sites. The strategy is based on the flow-deviation technique in queueing models. The third stage considers process-migration policies. A novel on-line adaptive load-balancing algorithm is proposed which dynamically migrates processes and transfers data among different sites to minimize the job execution cost. The gradient-descent rule is used to optimize the cost function, which expresses the cost of process execution at the different processing sites. The accuracy of the prediction method and the effectiveness of the analytical techniques are established through simulations. The modeling procedure described here is general and applicable to any message-passing distributed or parallel system. The proposed techniques and tools can be easily utilized in other related areas such as networking and operating systems. This work contributes significantly towards the design of distributed and parallel systems where performance is critical.
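
    The load-balancing step lends itself to a small illustration. Below is a minimal sketch of a gradient-descent update over workload fractions, assuming a hypothetical M/M/1-style delay cost at each site; the cost function, rates, and simplex projection are illustrative stand-ins, not the dissertation's actual formulation.

```python
import numpy as np

# Hypothetical cost: each site i is an M/M/1 queue with service rate mu[i]
# receiving a fraction x[i] of the total arrival stream lam.
def execution_cost(x, mu, lam):
    load = lam * x
    return np.sum(load / (mu - load))          # sum of mean delays

def gradient(x, mu, lam):
    load = lam * x
    return lam * mu / (mu - load) ** 2         # d(cost)/d(x[i])

def balance(x, mu, lam, step=0.01, iters=500):
    # Projected gradient descent on the simplex sum(x) == 1; assumes the
    # starting point keeps every site stable (load < service rate).
    for _ in range(iters):
        g = gradient(x, mu, lam)
        x = np.clip(x - step * (g - g.mean()), 1e-6, None)
        x = x / x.sum()
    return x

mu = np.array([10.0, 20.0, 40.0])              # illustrative service rates
x = balance(np.ones(3) / 3, mu, lam=15.0)
print(x, execution_cost(x, mu, 15.0))          # faster sites take more work
```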

    Performance analysis of mobile networks under signalling storms

    There are numerous security challenges in cellular mobile networks, many of which originate from the Internet world. One of these challenges is to address the increasing rate of signalling messages produced by smart devices. In particular, many Internet services are provided through mobile applications in an unobstructed manner, so that users get an always-connected feeling. These services, which usually come from the instant messaging, advertising and social networking areas, impose significant signalling loads on mobile networks through frequent exchange of control data in the background. Such services and applications, whether built intentionally or unintentionally, can result in denial-of-service attacks known as signalling attacks or storms. Negative consequences include, among others, degradation of the mobile network's services, partial or complete network failures, and increased battery consumption for infected mobile terminals. This thesis examines the influence of signalling storms on different mobile technologies and proposes defensive mechanisms. More specifically, using stochastic modelling techniques, it first presents a model of the vulnerability in a single 3G UMTS mobile terminal, and studies the influence of the system's internal parameters on stability under a signalling storm. It then presents a queueing network model of the radio access part of 3G UMTS and examines the effect of the radio resource control (RRC) inactivity timers. In the presence of an attack, the proposed dynamic setting of the timers manages to lower the signalling load in the network and to raise the threshold above which a network failure could happen. The network model is then generalised into a more detailed model representing different generations of mobile technologies, and used to compare technologies with dedicated and shared organisation of resource allocation, referred to as traditional and contemporary networks, using performance metrics such as signalling and communication delay, blocking probability, signalling load on the network's nodes, and bandwidth holding time. Finally, based on this analysis, two mechanisms are proposed for real-time detection of storms, based on counting same-type bandwidth allocations and on the usage of allocated bandwidth. The mechanisms are evaluated using discrete event simulation in 3G UMTS, and experiments combine the detectors with a simple attack mitigation approach.
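
    To make the counting detector concrete, here is a minimal sketch, assuming a sliding time window and a fixed threshold (both hypothetical tuning parameters, not the thesis's calibrated values): a terminal is flagged when it triggers too many same-type bandwidth allocations within the window.

```python
from collections import deque
import time

class StormDetector:
    """Flag terminals that trigger too many same-type bandwidth
    allocations within a sliding time window."""

    def __init__(self, window_s=60.0, threshold=30):
        self.window_s = window_s
        self.threshold = threshold
        self.events = {}   # (terminal_id, alloc_type) -> timestamps

    def on_allocation(self, terminal_id, alloc_type, now=None):
        """Record one allocation; return True if it looks like a storm."""
        now = time.monotonic() if now is None else now
        q = self.events.setdefault((terminal_id, alloc_type), deque())
        q.append(now)
        while q and now - q[0] > self.window_s:
            q.popleft()                 # drop events outside the window
        return len(q) > self.threshold

detector = StormDetector()
if detector.on_allocation("imsi-001", "FACH"):   # e.g. an RRC state grant
    print("suspected signalling storm from imsi-001")
```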

    Performance and Analysis of Transfer Control Protocol Over Voice Over Wireless Local Area Network

    A thesis presented to the faculty of the College of Science and Technology at Morehead State University in partial fulfillment of the requirements for the Degree of Master of Science by Rajendra Patil in August 2008.

    A Specific Network Link and Path Likelihood Prediction Tool

    Communications have always been a crucial part of any military operation. As the pace of warfare and the technological complexity of weaponry have increased, so has the need for rapid information to assess battlefield conditions. Message passing across a network of communication nodes allows commanders to communicate with their forces. An accurate prediction of communication usage through a network therefore provides commanders with useful intelligence on friendly and unfriendly activities. A specific network link and path likelihood prediction tool gives strategic military commanders additional intelligence information and enables them to manage their limited resources more efficiently. In this study, Dijkstra's algorithm has been modified so that the analysis output of the Queueing Network Analyzer (QNA) acts as a node's goodness metric. QNA's calculation of the expected total sojourn time for the completion of queueing and service in a node provides an accurate measure of expected congestion. The modified Dijkstra's algorithm in the Generalized Network Analyzer (GNA) is verified and empirically validated to deliver traffic properly: it generates the fastest traffic path from a start node to a destination node. The implementation includes notification when input parameters exceed the network's processing capability. GNA's congestion control informs the user that certain network input parameters (PTR or BSTR) must be lowered, or identifies the nodes that must be improved to maintain stability. With this identification of unstable nodes, users can determine which nodes need attention and improvement. Once the instability is removed, good QoS is achieved and the analysis proceeds.
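
    A minimal sketch of the modification follows: standard Dijkstra's algorithm, except that the accumulated "distance" is the expected total sojourn time of the nodes on the path, as a QNA-style analysis would produce. The graph and delay values below are hypothetical; the real tool derives them from QNA output.

```python
import heapq

def fastest_path(adj, sojourn, start, goal):
    """adj: node -> neighbours; sojourn: node -> expected delay (queueing
    + service). Returns the minimum-total-sojourn path and its delay."""
    dist = {start: sojourn[start]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v in adj[u]:
            nd = d + sojourn[v]           # node weight, not edge weight
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [goal], goal             # walk the predecessors back
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

adj = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
delay = {"A": 1.0, "B": 5.0, "C": 2.0, "D": 1.0}   # hypothetical QNA output
print(fastest_path(adj, delay, "A", "D"))          # (['A', 'C', 'D'], 4.0)
```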

    Queueing networks: solutions and applications

    During the past two decades, queueing network models have proven to be a versatile tool for computer system and computer communication system performance evaluation. This chapter provides a survey of the field with a particular emphasis on applications. We start with a brief historical retrospective, which also serves to introduce the major issues and application areas. Formal results for product-form queueing networks are reviewed with particular emphasis on the implications for computer systems modeling. Computational algorithms, sensitivity analysis and optimization techniques are among the topics covered. Many of the important applications of queueing networks are not amenable to exact analysis, and an (often confusing) array of approximation methods has been developed over the years. A taxonomy of approximation methods is given and used as the basis for surveying the major approximation methods that have been studied. The application of queueing networks to a number of areas is surveyed, including computer system capacity planning, packet switching networks, parallel processing, database systems and availability modeling.
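
    As an example of the computational algorithms such a survey covers, here is exact Mean Value Analysis (MVA) for a closed product-form network with single-server FCFS stations; the two-station service demands are illustrative.

```python
def mva(demands, customers):
    """Exact MVA: demands[k] is the mean service demand at station k.
    Returns per-station mean queue lengths and system throughput."""
    K = len(demands)
    q = [0.0] * K                                  # queue lengths at n = 0
    for n in range(1, customers + 1):
        r = [demands[k] * (1.0 + q[k]) for k in range(K)]  # response times
        x = n / sum(r)                             # throughput, Little's law
        q = [x * r[k] for k in range(K)]
    return q, x

queues, throughput = mva(demands=[0.05, 0.02], customers=10)
print(f"throughput = {throughput:.2f} jobs/s, queue lengths = {queues}")
```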

    A Hierarchical Analysis Approach for High Performance Computing and Communication Applications

    The proliferation of high performance computers and high-speed networks has made parallel and distributed computing feasible and cost-effective on High Performance Computing and Communication (HPCC) systems. However, the design, analysis and development of parallel and distributed applications on such computing systems are still very challenging tasks, so there is a great need for an integrated multilevel analysis methodology to assist in designing and analyzing the performance of both existing and proposed systems. Currently, there are no comprehensive analysis methods that address such diverse needs. This paper presents a three-level hierarchical modeling approach for analyzing the end-to-end performance of an application running on an HPCC system. The overall system is partitioned into application level, protocol level and network level, and the functions at each level are modeled using queueing networks. Norton equivalence for queueing networks and equivalent-queue representations of complex sections of the network are employed to simplify the analysis. This approach enables the designer to study system performance for different types of networks and protocols and different design strategies. We use video-on-demand as an example application to show how the approach can be used to analyze end-to-end performance.
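
    A minimal sketch of the composition idea, assuming each level has already been reduced to a single equivalent M/M/1 queue (the paper uses Norton equivalence and equivalent-queue representations for such reductions); the rates below are hypothetical, not taken from the paper.

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable level: utilisation >= 1")
    return 1.0 / (service_rate - arrival_rate)

# Equivalent service rate (requests/s) of each level after reduction.
levels = {"application": 120.0, "protocol": 300.0, "network": 500.0}

arrival = 100.0     # e.g. video-on-demand request rate (requests/s)
end_to_end = sum(mm1_response_time(arrival, mu) for mu in levels.values())
print(f"end-to-end response time: {end_to_end * 1e3:.1f} ms")
```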

    QAware: A Cross-Layer Approach to MPTCP Scheduling

    Multipath TCP (MPTCP) allows applications to transparently use all available network interfaces by creating a TCP subflow per interface. One critical component of MPTCP is the scheduler that decides which subflow to use for each packet. Existing schedulers typically use estimates of end-to-end path properties, such as delay and bandwidth, for making the scheduling decisions. In this paper, we show that these scheduling decisions can be significantly improved by incorporating readily available local information from the device driver queues in the decision-making process. We propose QAware, a novel cross-layer approach for MPTCP scheduling. QAware combines end-to-end delay estimates with local queue buffer occupancy information and allows for a better and faster adaptation to the network conditions. This results in more efficient use of the available resources and considerable gains in aggregate throughput. We present the design of QAware and evaluate its performance through simulations and real experiments, comparing it to existing schedulers. Our results show that QAware performs significantly better than other available approaches for various use-cases and applications.

    Comment: In Proceedings of IFIP Networking 2018. Available at: https://files.ifi.uzh.ch/stiller/IFIP%20Networking%202018-Proceedings.pd
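
    The cross-layer idea can be sketched as follows: estimate, per subflow, the local delay a packet will incur draining the device-driver queue, add the end-to-end delay estimate, and send on the subflow with the smallest total. The scoring rule and figures below are illustrative assumptions, not QAware's exact formulation.

```python
from dataclasses import dataclass

@dataclass
class Subflow:
    name: str
    srtt_ms: float           # smoothed end-to-end RTT estimate
    queue_occupancy: int     # packets currently in the device-driver queue
    drain_ms_per_pkt: float  # time the NIC needs to send one queued packet

def pick_subflow(subflows):
    def expected_delay(sf):
        # local queueing delay before the packet even leaves the host,
        # plus the end-to-end path delay estimate
        return sf.queue_occupancy * sf.drain_ms_per_pkt + sf.srtt_ms
    return min(subflows, key=expected_delay)

flows = [
    Subflow("wifi", srtt_ms=20.0, queue_occupancy=900, drain_ms_per_pkt=0.05),
    Subflow("lte",  srtt_ms=45.0, queue_occupancy=50,  drain_ms_per_pkt=0.10),
]
print(pick_subflow(flows).name)   # "lte": wifi's driver queue is backed up
```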

    Cloud-based charging management of heterogeneous electric vehicles in a network of charging stations: price incentive vs. capacity expansion

    This paper presents a novel cloud-based charging management system for electric vehicles (EVs). Two levels of cloud computing, i.e., local and remote clouds, are employed to meet the different latency requirements of heterogeneous EVs while exploiting the lower-cost computing in remote clouds. Specifically, we consider time-sensitive EVs at highway exit charging stations and EVs with relaxed timing constraints at parking lot charging stations. We propose algorithms for the interplay among EVs, charging stations, the system operator, and the clouds. Considering the contention-based random access of EVs to a 4G Long-Term Evolution network, and the quality of service metrics (average waiting time and blocking probability), the model is composed of queueing-based cloud server planning, capacity planning in charging stations, delay analysis, and profit maximization. We propose and analyze a price-incentive method that shifts heavy load from peak to off-peak hours, a capacity expansion method that accommodates the peak demand by purchasing additional electricity, and a hybrid method of price incentive and capacity expansion that balances the immediate charging needs of customers against the alleviation of the peak power grid load through price-incentive-based demand control. Numerical results demonstrate the effectiveness of the proposed methods and elucidate the tradeoffs between them.
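
    Since blocking probability is one of the QoS metrics in the model, a small capacity-planning helper can illustrate the idea: the Erlang-B formula gives the blocking probability of a station with a given number of chargers, and the count is grown until a target is met. This is a simplified stand-in for the paper's queueing-based planning; the figures are hypothetical.

```python
def erlang_b(offered_load, servers):
    """Erlang-B blocking probability via the stable recurrence."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

def chargers_needed(offered_load, target_blocking):
    """Smallest number of chargers keeping blocking below the target."""
    c = 1
    while erlang_b(offered_load, c) > target_blocking:
        c += 1
    return c

load = 8.0 * 0.5   # 8 EV arrivals/hour x 0.5 h mean charging time (erlangs)
print(chargers_needed(load, target_blocking=0.02))   # -> 9 chargers
```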