
    Maximum Likelihood Estimation of Closed Queueing Network Demands from Queue Length Data

    Resource demand estimation is essential for the application of analytical models, such as queueing networks, to real-world systems. In this paper, we investigate maximum likelihood (ML) estimators for service demands in closed queueing networks with load-independent and load-dependent service times. Stemming from a characterization of necessary conditions for ML estimation, we propose new estimators that infer demands from queue-length measurements, which are inexpensive metrics to collect in real systems. One advantage of focusing on queue-length data compared to response times or utilizations is that confidence intervals can be rigorously derived from the equilibrium distribution of the queueing network model. Our estimators and their confidence intervals are validated against simulation and real system measurements for a multi-tier application.
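
    To make the approach above concrete, the following is a minimal, hypothetical sketch (not the paper's code): it estimates relative demands in a closed product-form network with load-independent stations by maximizing a composite likelihood of per-station queue-length samples, using Buzen's convolution algorithm for the normalizing constant. The station count, sample sizes, and synthetic data are assumptions made for illustration; the paper's estimators also cover load-dependent stations and joint state data.

    import numpy as np
    from scipy.optimize import minimize

    def normalizing_constants(demands, N):
        """Buzen's convolution algorithm: G(0..N) for load-independent stations."""
        G = np.zeros(N + 1)
        G[0] = 1.0
        for D in demands:
            for n in range(1, N + 1):
                G[n] += D * G[n - 1]
        return G

    def marginal_pmf(demands, N, m):
        """P(queue length at station m equals k), via P(n_m >= k) = D_m^k G(N-k)/G(N)."""
        G = normalizing_constants(demands, N)
        tail = np.array([demands[m] ** k * G[N - k] / G[N] for k in range(N + 1)] + [0.0])
        return tail[:-1] - tail[1:]

    def neg_log_likelihood(theta, samples, N):
        # Queue lengths identify demands only up to a common scale, so station 0's
        # demand is fixed to 1 and only the ratios are estimated. Treating per-station
        # samples as independent marginals is a simplification made for this sketch.
        demands = np.concatenate(([1.0], np.exp(theta)))
        nll = 0.0
        for m, obs in samples.items():
            pmf = marginal_pmf(demands, N, m)
            nll -= np.log(np.maximum(pmf[obs], 1e-300)).sum()
        return nll

    # Toy usage with synthetic (assumed) data: 2 stations, 5 circulating jobs.
    rng = np.random.default_rng(0)
    true_demands, N = np.array([1.0, 2.5]), 5
    samples = {m: rng.choice(N + 1, size=500, p=marginal_pmf(true_demands, N, m))
               for m in range(2)}
    fit = minimize(neg_log_likelihood, x0=np.zeros(1), args=(samples, N))
    print("estimated demand ratio D_1/D_0:", float(np.exp(fit.x[0])))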

    QMLE: a methodology for statistical inference of service demands from queueing data

    Estimating the demands placed by services on physical resources is an essential step for the definition of performance models. For example, scalability analysis relies on these parameters to predict queueing delays under increasing loads. In this paper, we investigate maximum likelihood (ML) estimators for demands at load-independent and load-dependent resources in systems with parallelism constraints. We define a likelihood function based on state measurements and derive necessary conditions for its maximization. We then obtain novel estimators that accurately and inexpensively infer service demands using only aggregate state data. With our approach, and thanks to approximation methods for computing marginal and joint distributions in the load-dependent case, confidence intervals can be rigorously derived, explicitly taking into account both the topology and the concurrency levels of the services. Our estimators and their confidence intervals are validated against simulations and real system measurements for two multi-tier applications, showing high accuracy even in the presence of load-dependent resources.
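
    In generic notation (ours, not the paper's), the confidence intervals mentioned above follow the standard asymptotic ML recipe: maximize the log-likelihood of the sampled states and invert the observed information. The paper's derivation, which accounts explicitly for topology and concurrency levels, may differ in detail.

    % Schematic only: ell(D) is the log-likelihood of the sampled states under demand vector D.
    \hat{D} = \arg\max_{D > 0} \ell(D), \qquad
    I(\hat{D}) = -\nabla^2 \ell(\hat{D}), \qquad
    \hat{D}_i \;\pm\; z_{1-\alpha/2} \, \sqrt{\bigl[\, I(\hat{D})^{-1} \bigr]_{ii}}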

    Queueing-Theoretic End-to-End Latency Modeling of Future Wireless Networks

    The fifth generation (5G) of mobile communication networks is envisioned to enable a variety of novel applications. These applications place diverse and challenging requirements on the network. Consequently, the mobile network must not only be capable of meeting the demands of any one of these applications, but also be flexible enough to be tailored to the different needs of various services. Among these new applications are use cases that require low latency as well as ultra-high reliability, e.g., to ensure unobstructed production in factory automation or road safety in (autonomous) transportation. In these domains the requirements are critical, since violating them may lead to financial damage or even harm to people. Hence, an ultra-low probability of failure is necessary. From this, two major questions arise that motivate this thesis. First, how can ultra-low failure probabilities be evaluated, given that experiments or simulations would require a tremendous number of runs and are therefore infeasible? Second, given a network that can be configured differently for different applications through the concept of network slicing, what performance can be expected under different parameter settings, and what is their optimal choice, particularly in the presence of other applications? In this thesis, both questions are answered by appropriate mathematical modeling of the radio interface and the radio access network. The aim is to find the distribution of the (end-to-end) latency, from which stochastic measures such as the mean, the variance, and ultra-high percentiles at the distribution tail can be extracted. The percentile analysis eventually leads to the desired evaluation of worst-case scenarios at ultra-low probabilities. To this end, queueing theory is used to study video streaming performance and one or multiple (low-latency) applications. One of the key contributions is the development of a numerical algorithm to obtain the latency of general queueing systems for homogeneous as well as prioritized heterogeneous traffic. This provides the foundation for analyzing and improving end-to-end latency for applications with known traffic distributions in arbitrary network topologies consisting of one or multiple network slices.
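
    As a much-simplified illustration of why tail percentiles call for analytic or numerical evaluation rather than simulation, the sketch below computes an ultra-high latency quantile for a textbook M/M/1 queue, whose FCFS sojourn time is exponential with rate mu - lambda. This is not the thesis's algorithm, which targets general and prioritized traffic; the arrival and service rates are made-up values.

    import math

    def mm1_latency_quantile(lam, mu, p):
        """Return t such that P(sojourn time <= t) = p for a stable M/M/1 queue."""
        assert lam < mu, "queue must be stable"
        return -math.log(1.0 - p) / (mu - lam)

    lam, mu = 800.0, 1000.0                    # packets per second (assumed values)
    for p in (0.5, 0.99, 1 - 1e-6):
        t = mm1_latency_quantile(lam, mu, p)
        print(f"{p:.6f}-quantile latency: {1e3 * t:.2f} ms")
    # Estimating the 1 - 1e-6 quantile by simulation would need well over a
    # million independent samples, which is what motivates analytic tail analysis.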

    Stability Problems for Stochastic Models: Theory and Applications II

    Most papers published in this Special Issue of Mathematics are written by the participants of the XXXVI International Seminar on Stability Problems for Stochastic Models, 21-25 June 2021, Petrozavodsk, Russia. The scope of the seminar embraces the following topics: Limit theorems and stability problems; Asymptotic theory of stochastic processes; Stable distributions and processes; Asymptotic statistics; Discrete probability models; Characterization of probability distributions; Insurance and financial mathematics; Applied statistics; Queueing theory; and other fields. This Special Issue contains 12 papers by specialists who represent 6 countries: Belarus, France, Hungary, India, Italy, and Russia.

    TCP performance enhancement in wireless networks via adaptive congestion control and active queue management

    The transmission control protocol (TCP) exhibits poor performance when used in error-prone wireless networks. Remedying this problem has been an active research area. However, a widely accepted and adopted solution is yet to emerge. Difficulties of an acceptable solution lie in the areas of compatibility, scalability, computational complexity, and the involvement of intermediate routers and switches. This dissertation reviews the current state-of-the-art solutions to TCP performance enhancement and pursues an end-to-end solution framework to the problem. The most noticeable cause of the performance degradation of TCP in wireless networks is the higher packet loss rate as compared to that in traditional wired networks. Packet loss type differentiation has been the focus of many proposed TCP performance enhancement schemes. Studies conducted by this dissertation research suggest that, besides the standard TCP's inability to discriminate congestion packet losses from losses related to wireless link errors, the standard TCP's additive increase and multiplicative decrease (AIMD) congestion control algorithm itself needs to be redesigned to achieve better performance in wireless, and particularly high-speed wireless, networks. This dissertation proposes a simple, efficient, and effective end-to-end solution framework that enhances TCP's performance through techniques of adaptive congestion control and active queue management. By end-to-end, it means a solution with no requirement that routers be wireless-aware or wireless-specific. TCP-Jersey has been introduced as an implementation of the proposed solution framework, and its performance metrics have been evaluated through extensive simulations. TCP-Jersey consists of an adaptive congestion control algorithm at the source by means of the source's achievable rate estimation (ARE), an adaptive filter of packet inter-arrival times; a congestion indication algorithm at the links (i.e., AQM) by means of packet marking; and an effective loss differentiation algorithm at the source through careful examination of the congestion marks carried by the duplicate acknowledgment packets (DUPACKs). Several improvements to the proposed TCP-Jersey have been investigated, including a more robust ARE algorithm, a less computationally intensive threshold marking algorithm as the AQM link algorithm, and a more stable congestion indication function based on virtual capacity at the link; performance results have been presented and analyzed via extensive simulations of various network configurations. Stability analysis of the proposed ARE-based additive increase and adaptive decrease (AIAD) congestion control algorithm has been conducted, and the analytical results have been verified by simulations. Performance of TCP-Jersey has been compared to that of a perfect, but not practical, TCP scheme, and encouraging results have been observed. Finally, the framework of the TCP-Jersey source algorithm has been extended and generalized to rate-based congestion control, as opposed to TCP's window-based congestion control, to provide a design platform for applications, such as real-time multimedia, that do not use TCP as the transport protocol yet need to control network congestion and combat packet losses in wireless networks.
    In conclusion, the framework architecture presented in this dissertation, which combines adaptive congestion control and active queue management to solve the TCP performance degradation problem in wireless networks, has been shown to be a promising answer due to its simple design philosophy, complete compatibility with current TCP/IP and AQM practice, end-to-end architecture for scalability, and high effectiveness with low computational overhead. The proposed implementation of the solution framework, namely TCP-Jersey, is a modification of the standard TCP protocol rather than a completely new design of the transport protocol. It is an end-to-end approach to the performance degradation problem, since it does not require split-mode connection establishment and maintenance using special wireless-aware software agents at the routers. The proposed solution also differs from solutions that rely on link-layer error notifications for packet loss differentiation. It is unique among proposed end-to-end solutions in that it differentiates packet losses attributed to wireless link errors from congestion-induced packet losses directly from the explicit congestion indication marks in the DUPACK packets, rather than inferring the loss type from packet delay or delay jitter as in many other proposals, or by undergoing a computationally expensive off-line training of a classification model (e.g., an HMM), or a Bayesian estimation/detection process that requires estimates of the a priori loss probability distributions of the different loss types. The proposed solution is also scalable and fully compatible with current practice in Internet congestion control and queue management, while adding a loss type differentiation function that effectively enhances TCP's performance over error-prone wireless networks. Limitations of the proposed solution architecture and areas for future research are also addressed.
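
    The following is an illustrative sketch only, in the spirit of the achievable rate estimation and adaptive decrease described above: a time-weighted filter of acknowledged bytes over ACK inter-arrival times, and a window update that shrinks to the estimated rate-delay product on explicitly marked DUPACKs. The exact filter, constants, and update rules of TCP-Jersey may differ, and all names here are ours, not the protocol's.

    class RateEstimator:
        """Time-weighted average of acknowledged bytes per second (illustrative)."""
        def __init__(self, rtt):
            self.rtt = rtt               # smoothed round-trip time, in seconds
            self.rate = 0.0              # estimated achievable rate, bytes per second
            self.last_ack_time = None

        def on_ack(self, now, acked_bytes):
            if self.last_ack_time is None:
                self.last_ack_time = now
                return self.rate
            dt = now - self.last_ack_time
            # Weight the previous estimate by one RTT and blend in the new sample,
            # so bursts of closely spaced ACKs do not make the estimate jump.
            self.rate = (self.rtt * self.rate + acked_bytes) / (self.rtt + dt)
            self.last_ack_time = now
            return self.rate

    def cwnd_on_congestion_mark(rate, rtt, mss):
        # "Adaptive decrease": instead of blindly halving, set the congestion window
        # to the estimated rate-delay product (in segments) when DUPACKs carry an
        # explicit congestion mark; unmarked DUPACKs would be treated as wireless
        # losses and trigger retransmission without shrinking the window.
        return max(2, int(rate * rtt / mss))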

    Assessing the Effect of High Performance Computing Capabilities on Academic Research Output

    This paper uses nonparametric methods and some new results on hypothesis testing with nonparametric efficiency estimators, and applies these to analyze the effect of locally-available high performance computing (HPC) resources on universities' efficiency in producing research and other outputs. We find that locally-available HPC resources enhance the technical efficiency of research output in Chemistry, Civil Engineering, Physics, and History, but not in Computer Science, Economics, or English; we find mixed results for Biology. Our results provide a critical first step toward a quantitative economic model for investments in HPC.
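
    As a hedged illustration of what a nonparametric efficiency estimator computes, the sketch below implements one common choice, an output-oriented, variable-returns-to-scale DEA score obtained from a small linear program. Whether the paper uses exactly this estimator is an assumption, and the department-level data are invented.

    import numpy as np
    from scipy.optimize import linprog

    def dea_output_efficiency(X, Y, o):
        """Efficiency of unit o given inputs X (n x p) and outputs Y (n x q).
        Returns a score in (0, 1]; 1 means the unit lies on the estimated frontier."""
        n = X.shape[0]
        c = np.r_[-1.0, np.zeros(n)]                   # maximize phi
        A_ub = np.vstack([
            np.c_[np.zeros((X.shape[1], 1)), X.T],     # sum_j lambda_j * x_j <= x_o
            np.c_[Y[o].reshape(-1, 1), -Y.T],          # phi * y_o <= sum_j lambda_j * y_j
        ])
        b_ub = np.r_[X[o], np.zeros(Y.shape[1])]
        A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)   # convexity: sum_j lambda_j = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (n + 1), method="highs")
        return 1.0 / res.x[0]

    # Toy usage: 4 departments, inputs = (staff, HPC core-hours), output = papers.
    X = np.array([[10.0, 5.0], [12.0, 1.0], [8.0, 4.0], [15.0, 6.0]])
    Y = np.array([[30.0], [20.0], [28.0], [33.0]])
    print([round(dea_output_efficiency(X, Y, o), 3) for o in range(4)])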