
    Buffer management and cell switching management in wireless packet communications

    Buffer management and cell switching (e.g., packet handoff) management using a buffer management scheme are studied for wireless packet communications. First, a throughput improvement method for multi-class services is proposed for wireless packet systems. Efficient traffic management schemes should be developed to provide seamless access to the wireless network. Specifically, it is proposed to regulate the buffer with the Selective-Delay Push-In (SDPI) scheme, which is applicable to scheduling both delay-tolerant non-real-time traffic and delay-sensitive real-time traffic. Simulation results show that the performance observed by real-time traffic is improved compared with the existing buffer priority scheme in terms of packet loss probability. Second, the performance of the proposed SDPI scheme is analyzed for a single CBR server. The arrival process is derived from the superposition of two traffic types, each in turn resulting from the superposition of homogeneous ON-OFF sources, which can be approximated by a two-state Markov Modulated Poisson Process (MMPP). The buffer mechanism enables the ATM layer to adapt the quality of cell transfer to the QoS requirements and to improve the utilization of network resources. This is achieved by selectively delaying and pushing in cells according to the class they belong to. Analytical expressions for various performance parameters are derived and numerical results are obtained. Simulation results in terms of cell loss probability agree with the numerical analysis. Finally, a novel cell-switching scheme based on a TDMA protocol is proposed to support QoS guarantees on the downlink. New packets and handoff packets are defined for each type of traffic, and a new cutoff prioritization scheme is devised at the buffer of the base station. A procedure for finding the optimal thresholds satisfying the QoS requirements is presented. Using the ON-OFF approximation for aggregate traffic, the packet loss probability and the average packet delay are computed. The performance of the proposed scheme is evaluated by simulation and numerical analysis in terms of packet loss probability and average packet delay.
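
    The push-in idea described above can be illustrated with a minimal Python sketch: a finite two-class buffer in which an arriving delay-sensitive cell may displace a queued delay-tolerant cell once the buffer is full. The buffer size, traffic mix and per-slot service probability are illustrative assumptions, not parameters or the exact SDPI algorithm from the thesis.

        import random

        BUFFER_SIZE = 16                      # illustrative capacity, not a value from the thesis
        REAL_TIME, NON_REAL_TIME = "rt", "nrt"

        def enqueue(buffer, cell_class):
            """Admit a cell into the finite buffer.

            Sketch of a selective push-in rule: when the buffer is full, an arriving
            real-time (delay-sensitive) cell displaces the most recently queued
            non-real-time (delay-tolerant) cell; otherwise the arriving cell is lost.
            """
            if len(buffer) < BUFFER_SIZE:
                buffer.append(cell_class)
                return True
            if cell_class == REAL_TIME:
                for i in range(len(buffer) - 1, -1, -1):   # scan from the tail
                    if buffer[i] == NON_REAL_TIME:
                        del buffer[i]
                        buffer.append(cell_class)
                        return True
            return False                                   # full, nothing to push out

        # Toy experiment: per-class loss probability under a random traffic mix.
        rng = random.Random(1)
        buf = []
        offered = {REAL_TIME: 0, NON_REAL_TIME: 0}
        lost = {REAL_TIME: 0, NON_REAL_TIME: 0}
        for _ in range(100_000):
            cls = REAL_TIME if rng.random() < 0.3 else NON_REAL_TIME
            offered[cls] += 1
            if not enqueue(buf, cls):
                lost[cls] += 1
            if buf and rng.random() < 0.6:                 # crude per-slot service
                buf.pop(0)
        print({c: round(lost[c] / offered[c], 4) for c in offered})

    Running the toy experiment shows the intended asymmetry: the real-time class sees a markedly lower loss probability than the non-real-time class, at the cost of extra delay for the delay-tolerant cells.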

    Quality aspects of Internet telephony

    Internet telephony has had a tremendous impact on how people communicate. Many now maintain contact using some form of Internet telephony. Therefore, the motivation for this work has been to address the quality aspects of real-world Internet telephony for both fixed and wireless telecommunication. The focus has been on the quality aspects of voice communication, since poor quality often leads to user dissatisfaction. The scope of the work has been broad in order to address the main factors within IP-based voice communication. The first four chapters of this dissertation constitute the background material. The first chapter outlines where Internet telephony is deployed today. It also motivates the topics and techniques used in this research. The second chapter provides the background on Internet telephony, including signalling, speech coding and voice internetworking. The third chapter focuses solely on quality measures for packetised voice systems, and the fourth chapter is devoted to the history of voice research. The appendix of this dissertation constitutes the research contributions. It includes an examination of the access network, focusing on how calls are multiplexed in wired and wireless systems. Subsequently, in the wireless case, we consider how to hand over calls from 802.11 networks to the cellular infrastructure. We then consider the Internet backbone, where most of our work is devoted to measurements specifically for Internet telephony. The applications of these measurements have been estimating telephony arrival processes, measuring call quality, and quantifying the trend in Internet telephony quality over several years. We also consider the end systems, since they are responsible for reconstructing a voice stream under loss and delay constraints. Finally, we estimate voice quality using the ITU PESQ proposal and the packet loss process. The main contribution of this work is a systematic examination of Internet telephony. We describe several methods to enable adaptable solutions for maintaining consistent voice quality. We have also found that relatively small technical changes can lead to substantial user quality improvements. A second contribution of this work is a suite of software tools designed to ascertain voice quality in IP networks. Some of these tools are in use within commercial systems today.
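
    The packet loss process referred to above is often approximated in the literature by a two-state Gilbert model. The Python sketch below, with illustrative transition and loss probabilities that are assumptions rather than values from the dissertation, generates such a loss trace and reports the overall loss rate and the mean loss-burst length, two quantities commonly fed into voice-quality estimates.

        import random

        def gilbert_loss_trace(n_packets, p_good_to_bad=0.05, p_bad_to_good=0.4,
                               loss_in_bad=0.8, seed=0):
            """Two-state Gilbert packet-loss model (illustrative parameters).

            The channel alternates between a Good state (assumed loss-free here) and a
            Bad state in which each packet is lost with probability loss_in_bad.
            Returns a list of booleans, True meaning the packet was lost.
            """
            rng = random.Random(seed)
            in_bad, trace = False, []
            for _ in range(n_packets):
                trace.append(in_bad and rng.random() < loss_in_bad)
                if in_bad:
                    if rng.random() < p_bad_to_good:
                        in_bad = False
                elif rng.random() < p_good_to_bad:
                    in_bad = True
            return trace

        trace = gilbert_loss_trace(100_000)
        loss_rate = sum(trace) / len(trace)
        runs = [len(r) for r in "".join("1" if lost else "0" for lost in trace).split("0") if r]
        print(f"loss rate ~ {loss_rate:.3f}, mean loss-burst length ~ {sum(runs) / len(runs):.2f}")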

    Performance modelling with adaptive hidden Markov models and discriminatory processor sharing queues

    In modern computer systems, workload varies at different times and locations. It is important to model the performance of such systems via workload models that are both representative and efficient. For example, model-generated workloads represent realistic system behaviour, especially during peak times, when it is crucial to predict and address performance bottlenecks. In this thesis, we model performance, namely throughput and delay, using adaptive models and discrete queues. Hidden Markov models (HMMs) parsimoniously capture the correlation and burstiness of workloads with spatiotemporal characteristics. By adapting the batch training of standard HMMs to incremental learning, online HMMs act as benchmarks on workloads obtained from live systems (i.e. storage systems and financial markets) and reduce the time complexity of the Baum-Welch algorithm. Similarly, by extending HMM capabilities to train on multiple traces simultaneously, workloads of different types are modelled in parallel by a multi-input HMM. Typically, the HMM-generated traces verify the throughput and burstiness of the real data. Applications of adaptive HMMs include predicting user behaviour in social networks and performance-energy measurements in smartphone applications. Equally important is measuring system delay through response times. For example, workloads such as Internet traffic arriving at routers are affected by queueing delays. To meet quality-of-service needs, queueing delays must be minimised and, hence, it is important to model and predict such queueing delays in an efficient and cost-effective manner. Therefore, we propose a class of discrete, processor-sharing queues for approximating queueing delay as response time distributions, which represent service level agreements at specific spatiotemporal levels. We adapt discrete queues to model job arrivals with distributions given by a Markov-modulated Poisson process (MMPP) and served under discriminatory processor-sharing scheduling. Further, we propose a dynamic strategy of service allocation to minimise delays in UDP traffic flows whilst maximising a utility function.
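
    The claim that HMM-generated traces reproduce the throughput and burstiness of measured workloads can be illustrated by sampling from a fitted model. The Python sketch below draws a synthetic arrival trace from a two-state hidden Markov model with Poisson emissions, a discrete-time analogue of an MMPP; the transition matrix and per-state rates are placeholders, not parameters estimated in the thesis.

        import math
        import random

        # Illustrative 2-state HMM with Poisson emissions: state 0 = quiet, state 1 = bursty.
        TRANSITIONS = [[0.95, 0.05],          # row i = P(next state | current state i)
                       [0.20, 0.80]]
        RATES = [2.0, 20.0]                   # mean arrivals per time slot in each hidden state

        def sample_poisson(lam, rng):
            """Knuth's multiplication method for a Poisson variate with mean lam."""
            threshold, k, p = math.exp(-lam), 0, 1.0
            while True:
                p *= rng.random()
                if p <= threshold:
                    return k
                k += 1

        def generate_trace(n_slots, seed=0):
            """Sample hidden states from the Markov chain, then arrivals from that state's rate."""
            rng, state, trace = random.Random(seed), 0, []
            for _ in range(n_slots):
                trace.append(sample_poisson(RATES[state], rng))
                state = 0 if rng.random() < TRANSITIONS[state][0] else 1
            return trace

        trace = generate_trace(5_000)
        mean = sum(trace) / len(trace)
        variance = sum((x - mean) ** 2 for x in trace) / len(trace)
        print(f"mean arrivals/slot ~ {mean:.2f}, index of dispersion ~ {variance / mean:.2f}")

    A bursty trace yields an index of dispersion well above 1, whereas a plain Poisson stream of the same mean would give a value close to 1; comparing these statistics between real and generated traces is one simple way to check the fitted model.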

    Loss Performance Modeling for Hierarchical Heterogeneous Wireless Networks With Speed-Sensitive Call Admission Control


    Cellular networks for smart grid communication

    The next-generation electric power system, known as smart grid, relies on a robust and reliable underlying communication infrastructure to improve the efficiency of electricity distribution. Cellular networks, e.g., LTE/LTE-A systems, appear as a promising technology to facilitate the smart grid evolution. Their inherent performance characteristics and well-established ecosystem could potentially unlock unprecedented use cases, enabling real-time and autonomous distribution grid operations. However, cellular technology was not originally intended for smart grid communication, which demands highly reliable message exchange and massive device connectivity. The fundamental differences between smart grid and human-type communication challenge the classical design of cellular networks and introduce important research questions that have not been sufficiently addressed so far. Motivated by these challenges, this doctoral thesis investigates novel radio access network (RAN) design principles and performance analysis for the seamless integration of smart grid traffic in future cellular networks. Specifically, we focus on addressing the fundamental RAN problems of network scalability in massive smart grid deployments and radio resource management for smart grid and human-type traffic. The main objective of the thesis lies in the design, analysis and performance evaluation of RAN mechanisms that would render cellular networks the key enabler for emerging smart grid applications. The first part of the thesis addresses the radio access limitations in LTE-based networks for reliable and scalable smart grid communication. We first identify the congestion problem in LTE random access that arises in large-scale smart grid deployments. To overcome this, a novel random access mechanism is proposed that can efficiently support real-time distribution automation services with negligible impact on the background traffic. Motivated by the stringent reliability requirements of various smart grid operations, we then develop an analytical model of the LTE random access procedure that allows us to assess the performance of event-based monitoring traffic under various load conditions and network configurations. We further extend our analysis to include the relation between the cell size and the availability of orthogonal random access resources, and we identify an additional challenge for reliable smart grid connectivity. To this end, we devise an interference- and load-aware cell planning mechanism that enhances reliability in substation automation services. Finally, we couple the problem of state estimation in wide-area monitoring systems with the reliability challenges in information acquisition. Using our developed analytical framework, we quantify the impact of imperfect communication reliability on state estimation accuracy and provide useful insights for the design of reliability-aware state estimators. The second part of the thesis builds on the previous one and focuses on the RAN problem of resource scheduling and sharing for smart grid and human-type traffic. We introduce a novel scheduler that achieves low latency for distribution automation traffic while resource allocation is performed in a way that keeps the degradation of cellular users at a minimum level. In addition, we investigate the benefits of the Device-to-Device (D2D) transmission mode for event-based message exchange in substation automation scenarios.
We design a joint mode selection and resource allocation mechanism which results in higher data rates with respect to the conventional transmission mode via the base station. An orthogonal resource partition scheme between cellular and D2D links is further proposed to prevent the underutilization of the scarce cellular spectrum. The research findings of this thesis aim to deliver novel solutions to important RAN performance issues that arise when cellular networks support smart grid communication.
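
    A back-of-the-envelope view of the random-access congestion problem identified above: with a pool of orthogonal preambles and many devices contending in the same random-access opportunity, the probability that a tagged device picks a preamble chosen by no one else falls quickly with the number of contenders. The simplified Python collision model below is an illustrative assumption, not the analytical model developed in the thesis; the default of 54 contention-based preambles is a common LTE configuration used here only as an example.

        def ra_success_probability(n_devices, n_preambles=54):
            """Probability that a tagged device's preamble collides with no other device,
            assuming each of n_devices picks one of n_preambles uniformly and
            independently within the same random-access opportunity."""
            return (1.0 - 1.0 / n_preambles) ** (n_devices - 1)

        for n in (10, 100, 1_000, 5_000):
            print(f"{n:5d} contending devices -> P(no preamble collision) ~ "
                  f"{ra_success_probability(n):.3f}")

    In a massive smart grid deployment the success probability collapses towards zero, which is the congestion effect the proposed random access mechanism is designed to mitigate.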

    Queuing Modelling and Performance Analysis of Content Transfer in Information Centric Networks

    With the rapid development of multimedia services and wireless technology, new generations of network traffic, such as short-form video and live streaming, have put tremendous pressure on the current network infrastructure. To meet the high bandwidth and low latency needs of this new generation of traffic, the focus of Internet architecture has moved from host-centric end-to-end communication to requester-driven content retrieval. This shift has motivated the development of Information-Centric Networking (ICN), a promising new paradigm for the future Internet. ICN aims to improve information retrieval on the Internet by identifying and routing data using unified names. In-network caching and the use of a pending interest table (PIT) are two key features of ICN designed to efficiently handle bulk data dissemination and retrieval, as well as reduce bandwidth consumption. Performance analysis has been, and continues to be, a key research interest in ICN. This thesis starts with the evaluation of content delivery delay in ICN. Content delivery delay is composed of propagation delay, transmission delay, processing delay and queueing delay. To characterize these components, queueing network theory is combined with the cache miss rate to model the content delivery time in ICN. Moreover, different topologies and network conditions are taken into account to evaluate the performance of content transfer in ICN. ICN is intrinsically compatible with wireless networks. To evaluate the performance of content transfer in wireless networks, an analytical model of the mean service time under consumer and provider mobility is proposed. The accuracy of the analytical model is validated through extensive simulation experiments. Finally, the analytical model is used to evaluate the impact of key metrics, such as the cache size, content size and content popularity, on the performance of the PIT and content transfer in ICN. The pending interest table (PIT) is one of the essential components of the ICN forwarding plane and is responsible for stateful routing in ICN. It also aggregates identical interests to alleviate request flooding and network congestion. This aggregation feature improves the performance of content delivery in ICN, so an analytical model that characterizes the impact of the PIT on content delivery time allows a more precise evaluation of content transfer performance. In parallel, if the size of the PIT is not properly determined, the interest drop rate may be too high, resulting in a reduction in quality of service for consumers as their requests have to be retransmitted. Furthermore, the PIT is a costly resource, as it must operate at wire speed in the forwarding plane. Therefore, to ensure that the interest drop rate stays below the required level, an analytical model of PIT occupancy is developed to determine the minimum PIT size. In this thesis, the proposed analytical models are used to efficiently and accurately evaluate the performance of ICN content transfer and to investigate the key components of the ICN forwarding plane. Leveraging the insights discovered by these analytical models, the minimal PIT size and a proper interest timeout can be determined to enhance the performance of ICN. Finally, to extend the outcomes achieved in the thesis, several interesting yet challenging research directions are pointed out.
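
    The way the cache miss rate enters the content delivery delay can be conveyed with a toy two-tier model in Python: a request is answered by the edge cache with probability equal to the hit ratio and otherwise travels on to the content origin, with each hop adding propagation delay and an M/M/1-style queueing delay. The single-queue-per-hop assumption and all numerical values are illustrative, not the queueing-network model of the thesis.

        def mm1_queueing_delay(arrival_rate, service_rate):
            """Mean waiting time in queue for an M/M/1 queue (requires arrival < service)."""
            assert arrival_rate < service_rate, "queue must be stable"
            rho = arrival_rate / service_rate
            return rho / (service_rate - arrival_rate)

        def mean_delivery_delay(hit_ratio, d_edge, d_origin, request_rate, mu_edge, mu_origin):
            """Expected delivery delay in a toy two-tier ICN.

            A hit is served at the edge (propagation d_edge plus edge queueing); a miss
            additionally crosses the core to the origin, whose queue only sees the
            miss stream of rate request_rate * (1 - hit_ratio).
            """
            edge_delay = d_edge + mm1_queueing_delay(request_rate, mu_edge)
            origin_delay = d_origin + mm1_queueing_delay(request_rate * (1 - hit_ratio), mu_origin)
            return edge_delay + (1 - hit_ratio) * origin_delay

        # Illustrative parameters: delays in milliseconds, rates in requests per millisecond.
        print(mean_delivery_delay(hit_ratio=0.6, d_edge=2.0, d_origin=20.0,
                                  request_rate=0.5, mu_edge=1.0, mu_origin=2.0))

    Raising the hit ratio shifts weight away from the slower origin path, which is the effect that coupling the cache miss rate with the queueing model is meant to quantify.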

    Performance Modelling and Resource Allocation of the Emerging Network Architectures for Future Internet

    With the rapid development of information and communications technologies, the traditional network architecture has approached its performance limit and is thus unable to meet the requirements of various resource-hungry applications. Significant infrastructure improvements to the network domain are urgently needed to guarantee continued network evolution and innovation. To address this important challenge, tremendous research efforts have been made to foster the evolution towards the Future Internet. Long-Term Evolution Advanced (LTE-A), Software Defined Networking (SDN) and Network Function Virtualisation (NFV) have been proposed as key promising network architectures for the Future Internet and have attracted significant attention in the networking and telecom community. This research mainly focuses on the performance modelling and resource allocation of these three architectures. The major contributions are three-fold: 1) LTE-A has been proposed by the 3rd Generation Partnership Project (3GPP) as a promising candidate for the evolution of LTE wireless communication. One of the major features of LTE-A is the concept of Carrier Aggregation (CA). CA enables network operators to exploit the fragmented spectrum and increase the peak transmission data rate; however, this innovation introduces serious load imbalance in the radio resource allocation of LTE-A. To alleviate this problem, a novel QoS-aware resource allocation scheme, termed the Cross-CC User Migration (CUM) scheme, is proposed in this research to support real-time services, taking into consideration the system throughput, user fairness and QoS constraints. 2) SDN is an emerging technology towards the next-generation Internet. In order to improve the performance of SDN networks, a preemption-based packet-scheduling scheme is first proposed in this research to improve global fairness and reduce the packet loss rate in the SDN data plane. Furthermore, in order to achieve a comprehensive and deeper understanding of the performance behaviour of SDN networks, this work develops two analytical models to investigate the performance of SDN under Poisson and Markov Modulated Poisson Process (MMPP) traffic, respectively. 3) NFV is regarded as a disruptive technology for telecommunication service providers to reduce Capital Expenditure (CAPEX) and Operational Expenditure (OPEX) by decoupling individual network functions from the underlying hardware devices. However, NFV faces the significant challenge of guaranteeing Service-Level Agreements (SLAs) during service provisioning. In order to bridge this gap, a novel comprehensive analytical model based on stochastic network calculus is proposed in this research to investigate the end-to-end performance of NFV networks. The resource allocation strategies proposed in this study significantly improve network performance in terms of packet loss probability, global allocation fairness and throughput per user in LTE-A and SDN networks; the analytical models designed in this study can accurately predict the performance of SDN and NFV networks. Both theoretical analysis and simulation experiments are conducted to demonstrate the effectiveness of the proposed algorithms and the accuracy of the designed models. In addition, the models are used as practical and cost-effective tools to pinpoint the performance bottlenecks of SDN and NFV networks under various network conditions.
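
    The flavour of the NFV end-to-end analysis can be conveyed with a deterministic network-calculus bound, used in the Python sketch below as a simplified stand-in for the stochastic network calculus employed in the thesis: a flow constrained by a token-bucket arrival curve a(t) = b + r*t crossing a latency-rate server B(t) = R*(t - T)+ with R >= r has delay at most T + b/R, and a chain of such servers concatenates into a single latency-rate curve with rate min(R_i) and latency sum(T_i). All parameters below are illustrative.

        def delay_bound(burst_b, rate_r, service_rate_R, latency_T):
            """Deterministic network-calculus delay bound for a token-bucket flow
            (burst burst_b, sustained rate rate_r) crossing a latency-rate server
            (rate service_rate_R, latency latency_T); requires service_rate_R >= rate_r."""
            assert service_rate_R >= rate_r, "service rate must cover the sustained arrival rate"
            return latency_T + burst_b / service_rate_R

        def end_to_end_delay_bound(burst_b, rate_r, hops):
            """Concatenate latency-rate servers: the end-to-end service curve is again
            latency-rate, with rate min(R_i) and latency sum(T_i), so a single bound
            covers the whole chain of virtual network functions."""
            min_rate = min(R for R, _ in hops)
            total_latency = sum(T for _, T in hops)
            return delay_bound(burst_b, rate_r, min_rate, total_latency)

        # Illustrative chain of three VNFs (R_i, T_i); any consistent units for data, rate and time.
        print(end_to_end_delay_bound(burst_b=1.5, rate_r=10.0,
                                     hops=[(50.0, 0.5), (40.0, 1.0), (60.0, 0.3)]))

    The bound tightens as the slowest function's service rate grows and as per-function latencies shrink, mirroring the trade-offs an SLA-guarantee analysis explores.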