127 research outputs found

    On the Throughput of Large-but-Finite MIMO Networks using Schedulers

    This paper studies the sum throughput of multi-user multiple-input-single-output (MISO) networks with a large but finite number of transmit antennas and users. Considering continuous and bursty communication scenarios with different users' data-request probabilities, we derive quasi-closed-form expressions for the maximum achievable throughput of the networks using optimal schedulers. The results are obtained for various cases with different levels of interference cancellation. We also develop an efficient scheduling scheme using genetic algorithms (GAs) and evaluate the effect of different parameters, such as channel/precoding models, the number of antennas/users, scheduling costs and power amplifier efficiency, on the system performance. Finally, we use recent results on the achievable rates of finite-block-length codes to analyze the system performance with short packets. As demonstrated, the proposed GA-based scheduler reaches (almost) the same throughput as the exhaustive-search-based optimal scheduler with substantially less implementation complexity. Moreover, power amplifier inefficiency and scheduling delay significantly affect the performance of scheduling-based systems.
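
The GA-based scheduler can be illustrated with a minimal sketch in which a population of binary user-selection vectors is evolved toward maximum sum throughput. The interference model, fitness function and GA parameters below are illustrative assumptions, not the paper's exact formulation.

```python
import math
import random

def sum_throughput(selection, gains, noise=1.0):
    """Sum rate of the scheduled users under a simple (assumed) model:
    every scheduled user is interfered by all other scheduled users."""
    active = [g for g, s in zip(gains, selection) if s]
    total = 0.0
    for g in active:
        interference = sum(active) - g
        total += math.log2(1.0 + g / (noise + interference))
    return total

def ga_schedule(gains, pop_size=30, generations=50, p_mut=0.05, rng=None):
    """Evolve binary user-selection vectors with tournament selection,
    one-point crossover and bit-flip mutation; keep the best ever seen."""
    rng = rng or random.Random(0)
    n = len(gains)

    def fitness(s):
        return sum_throughput(s, gains)

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        # tournament selection of parents
        parents = [max(rng.sample(pop, 3), key=fitness) for _ in range(pop_size)]
        pop = []
        for i in range(0, pop_size, 2):
            a, b = parents[i], parents[(i + 1) % pop_size]
            cut = rng.randrange(1, n)
            for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                # bit-flip mutation
                pop.append([1 - bit if rng.random() < p_mut else bit
                            for bit in child])
        best = max(pop + [best], key=fitness)  # elitism
    return best, fitness(best)
```

For a handful of users, exhaustive search over all 2^n subsets is still feasible, which is how an optimal benchmark can be reproduced; the GA's advantage appears as n grows.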

    A Genetic Algorithm-based Beamforming Approach for Delay-constrained Networks

    In this paper, we study the performance of initial access beamforming schemes in cases with a large but finite number of transmit antennas and users. In particular, we develop an efficient beamforming scheme using genetic algorithms. Moreover, taking millimeter wave communication characteristics and different metrics into account, we investigate the effect of various parameters, such as the number of antennas/receivers, the beamforming resolution and hardware impairments, on the system performance. As shown, our proposed algorithm is generic in the sense that it can be effectively applied with different channel models, metrics and beamforming methods. Also, our results indicate that the proposed scheme can reach (almost) the same end-to-end throughput as the exhaustive-search-based optimal approach with considerably less implementation complexity.
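
The exhaustive-search baseline that the GA approximates can be sketched as a plain beam sweep: every codeword of a DFT codebook is tested against the channel and the one with the largest beamforming gain is kept. The uniform-linear-array channel and codebook below are illustrative assumptions, not the paper's model.

```python
import cmath
import math

def dft_codebook(n_antennas, n_beams):
    """Columns of a DFT matrix used as candidate beamforming vectors."""
    return [[cmath.exp(2j * math.pi * a * b / n_beams) / math.sqrt(n_antennas)
             for a in range(n_antennas)]
            for b in range(n_beams)]

def array_response(n_antennas, aod, spacing=0.5):
    """Steering vector of a uniform linear array for a given angle of
    departure (radians) and antenna spacing in wavelengths."""
    return [cmath.exp(2j * math.pi * spacing * a * math.sin(aod))
            for a in range(n_antennas)]

def best_beam(codebook, channel):
    """Exhaustive sweep: return the index and gain |w^H h|^2 of the best codeword."""
    gains = [abs(sum(w[i].conjugate() * channel[i]
                     for i in range(len(channel)))) ** 2
             for w in codebook]
    idx = max(range(len(gains)), key=gains.__getitem__)
    return idx, gains[idx]
```

Sweeping all codewords costs one inner product per beam (and per user), which is exactly the complexity a GA-based refinement tries to avoid at high beamforming resolutions.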

    Packet scheduling in satellite LTE networks employing MIMO technology.

    Doctor of Philosophy in Electronic Engineering. University of KwaZulu-Natal, Durban, 2014. Rapid growth in the number of mobile users and the ongoing demand for different types of telecommunication services from mobile networks have driven the need for new technologies that provide high data rates and satisfy the respective Quality of Service (QoS) requirements, irrespective of user location. The satellite component will play a vital role in these new technologies, since the terrestrial component cannot provide global coverage due to economic and technical limitations. This has led to the emergence of satellite Long Term Evolution (LTE) networks employing Multiple-Input Multiple-Output (MIMO) technology. To achieve the set QoS targets, required data rates and fairness among users with different traffic demands in a satellite LTE network, it is crucial to design effective scheduling and sub-channel allocation schemes that provide an optimal balance of these requirements. It is against this background that this study investigates packet scheduling in satellite LTE networks employing MIMO technology. One of the main foci of this study is to propose new cross-layer packet scheduling schemes, tagged the Queue Aware Fair (QAF) and Channel Based Queue Sensitive (CBQS) scheduling schemes. The proposed schemes are designed to improve both fairness and network throughput without compromising users' QoS demands, as they provide a good trade-off between throughput, QoS demands and fairness. They also improve network performance in comparison with other scheduling schemes, as determined through simulations. Because recent schedulers provide a trade-off among the major performance indices, a new performance index, tagged the Scheduling Performance Metric (SPM), is derived to evaluate the overall performance of each scheduler.
The study also investigates the impact of the long propagation delay and of different effective isotropic radiated powers on the performance of the satellite LTE network; the results show that both have a significant impact. To obtain an optimal scheduling scheme for the satellite LTE network, the scheduling problem is formulated as an optimization problem and a solution is derived using Karush-Kuhn-Tucker (KKT) multipliers. The resulting Near Optimal Scheduling Scheme (NOSS), which aims to maximize network throughput without compromising users' QoS demands and fairness, provides better throughput and spectral-efficiency performance than other schedulers, as shown through simulations. Based on the new SPM, the proposed NOSS1 and NOSS2 outperform the other schedulers. A stability analysis, using a fluid-limit technique, is also presented to determine whether the proposed scheduler yields a stable network. Finally, a sub-channel allocation scheme, tagged the Utility Auction Based (UAB) sub-channel allocation scheme, is proposed with the aim of providing a better sub-channel, or Physical Resource Block (PRB), allocation method that improves the system performance of the satellite LTE network. The results, obtained through simulations, show that the proposed method performs better than the alternative scheme.
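
To show how Karush-Kuhn-Tucker multipliers turn a resource-allocation problem of this kind into a closed-form rule, the sketch below solves the textbook water-filling problem: maximize the sum of log(1 + g_i * p_i) over channels subject to a total power budget. This is a generic illustration, not the thesis's NOSS formulation.

```python
def waterfill(gains, power, iters=100):
    """Solve max sum_i log(1 + g_i * p_i) s.t. sum_i p_i = power, p_i >= 0.
    The KKT conditions yield p_i = max(0, mu - 1/g_i); the water level mu
    is found here by bisection on the total power used."""
    lo, hi = 0.0, power + max(1.0 / g for g in gains)
    for _ in range(iters):
        mu = (lo + hi) / 2.0
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > power:
            hi = mu   # water level too high, tighten from above
        else:
            lo = mu   # budget not exhausted, raise the level
    return [max(0.0, mu - 1.0 / g) for g in gains]
```

Stronger channels receive more power, and channels whose inverse gain lies above the water level are switched off entirely, the same "serve the good users" behavior that scheduling-based designs exploit.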

    A Comparison of Beam Refinement Algorithms for Millimeter Wave Initial Access

    Initial access (IA) is identified as a key challenge for the upcoming 5G mobile communication system operating at high carrier frequencies, and several techniques are currently being proposed. In this paper, we extend our previously proposed genetic algorithm (GA)-based beam refinement scheme to include beamforming at both the transmitter and the receiver, and compare its performance with alternative approaches in millimeter wave multi-user multiple-input-multiple-output (MU-MIMO) networks. Taking the millimeter wave communication characteristics and various metrics into account, we investigate the effect of different parameters, such as the number of transmit antennas/users/per-user receive antennas, the beamforming resolution and hardware impairments, on the system performance under different beam refinement algorithms. As shown, our proposed GA-based approach performs well in delay-constrained networks with multi-antenna users. Compared to the considered state-of-the-art schemes, our method reaches the highest service-outage-constrained end-to-end throughput with considerably less implementation complexity. Moreover, taking the users' mobility into account, the GA-based approach can remarkably reduce the beam refinement delay at low/moderate speeds when spatial correlation is taken into account.

    Genetic Algorithm-Based Beam Refinement for Initial Access in Millimeter Wave Mobile Networks

    Initial access (IA) is identified as a key challenge for the upcoming 5G mobile communication system operating at high carrier frequencies, and several techniques are currently being proposed. In this paper, we extend our previously proposed efficient genetic algorithm (GA)-based beam refinement scheme to include beamforming at both the transmitter and the receiver, and compare its performance with alternative approaches in millimeter wave multiuser multiple-input-multiple-output (MU-MIMO) networks. Taking the millimeter wave communication characteristics and various metrics into account, we investigate the effect of different parameters, such as the number of transmit antennas/users/per-user receive antennas, beamforming resolutions, and hardware impairments, on the system performance under different beam refinement algorithms. As shown, our proposed GA-based approach performs well in delay-constrained networks with multiantenna users. Compared to the considered state-of-the-art schemes, our method reaches the highest service-outage-constrained end-to-end throughput with considerably less implementation complexity. Moreover, taking the users' mobility into account, our GA-based approach can remarkably reduce the beam refinement delay at low/moderate speeds when spatial correlation is taken into account. Finally, we compare the cases of collaborative and noncollaborative users and evaluate their difference in system performance.

    Architectural Model for Evaluating Space Communication Networks

    The space exploration endeavor started in 1957 with the launch and operation of the first man-made satellite, the Soviet Sputnik 1. Since then, multiple space programs have been developed, pushing the limits of technology and science and, above all, unveiling the mysteries of the universe. In all these cases, the need for flexible and reliable communication systems has been paramount, allowing the return of collected science data and, when necessary, ensuring the well-being and safety of astronauts. To that end, multiple space communication networks have been deployed globally, be it through geographically distributed ground assets or through space relay satellites. Until now, most of these systems have relied upon mature technology standards adapted to the specific needs of particular missions and customers. Nevertheless, current trends in space programs suggest that a shift of paradigm is needed: an Internet-like space network would increase the capacity and reliability of an interplanetary network while dramatically reducing its overall costs. In this context, the System Architecting Paradigm can be a good starting point: through its formal decomposition of the system, it can help determine the architecturally distinguishing decisions and identify potential areas of commonality and cost reduction. This thesis presents a general framework to evaluate space communication relay systems for the near-Earth domain. It indicates the sources of complexity in the modeling process and discusses the validity and appropriateness of past approaches to the problem. In particular, it proposes a discussion of current models vis-à-vis the System Architecting Paradigm and how they fit into tradespace exploration studies. Next, the thesis introduces a computational performance model for the analysis and fast simulation of space relay satellite systems.
The tool takes advantage of a purpose-built rule-based expert system to store the constitutive elements of the architecture and perform logical interactions between them. Analogously, it uses numerical models to assess the network topology over a given timeframe, perform physical-layer computations and calculate plausible schedules for the overall system. In particular, it presents a newly developed heuristic scheduler that guarantees prioritization of specific missions and services while ensuring manageable computation times.
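
The heuristic contact scheduler can be sketched as a greedy priority rule: requests are served in decreasing priority order, each placed in the earliest free interval inside its visibility window on a single shared antenna. The request format and single-resource assumption below are illustrative, not the thesis's actual algorithm.

```python
def schedule_contacts(requests, horizon):
    """Greedy priority scheduling on one shared ground antenna.
    Each request is (name, priority, window_start, window_end, duration);
    higher priority is served first, intervals are half-open [start, end)."""
    busy = []   # occupied (start, end) intervals
    plan = {}   # name -> assigned (start, end)
    for name, prio, ws, we, dur in sorted(requests, key=lambda r: -r[1]):
        t = ws
        for s, e in sorted(busy):
            if t + dur <= s:
                break            # fits in the gap before this interval
            if t < e:
                t = e            # slide past the occupied interval
        if t + dur <= min(we, horizon):
            busy.append((t, t + dur))
            plan[name] = (t, t + dur)
    return plan
```

Because requests are placed one at a time, the run time stays manageable even for long horizons, at the cost of occasionally rejecting low-priority contacts that a global optimizer could still have fit.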

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as a Final Publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to give a better discernment of the domain at hand, their representations become increasingly demanding of computational and data resources. High Performance Computing, for its part, typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Advances in Grid Computing

    This book approaches grid computing with a perspective on the latest achievements in the field, providing insight into current research trends and advances and presenting a wide range of innovative research papers. The topics covered include resource and data management, grid architectures and development, and grid-enabled applications. New ideas employing heuristic methods from swarm intelligence or genetic algorithms and quantum encryption are considered in order to address the two main aspects of grid computing: resource management and data management. The book also covers aspects of grid computing regarding architecture and development, and includes a diverse range of applications, such as a possible human grid computing system, simulation of the fusion reaction, ubiquitous healthcare service provisioning and complex water systems.

    Radio Communications

    Over the last decades, the relentless evolution of information and communication technologies (ICT) has brought about a deep transformation of our habits. The growth of the Internet and advances in hardware and software implementations have modified the way we communicate and share information. In this book, an overview of the major issues faced today by researchers in the field of radio communications is given through 35 high-quality chapters written by specialists working in universities and research centers all over the world. Various aspects are discussed in depth: channel modeling, beamforming, multiple antennas, cooperative networks, opportunistic scheduling, advanced admission control, handover management, system performance assessment, routing under mobility, localization, and web security. Advanced techniques for radio resource management are discussed for both single and multiple radio technologies, in infrastructure, mesh and ad hoc networks.