1,865 research outputs found

    Modeling and Optimization of Next-Generation Wireless Access Networks

    The ultimate goal of next-generation access networks is to provide all network users, whether fixed or mobile, indoor or outdoor, with high-data-rate connectivity while ensuring a high quality of service. To realize this ambitious goal, delay, jitter, error rate and packet loss should be minimized, which can only be achieved by integrating different technologies, including passive optical networks, fourth-generation wireless networks and femtocells, among others. This thesis focuses on the medium access control and physical layers of future networks. The first part discusses techniques to improve the end-to-end quality of service in hybrid optical-wireless networks, in which users are connected to a wireless base station that relays their data to the core network over an optical connection. By integrating the wireless and optical parts of these networks, a smart scheduler can predict the traffic arriving at the optical network; these predictions are then used to design a traffic-aware dynamic bandwidth assignment algorithm that reduces the end-to-end delay. The second part addresses the challenging problem of interference management in a two-tier macrocell/femtocell network. A high-quality, high-speed connection for indoor users is ensured only if the network has a high signal-to-noise ratio, a requirement that can be fulfilled by deploying femtocells in cellular networks. However, since femtocells generate harmful interference to nearby macrocell users, careful analysis and realistic models are needed to manage the introduced interference. A realistic model for femtocell interference outside suburban houses is therefore proposed, and several performance measures, e.g., the signal-to-interference-plus-noise ratio and the outage probability, are derived mathematically for further analysis. The quality of service of cellular networks can also be degraded by other factors; in industrial environments, for example, simultaneous fading and strong impulsive noise significantly deteriorate the error-rate performance. The third part of this thesis presents a technique to improve the bit error rate of orthogonal frequency division multiplexing (OFDM) systems in industrial environments. OFDM is the most widely used transmission technology in next-generation networks and is very susceptible to impulsive noise, especially in fading channels. Mathematical analysis proves that the proposed method effectively mitigates the degradation caused by impulsive noise and significantly improves the signal-to-interference-plus-noise ratio and the bit error rate, even in frequency-selective fading channels.
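
    As an illustrative aside (not the thesis's own method): the impulsive-noise problem described above is classically handled with a blanking nonlinearity that zeroes received time-domain samples whose magnitude exceeds a threshold before the FFT. The Python sketch below simulates this under assumed parameters (a 256-subcarrier QPSK-OFDM signal, a Bernoulli-Gaussian impulse model and a median-based threshold), purely to make the idea concrete.

        import numpy as np

        rng = np.random.default_rng(0)

        N = 256            # subcarriers per OFDM symbol (assumed)
        num_symbols = 200  # OFDM symbols to simulate

        # Gray-mapped QPSK on each subcarrier, unit average power
        bits = rng.integers(0, 2, size=(num_symbols, N, 2))
        qpsk = ((1 - 2 * bits[..., 0]) + 1j * (1 - 2 * bits[..., 1])) / np.sqrt(2)

        # Unitary IFFT to the time domain (flat channel, no cyclic prefix, for brevity)
        tx = np.fft.ifft(qpsk, axis=1) * np.sqrt(N)

        # Background AWGN plus Bernoulli-Gaussian impulsive noise (assumed model)
        snr_db = 25.0
        sigma2 = 10 ** (-snr_db / 10)
        awgn = np.sqrt(sigma2 / 2) * (rng.standard_normal(tx.shape) + 1j * rng.standard_normal(tx.shape))
        p_imp, imp_var = 0.02, 100.0  # impulse probability and power (20 dB above the signal)
        mask = rng.random(tx.shape) < p_imp
        imp = mask * np.sqrt(imp_var / 2) * (rng.standard_normal(tx.shape) + 1j * rng.standard_normal(tx.shape))
        rx = tx + awgn + imp

        def ber(rx_time):
            """Unitary FFT back to subcarriers, hard-decision QPSK demapping, bit error rate."""
            freq = np.fft.fft(rx_time, axis=1) / np.sqrt(N)
            detected = np.stack([(freq.real < 0).astype(int), (freq.imag < 0).astype(int)], axis=-1)
            return np.mean(detected != bits)

        # Blanking: zero every sample whose magnitude exceeds a robust (median-based) threshold
        threshold = 3.0 * np.median(np.abs(rx))
        blanked = np.where(np.abs(rx) > threshold, 0.0 + 0.0j, rx)

        print(f"BER without mitigation: {ber(rx):.4f}")
        print(f"BER with blanking:      {ber(blanked):.4f}")

    In this toy setting the blanked receiver recovers most of the loss caused by the impulses, which mirrors the qualitative claim in the abstract; the thesis's actual technique and analysis are in the full text.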

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks hold substantial potential for supporting a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so that they can be invoked for hitherto unexplored services and scenarios of future wireless networks. Comment: 46 pages, 22 figures
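
    As a small, generic illustration of the reinforcement-learning family surveyed here (not an algorithm taken from the article): an epsilon-greedy action-value learner choosing among wireless channels with unknown idle probabilities, a bandit-style toy of cognitive-radio spectrum access. The channel count, idle probabilities and exploration rate below are assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        # Assumed idle probabilities of five channels (unknown to the learner)
        idle_prob = np.array([0.2, 0.5, 0.9, 0.4, 0.6])
        n_channels = len(idle_prob)

        q = np.zeros(n_channels)       # action-value estimates
        counts = np.zeros(n_channels)  # number of times each channel was tried
        epsilon = 0.1                  # exploration rate

        for t in range(5000):
            # Epsilon-greedy selection: explore occasionally, otherwise exploit the best estimate
            a = rng.integers(n_channels) if rng.random() < epsilon else int(np.argmax(q))
            # Reward 1 if the chosen channel is idle (successful transmission), else 0
            reward = float(rng.random() < idle_prob[a])
            counts[a] += 1
            q[a] += (reward - q[a]) / counts[a]  # incremental sample-average update

        print("Estimated idle probabilities:", np.round(q, 2))
        print("Preferred channel:", int(np.argmax(q)))

    With enough trials the estimates approach the true idle probabilities and the learner settles on the best channel, which is the kind of interactive decision making the article motivates for CR and similar scenarios.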

    Artificial intelligence (AI) methods in optical networks: A comprehensive survey

    Artificial intelligence (AI) is an extensive scientific discipline that enables computer systems to solve problems by emulating complex biological processes such as learning, reasoning and self-correction. This paper presents a comprehensive review of the application of AI techniques for improving the performance of optical communication systems and networks. The use of AI-based techniques is first studied in applications related to optical transmission, ranging from the characterization and operation of network components to performance monitoring, mitigation of nonlinearities, and quality of transmission estimation. Applications related to optical network control and management are then reviewed, including topics such as optical network planning and operation in both transport and access networks. Finally, the paper presents a summary of opportunities and challenges in optical networking where AI is expected to play a key role in the near future. Ministerio de Economía, Industria y Competitividad (projects EC2014-53071-C3-2-P and TEC2015-71932-REDT)
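
    One application this survey covers, quality-of-transmission (QoT) estimation, is often cast as supervised learning over lightpath features. The sketch below trains a random-forest classifier on synthetic data; the feature set, the toy feasibility rule and every numeric value are assumptions made for illustration and are not taken from the paper.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(2)
        n = 2000

        # Synthetic lightpath features: total length (km), number of spans, modulation order
        length_km = rng.uniform(100, 3000, n)
        num_spans = np.round(length_km / 80) + rng.integers(0, 3, n)
        mod_order = rng.choice([4, 16, 64, 256], n)  # constellation size (e.g., 4 = QPSK)

        # Toy ground truth: longer paths and denser formats are less likely to meet
        # the BER threshold (a stand-in for real monitoring data)
        margin = 30 - 0.008 * length_km - 1.0 * np.log2(mod_order) + rng.normal(0, 1.5, n)
        feasible = (margin > 20).astype(int)

        X = np.column_stack([length_km, num_spans, mod_order])
        X_train, X_test, y_train, y_test = train_test_split(X, feasible, test_size=0.25, random_state=0)

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X_train, y_train)
        print(f"QoT-feasibility accuracy on held-out lightpaths: {clf.score(X_test, y_test):.2f}")

    The same pattern (features of a candidate lightpath in, a feasibility label or margin out) underlies many of the QoT-estimation approaches the survey reviews.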

    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, and more. Advanced mathematical tools are required to extract meaningful information from these data and to make decisions about the proper functioning of the networks. Among these tools, machine learning (ML) is regarded as one of the most promising methodological approaches for performing network-data analysis and enabling automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth in complexity that optical networks have faced in the last few years. This increase in complexity is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes) enabled by coherent transmission/reception technologies, advanced digital signal processing and compensation of nonlinear effects in optical fiber propagation. In this paper, we provide an overview of the application of ML to optical communications and networking. We classify and survey the relevant literature on the topic and also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy; to stimulate further work in this area, we conclude the paper by proposing possible new research directions.
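
    Because the overview emphasizes automated fault management, a minimal unsupervised example may help make that use case concrete: an isolation forest flags monitoring samples that deviate from the bulk of healthy lightpaths. The monitored quantities, the synthetic "healthy" and "degraded" distributions, and the contamination rate are assumptions for this sketch, not a method from the paper.

        import numpy as np
        from sklearn.ensemble import IsolationForest

        rng = np.random.default_rng(3)

        # Synthetic per-lightpath monitoring samples: OSNR (dB), log10 of pre-FEC BER,
        # and residual chromatic dispersion (ps/nm)
        healthy = np.column_stack([
            rng.normal(22, 1.0, 950),    # OSNR around 22 dB
            rng.normal(-4.0, 0.3, 950),  # pre-FEC BER near 1e-4
            rng.normal(0, 5, 950),       # small residual dispersion
        ])
        degraded = np.column_stack([
            rng.normal(16, 1.5, 50),     # drifting OSNR, e.g. an amplifier soft failure
            rng.normal(-2.5, 0.4, 50),
            rng.normal(40, 10, 50),
        ])
        X = np.vstack([healthy, degraded])

        # Unsupervised detector: flags samples that look unlike the bulk of the data
        det = IsolationForest(contamination=0.05, random_state=0).fit(X)
        flags = det.predict(X)  # -1 = anomaly, +1 = normal

        print("Flagged as anomalous:", int(np.sum(flags == -1)), "of", len(X), "samples")
        print("Degraded samples flagged:", int(np.sum(flags[-50:] == -1)), "of 50")

    Flagged samples would then feed the kind of automated self-configuration and fault-management workflows the paper discusses, for example triggering soft-failure localization or re-routing.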