
    Channel Assignment Algorithms in Cognitive Radio Networks: Taxonomy, Open Issues, and Challenges


    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to make decisions pertaining to the proper functioning of the networks. Among these mathematical tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches to perform network-data analysis and enable automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth in network complexity faced by optical networks in the last few years. This increase in complexity is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes, etc.) enabled by the use of coherent transmission/reception technologies, advanced digital signal processing, and compensation of nonlinear effects in optical fiber propagation. In this paper we provide an overview of the application of ML to optical communications and networking. We classify and survey relevant literature dealing with the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy; to stimulate further work in this area, we conclude the paper by proposing new possible research directions.
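To make the flavour of such ML-assisted analysis concrete, the sketch below trains a supervised classifier to predict whether a lightpath's quality of transmission (QoT) is acceptable from monitored indicators. The feature set, the synthetic data model and the margin threshold are illustrative assumptions, not taken from the surveyed papers.

```python
# Minimal, hypothetical example of supervised learning on network-monitoring data:
# classify whether a candidate lightpath meets its quality-of-transmission target.
# Features, data-generation model and threshold are all assumed for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic monitoring records: [path length (km), span count, launch power (dBm), symbol rate (GBaud)]
X = rng.uniform([100, 2, -3, 28], [3000, 40, 3, 64], size=(1000, 4))
# Toy OSNR-margin model used only to produce labels: 1 = QoT acceptable, 0 = not acceptable
margin = 30 - 0.01 * X[:, 0] - 0.2 * X[:, 1] + 0.5 * X[:, 2] - 0.1 * X[:, 3]
y = (margin + rng.normal(0, 1, 1000) > 15).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"Hold-out QoT classification accuracy: {model.score(X_te, y_te):.2f}")
```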

    An Autonomous Channel Selection Algorithm for WLANs

    IEEE 802.11 wireless devices need to select a channel in order to transmit their packets. However, as a result of the contention-based nature of the IEEE 802.11 CSMA/CA MAC mechanism, the capacity experienced by a station is not fixed. When a station cannot win a sufficient number of transmission opportunities to satisfy its traffic load, it becomes saturated. If the saturation condition persists, more and more packets accumulate in the transmit queue and congestion occurs. Congestion leads to high packet delay and may ultimately result in catastrophic packet loss when the transmit queue's capacity is exceeded. In this thesis, we propose an autonomous channel selection algorithm with neighbour forcing (NF) to minimize the incidence of congestion on all stations using the channels. Stations reassign channels based on local monitoring information: a station changes channel once it finds one with sufficient available bandwidth to satisfy its traffic load requirement; otherwise, it forces its neighbouring stations into saturation by reducing its PHY transmission rate, provided that a prediction module, which checks all possible channel assignments, finds at least one successful assignment. The results from a simple C++ simulator show that the NF algorithm has a higher probability than dynamic channel assignment without neighbour forcing (NONF) of successfully reassigning the channel once stations have become congested. In an experimental testbed, the Madwifi open-source wireless driver has been modified to incorporate the channel selection mechanism. The results demonstrate that the NF algorithm also outperforms the NONF algorithm in reducing the congestion time of a network in which at least one station has become congested.
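A compact sketch of the decision rule described above is given here. The channel set, capacity figure and monitoring data are invented for illustration; the thesis itself implements the algorithm in a C++ simulator and in the Madwifi driver.

```python
# Simplified sketch of the neighbour-forcing (NF) channel-selection idea. The data
# structures and the bandwidth model are illustrative assumptions, not the thesis code.
from dataclasses import dataclass, field

CHANNELS = [1, 6, 11]          # assumed set of non-overlapping channels
CHANNEL_CAPACITY = 30.0        # assumed usable capacity per channel (Mb/s)

@dataclass
class Station:
    name: str
    channel: int
    load: float                                    # offered traffic load (Mb/s)
    monitored: dict = field(default_factory=dict)  # channel -> bandwidth used by neighbours

    def available(self, ch: int) -> float:
        return CHANNEL_CAPACITY - self.monitored.get(ch, 0.0)

def select_channel(station: Station, assignment_feasible: bool) -> str:
    """Return the action a congested station takes under the NF rule."""
    # 1. Prefer a channel whose available bandwidth satisfies the station's load.
    for ch in CHANNELS:
        if ch != station.channel and station.available(ch) >= station.load:
            station.channel = ch
            return f"{station.name}: switch to channel {ch}"
    # 2. Otherwise, if the prediction module found at least one feasible global
    #    assignment, force neighbours into saturation by lowering the PHY rate.
    if assignment_feasible:
        return f"{station.name}: reduce PHY rate to force neighbours to move"
    return f"{station.name}: stay on channel {station.channel}"

sta = Station("STA-1", channel=1, load=12.0, monitored={1: 25.0, 6: 24.0, 11: 10.0})
print(select_channel(sta, assignment_feasible=True))   # -> switch to channel 11
```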

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential in terms of supporting a broad range of complex and compelling applications in both military and civilian fields, where users are able to enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in the compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services as well as scenarios of future wireless networks.
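As a small illustration of the reinforcement-learning strand the article reviews, the snippet below runs a single-state Q-learning (multi-armed bandit) update on a toy cognitive-radio task in which a secondary user learns which channel is most often idle. The idle probabilities and learning parameters are assumed values, not drawn from the article.

```python
# Toy single-state Q-learning (bandit) for channel selection by a secondary user.
# Idle probabilities, learning rate and exploration rate are assumed for illustration.
import random

idle_prob = [0.2, 0.5, 0.8]   # assumed probability that each channel is free of the primary user
Q = [0.0, 0.0, 0.0]           # value estimate per channel
alpha, epsilon = 0.1, 0.1     # learning rate, exploration rate

for _ in range(5000):
    a = random.randrange(3) if random.random() < epsilon else Q.index(max(Q))
    reward = 1.0 if random.random() < idle_prob[a] else 0.0   # 1 if the transmission succeeds
    Q[a] += alpha * (reward - Q[a])                           # single-state Q-learning update

print("Learned channel values:", [round(q, 2) for q in Q])   # channel 2 should score highest
```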

    Enabling Technologies for Cognitive Optical Networks


    Data Collection in Two-Tier IoT Networks with Radio Frequency (RF) Energy Harvesting Devices and Tags

    The Internet of things (IoT) is expected to connect physical objects and end-users using technologies such as wireless sensor networks and radio frequency identification (RFID). In addition, it will employ a wireless multi-hop backhaul to transfer data collected by a myriad of devices to users or applications such as digital twins operating in a Metaverse. A critical issue is that the number of packets collected and transferred to the Internet is bounded by limited network resources such as bandwidth and energy. In this respect, IoT networks have adopted technologies such as time division multiple access (TDMA), successive interference cancellation (SIC) and multiple-input multiple-output (MIMO) in order to increase network capacity. Another fundamental issue is energy. To this end, researchers have exploited radio frequency (RF) energy-harvesting technologies to prolong the lifetime of energy-constrained sensors and smart devices. Specifically, devices with RF energy-harvesting capabilities can rely on ambient RF sources such as access points, television towers, and base stations. Further, an operator may deploy dedicated power beacons that serve as RF-energy sources. Apart from that, in order to reduce energy consumption, devices can adopt ambient backscattering communication technologies. Advantageously, backscattering allows devices to communicate using a negligible amount of energy by modulating ambient RF signals. To address the aforementioned issues, this thesis first considers data collection in a two-tier MIMO ambient RF energy-harvesting network. The first tier consists of routers with MIMO capability and a set of source-destination pairs/flows. The second tier consists of energy-harvesting devices that rely on RF transmissions from routers for their energy supply. The problem is to determine a minimum-length TDMA link schedule that satisfies the traffic demand of source-destination pairs and the energy demand of energy-harvesting devices. It formulates the problem as a linear program (LP) and outlines a heuristic to construct transmission sets that are then used by the LP. In addition, it outlines a new routing metric that considers the energy demand of energy-harvesting devices to cope with the routing requirements of IoT networks. The simulation results show that the proposed algorithm on average achieves 31.25% shorter schedules compared to competing schemes. In addition, the routing metric results in link schedules that are at most 24.75% longer than those computed by the LP.
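The following hedged sketch illustrates the kind of linear program described above, written with the PuLP modelling library. The transmission sets, link rates, harvesting rates and demands are made-up inputs; the thesis constructs its transmission sets with a dedicated heuristic and evaluates the LP over a real topology.

```python
# Sketch of a minimum-length TDMA scheduling LP with traffic and RF-energy-harvesting
# constraints. All numeric inputs below are assumed for illustration only.
import pulp

# Assumed transmission sets: each maps link name -> rate (Mb/s) when that set is active
tx_sets = {
    "S1": {"l1": 10.0, "l3": 8.0},
    "S2": {"l2": 12.0},
    "S3": {"l1": 6.0, "l2": 6.0},
}
traffic_demand = {"l1": 20.0, "l2": 18.0, "l3": 8.0}          # Mb to deliver per link

# Assumed RF energy harvested by each device (mJ/s) while a given set transmits
harvest_rate = {"d1": {"S1": 2.0, "S2": 0.5, "S3": 1.0},
                "d2": {"S1": 0.3, "S2": 1.5, "S3": 1.2}}
energy_demand = {"d1": 5.0, "d2": 4.0}                        # mJ required per device

prob = pulp.LpProblem("min_length_tdma_schedule", pulp.LpMinimize)
t = {s: pulp.LpVariable(f"t_{s}", lowBound=0) for s in tx_sets}  # time per transmission set

prob += pulp.lpSum(t.values())                                 # minimise total schedule length

for link, demand in traffic_demand.items():                    # traffic-demand constraints
    prob += pulp.lpSum(rates[link] * t[s] for s, rates in tx_sets.items() if link in rates) >= demand

for dev, demand in energy_demand.items():                      # energy-harvesting constraints
    prob += pulp.lpSum(harvest_rate[dev][s] * t[s] for s in tx_sets) >= demand

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print("Schedule length:", pulp.value(prob.objective))
for s in tx_sets:
    print(s, "active for", t[s].value(), "s")
```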