
    Proactive Received Power Prediction Using Machine Learning and Depth Images for mmWave Networks

    This study demonstrates the feasibility of proactive received power prediction that leverages spatiotemporal visual sensing information toward reliable millimeter-wave (mmWave) networks. Since the received power on a mmWave link can attenuate aperiodically due to human blockage, a long-term series of future received power values cannot be predicted by analyzing the received signals before the blockage occurs. We propose a novel mechanism that predicts a time series of the received power from the next moment up to several hundred milliseconds ahead. The key idea is to leverage camera imagery and machine learning (ML). Time-sequential images capture the spatial geometry and the mobility of obstacles that govern mmWave signal propagation. ML is used to build the prediction model from a dataset of sequential images, each labeled with the received power measured several hundred milliseconds after the image was obtained. Simulation and experimental evaluations using IEEE 802.11ad devices and a depth camera show that the proposed mechanism, employing a convolutional LSTM, predicted a time series of the received power up to 500 ms ahead at an inference time of less than 3 ms with a root-mean-square error of 3.5 dB.
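
    As a rough illustration of the approach described above, the sketch below builds a small convolutional LSTM that maps a short sequence of depth frames to a vector of future received-power values. This is not the authors' code: the frame size, sequence length, prediction horizon (16 samples standing in for the ~500 ms window), and layer widths are all assumptions chosen for brevity.

    ```python
    # Minimal ConvLSTM sketch: sequence of depth frames -> future received-power samples.
    # All shapes and hyperparameters below are illustrative assumptions.
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models

    SEQ_LEN, H, W = 8, 64, 64      # assumed input: 8 depth frames of 64x64 pixels
    HORIZON = 16                   # assumed number of future power samples (~500 ms window)

    model = models.Sequential([
        layers.Input(shape=(SEQ_LEN, H, W, 1)),
        layers.ConvLSTM2D(16, kernel_size=3, padding="same", return_sequences=False),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(HORIZON),     # predicted received power [dBm] for each future step
    ])
    model.compile(optimizer="adam", loss="mse")  # RMSE reported in the paper ~ sqrt(MSE)

    # Toy training call with random data, just to show the expected tensor shapes.
    x = np.random.rand(8, SEQ_LEN, H, W, 1).astype("float32")
    y = np.random.rand(8, HORIZON).astype("float32")
    model.fit(x, y, epochs=1, verbose=0)
    ```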

    Intelligent beam blockage prediction for seamless connectivity in vision-aided next-generation wireless networks

    The upsurge in wireless devices and real-time service demands forces the move to a higher frequency spectrum. Millimetre-wave (mmWave) and terahertz (THz) bands combined with beamforming technology offer significant performance enhancements for future wireless networks. Unfortunately, shrinking cell coverage and the severe penetration loss experienced at higher spectrum render mobility management a critical issue in high-frequency wireless networks, especially the handling of beam blockages and frequent handovers (HO). Mobility management challenges have become prevalent in city centres and urban areas. To address this, we propose a novel mechanism that exploits wireless signals and on-road surveillance systems to intelligently predict possible blockages in advance and perform timely HO. This paper employs computer vision (CV) to determine obstacles and users’ location and speed. In addition, this study introduces a new HO event, called the block event (BLK), defined by the presence of a blocking object and a user moving towards the blocked area. Moreover, a multivariate regression technique predicts the remaining time until the user reaches the blocked area, thereby determining the best HO decision. Compared to conventional wireless networks without blockage prediction, simulation results show that our BLK detection and proactive HO algorithm achieves a 40% improvement in maintaining user connectivity and the required quality of experience (QoE).
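
    The regression step described above can be pictured with the following minimal sketch, which is not the paper's implementation: CV-derived features for a user approaching a blocked area feed a multivariate regression that estimates the remaining time before blockage, so a handover can be scheduled early. The feature names, toy training samples, and the handover-preparation threshold are illustrative assumptions.

    ```python
    # Minimal multivariate-regression sketch for time-to-blockage prediction (illustrative).
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Assumed per-sample features: [user_speed_mps, distance_to_block_m, obstacle_width_m]
    X_train = np.array([
        [1.2, 6.0, 0.8],
        [1.5, 4.5, 0.6],
        [0.9, 8.0, 1.0],
        [1.8, 3.0, 0.7],
    ])
    t_train = np.array([5.0, 3.0, 8.9, 1.7])   # observed time-to-blockage in seconds (toy data)

    reg = LinearRegression().fit(X_train, t_train)

    # Predict remaining time for a new user/obstacle pair; trigger a proactive HO if it is
    # shorter than the assumed handover preparation time.
    t_remaining = reg.predict([[1.4, 5.0, 0.9]])[0]
    HO_PREP_TIME_S = 0.2   # illustrative threshold
    if t_remaining < HO_PREP_TIME_S:
        print("trigger proactive handover (BLK event imminent)")
    else:
        print(f"schedule HO in ~{t_remaining:.1f} s")
    ```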

    Point Cloud-based Proactive Link Quality Prediction for Millimeter-wave Communications

    This study demonstrates the feasibility of point cloud-based proactive link quality prediction for millimeter-wave (mmWave) communications. Previous studies have proposed machine learning-based methods that predict the received signal strength for future time periods from time series of depth images in order to mitigate line-of-sight (LOS) path blockage by pedestrians in mmWave communication. However, these image-based methods have limited applicability due to privacy concerns, as camera images may contain sensitive information. This study proposes a point cloud-based method for mmWave link quality prediction and demonstrates its feasibility through experiments. Point clouds represent three-dimensional (3D) spaces as a set of points and are sparser and less likely to contain sensitive information than camera images. Additionally, point clouds provide the 3D position and motion information that is necessary for understanding a radio propagation environment involving pedestrians. This study designs the mmWave link quality prediction method and conducts realistic indoor experiments, in which the link quality fluctuates significantly due to human blockage, using commercially available IEEE 802.11ad-based 60 GHz wireless LAN devices together with a Kinect v2 RGB-D camera and a Velodyne VLP-16 light detection and ranging (LiDAR) sensor for point cloud acquisition. The experimental results show that the proposed method can predict future large attenuation of the mmWave received signal strength and throughput induced by LOS path blockage by pedestrians with accuracy comparable or superior to image-based prediction methods. Hence, the point cloud-based method can serve as a viable alternative to image-based methods. (Comment: Submitted to IEEE Transactions on Machine Learning in Communications and Networking.)
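
    One way to picture a point cloud-based predictor of this kind is the sketch below (an assumption, not the authors' model): each LiDAR/RGB-D frame is encoded with a PointNet-style shared per-point MLP followed by order-invariant max pooling, and an LSTM over the per-frame descriptors predicts future received signal strength. The number of points per frame, sequence length, horizon, and layer sizes are illustrative.

    ```python
    # Minimal point-cloud sequence -> future RSS sketch (illustrative shapes and layers).
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models

    SEQ_LEN, N_POINTS = 8, 1024    # assumed: 8 frames of 1024 (x, y, z) points each
    HORIZON = 10                   # assumed number of future signal-strength samples

    inp = layers.Input(shape=(SEQ_LEN, N_POINTS, 3))
    x = layers.Dense(64, activation="relu")(inp)    # shared per-point MLP (acts on last axis)
    x = layers.Dense(128, activation="relu")(x)
    x = layers.Lambda(lambda t: tf.reduce_max(t, axis=2))(x)  # max pool over points per frame
    x = layers.LSTM(64)(x)                          # temporal model over per-frame descriptors
    out = layers.Dense(HORIZON)(x)                  # predicted RSS [dBm] per future step

    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")

    # Shape check with random data (2 sequences of 8 point-cloud frames).
    pred = model(np.random.rand(2, SEQ_LEN, N_POINTS, 3).astype("float32"))
    print(pred.shape)  # (2, HORIZON)
    ```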

    Multi-User Matching and Resource Allocation in Vision Aided Communications

    Visual perception is an effective way to obtain the spatial characteristics of wireless channels and to reduce the overhead of the communications system. A critical problem for visual assistance is that the communications system needs to match the radio signal with the visual information of the corresponding user, i.e., to identify, among all the environmental objects, the visual user that corresponds to the target radio signal. In this paper, we propose a user matching method for environments with a variable number of objects. Specifically, we apply 3D detection to extract all the environmental objects from the images taken by multiple cameras. Then, we design a deep neural network (DNN) that estimates the location distribution of users from the images and the beam pairs at multiple moments, and thereby identifies the users among all the extracted environmental objects. Moreover, we present a resource allocation method based on the taken images to reduce the time and spectrum overhead compared to traditional resource allocation methods. Simulation results show that the proposed user matching method outperforms existing methods, and the proposed resource allocation method achieves 92% of the transmission rate of the traditional resource allocation method while significantly reducing the time and spectrum overhead. (Comment: 34 pages, 21 figures.)
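
    A minimal sketch of the matching idea is shown below; it is illustrative and not the paper's network. A small DNN fuses the 3D-detection features of each candidate object with an encoding of the target signal's beam pairs over several moments, and scores how likely each candidate is to be the transmitting user; a softmax over the scores gives a distribution over the detected objects. The feature dimensions, layer sizes, and the helper match_user are hypothetical.

    ```python
    # Minimal user-matching sketch: score each detected object against beam-pair features.
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models

    OBJ_FEAT = 6     # assumed per-object features, e.g. 3D position + size from detection
    BEAM_FEAT = 8    # assumed encoding of beam pairs over several moments

    obj_in = layers.Input(shape=(OBJ_FEAT,))
    beam_in = layers.Input(shape=(BEAM_FEAT,))
    h = layers.Concatenate()([obj_in, beam_in])
    h = layers.Dense(64, activation="relu")(h)
    h = layers.Dense(32, activation="relu")(h)
    score = layers.Dense(1)(h)               # unnormalized match score for one candidate
    scorer = models.Model([obj_in, beam_in], score)

    def match_user(objects, beam_seq):
        """Score every detected object; softmax gives a distribution, argmax is the user."""
        beams = np.repeat(beam_seq[None, :], len(objects), axis=0)
        scores = scorer.predict([objects, beams], verbose=0).ravel()
        probs = np.exp(scores - scores.max()); probs /= probs.sum()
        return int(np.argmax(probs)), probs

    objs = np.random.rand(5, OBJ_FEAT).astype("float32")   # 5 detected environmental objects
    beams = np.random.rand(BEAM_FEAT).astype("float32")
    idx, dist = match_user(objs, beams)
    ```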

    Facilitating Internet of Things on the Edge

    The evolution of electronics and wireless technologies has entered a new era, the Internet of Things (IoT). Presently, IoT technologies influence the global market, bringing benefits in many areas, including healthcare, manufacturing, transportation, and entertainment. Modern IoT devices serve as thin clients, with data processing performed in a remote computing node, such as a cloud server or a mobile edge compute unit. These computing units possess significant resources that allow prompt data processing. The user experience of such an approach relies drastically on the availability and quality of the internet connection; if the internet connection is unavailable, the operation of IoT applications can be completely disrupted. It is worth noting that emerging IoT applications are even more throughput-demanding and latency-sensitive, which makes communication networks a practical bottleneck for service provisioning. This thesis aims to eliminate the limitations of wireless access via the improvement of connectivity and throughput between devices on the edge, as well as their network identification, which is fundamentally important for IoT service management. The introduction begins with a discussion of emerging IoT applications and their demands. Subsequent chapters introduce the scenarios of interest, describe the proposed solutions, and provide selected performance evaluation results. Specifically, we start with research on the use of degraded memory chips for network identification of IoT devices as an alternative to conventional methods such as IMEI; unlike those, the proposed identifiers are not vulnerable to tampering and cloning. Further, we introduce our contributions to improving connectivity and throughput among IoT devices on the edge in cases where the mobile network infrastructure is limited or totally unavailable. Finally, we conclude the introduction with a summary of the results achieved.

    A survey of machine learning applications to handover management in 5G and beyond

    Handover (HO) is one of the key aspects of next-generation (NG) cellular communication networks that needs to be properly managed, since it poses multiple threats to quality of service (QoS), such as a reduction in average throughput as well as service interruptions. With the introduction of new enablers for fifth-generation (5G) networks, such as millimetre-wave (mm-wave) communications, network densification, and the Internet of Things (IoT), HO management is expected to become more challenging, as the number of base stations (BSs) per unit area and the number of connections have been rising dramatically. Considering the stringent requirements newly released in the standards of 5G networks, the level of the challenge is multiplied. To this end, intelligent HO management schemes have been proposed and tested in the literature, paving the way for tackling these challenges more efficiently and effectively. In this survey, we aim at revealing the current status of cellular networks and discussing mobility and HO management in 5G alongside the general characteristics of 5G networks. We provide an extensive tutorial on HO management in 5G networks, accompanied by a discussion of machine learning (ML) applications to HO management. A novel taxonomy in terms of the source of data used to train ML algorithms is produced, where two broad categories are considered, namely visual data and network data. The state of the art on ML-aided HO management in cellular networks under each category is extensively reviewed with the most recent studies, and the challenges as well as future research directions are detailed.