12 research outputs found

    Machine Learning Classifier Approach with Gaussian Process, Ensemble Boosted Trees, SVM, and Linear Regression for 5G Signal Coverage Mapping

    This article offers a thorough analysis of machine learning classifier approaches for collected Received Signal Strength Indicator (RSSI) samples, which can be applied to predict propagation loss for network planning aimed at maximum coverage. We estimated the RMSE of machine learning classifiers on multivariate RSSI data collected from a cluster of 6 Base Transceiver Stations (BTS) across the hilly terrain of Uttarakhand, India. The variable attributes comprise topology, environment, and forest canopy. Four machine learning classifiers were investigated to identify the one with the least RMSE: Gaussian Process, Ensemble Boosted Trees, SVM, and Linear Regression. Gaussian Process achieved an RMSE, R-squared, MSE, and MAE of 1.96, 0.98, 3.8774, and 1.3202 respectively, outperforming the other classifiers.
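As an illustrative aside, the regressor comparison the abstract describes can be sketched with scikit-learn on synthetic RSSI-like data. The feature set, the toy log-distance path-loss model, and all hyperparameters below are assumptions for illustration, not the paper's actual measurement setup or tuning.

```python
# Sketch: comparing the four models from the abstract on synthetic RSSI data.
# The real study used field measurements from 6 BTS; this data is synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Assumed features: distance to BTS (km), elevation (m), canopy density (0-1)
X = rng.uniform([0.1, 0.0, 0.0], [5.0, 500.0, 1.0], size=(300, 3))
# Toy log-distance path-loss model for RSSI (dBm) plus measurement noise
rssi = -40.0 - 30.0 * np.log10(X[:, 0]) - 0.01 * X[:, 1] - 5.0 * X[:, 2]
rssi += rng.normal(0.0, 2.0, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, rssi, random_state=0)

models = {
    "Gaussian Process": GaussianProcessRegressor(alpha=1.0, normalize_y=True),
    "Boosted Trees": GradientBoostingRegressor(random_state=0),
    "SVM": SVR(),
    "Linear Regression": LinearRegression(),
}
rmse = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse[name] = mean_squared_error(y_te, pred) ** 0.5  # RMSE in dB

for name, err in sorted(rmse.items(), key=lambda kv: kv[1]):
    print(f"{name}: RMSE = {err:.2f} dB")
```

On this synthetic data the ranking need not match the paper's; the point is only the evaluation pattern: fit each model on a common training split and rank by held-out RMSE.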

    Deep Learning Aided Parametric Channel Covariance Matrix Estimation for Millimeter Wave Hybrid Massive MIMO

    Millimeter-wave (mmWave) channels, which occupy frequency ranges much higher than those used in previous wireless communication systems, are utilized to meet the increased throughput requirements of 5G communications. The high levels of attenuation experienced by electromagnetic waves at these frequencies cause MIMO channels to have high spatial correlation. To attain desirable error performance, systems require knowledge of the channel correlations. In this thesis, a deep-neural-network-aided method is proposed for the parametric estimation of the channel covariance matrix (CCM), which contains information regarding the channel correlations. Compared to several methods found in the literature, the proposed method yields satisfactory performance in terms of both computational complexity and channel estimation error.
    Comment: M.Sc. Thesis, published at: https://open.metu.edu.tr/handle/11511/9319
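To make the "parametric" part concrete: for a sparse mmWave channel, the CCM is a sum of rank-one terms built from per-path angles and powers. The thesis estimates those parameters with a deep network; the sketch below simply assumes the parameters are given and shows how the CCM is assembled, assuming a uniform linear array (the angles, powers, and array size are illustrative).

```python
# Sketch: assembling a parametric CCM for a sparse mmWave channel.
# A ULA array model is assumed; path parameters are illustrative.
import numpy as np

def steering_vector(theta, n_antennas, spacing=0.5):
    """ULA steering vector for angle-of-arrival theta (radians)."""
    k = np.arange(n_antennas)
    return np.exp(2j * np.pi * spacing * k * np.sin(theta))

def parametric_ccm(angles, powers, n_antennas):
    """CCM = sum_l p_l * a(theta_l) a(theta_l)^H; rank <= number of paths."""
    R = np.zeros((n_antennas, n_antennas), dtype=complex)
    for theta, p in zip(angles, powers):
        a = steering_vector(theta, n_antennas)[:, None]
        R += p * (a @ a.conj().T)
    return R

N = 16                       # antennas (assumed)
angles = [0.2, -0.5, 1.0]    # path angles in radians (illustrative)
powers = [1.0, 0.5, 0.1]     # path powers (illustrative)
R = parametric_ccm(angles, powers, N)

# Sparse scattering makes the CCM low rank, i.e. high spatial correlation:
# the effective rank equals the number of paths (for distinct angles).
rank = np.linalg.matrix_rank(R, tol=1e-8)
print(f"CCM shape {R.shape}, effective rank {rank}")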

    Application of Reinforcement Learning in 5G Millimeter-Wave Networks

    The rapidly growing number of mobile communications users and smart devices has attracted researchers and industry pioneers to the largely under-utilized spectrum in the millimeter-wave (mmWave) frequency bands for the 5th generation of wireless networks. This could provide hundreds of times more capacity than 4G cellular networks. The main reason for ignoring the mmWave spectrum until now has been its vulnerability to signal blockages and the resulting disconnections or interruptions in service. Considering that today's mobile users expect highly reliable, high-throughput connections, the mmWave signal's sensitivity to blockages must be addressed. This research proposes to predict base stations that can serve a user without disconnection, given the user's path or destination in the network. In modern networks, reinforcement learning has been effectively utilized to obtain optimal decisions (or actions) in small state-action spaces, and deep reinforcement learning has been able to find optimal policies in larger network spaces. In this work, similar techniques are employed to serve the user without service disconnection or interruption. First, using dynamic programming for a fixed user path, the exact optimal serving base stations are listed. Then, using Q-learning, the network learns to predict the optimal user path and the serving base stations along it, given a fixed destination for the user. Lastly, deep Q-learning is used to approximate optimal user paths and base station lists along those paths, matching the Q-learning results, and can also be applied to networks with more sophisticated state spaces.
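The tabular Q-learning step can be sketched on a toy problem: a user moves along a fixed path of positions, the action at each position is which base station serves the user, and blocked links are penalized. The topology, blockage map, rewards, and learning rates below are toy assumptions, not the paper's network model.

```python
# Sketch: tabular Q-learning for choosing a serving BS along a fixed path.
# All quantities here (blockage map, rewards, hyperparameters) are assumed.
import numpy as np

N_POS, N_BS = 5, 2                  # path positions, base stations (toy)
# blocked[p, b] = True if BS b is blocked at position p (toy blockage map)
blocked = np.array([[False, True],
                    [False, True],
                    [True,  False],
                    [True,  False],
                    [False, False]])

rng = np.random.default_rng(1)
Q = np.zeros((N_POS, N_BS))
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for _ in range(2000):               # episodes over the fixed path
    for p in range(N_POS):
        # epsilon-greedy choice of serving base station
        a = rng.integers(N_BS) if rng.random() < eps else int(Q[p].argmax())
        r = -1.0 if blocked[p, a] else 1.0          # penalize blocked links
        nxt = Q[p + 1].max() if p + 1 < N_POS else 0.0
        Q[p, a] += alpha * (r + gamma * nxt - Q[p, a])

policy = Q.argmax(axis=1)           # learned serving BS at each position
print("serving BS along path:", policy.tolist())
```

The learned policy hands the user over between base stations so that no position on the path is served by a blocked link; the paper's dynamic-programming baseline computes the same assignment exactly, and deep Q-learning replaces the table with a network for larger state spaces.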