
    GWO-BP neural network based OP performance prediction for mobile multiuser communication networks

    The complexity and variability of wireless channels make reliable mobile multiuser communications challenging. As a consequence, research on mobile multiuser communication networks has increased significantly in recent years. The outage probability (OP) is commonly employed to evaluate the performance of these networks. In this paper, exact closed-form OP expressions are derived and an OP prediction algorithm is presented. Monte-Carlo simulation is used to evaluate the OP performance and verify the analysis. Then, a grey wolf optimization back-propagation (GWO-BP) neural network based OP performance prediction algorithm is proposed. Theoretical results are used to generate training data. We also examine the extreme learning machine (ELM), locally weighted linear regression (LWLR), support vector machine (SVM), BP neural network, and wavelet neural network methods. The results obtained show that the GWO-BP method provides more accurate OP prediction than the wavelet neural network, LWLR, SVM, BP, and ELM methods.
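    As a rough illustration of how such an OP analysis is typically verified, the sketch below estimates the outage probability of a single Rayleigh-faded link by Monte-Carlo simulation and compares it with the matching closed-form expression. The average SNR, target rate, and channel model are illustrative assumptions, not the system model of the paper.

```python
import numpy as np

# Minimal Monte-Carlo outage-probability estimate for a single Rayleigh-faded
# link (illustrative assumptions, not the paper's multiuser model).
rng = np.random.default_rng(0)

avg_snr_db = 10.0          # assumed average receive SNR
rate_threshold = 1.0       # assumed target rate in bit/s/Hz
num_trials = 1_000_000

avg_snr = 10 ** (avg_snr_db / 10)
# Rayleigh fading: the instantaneous SNR is exponentially distributed.
inst_snr = rng.exponential(scale=avg_snr, size=num_trials)

# Outage occurs when the instantaneous capacity falls below the target rate.
outage = np.log2(1 + inst_snr) < rate_threshold
op_estimate = outage.mean()

# Closed-form OP for this simple model: P(snr < 2^R - 1) = 1 - exp(-(2^R - 1)/avg_snr).
op_exact = 1 - np.exp(-(2 ** rate_threshold - 1) / avg_snr)
print(f"Monte-Carlo OP: {op_estimate:.4f}, closed-form OP: {op_exact:.4f}")
```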

    Five Facets of 6G: Research Challenges and Opportunities

    Whilst fifth-generation (5G) systems are being rolled out across the globe, researchers have turned their attention to the exploration of radical next-generation solutions. At this early evolutionary stage we survey five main research facets of this field, namely Facet 1: next-generation architectures, spectrum and services; Facet 2: next-generation networking; Facet 3: Internet of Things (IoT); Facet 4: wireless positioning and sensing; as well as Facet 5: applications of deep learning in 6G networks. In this paper, we provide a critical appraisal of the literature on promising techniques, ranging from the associated architectures and networking to applications and designs. We portray a plethora of heterogeneous architectures relying on cooperative hybrid networks supported by diverse access and transmission mechanisms. The vulnerabilities of these techniques are also addressed and carefully considered to highlight the most promising future research directions. Additionally, we list a rich suite of learning-driven optimization techniques. We conclude by observing the evolutionary paradigm shift that has taken place from pure single-component bandwidth-efficiency, power-efficiency or delay optimization towards multi-component designs, as exemplified by the twin-component ultra-reliable low-latency mode of the 5G system. We advocate a further evolutionary step towards multi-component Pareto optimization, which requires the exploration of the entire Pareto front of all optimal solutions, where none of the components of the objective function may be improved without degrading at least one of the other components.
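    To make the closing Pareto-optimization remark concrete, the following sketch extracts the Pareto front from a set of hypothetical two-component designs (e.g. transmit power and delay, both to be minimised); the candidate points are randomly generated and not taken from the survey.

```python
import numpy as np

# Illustrative Pareto-front extraction: keep only the candidate designs that
# cannot be improved in one component without worsening the other.
rng = np.random.default_rng(1)
candidates = rng.uniform(0.0, 1.0, size=(200, 2))  # columns: [power, delay]

def pareto_front(points):
    """Return the non-dominated points (minimisation in every component)."""
    keep = []
    for i, p in enumerate(points):
        # p is dominated if some point is no worse everywhere and strictly better somewhere.
        dominated = np.any(
            np.all(points <= p, axis=1) & np.any(points < p, axis=1)
        )
        if not dominated:
            keep.append(i)
    return points[keep]

front = pareto_front(candidates)
print(f"{len(front)} Pareto-optimal designs out of {len(candidates)} candidates")
```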

    Deep Learning Designs for Physical Layer Communications

    Wireless communication systems and their underlying technologies have undergone unprecedented advances over the last two decades to assuage the ever-increasing demands of various applications and emerging technologies. However, the traditional signal processing schemes and algorithms for wireless communications cannot handle the surging complexity associated with fifth-generation (5G) and beyond communication systems due to network expansion, new emerging technologies, high data rates, and the ever-increasing demand for low latency. This thesis extends the traditional downlink transmission schemes to deep learning-based precoding and detection techniques that are hardware-efficient and of lower complexity than the current state-of-the-art. The thesis focuses on precoding/beamforming in massive multiple-input multiple-output (MIMO) systems, signal detection, and lightweight neural network (NN) architectures for precoder and decoder designs. We introduce a learning-based precoder design via constructive interference (CI) that performs the precoding on a symbol-by-symbol basis. Instead of conventionally training an NN without considering the specifics of the optimisation objective, we unfold a power minimisation symbol-level precoding (SLP) formulation based on the interior-point-method (IPM) proximal ‘log’ barrier function. Furthermore, we propose a concept of NN compression, where the weights are quantised to lower numerical precision formats based on binary and ternary quantisation. We further introduce a stochastic quantisation technique, where parts of the NN weight matrix are quantised while the remainder is not. Finally, we propose a systematic complexity scaling of deep neural network (DNN) based MIMO detectors. The model uses a fraction of the DNN inputs by scaling their values through weights that follow monotonically non-increasing functions. Furthermore, we investigate performance-complexity tradeoffs via regularisation constraints on the layer weights such that, at inference, parts of the network layers can be removed with minimal impact on the detection accuracy. Simulation results show that our proposed learning-based techniques offer better complexity-vs-BER (bit-error-rate) and complexity-vs-transmit-power performance compared to state-of-the-art MIMO detection and precoding techniques.
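    As a hedged illustration of the NN-compression idea, the sketch below applies a common ternary weight quantisation heuristic (threshold at 0.7 times the mean absolute weight); the thesis may use a different quantisation rule, so treat this purely as an example of mapping weights to {-alpha, 0, +alpha}.

```python
import numpy as np

def ternarize(weights, delta_factor=0.7):
    """Ternary quantisation sketch: map each weight to {-alpha, 0, +alpha}.

    The threshold rule (delta = delta_factor * mean|w|) and the scaling factor
    alpha follow a common ternary-weight heuristic, not necessarily the
    scheme used in the thesis.
    """
    delta = delta_factor * np.mean(np.abs(weights))
    mask = np.abs(weights) > delta          # weights kept as +/- alpha
    ternary = np.sign(weights) * mask
    # Scale so the quantised layer matches the magnitude of the kept weights.
    alpha = np.abs(weights[mask]).mean() if mask.any() else 0.0
    return alpha * ternary

# Example: quantise a random fully-connected layer weight matrix.
rng = np.random.default_rng(2)
w = rng.normal(0.0, 0.1, size=(64, 32))
w_q = ternarize(w)
print("unique quantised values:", np.unique(w_q))
```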

    A survey of machine learning techniques applied to self organizing cellular networks

    In this paper, a survey of the literature of the past fifteen years on Machine Learning (ML) algorithms applied to self-organizing cellular networks is performed. In order for future networks to overcome the current limitations and address the issues of current cellular systems, it is clear that more intelligence needs to be deployed, so that a fully autonomous and flexible network can be enabled. This paper focuses on the learning perspective of Self-Organizing Network (SON) solutions and provides not only an overview of the most common ML techniques encountered in cellular networks, but also classifies each paper in terms of its learning solution and gives examples. The authors also classify each paper in terms of its self-organizing use case and discuss how each proposed solution performed. In addition, a comparison of the most commonly found ML algorithms in terms of certain SON metrics is performed, and general guidelines on when to choose each ML algorithm for each SON function are proposed. Lastly, this work also provides future research directions and highlights the new paradigms that the use of more robust and intelligent algorithms, together with data gathered by operators, can bring to the cellular networks domain to fully enable the concept of SON in the near future.

    Energy Efficiency in 5G Communications – Conventional to Machine Learning Approaches, Journal of Telecommunications and Information Technology, 2020, no. 4

    Demand for wireless and mobile data is increasing along with the development of virtual reality (VR), augmented reality (AR), mixed reality (MR), and extended reality (ER) applications. In order to handle ultra-high data exchange rates while offering low latency, fifth generation (5G) networks have been proposed. Energy efficiency is one of the key objectives of 5G networks. The notion is defined as the ratio of throughput to total power consumption, and is measured in bits transmitted per Joule. In this paper, we review state-of-the-art techniques for ensuring good energy efficiency in 5G wireless networks. We cover the base-station on/off technique, simultaneous wireless information and power transfer, small cells, coexistence of long term evolution (LTE) and 5G, signal processing algorithms, and the latest machine learning techniques. Finally, a comparison of a few recent research papers focusing on energy-efficient hybrid beamforming designs in massive multiple-input multiple-output (MIMO) systems is presented. The results show that machine learning-based designs may replace the best performing conventional techniques thanks to reduced-complexity machine learning encoders.
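    The energy-efficiency definition above reduces to a one-line computation; the following snippet evaluates it for made-up throughput and power figures purely to fix the units (bits per Joule).

```python
# Energy efficiency as defined in the abstract: throughput divided by total
# power consumption, in bits per Joule. The numbers below are illustrative only.
throughput_bps = 1.2e9        # assumed cell throughput: 1.2 Gbit/s
p_transmit_w = 40.0           # assumed radiated power in watts
p_circuit_w = 260.0           # assumed baseband/RF circuit power in watts

ee_bits_per_joule = throughput_bps / (p_transmit_w + p_circuit_w)
print(f"Energy efficiency: {ee_bits_per_joule:.3e} bit/J")
```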

    6G Radio Testbeds: Requirements, Trends, and Approaches

    The proof of the pudding is in the eating - that is why 6G testbeds are essential in the progress towards the next generation of wireless networks. Theoretical research towards 6G wireless networks is proposing advanced technologies to serve new applications and drastically improve the energy performance of the network. Testbeds are indispensable for validating these new technologies under more realistic conditions. This paper clarifies the requirements for 6G radio testbeds, reveals trends, and introduces approaches towards their development.

    Performance evaluation of edge-computing platforms for the prediction of low temperatures in agriculture using deep learning

    The Internet of Things (IoT) is driving the digital revolution. Almost all economic sectors are becoming "Smart" thanks to the analysis of data generated by IoT. This analysis is carried out by advanced artificial intelligence (AI) techniques that provide insights never before imagined. The combination of both IoT and AI is giving rise to an emerging trend, called AIoT, which is opening up new paths to bring digitization into the new era. However, there is still a big gap between AI and IoT, which lies basically in the computational power required by the former and the lack of computational resources offered by the latter. This is particularly true in rural IoT environments, where the lack of connectivity (or low-bandwidth connections) and power supply forces the search for "efficient" alternatives to provide computational resources to IoT infrastructures without increasing power consumption. In this paper, we explore edge computing as a solution for bridging the gap between AI and IoT in rural environments. We evaluate the training and inference stages of a deep-learning-based precision agriculture application for frost prediction on the modern Nvidia Jetson AGX Xavier in terms of performance and power consumption. Our experimental results reveal that cloud approaches are still a long way off in terms of performance, but the inclusion of GPUs in edge devices offers new opportunities for those scenarios where connectivity is still a challenge.
    This work was partially supported by the Fundacion Seneca del Centro de Coordinacion de la Investigacion de la Region de Murcia under Project 20813/PI/18, and by the Spanish Ministry of Science, Innovation and Universities under grants RTI2018-096384-B-I00 (AEI/FEDER, UE) and RTC-2017-6389-5.
    Guillén-Navarro, MA.; Llanes, A.; Imbernón, B.; Martínez-España, R.; Bueno-Crespo, A.; Cano, J.; Cecilia-Canales, JM. (2021). Performance evaluation of edge-computing platforms for the prediction of low temperatures in agriculture using deep learning. The Journal of Supercomputing. 77:818-840. https://doi.org/10.1007/s11227-020-03288-w
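    The latency side of such an evaluation can be sketched with a simple timing loop; the model, batch shape, and run counts below are placeholders rather than the authors' frost-prediction network, and the power-consumption measurements reported in the paper are not reproduced here.

```python
import time
import numpy as np

def measure_inference_latency(predict_fn, batch, warmup=10, runs=100):
    """Time repeated forward passes of an arbitrary predict function.

    Generic measurement sketch only; the paper's evaluation on the Jetson
    AGX Xavier also tracks power consumption, which is not shown here.
    """
    for _ in range(warmup):          # warm-up iterations to stabilise clocks/caches
        predict_fn(batch)
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        predict_fn(batch)
        times.append(time.perf_counter() - start)
    return np.mean(times), np.std(times)

# Placeholder "model": any callable taking a batch of sensor time series.
dummy_model = lambda x: np.tanh(x @ np.ones((x.shape[1], 1)))
batch = np.random.rand(32, 128)      # hypothetical batch of 32 temperature windows
mean_s, std_s = measure_inference_latency(dummy_model, batch)
print(f"mean latency: {mean_s * 1e3:.3f} ms (+/- {std_s * 1e3:.3f} ms)")
```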