64 research outputs found

    The Novel Applications of Deep Reservoir Computing in Cyber-Security and Wireless Communication

    This chapter introduces novel applications of deep reservoir computing (RC) systems in cyber-security and wireless communication. RC systems are a relatively new class of recurrent neural networks (RNNs). Traditional RNNs are challenging to train due to vanishing/exploding gradients, whereas RC systems are easier to train and have shown similar or even better performance. Because studying spatio-temporal correlations is essential in both the cyber-security and wireless communication domains, RC models are good choices for capturing these correlations. In this chapter, we explore the applications and performance of delayed feedback reservoirs (DFRs) and echo state networks (ESNs), two different types of RC models, in the cyber-security of smart grids and in symbol detection for MIMO-OFDM systems, respectively. We also introduce a spiking DFR structure, since spiking artificial neural networks are more energy efficient and biologically plausible.
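
    To make the training contrast concrete, the following is a minimal echo state network (ESN) sketch in Python: the input and recurrent reservoir weights are drawn at random and left fixed, and only a linear readout is fit by ridge regression. The sizes, spectral radius, and the toy sine-prediction task are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

# Minimal echo state network (ESN) sketch: reservoir weights are random and
# fixed; only the linear readout is trained (ridge regression). Sizes and the
# sine-wave task are illustrative placeholders, not from the chapter.
rng = np.random.default_rng(0)
n_in, n_res, spectral_radius, ridge = 1, 200, 0.9, 1e-6

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))  # heuristic echo-state scaling

def run_reservoir(u):
    """Drive the fixed reservoir with input sequence u (T x n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.arange(0, 60, 0.1)
u = np.sin(t)[:, None]
X = run_reservoir(u[:-1])
Y = u[1:]

# Ridge-regression readout: the only trained parameters.
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)
print("train MSE:", np.mean((X @ W_out - Y) ** 2))
```

    A DFR follows the same train-only-the-readout principle, but realizes the reservoir with a single nonlinear node and a delay line rather than a large random network, which is what makes it attractive for hardware and spiking implementations.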

    Visible Light Communication (VLC)

    Visible light communication (VLC) using light-emitting diodes (LEDs) or laser diodes (LDs) has been envisioned as one of the key enabling technologies for 6G and Internet of Things (IoT) systems, owing to its appealing advantages, including abundant and unregulated spectrum resources, no electromagnetic interference (EMI) radiation, and high security. However, despite its many advantages, VLC faces several technical challenges, such as the limited bandwidth and severe nonlinearity of opto-electronic devices, link blockage, and user mobility. Therefore, significant efforts are needed from the global VLC community to develop VLC technology further. This Special Issue, “Visible Light Communication (VLC)”, provides an opportunity for global researchers to share their new ideas and cutting-edge techniques to address the above-mentioned challenges. The 16 papers published in this Special Issue represent the fascinating progress of VLC in various contexts, including general indoor and underwater scenarios, and the emerging application of machine learning/artificial intelligence (ML/AI) techniques in VLC.

    On Investigations of Machine Learning and Deep Learning Techniques for MIMO Detection

    This paper reviews in detail the various types of multiple-input multiple-output (MIMO) detection algorithms. Current MIMO detectors are not well suited to massive MIMO (mMIMO) scenarios with large numbers of antennas, as their performance degrades as the antenna count grows. To combat these issues, machine learning (ML) and deep learning (DL) based detection algorithms are being researched and developed. This paper provides an extensive survey of these detectors, along with their advantages and challenges. The issues discussed must be resolved before such detectors can be deployed.
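
    For context, the sketch below shows what a conventional linear minimum mean square error (MMSE) detector computes for a small MIMO system; the antenna counts, noise level, and QPSK constellation are illustrative assumptions. The matrix inversion it requires is one reason classical detectors scale poorly as the number of antennas grows.

```python
import numpy as np

# Minimal linear MMSE MIMO detector sketch (illustrative sizes, QPSK symbols).
rng = np.random.default_rng(1)
n_tx, n_rx, sigma2 = 4, 8, 0.05

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
x = rng.choice(qpsk, n_tx)                       # transmitted symbols
H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx))
y = H @ x + noise

# MMSE equalizer: x_hat = (H^H H + sigma^2 I)^{-1} H^H y
G = np.linalg.inv(H.conj().T @ H + sigma2 * np.eye(n_tx)) @ H.conj().T
x_hat = G @ y

# Hard decision: nearest QPSK constellation point per stream.
decisions = qpsk[np.argmin(np.abs(x_hat[:, None] - qpsk[None, :]), axis=1)]
print("symbol errors:", int(np.sum(decisions != x)))
```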

    Detect to Learn: Structure Learning with Attention and Decision Feedback for MIMO-OFDM Receive Processing

    The limited number of over-the-air (OTA) pilot symbols in multiple-input multiple-output orthogonal-frequency-division-multiplexing (MIMO-OFDM) systems presents a major challenge for detecting transmitted data symbols at the receiver, especially for machine learning-based approaches. While it is crucial to explore effective ways to exploit pilots, one can also take advantage of the data symbols to improve detection performance. This paper therefore introduces an online attention-based approach, RC-AttStructNet-DF, that efficiently utilizes pilot symbols and is dynamically updated with the detected payload data through a decision feedback (DF) mechanism. Reservoir computing (RC) is employed in the time-domain network to facilitate efficient online training. The frequency-domain network adopts a novel 2D multi-head attention (MHA) module to capture time and frequency correlations, and a structure-based StructNet to facilitate the DF mechanism. An attention loss is designed to train the frequency-domain network. The DF mechanism further enhances detection performance by dynamically tracking channel changes through the detected data symbols. The effectiveness of RC-AttStructNet-DF is demonstrated through extensive experiments in MIMO-OFDM and massive MIMO-OFDM systems with different modulation orders and under various scenarios. (Accepted to IEEE Transactions on Communications.)
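
    As a hedged illustration of the decision-feedback idea, the sketch below fits a linear RC readout on the pilot symbols and then refines it using the receiver's own hard decisions on the payload as pseudo-labels. The function names, ridge solver, and loop structure are assumptions for illustration only; the paper's RC-AttStructNet-DF additionally includes the frequency-domain 2D MHA and StructNet modules, which are omitted here.

```python
import numpy as np

def ridge_readout(states, targets, lam=1e-4):
    """Fit a linear readout W so that states @ W approximates targets (ridge regression)."""
    n = states.shape[1]
    return np.linalg.solve(states.conj().T @ states + lam * np.eye(n),
                           states.conj().T @ targets)

def hard_decision(z, constellation):
    """Map soft symbol estimates to the nearest constellation points."""
    idx = np.argmin(np.abs(z[..., None] - constellation), axis=-1)
    return constellation[idx]

def detect_with_feedback(pilot_states, pilot_symbols,
                         data_states, constellation, n_rounds=2):
    """Decision-feedback loop: start from a pilot-only readout, then refine it
    with the receiver's own hard decisions on the payload (pseudo-labels)."""
    W = ridge_readout(pilot_states, pilot_symbols)
    decisions = hard_decision(data_states @ W, constellation)
    for _ in range(n_rounds):
        # Augment the small pilot set with detected payload symbols and re-fit.
        X = np.vstack([pilot_states, data_states])
        Y = np.vstack([pilot_symbols, decisions])
        W = ridge_readout(X, Y)
        decisions = hard_decision(data_states @ W, constellation)
    return decisions
```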

    Resource allocation technique for powerline network using a modified shuffled frog-leaping algorithm

    Resource allocation (RA) techniques should be efficient and optimized in order to enhance the quality of service (power and bit allocation, capacity, scalability) of high-speed networked data applications. This research attempts to push RA efficiency toward near-optimal performance. The RA problem involves efficiently assigning subcarriers, power, and bit loads to each user. Several studies conducted by the Federal Communications Commission have shown that conventional RA approaches are becoming insufficient for the rapid growth in networking demand, resulting in spectrum underutilization, low capacity, and slow convergence; in addition, poor bit-error-rate performance, channel-feedback delay, weak scalability, and computational complexity make real-time solutions intractable. These shortcomings stem mainly from sophisticated, restrictive constraints, multiple objectives, unfairness, and channel noise, as well as the unrealistic assumption that perfect channel state information is available. The main goal of this work is to develop a conceptual framework and mathematical model for resource allocation using the Shuffled Frog-Leaping Algorithm (SFLA). A modified SFLA is therefore introduced and integrated into an Orthogonal Frequency Division Multiplexing (OFDM) system. The SFLA generates a random population of candidate solutions (power, bit); the fitness of each solution is then calculated and improved for each subcarrier and user. The solution is numerically validated and verified through simulation of a powerline channel. System performance was compared with similar research in terms of capacity, scalability, allocated rate/power, and convergence. The allocated resources are consistently optimized, and the achieved capacity is consistently higher than that of root-finding, linear, and hybrid evolutionary algorithms. The proposed algorithm also offers the fastest convergence, requiring 75 iterations to reach within 0.001% of the global optimum, compared with 92 for conventional techniques. Finally, joint allocation models for selecting optimal resource values are introduced: adaptive power and bit allocators for powerline-based OFDM systems using the modified SFLA, TLBO, and PSO are proposed.
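
    As a rough illustration of the metaheuristic named above, the sketch below implements a minimal shuffled frog-leaping loop for a generic maximization problem: frogs are sorted by fitness, dealt into memeplexes, and the worst frog in each memeplex is moved toward that memeplex's best frog. The fitness function, population size, and bounds are placeholders; the paper's actual objective couples per-subcarrier power and bit allocation to capacity and QoS constraints.

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(frog):
    # Placeholder objective: negative squared distance from an arbitrary target.
    return -np.sum((frog - 0.5) ** 2)

def sfla(dim=8, n_frogs=30, n_memeplexes=5, iters=50, step=2.0):
    """Minimal shuffled frog-leaping algorithm (SFLA) sketch."""
    frogs = rng.random((n_frogs, dim))
    for _ in range(iters):
        # Shuffle: sort frogs by fitness and deal them into memeplexes.
        order = np.argsort([-fitness(f) for f in frogs])
        frogs = frogs[order]
        memeplexes = [frogs[i::n_memeplexes] for i in range(n_memeplexes)]
        for m in memeplexes:
            best, worst = m[0], m[-1]
            # Local search: move the worst frog toward the memeplex best.
            candidate = worst + step * rng.random(dim) * (best - worst)
            if fitness(candidate) > fitness(worst):
                m[-1] = np.clip(candidate, 0.0, 1.0)
        frogs = np.vstack(memeplexes)
    return max(frogs, key=fitness)

best = sfla()
print("best fitness:", fitness(best))
```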

    Emerging opportunities and challenges for the future of reservoir computing

    Reservoir computing originated in the early 2000s, the core idea being to utilize dynamical systems as reservoirs (nonlinear generalizations of standard bases) to adaptively learn spatiotemporal features and hidden patterns in complex time series. After being shown to have the potential for higher-precision prediction of chaotic systems, reservoir computing attracted a great amount of interest and follow-up work in the nonlinear dynamics and complex systems community. To unlock the full capabilities of reservoir computing towards a fast, lightweight, and significantly more interpretable learning framework for temporal dynamical systems, substantially more research is needed. This Perspective intends to elucidate the parallel progress of mathematical theory, algorithm design, and experimental realizations of reservoir computing, and to identify emerging opportunities as well as existing challenges for large-scale industrial adoption of reservoir computing, together with a few ideas and viewpoints on how some of those challenges might be resolved through joint efforts by academic and industrial researchers across multiple disciplines.
