7 research outputs found

    Using machine learning technologies to solve the problem of classifying infrasound background monitoring signals

    It is widely known that, among the sound signals generated by natural and anthropogenic phenomena, the longest-lived are waves with frequencies below 20 Hz, called infrasound. This property makes it possible to track the occurrence of high-energy events at regional scales (up to 200–300 km) through infrasound monitoring. At the same time, separating useful infrasound signals from background noise is a non-trivial task in both real-time and post-facto signal processing. In this paper we propose a new method for classifying specific signals in infrasound monitoring data using Shannon permutation entropy and vectors of the occurrence frequencies of ordinal permutations of consecutive sample values of rank 3 (the number of permutation elements). To evaluate the validity of the proposed entropy-based classification method, two machine learning methods, the random forest method and a classical neural network approach, implemented in Python using the scikit-learn, TensorFlow, and Keras libraries, were used. The classification quality was evaluated against the traditional frequency-based method of class extraction based on the Fourier transform. Recognition was performed on prepared infrasound monitoring data from the Altai Republic. A computational experiment on separating 5 classes of signals showed that the proposed method gives neural-network recognition results comparable to frequency-based classification of the original data; the recognition accuracy was 51–58 %. For the random forest method, the recognition accuracy of the frequency-based classes was slightly higher: 51 % vs. 45 % for the classes obtained with the permutation entropy method. The analysis of the experimental results shows that classification by permutation entropy is sufficiently competitive for the recognition of infrasound signals. In addition, the proposed method is much easier to implement for inline signal processing in low-consumption microcontroller systems. The next step is to test the method at infrasound signal registration points and as part of the infrasound monitoring data processing system for real-time event detection.
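    The permutation-based features described above can be illustrated with a short Python sketch (a minimal illustration only, assuming NumPy; the function names are hypothetical and this is not the authors' implementation). For rank 3 there are 3! = 6 possible ordinal patterns, and the vector of their relative frequencies is the feature vector that would be passed to the classifiers:

        from itertools import permutations
        from math import factorial, log2
        import numpy as np

        def permutation_frequencies(signal, order=3):
            # Relative frequency of each ordinal pattern of `order` consecutive samples.
            counts = {p: 0 for p in permutations(range(order))}   # 3! = 6 patterns for order 3
            for i in range(len(signal) - order + 1):
                pattern = tuple(int(k) for k in np.argsort(signal[i:i + order]))
                counts[pattern] += 1
            freq = np.array(list(counts.values()), dtype=float)
            return freq / freq.sum()

        def permutation_entropy(signal, order=3):
            # Shannon entropy of the ordinal-pattern distribution, normalised to [0, 1].
            p = permutation_frequencies(signal, order)
            p = p[p > 0]                                           # drop patterns that never occur
            return float(-(p * np.log2(p)).sum() / log2(factorial(order)))

    In such a scheme, each monitoring window is represented by its 6-element frequency vector (and, optionally, the entropy value) before being fed to the random forest or neural network classifier.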

    FPGA-Based Adaptive Digital Beamforming Using Machine Learning for MIMO Systems

    In modern Multiple-Input Multiple-Output (MIMO) systems, such as cellular and Wi-Fi technology, an array of antenna elements is used to spatially steer RF signals with the goal of changing the overall antenna gain pattern to achieve a higher signal-to-interference-plus-noise ratio (SINR). Digital Beamforming (DBF) achieves this steering effect by applying weighted coefficients to the antenna elements, similar to digital filtering, which adjust the phase and gain of the received or transmitted signals. Since real-world MIMO systems are often used in dynamic environments, adaptive beamforming techniques have been used to overcome variable challenges to system SINR, such as dispersive channels or inter-device interference, by applying statistically based algorithms that calculate the weights adaptively. However, large element-count array systems, with their high degrees of freedom (DOF), can face many challenges in the practical application of these adaptive algorithms. These statistical matrix methods can be either computationally prohibitive or reliant on non-optimal simplifications in order to provide adaptive weights in time for an application, especially given a system's computational capability; for instance, MIMO communication devices with strict size, weight, and power (SWaP) constraints often have processing limitations due to the use of low-power processors or Field-Programmable Gate Arrays (FPGAs). This thesis therefore addresses these adaptive MIMO challenges in a twofold approach. First, it shows that advances in Machine Learning (ML) and Deep Neural Networks (DNNs) can be applied directly to the computationally complex problem of calculating optimal adaptive beamforming weights via a custom Convolutional Neural Network (CNN). Second, the derived adaptive beamforming CNN is shown to map efficiently to programmable-logic FPGA resources that can update the adaptive coefficients in real time. This machine learning implementation is contrasted against the current state-of-the-art FPGA architecture for adaptive beamforming, which uses traditional Recursive Least Squares (RLS) computation, and is shown to provide adaptive beamforming weights faster and with fewer FPGA logic resources. The reduction in both processing latency and FPGA fabric utilization enables SWaP-constrained MIMO processors to perform adaptive beamforming for higher channel-count systems than is currently possible with traditional computation methods.
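    For context, the traditional RLS computation that the proposed CNN is compared against can be sketched in a few lines of NumPy (a minimal, hypothetical illustration under standard RLS assumptions, not the thesis's FPGA implementation; function and parameter names are illustrative):

        import numpy as np

        def rls_beamforming_weights(snapshots, reference, lam=0.99, delta=1e-2):
            # snapshots: (num_snapshots, n_elements) complex array samples
            # reference: (num_snapshots,) known training/reference signal
            # Returns the weight vector w such that the beamformer output is y = w^H x.
            n_elements = snapshots.shape[1]
            w = np.zeros(n_elements, dtype=complex)
            P = np.eye(n_elements, dtype=complex) / delta          # inverse-correlation estimate
            for x, d in zip(snapshots, reference):
                Px = P @ x
                g = Px / (lam + np.vdot(x, Px))                    # gain vector
                e = d - np.vdot(w, x)                              # a-priori error
                w = w + g * np.conj(e)                             # weight update
                P = (P - np.outer(g, np.conj(x)) @ P) / lam        # update of inverse correlation
            return w

    The per-snapshot matrix-vector operations inside this loop are the cost that grows with element count, which motivates both the FPGA RLS architecture and the CNN-based alternative described in the abstract.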