
    Recurrent Neural Networks and Matrix Methods for Cognitive Radio Spectrum Prediction and Security

    In this work, machine learning tools, including recurrent neural networks (RNNs), matrix completion, and non-negative matrix factorization (NMF), are applied to cognitive radio problems: specifically, a missing data problem and a blind signal separation problem. A specialized RNN called the Cellular Simultaneous Recurrent Network (CSRN), typically used in image processing applications, has been modified. The CSRN performs well for spatial spectrum prediction of radio signals with missing data. An algorithm called soft-impute for matrix completion, used together with an RNN, performs well for missing data problems in the radio spectrum time-frequency domain. Estimating missing spectrum data can improve cognitive radio efficiency. An NMF method called tuning pruning is used for blind source separation of radio signals in simulation, and an NMF optimization technique using a geometric constraint is proposed to limit the solution space of blind signal separation. Both NMF methods are promising for addressing a security problem known as the spectrum sensing data falsification attack.
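    As a rough illustration of the soft-impute idea mentioned above, the following NumPy sketch iteratively fills missing entries of a spectrum matrix with a low-rank estimate obtained by soft-thresholding singular values. This is a minimal sketch of the general algorithm, not the paper's implementation; the regularization weight `lam` and iteration count are illustrative assumptions.

```python
import numpy as np

def soft_impute(X, mask, lam=1.0, n_iters=100):
    """Soft-impute matrix completion sketch.

    X    : observed matrix (values at missing entries are ignored)
    mask : boolean array, True where entries are observed
    lam  : soft-threshold applied to the singular values (assumed value)
    """
    Z = np.zeros_like(X)                 # current low-rank estimate
    for _ in range(n_iters):
        # Fill the missing entries with the current estimate
        filled = np.where(mask, X, Z)
        # Low-rank step: SVD, then shrink the singular values toward zero
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)
        Z = (U * s) @ Vt
    return Z
```

Entries observed in `mask` anchor the estimate, while the shrunken SVD propagates low-rank structure into the missing positions.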

    A Robust Hybrid Neural Network Architecture for Blind Source Separation of Speech Signals Exploiting Deep Learning

    In the contemporary era, blind source separation has emerged as a highly appealing and significant research topic within the field of signal processing. The integration of blind source separation techniques into beyond-fifth-generation and sixth-generation networks is driven by the growing demand for reliable and efficient communication systems that can handle high-density networks, dynamic interference environments, and the coexistence of diverse signal sources, thereby enabling enhanced signal extraction and separation for improved system performance. Audio processing in particular presents a critical domain, where the challenge lies in effectively handling files containing a mixture of human speech, silence, and music. Addressing this challenge, speech separation systems can be regarded as a specialized form of human speech recognition or audio signal classification, leveraged to separate, identify, or delineate segments of audio signals containing human speech. In applications such as volume reduction, quality enhancement, detection, and identification, human speech must be separated by eliminating silence, music, or environmental noise from the audio signals. Consequently, the development of robust methods for accurate and efficient speech separation is of paramount importance in optimizing audio signal processing tasks. This study proposes a novel three-way neural network architecture that incorporates transfer learning, a pre-trained dual-path recurrent neural network, and a transformer. In addition to learning the time series associated with audio signals, this network has the unique capability of direct context-awareness for modeling the speech sequence within the transformer framework.
A comprehensive array of simulations is conducted to evaluate the proposed model, which is benchmarked against seven prominent state-of-the-art deep-learning architectures. The results demonstrate notable advancements across multiple objective metrics: the proposed solution shows an average improvement of 4.60% in short-time objective intelligibility, 14.84% in source-to-distortion ratio, and 9.87% in scale-invariant signal-to-noise ratio. These improvements surpass those of the nearest rival, the dual-path recurrent neural network time-domain audio separation network, establishing the superior performance of the proposed model.
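For context on the reported metrics, scale-invariant signal-to-noise ratio (SI-SNR) is commonly computed by projecting the estimate onto the reference and comparing signal to residual energy. The sketch below uses the standard textbook definition, not code from the paper.

```python
import numpy as np

def si_snr(est, ref, eps=1e-8):
    """Scale-invariant signal-to-noise ratio in dB."""
    # Remove DC offsets so the metric is invariant to constant shifts
    est = est - est.mean()
    ref = ref - ref.mean()
    # Project the estimate onto the reference (scale-invariant target)
    s_target = (est @ ref) / (ref @ ref + eps) * ref
    e_noise = est - s_target
    return 10.0 * np.log10((s_target @ s_target) / (e_noise @ e_noise + eps) + eps)
```

Because of the projection, rescaling the estimate does not change the score, which is why the metric is preferred when separation networks output signals of arbitrary gain.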

    Improving Source Separation via Multi-Speaker Representations

    Lately there have been novel developments in deep learning towards solving the cocktail party problem. Initial results are very promising and motivate further research in the domain. One technique that has not yet been explored in the neural network approach to this task is speaker adaptation. Intuitively, information on the speakers we are trying to separate seems fundamentally important for the speaker separation task. However, retrieving this speaker information is challenging, since the speaker identities are not known a priori and multiple speakers are simultaneously active; there is thus a chicken-and-egg problem. To tackle this, source signals and i-vectors are estimated alternately. We show that blind multi-speaker adaptation improves the results of the network, and that (in our case) the network is not capable of adequately retrieving this useful speaker information itself.
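    The alternation between source estimation and speaker representation can be caricatured by a toy loop that assigns feature frames to speakers given the current speaker embeddings, then refreshes the embeddings from the assignments. This is a k-means-style analogy for illustration only, not the paper's i-vector algorithm.

```python
import numpy as np

def alternate_estimation(frames, n_speakers=2, n_iters=10, seed=0):
    """Toy alternating scheme illustrating the chicken-and-egg structure:
    'separate' given speaker embeddings, then 'adapt' the embeddings."""
    rng = np.random.default_rng(seed)
    # Initialize speaker embeddings from randomly chosen frames
    emb = frames[rng.choice(len(frames), n_speakers, replace=False)]
    for _ in range(n_iters):
        # Step 1 ("separation"): assign each frame to the closest speaker
        d = np.linalg.norm(frames[:, None, :] - emb[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        # Step 2 ("adaptation"): refresh each embedding from its frames
        for k in range(n_speakers):
            if np.any(labels == k):
                emb[k] = frames[labels == k].mean(axis=0)
    return labels, emb
```

As in the paper's scheme, neither quantity is known a priori; each iteration uses the current guess of one to improve the other.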

    MISEP - Linear and Nonlinear ICA Based on Mutual Information

    MISEP is a method for linear and nonlinear ICA that is able to handle a wide variety of situations. It extends the well-known INFOMAX method in two directions: (1) handling of nonlinear mixtures, and (2) learning the nonlinearities to be used at the outputs. The method can therefore separate linear and nonlinear mixtures of components with a wide range of statistical distributions. This paper presents the basis of the MISEP method, as well as experimental results obtained with it. The results illustrate the applicability of the method to various situations and show that, although the nonlinear blind separation problem is ill-posed, the use of regularization allows the problem to be solved when the nonlinear mixture is relatively smooth.
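    For the linear case MISEP addresses, the following minimal FastICA sketch (a different ICA algorithm, used here only to illustrate linear blind separation, not MISEP's mutual-information approach) recovers independent sources from a linear mixture up to scale and permutation:

```python
import numpy as np

def fastica_2(X, n_iters=200, seed=0):
    """Minimal two-component FastICA: tanh nonlinearity, symmetric decorrelation."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=0)
    # Whiten: decorrelate the observations and give them unit variance
    cov = np.cov(X, rowvar=False)
    d, E = np.linalg.eigh(cov)
    Xw = X @ E @ np.diag(1.0 / np.sqrt(d)) @ E.T
    W = rng.normal(size=(2, 2))
    for _ in range(n_iters):
        WX = Xw @ W.T                    # current source estimates
        G = np.tanh(WX)
        # Fixed-point update: E[x g(w.x)] - E[g'(w.x)] w, with g = tanh
        W_new = (G.T @ Xw) / len(Xw) - np.diag((1 - G**2).mean(axis=0)) @ W
        # Symmetric decorrelation keeps the rows of W orthonormal
        U, _, Vt = np.linalg.svd(W_new)
        W = U @ Vt
    return Xw @ W.T                      # recovered sources
```

As with any ICA method, the recovered components come back in arbitrary order and with arbitrary sign and scale, so evaluation typically matches them to the true sources by correlation.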