8 research outputs found

    Incoherent Frequency Fusion for Broadband Steered Response Power Algorithms in Noisy Environments

    The steered response power (SRP) algorithms have been shown to be among the most effective and robust approaches for direction of arrival (DOA) estimation in noisy environments. In broadband signal applications, SRP methods typically perform their computations in the frequency domain by applying a fast Fourier transform (FFT) to a signal portion, calculating the response power on each frequency bin, and then fusing these estimates to obtain the final result. We introduce an incoherent fusion method for the frequency responses based on a normalized arithmetic mean (NAM). Experiments relying on SRP algorithms for the localization of motor vehicles in a noisy outdoor environment are presented, focusing on performance differences at different signal-to-noise ratios (SNRs) and on spatial resolution for closely spaced sources. We demonstrate that the proposed fusion method provides higher resolution for the delay-and-sum SRP and improved performance for the minimum variance distortionless response (MVDR) and multiple signal classification (MUSIC) algorithms.
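    As an illustration of the frequency-domain SRP pipeline described in the abstract, the sketch below computes a delay-and-sum SRP map per FFT bin and fuses the bins with a normalized arithmetic mean. The normalization used here (scaling each bin's power map to its maximum before averaging) is an assumption for illustration, not necessarily the paper's exact NAM definition; the function name, array geometry, and parameters are likewise hypothetical.

```python
import numpy as np

def srp_nam_fusion(frames, mic_pos, angles_deg, fs, c=343.0):
    """Delay-and-sum SRP over a grid of candidate DOAs, fused across
    frequency bins with a normalized arithmetic mean (NAM).

    frames:     (n_mics, n_samples) time-domain snapshot
    mic_pos:    (n_mics, 2) microphone coordinates in metres
    angles_deg: candidate azimuths in degrees
    Returns the fused SRP map over the candidate angles.
    """
    n_mics, n_samples = frames.shape
    spectra = np.fft.rfft(frames, axis=1)                 # per-channel FFT
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)

    angles = np.deg2rad(np.asarray(angles_deg))
    # unit vectors pointing towards each candidate direction
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # (n_angles, 2)
    delays = mic_pos @ dirs.T / c                                # (n_mics, n_angles)

    power = np.zeros((len(freqs), len(angles)))
    for k in range(1, len(freqs)):                        # skip DC
        steer = np.exp(-2j * np.pi * freqs[k] * delays)    # (n_mics, n_angles)
        # delay-and-sum response power at this frequency bin
        power[k] = np.abs(steer.conj().T @ spectra[:, k]) ** 2 / n_mics

    # NAM fusion (assumption): normalize each bin's power map so every
    # bin contributes equally, then take the arithmetic mean over bins.
    norm = power.max(axis=1, keepdims=True)
    norm[norm == 0] = 1.0
    return (power / norm).mean(axis=0)
```

    The per-bin normalization is meant to keep any single high-energy bin from dominating the fused map, which is one plausible reading of the "normalized" in NAM.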

    A Sequence Matching Network for Polyphonic Sound Event Localization and Detection

    Polyphonic sound event detection and direction-of-arrival estimation require different input features from audio signals. While sound event detection mainly relies on time-frequency patterns, direction-of-arrival estimation relies on magnitude or phase differences between microphones. Previous approaches use the same input features for sound event detection and direction-of-arrival estimation, and train the two tasks jointly or in a two-stage transfer-learning manner. We propose a two-step approach that decouples the learning of the sound event detection and direction-of-arrival estimation systems. In the first step, we detect the sound events and estimate the directions of arrival separately to optimize the performance of each system. In the second step, we train a deep neural network to match the two output sequences of the event detector and the direction-of-arrival estimator. This modular and hierarchical approach allows flexibility in the system design and increases the performance of the whole sound event localization and detection system. Experimental results on the DCASE 2019 sound event localization and detection dataset show improved performance compared to previous state-of-the-art solutions.
    Comment: to be published in the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
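    The sketch below illustrates the idea of the second-stage sequence matching network: a small recurrent model that consumes the frame-wise outputs of a pre-trained sound event detector and a DOA estimator and learns to align them into joint predictions. The architecture (a bidirectional GRU with two output heads), the layer sizes, and the two-dimensional DOA representation are assumptions for illustration, not the authors' exact design.

```python
import torch
import torch.nn as nn

class SequenceMatcher(nn.Module):
    """Second-stage matcher (sketch): aligns the output sequences of a
    pre-trained event detector and a pre-trained DOA estimator."""

    def __init__(self, n_classes, doa_dim=2, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(n_classes + doa_dim, hidden,
                          batch_first=True, bidirectional=True)
        self.event_head = nn.Linear(2 * hidden, n_classes)          # event activity
        self.doa_head = nn.Linear(2 * hidden, n_classes * doa_dim)  # per-class DOA

    def forward(self, sed_probs, doa_est):
        # sed_probs: (batch, frames, n_classes); doa_est: (batch, frames, doa_dim)
        x, _ = self.rnn(torch.cat([sed_probs, doa_est], dim=-1))
        events = torch.sigmoid(self.event_head(x))
        doa = self.doa_head(x)
        return events, doa
```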

    Performance of DOA estimation algorithms for acoustic localization of indoor flying drones using artificial sound source

    Flying unmanned aerial vehicles (UAVs) in swarms can have numerous advantages. However, maintaining a safe distance between them during flight is very challenging. To achieve this, each UAV in the swarm needs to know its location relative to the others. This work proposes a method for relative localization using the chirping sound emitted by UAVs flying together indoors. The strategy is simulated to assess the indoor localization performance of three different types of chirping sounds using six microphone arrays. The direction of arrival (DOA) of the chirping sound is estimated using several published algorithms, including MUSIC, CSSM, SRP-PHAT, TOPS, and WAVES. The sound is produced in a simulated indoor flying environment with several different settings of signal-to-noise ratio (SNR) and reverberation time (RT). Based on the results, chirping sounds with a wider frequency band produced better results in terms of mean DOA estimation error. The chirping sounds are also tested with actual UAVs operating at different rotor speeds. Similarly, the wider-band chirping sound produced better results for three of the algorithms, as reflected in their absolute mean errors. Nevertheless, further work is needed to filter out the UAVs' rotor noise and the indoor reverberation effects for better performance.
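    A minimal sketch of the kind of simulation described, assuming a far-field source, a six-microphone circular array, and incoherent wideband MUSIC (one of the listed algorithms). The chirp parameters, array radius, frequency range, and the noise-free, reverberation-free propagation model are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import chirp, stft

def simulate_array(signal, fs, mic_pos, azimuth_deg, c=343.0):
    """Far-field simulation (assumption): delay the source signal onto each
    microphone of a planar array via fractional delays in the frequency domain."""
    az = np.deg2rad(azimuth_deg)
    direction = np.array([np.cos(az), np.sin(az)])
    delays = mic_pos @ direction / c                       # seconds, per mic
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    spec = np.fft.rfft(signal)
    return np.array([np.fft.irfft(spec * np.exp(-2j * np.pi * freqs * d), n)
                     for d in delays])

def wideband_music(frames_stft, freqs, mic_pos, grid_deg, c=343.0, n_src=1):
    """Incoherent wideband MUSIC: average narrowband pseudospectra over bins."""
    grid = np.deg2rad(grid_deg)
    dirs = np.stack([np.cos(grid), np.sin(grid)], axis=1)
    spectrum = np.zeros(len(grid))
    for k, f in enumerate(freqs):
        if f < 200:                                        # skip very low bins
            continue
        X = frames_stft[:, k, :]                           # (n_mics, n_frames)
        R = X @ X.conj().T / X.shape[1]                    # spatial covariance
        _, vecs = np.linalg.eigh(R)
        En = vecs[:, :-n_src]                              # noise subspace
        steer = np.exp(-2j * np.pi * f * (mic_pos @ dirs.T) / c)
        denom = np.sum(np.abs(En.conj().T @ steer) ** 2, axis=0)
        spectrum += 1.0 / np.maximum(denom, 1e-12)
    return spectrum

# Example: a linear chirp (one candidate chirp type) arriving from 60 degrees
# at a six-microphone circular array of radius 10 cm; noise-free for brevity.
fs = 16000
t = np.arange(0, 0.5, 1 / fs)
sig = chirp(t, f0=1000, f1=6000, t1=t[-1], method='linear')
angles6 = np.deg2rad(np.arange(6) * 60)
mic_pos = 0.10 * np.stack([np.cos(angles6), np.sin(angles6)], axis=1)
obs = simulate_array(sig, fs, mic_pos, azimuth_deg=60)
freqs, _, Z = stft(obs, fs=fs, nperseg=256)                # (n_mics, n_bins, n_frames)
grid = np.arange(0, 360)
est = grid[np.argmax(wideband_music(Z, freqs, mic_pos, grid))]
print(f"estimated azimuth: {est} deg")
```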

    Onboard Audio and Video Processing for Secure Detection, Localization, and Tracking in Counter-UAV Applications

    Nowadays, UAVs are of fundamental importance in numerous civil applications, such as search and rescue, and military applications, such as monitoring, patrolling, and counter-UAV operations, where remote UAV nodes collect sensor data. In the latter case, flying UAVs collect environmental data used to counter external attacks launched by adversary drones. However, due to the limited computing resources on board the acquisition UAVs, most of the signal processing is still performed on a central ground unit to which the sensor data are sent wirelessly. This poses serious security risks, such as cyber attacks by malicious entities that exploit vulnerabilities at the application level. One way to reduce this risk is to move part of the computation on board the remote nodes. In this context, we propose a framework in which the detection, localization, and tracking of nearby drones can be performed in real time on the small computing devices mounted on board the drones. Background subtraction is applied to the video frames as a pre-processing step for on-board UAV detection using machine-vision algorithms. For the localization and tracking of the detected UAV, multi-channel acoustic signals are instead used, and DOA estimates are obtained through the MUSIC algorithm. The proposed approach is described in detail along with some experiments, and methods for effective implementation are then provided.
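    A minimal sketch of the video pre-processing step, assuming OpenCV's MOG2 background subtractor on an onboard camera stream; the file name, thresholds, and contour-area filter are illustrative assumptions, and the downstream machine-vision classifier mentioned in the abstract is not included.

```python
import cv2

# Hypothetical video source; on a real UAV this would be the onboard camera stream.
cap = cv2.VideoCapture("onboard_camera.mp4")
# MOG2 background subtractor; parameter values are illustrative assumptions.
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32,
                                                detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                    # foreground mask
    mask = cv2.medianBlur(mask, 5)                    # suppress sensor noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Candidate moving regions (e.g. a nearby drone) kept for a downstream
    # machine-vision detector, as described in the abstract.
    candidates = [cv2.boundingRect(c) for c in contours
                  if cv2.contourArea(c) > 50]
    for x, y, w, h in candidates:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cap.release()
```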