7 research outputs found

    Tracking a moving sound source from a multi-rotor drone

    We propose a method to track, from a multi-rotor drone, a moving sound source, such as a human speaker or an emergency whistle, whose sound is mixed with the strong ego-noise generated by the rotating motors and propellers. The proposed method is independent of the specific drone and requires neither pre-training nor reference signals. We first employ a time-frequency spatial filter to estimate, on short audio segments, the direction of arrival of the moving source, and then track these noisy estimates with a particle filter. We quantitatively evaluate the results against a ground-truth trajectory of the sound source obtained with an on-board camera, and compare the performance of the proposed method with baseline solutions.
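The two-stage pipeline the abstract describes (noisy per-segment DOA estimates smoothed by a particle filter) can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's code: the 1-D azimuth state, random-walk motion model, and all parameter values (`n_particles`, `motion_std`, `obs_std`) are illustrative assumptions.

```python
# Hypothetical sketch: smoothing noisy direction-of-arrival (DOA) estimates
# with a bootstrap particle filter. Not the paper's implementation.
import numpy as np

def track_doa(measurements, n_particles=500, motion_std=5.0, obs_std=15.0, seed=0):
    """Smooth a sequence of noisy azimuth estimates (degrees) with a particle filter."""
    rng = np.random.default_rng(seed)
    particles = rng.uniform(-180.0, 180.0, n_particles)  # uniform prior over azimuth
    track = []
    for z in measurements:
        # Predict: random-walk motion model for the moving source
        particles += rng.normal(0.0, motion_std, n_particles)
        particles = (particles + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
        # Update: weight particles by likelihood of the noisy DOA measurement
        diff = (particles - z + 180.0) % 360.0 - 180.0    # circular difference
        weights = np.exp(-0.5 * (diff / obs_std) ** 2)
        weights /= weights.sum()
        # Systematic resampling to avoid weight degeneracy
        positions = (rng.random() + np.arange(n_particles)) / n_particles
        particles = particles[np.searchsorted(np.cumsum(weights), positions)]
        track.append(float(np.mean(particles)))
    return track

# Synthetic example: a source drifting from 0 to 45 degrees, observed with
# heavy jitter standing in for ego-noise-corrupted DOA estimates.
rng = np.random.default_rng(1)
truth = np.linspace(0.0, 45.0, 50)
noisy = truth + rng.normal(0.0, 20.0, 50)
smoothed = track_doa(noisy)
```

The resampling step keeps the particle set concentrated on likely azimuths even when individual segment estimates are far off, which is why the filtered track is markedly smoother than the raw measurements.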

    Microphone-Array Ego-Noise Reduction Algorithms for Auditory Micro Aerial Vehicles


    Acoustic Sensing From a Multi-Rotor Drone


    Deep learning assisted sound source localization from a flying drone


    Sound Based Positioning

    With growing interest in non-GPS positioning, navigation, and timing (PNT), sound-based positioning provides a precise way to locate both sound sources and microphones through audible signals of opportunity (SoOPs). Exploiting SoOPs allows for passive location estimation. However, attributing each signal to a specific source location is problematic when multiple sources emit simultaneously. Using an array of microphones, unique SoOPs are identified and located through steered-response beamforming. The source signals are then isolated through time-frequency masking, providing clean reference stations from which to estimate the location of a separate microphone via time-difference-of-arrival measurements. Results are shown for real data.
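The basic measurement behind the positioning step in this abstract, the time difference of arrival (TDOA) between microphones, can be estimated by cross-correlation. The sketch below is a hypothetical, synthetic illustration, not the paper's pipeline: the sample rate, delay, and broadband noise "signal of opportunity" are all assumptions.

```python
# Hypothetical sketch: estimating the TDOA between two microphones by
# finding the peak of their cross-correlation. Synthetic data throughout.
import numpy as np

def estimate_tdoa(sig_a, sig_b, fs):
    """Return the delay (seconds) of sig_b relative to sig_a via cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_a) - 1)  # peak position -> lag in samples
    return lag / fs

fs = 16000
rng = np.random.default_rng(0)
src = rng.normal(size=fs)            # 1 s of a broadband signal of opportunity
delay_samples = 37                   # true inter-microphone delay (assumed)
mic_a = src
mic_b = np.concatenate([np.zeros(delay_samples), src[:-delay_samples]])
tdoa = estimate_tdoa(mic_a, mic_b, fs)
```

In practice, generalized cross-correlation with phase transform (GCC-PHAT) weighting is commonly preferred over plain cross-correlation for reverberant environments, and the resulting TDOAs from several reference stations are combined to solve for the unknown microphone position.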