799 research outputs found

    Simulation tool implementing centralized and distributed algorithms for tracking acoustic targets

    The goal of this document is the implementation of a software tool for the simulation of the acoustic tracking problem over a wireless sensor network working in a centralized or distributed manner. Its Graphical User Interface (GUI) allows the user to configure the parameters associated with the diffusion adaptive algorithms implemented in the simulation tool, in order to offer a visual representation of the behavior of a real sensor network working with those settings. For illustration we ran several simulations, which allowed us to visualize the performance of different network configurations. The results obtained with the implemented simulation tool show that it can be very helpful for studying the audio target tracking problem and ultimately for the design of sensor networks that can guarantee certain performance criteria. Moreover, we have developed the code for the implementation of a real acoustic tracking sensor network working in a centralized manner, using Libelium's Waspmote™ sensor boards as the network nodes and Libelium's Meshlium-Xtreme™ as the central node.
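
    The abstract does not name a specific diffusion adaptive algorithm, so the following is only a minimal sketch of one widely used member of that family, an adapt-then-combine (ATC) diffusion LMS step, to make the idea concrete. The function name, network size, step size, and combination matrix are illustrative assumptions, not values taken from the document.

    import numpy as np

    def atc_diffusion_lms_step(w, U, d, C, mu=0.05):
        """One adapt-then-combine diffusion LMS iteration.
        w : (N, M) current weight estimate at each of the N nodes
        U : (N, M) regressor (e.g. acoustic feature vector) at each node
        d : (N,)   measured/desired signal at each node
        C : (N, N) row-stochastic combination matrix encoding the topology
        """
        # Adapt: every node runs a local LMS update with its own data.
        err = d - np.sum(U * w, axis=1)            # per-node estimation error
        psi = w + mu * err[:, None] * U            # intermediate estimates
        # Combine: every node averages the intermediate estimates of its neighbours.
        return C @ psi

    # Toy usage: 4 nodes on a ring estimating a 2-dimensional parameter.
    rng = np.random.default_rng(0)
    w_true = np.array([1.0, -0.5])
    C = np.array([[0.50, 0.25, 0.00, 0.25],
                  [0.25, 0.50, 0.25, 0.00],
                  [0.00, 0.25, 0.50, 0.25],
                  [0.25, 0.00, 0.25, 0.50]])
    w = np.zeros((4, 2))
    for _ in range(200):
        U = rng.standard_normal((4, 2))
        d = U @ w_true + 0.01 * rng.standard_normal(4)
        w = atc_diffusion_lms_step(w, U, d, C)
    print(np.round(w, 2))   # every node's estimate should now be close to w_true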

    Audio Fingerprinting for Multi-Device Self-Localization

    This work was supported by the U.K. Engineering and Physical Sciences Research Council (EPSRC) under Grant EP/K007491/1.

    Spatial, Spectral, and Perceptual Nonlinear Noise Reduction for Hands-free Microphones in a Car

    Speech enhancement in an automobile is a challenging problem because interference can come from engine noise, fans, music, wind, road noise, reverberation, echo, and passengers engaging in other conversations. Hands-free microphones make the situation worse because the strength of the desired speech signal decreases as the distance between the microphone and the talker increases. Automobile safety is improved when the driver can use a hands-free interface to phones and other devices instead of taking his eyes off the road. The demand for high quality hands-free communication in the automobile requires the introduction of more powerful algorithms. This thesis shows that a unique combination of five algorithms can achieve superior speech enhancement for a hands-free system when compared to beamforming or spectral subtraction alone. Several different designs were analyzed and tested before converging on the configuration that achieved the best results. Beamforming, voice activity detection, spectral subtraction, perceptual nonlinear weighting, and talker isolation via pitch tracking all work together in a complementary iterative manner to create a speech enhancement system capable of significantly enhancing real world speech signals. The following conclusions are supported by the simulation results using data recorded in a car and are in strong agreement with theory. Adaptive beamforming, like the Generalized Sidelobe Canceller (GSC), can be effectively used if the filters only adapt during silent data frames because too much of the desired speech is cancelled otherwise. Spectral subtraction removes stationary noise while perceptual weighting prevents the introduction of offensive audible noise artifacts. Talker isolation via pitch tracking can perform better when used after beamforming and spectral subtraction because of the higher accuracy obtained after initial noise removal. Iterating the algorithm once increases the accuracy of the Voice Activity Detection (VAD), which improves the overall performance of the algorithm. Placing the microphone(s) on the ceiling above the head and slightly forward of the desired talker appears to be the best location in an automobile based on the experiments performed in this thesis. Objective speech quality measures show that the algorithm removes a majority of the stationary noise in the hands-free environment of an automobile with relatively little speech distortion.
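
    As a point of reference for one building block named above (not the thesis's actual configuration), here is a bare-bones magnitude spectral subtraction with a crude energy-based noise estimate standing in for the VAD. The function name, frame length, over-subtraction factor alpha, and spectral floor are illustrative assumptions.

    import numpy as np

    def spectral_subtraction(x, frame_len=512, hop=256, alpha=2.0, floor=0.02):
        """Enhance a 1-D signal x by subtracting an estimated noise magnitude spectrum."""
        win = np.hanning(frame_len)
        n_frames = 1 + (len(x) - frame_len) // hop
        # Analysis: overlapping, windowed frames -> short-time spectra.
        frames = np.stack([x[i*hop:i*hop+frame_len] * win for i in range(n_frames)])
        spec = np.fft.rfft(frames, axis=1)
        mag, phase = np.abs(spec), np.angle(spec)
        # Crude VAD stand-in: the lowest-energy quarter of the frames is treated as noise-only.
        energy = mag.sum(axis=1)
        noise_mag = mag[energy <= np.quantile(energy, 0.25)].mean(axis=0)
        # Over-subtract the noise estimate; a spectral floor limits musical-noise artifacts.
        clean_mag = np.maximum(mag - alpha * noise_mag, floor * noise_mag)
        clean = np.fft.irfft(clean_mag * np.exp(1j * phase), axis=1)
        # Synthesis: overlap-add the enhanced frames back into a waveform.
        y = np.zeros(len(x))
        for i in range(n_frames):
            y[i*hop:i*hop+frame_len] += clean[i]
        return y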

    Acoustic Sensor Networks and Mobile Robotics for Sound Source Localization

    © 2019 IEEE. Localizing a sound source is a fundamental but still challenging problem in many applications, where sound information is gathered by static, local microphone sensors. This work therefore proposes a new system that exploits advances in sensor networks and robotics to address sound source localization more accurately. With the network infrastructure, acoustic sensors can monitor acoustical phenomena over space more efficiently. Furthermore, a mobile robot is proposed to carry an extra microphone array and collect additional acoustic signals as it travels around the environment. The robot's motion is guided by the need to improve the quality of the data gathered by the static acoustic sensors, which leads to better probabilistic fusion of all the information gathered, so that an increasingly accurate map of the sound source can be built. The proposed system has been validated in a real-life environment, and the obtained results are highly promising.
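
    The probabilistic-fusion step described above can be pictured with a small sketch: a grid of candidate source positions whose belief is updated by multiplying in a likelihood for each new observation, whether it comes from a static node or from the robot-carried array. The Gaussian bearing (DOA) likelihood, sensor poses, and grid size here are assumptions made for illustration, not the paper's measurement model.

    import numpy as np

    GRID = 50                                       # 50 x 50 candidate positions in a 10 m x 10 m room
    xs, ys = np.meshgrid(np.linspace(0, 10, GRID), np.linspace(0, 10, GRID))
    belief = np.full((GRID, GRID), 1.0 / GRID**2)   # uniform prior over the source position

    def fuse_bearing(belief, sensor_xy, measured_bearing, sigma=0.2):
        """Multiply the belief grid by a Gaussian likelihood around a measured DOA."""
        predicted = np.arctan2(ys - sensor_xy[1], xs - sensor_xy[0])
        err = np.angle(np.exp(1j * (predicted - measured_bearing)))   # wrap to [-pi, pi]
        belief = belief * np.exp(-0.5 * (err / sigma) ** 2)
        return belief / belief.sum()

    # Fuse one DOA from a static node at (0, 0) and one from the robot at (8, 2).
    belief = fuse_bearing(belief, (0.0, 0.0), np.deg2rad(45))
    belief = fuse_bearing(belief, (8.0, 2.0), np.deg2rad(135))
    i, j = np.unravel_index(np.argmax(belief), belief.shape)
    print("MAP estimate of the source position:", xs[i, j], ys[i, j])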