
    An ultrasonic/RF GP-based sensor model robotic solution for indoors/outdoors person tracking

    © 2014 IEEE. A non-linear Bayesian regression engine for robotic tracking based on an ultrasonic/RF sensor unit is presented in this paper. The proposed system is able to maintain systematic tracking of a leading human in indoor/outdoor settings with minimalistic instrumentation. Compared to popular camera-based localisation systems, the sonar-array/RF-based system has the advantage of being insensitive to changes in background light intensity, a primary concern in outdoor environments. In contrast to tracking based on a single-plane laser range finder, the proposed scheme adapts better to small terrain variations while remaining a significantly more affordable proposition for tracking with a robotic unit. A key novelty in this work is the use of Gaussian Process Regression (GPR) to build a model for the sensor unit, which is shown to compare favourably against traditional linear triangulation approaches. The covariance function yielded by the GPR sensor model also provides the additional benefit of outlier rejection. We present experimental results of indoor and outdoor tracking by mounting the sensor unit on a Garden Utility Transportation System (GUTS) robot and compare the proposed approach with linear triangulation; the results clearly show the inference engine's capability to generalise relative localisation of a human and a marked improvement in tracking accuracy and robustness.
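
    The abstract gives no implementation details; as a rough illustration only, the sketch below (Python with scikit-learn, using hypothetical data files and names such as toa_features and relative_xy) shows how a GPR sensor model can map raw ultrasonic readings to a relative 2D position and use the predictive uncertainty to gate outliers, in the spirit of the approach described above.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Hypothetical training data: raw sonar/RF readings -> relative target position.
        # toa_features: (N, 3) times-of-flight from three ultrasonic receivers (s)
        # relative_xy:  (N, 2) target position relative to the robot (m)
        toa_features = np.load("toa_features.npy")
        relative_xy = np.load("relative_xy.npy")

        kernel = RBF(length_scale=1e-3) + WhiteKernel(noise_level=1e-4)
        gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gpr.fit(toa_features, relative_xy)

        def predict_with_outlier_gate(reading, sigma_max=0.5):
            """Predict relative position; reject the reading if the predictive std is too large."""
            mean, std = gpr.predict(reading.reshape(1, -1), return_std=True)
            if np.any(std > sigma_max):
                return None  # treat as an outlier and keep the previous track estimate
            return mean[0]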

    Simultaneous asynchronous microphone array calibration and sound source localisation

    © 2015 IEEE. In this paper, an approach is proposed to solve sound source localisation and the calibration of an asynchronous microphone array simultaneously. A graph-based Simultaneous Localisation and Mapping (SLAM) method is used for this purpose. Traditional sound source localisation using a microphone array has two main requirements. Firstly, the geometry of the microphone array must be known. Secondly, a multichannel analog-to-digital converter is required to obtain synchronous readings of the audio signal. Recent works aim at relaxing these two requirements by estimating the time offset between each pair of microphones. However, they assume that the clock timing in each microphone sound card is exactly the same, which requires the clocks in the sound cards to be identically manufactured. A methodology is hereby proposed to calibrate an asynchronous microphone array using a graph-based optimisation method borrowed from the SLAM literature, effectively estimating the array geometry, the time offset and the clock difference/drift rate of each microphone, together with the sound source locations. Simulation and experimental results are presented, which demonstrate the effectiveness of the proposed methodology in achieving accurate estimates of the microphone array characteristics needed for use in realistic settings with asynchronous sound devices.
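
    The graph formulation itself is not reproduced in the abstract; a minimal sketch of the underlying idea, assuming a time-of-arrival model t_ij = t_j + ||s_j - m_i||/c + o_i + d_i*t_j with per-microphone offset o_i and drift rate d_i, synthetic data, known emission times, and scipy.optimize.least_squares standing in for a SLAM back-end, is given below.

        import numpy as np
        from scipy.optimize import least_squares

        C = 343.0  # speed of sound (m/s)
        rng = np.random.default_rng(0)

        # Synthetic ground truth: microphone positions, per-microphone clock offset and
        # drift rate, sound source positions and (assumed known) emission times.
        num_mics, num_srcs = 8, 20
        mics_gt = rng.uniform(-1.0, 1.0, (num_mics, 3))
        offsets_gt = rng.uniform(-0.01, 0.01, num_mics)
        drifts_gt = rng.uniform(-1e-4, 1e-4, num_mics)
        srcs_gt = rng.uniform(-3.0, 3.0, (num_srcs, 3))
        emit_times = np.arange(num_srcs, dtype=float)

        dist_gt = np.linalg.norm(mics_gt[:, None, :] - srcs_gt[None, :, :], axis=2)
        toa = (emit_times[None, :] + dist_gt / C
               + offsets_gt[:, None] + drifts_gt[:, None] * emit_times[None, :])

        def unpack(x):
            mics = x[:3 * num_mics].reshape(num_mics, 3)
            offsets = x[3 * num_mics:4 * num_mics]
            drifts = x[4 * num_mics:5 * num_mics]
            srcs = x[5 * num_mics:].reshape(num_srcs, 3)
            return mics, offsets, drifts, srcs

        def residuals(x):
            mics, offsets, drifts, srcs = unpack(x)
            dist = np.linalg.norm(mics[:, None, :] - srcs[None, :, :], axis=2)
            pred = (emit_times[None, :] + dist / C
                    + offsets[:, None] + drifts[:, None] * emit_times[None, :])
            return (pred - toa).ravel()

        # Initial guess: ground truth perturbed by noise; a real system would rely on priors
        # and would also need to fix the gauge (global translation/rotation) of the solution.
        x0 = np.concatenate([mics_gt.ravel(), offsets_gt, drifts_gt, srcs_gt.ravel()])
        result = least_squares(residuals, x0 + rng.normal(0.0, 0.05, x0.shape))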

    Daytime Electron Density At the F1-Region in Europe During Geomagnetic Storms

    This study attempts to demonstrate changes in ionospheric F1-region daytime ionization during geomagnetic storms. The F1-region is explored using available data from several European middle-latitude and lower-latitude observatories and a set of geomagnetic storms encompassing a range of seasons and solar activity levels. The results of the analysis suggest systematic seasonal and partly latitudinal differences in the F1-region response to geomagnetic storms. The pattern of the response of the F1-region at higher middle latitudes, a decrease in electron density, does not depend on the type of response of the F2-region or on solar activity. A brief interpretation of these findings is presented.

    Real-time sound source localisation for target tracking applications using an asynchronous microphone array

    © 2015 IEEE. This paper presents a strategy for sound source localisation using an asynchronous microphone array. The proposed method is suitable for target tracking applications in which a sound source with a known frequency is attached to the target. Conventional microphone array technologies require a multi-channel A/D converter for inter-microphone synchronisation, making the technology relatively expensive. In this work, the requirement of synchronisation between channels is relaxed by adding an external reference audio signal. The only assumption is that the frequencies of the reference signal and of the sound source attached to the target are fixed and known beforehand. By exploiting the information provided by the known reference signal, the Direction Of Arrival (DOA) of the target sound source can be calculated in real time. The key idea of the algorithm is to use the reference source to 'pseudo-align' the audio signals from the different channels. Once the channels are 'pseudo-aligned', a dedicated DOA estimation method based on Time Difference Of Arrival (TDOA) can be employed to find the relative bearing between the target sound source and the microphone array. Owing to the narrow frequency band of the target sound source, the proposed approach proves robust to low signal-to-noise ratios. Comprehensive simulations and experimental results are presented to show the validity of the algorithm.
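
    The exact pseudo-alignment procedure is not given in the abstract; the sketch below (Python, with made-up constants F_REF and F_TGT, assuming for simplicity that the reference source is equidistant from both microphones and ignoring phase wrapping) illustrates the general idea of using the known reference tone to cancel per-channel misalignment before a phase-based TDOA/DOA estimate.

        import numpy as np

        C = 343.0                        # speed of sound (m/s)
        FS = 48000                       # per-channel sample rate (Hz)
        F_REF, F_TGT = 2000.0, 4000.0    # hypothetical reference/target tone frequencies

        def tone_delay(signal, freq):
            """Apparent delay (s) of a known-frequency tone, recovered from its phase."""
            t = np.arange(len(signal)) / FS
            phase = np.angle(np.sum(signal * np.exp(-2j * np.pi * freq * t)))
            return -phase / (2 * np.pi * freq)

        def pairwise_doa(ch_a, ch_b, mic_spacing):
            """DOA (rad) of the target tone for one microphone pair after pseudo-alignment."""
            # Reference-tone delay difference captures the channels' misalignment
            # (the reference source is assumed equidistant from both microphones).
            misalignment = tone_delay(ch_a, F_REF) - tone_delay(ch_b, F_REF)
            # Target-tone delay difference minus the misalignment approximates the true TDOA.
            tdoa = (tone_delay(ch_a, F_TGT) - tone_delay(ch_b, F_TGT)) - misalignment
            return np.arcsin(np.clip(C * tdoa / mic_spacing, -1.0, 1.0))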

    Towards real-time 3D sound sources mapping with linear microphone arrays

    © 2017 IEEE. In this paper, we present a method for real-time 3D sound source mapping using an off-the-shelf robotic perception sensor equipped with a linear microphone array. Conventional approaches to mapping sound sources in 3D use dedicated 3D microphone arrays, as this type of array provides two-degree-of-freedom (DOF) observations. Our method addresses the problem of 3D sound source mapping using a linear microphone array, which provides only one-DOF observations, making the estimation of the sound source locations more challenging. In the proposed method, multi-hypothesis tracking is combined with a new sound source parametrisation to provide a good initial guess for an online optimisation strategy. A joint optimisation is carried out to estimate the 6-DOF sensor poses together with the 3-DOF landmarks representing the sound source locations. Additionally, a dedicated sensor model is proposed to accurately model the noise of the Direction of Arrival (DOA) observations when using a linear microphone array. Comprehensive simulation and experimental results show the effectiveness of the proposed method. In addition, a real-time implementation of our method has been made available as open-source software for the benefit of the community.
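
    The dedicated sensor model is not described in the abstract; as a small illustration of the one-DOF observation a linear array provides, the sketch below (Python, with the array axis assumed to lie along the sensor's local x-axis) writes the residual between predicted and measured cone angles, the kind of term that could be stacked over all poses and landmarks and handed to a nonlinear least-squares solver in a joint optimisation.

        import numpy as np

        def cone_angle_residual(R, t, source_xyz, measured_angle):
            """One-DOF residual for a linear microphone array.

            A linear array only observes the angle between its axis and the direction
            to the source (a cone around the axis), so the residual is the difference
            between the predicted and measured cone angles.
            R, t: rotation matrix and translation of the array in the world frame.
            """
            direction = source_xyz - t
            direction = direction / np.linalg.norm(direction)
            array_axis = R @ np.array([1.0, 0.0, 0.0])   # assumed local array axis
            predicted = np.arccos(np.clip(array_axis @ direction, -1.0, 1.0))
            return predicted - measured_angle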

    Modelling in-pipe acoustic signal propagation for condition assessment of multi-layer water pipelines

    © 2015 IEEE. A solution to the condition assessment of fluid-filled conduits based on the analysis of in-pipe acoustic signal propagation is presented in this paper. The sensor arrangement consists of an acoustic emitter from which a known sonic pulse is generated and a collocated hydrophone receiver that records the arriving acoustic wave at a high sampling rate. The proposed method exploits the influence of the surrounding environment on the propagation of an acoustic wave to estimate the condition of the pipeline. Specifically, the propagation speed of an acoustic wave is influenced by the hoop stiffness of the surrounding materials, a fact that has been exploited in the analysis of boreholes in the literature. In this work, this finding is extended to validate the analytical expression derived to infer the condition of uniform, axisymmetric lined waterworks, a first step towards ultimately being able to predict the remaining active life (time to failure) of pipelines with arbitrary geometries through finite element analysis (FEA). An investigation of the various aspects of the proposed methodology with typical pipe materials and structures is presented to appreciate the advantages of modelling acoustic wave behaviour in fluid-filled cylindrical cavities for condition assessment of water pipelines.
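
    The analytical expression for lined, multi-layer pipes is not reproduced in the abstract; as background only, the classical single-layer Korteweg relation illustrates how the hoop stiffness of the wall (Young's modulus E and thickness e) lowers the in-pipe wave speed below the free-field value, and the sketch below evaluates it with illustrative material values (not taken from the paper).

        import math

        def korteweg_wave_speed(K_fluid, rho_fluid, E_wall, diameter, thickness):
            """Classical Korteweg speed of a pressure wave in a thin-walled, fluid-filled pipe.

            c = sqrt(K/rho) / sqrt(1 + K*D / (E*e)); a stiffer wall (larger E*e)
            pushes the speed back towards the free-field value sqrt(K/rho).
            """
            free_field = math.sqrt(K_fluid / rho_fluid)
            return free_field / math.sqrt(1.0 + K_fluid * diameter / (E_wall * thickness))

        # Illustrative values: water in a 600 mm main with a 10 mm wall of a stiff metal.
        c = korteweg_wave_speed(K_fluid=2.2e9, rho_fluid=1000.0,
                                E_wall=100e9, diameter=0.6, thickness=0.01)
        print(f"in-pipe wave speed ~ {c:.0f} m/s")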

    Split conditional independent mapping for sound source localisation with inverse-depth parametrisation

    © 2016 IEEE. In this paper, we propose a framework to map stationary sound sources while simultaneously localising a moving robot. Conventional methods for localisation and sound source mapping rely on a microphone array and either 1) a proprioceptive sensor only (such as wheel odometry) or 2) an additional exteroceptive sensor (such as cameras or lasers) to accurately obtain the robot locations. Since odometry drifts over time and sound observations are bearing-only, sparse and extremely noisy, the former can only deal with relatively short trajectories before the whole map drifts. In comparison, the latter can achieve more accurate trajectory estimation over long distances and, as a result, a better estimate of the sound source map. However, in most of the work in the literature, trajectory estimation and sound source mapping are treated as uncorrelated, which means that an update of the robot trajectory does not propagate properly to the sound source map. In this paper, we propose an efficient method to correlate the robot trajectory with the sound source map by exploiting the conditional independence property between two maps estimated by two different Simultaneous Localisation and Mapping (SLAM) algorithms running in parallel. In our approach, the first map can be built flexibly with any SLAM algorithm (filtering or optimisation) to estimate robot poses with an exteroceptive sensor. The second map is built by a filtering-based SLAM algorithm locating all stationary sound sources, parametrised with Inverse Depth Parametrisation (IDP). The robot locations used during IDP initialisation are the common features shared between the two SLAM maps, which allows information to be propagated accordingly. Comprehensive simulations and experimental results show the effectiveness of the proposed method.
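
    The abstract does not spell the parametrisation out; as a brief illustration of the general Inverse Depth Parametrisation idea (here reduced to 2D, with made-up names), a source first observed from anchor position (x0, y0) at bearing theta with inverse depth rho maps back to Cartesian coordinates as below. Representing distant sources by a small rho keeps the bearing-only measurement equation well behaved, which is the usual motivation for IDP.

        import numpy as np

        def idp_to_cartesian(anchor_xy, bearing, inverse_depth):
            """Convert a 2D inverse-depth landmark to Cartesian coordinates.

            anchor_xy:      robot position (x0, y0) at the first observation
            bearing:        world-frame bearing of that observation (rad)
            inverse_depth:  rho = 1/d; small rho encodes a distant source
            """
            direction = np.array([np.cos(bearing), np.sin(bearing)])
            return np.asarray(anchor_xy) + direction / inverse_depth

        # Example: a source first heard at a bearing of 30 degrees, roughly 5 m away (rho = 0.2).
        print(idp_to_cartesian((1.0, 2.0), np.deg2rad(30.0), 0.2))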

    Robust sound source mapping using three-layered selective audio rays for mobile robots

    © 2016 IEEE. This paper investigates sound source mapping in a real environment using a mobile robot. Our approach is based on audio ray tracing, which integrates occupancy grids and sound source localisation using a laser range finder and a microphone array. Previous audio ray tracing approaches rely on all observed rays and grids; as such, observation errors caused by sound reflection, sound occlusion, wall occlusion, sounds at misdetected grids, etc. can significantly degrade the ability to locate sound sources in a map. A three-layered selective audio ray tracing mechanism is proposed in this work. The first layer conducts frame-based rejection of unreliable rays (sensory rejection), accounting for sound reflection and wall occlusion. The second layer introduces triangulation and audio tracing to detect falsely detected sound sources, rejecting the audio rays associated with these misdetected sound sources (short-term rejection). A third layer rejects rays using the whole observation history (long-term rejection) to disambiguate sound occlusion. Experimental results under various situations are presented, which prove the effectiveness of our method.
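
    The paper's actual rejection criteria are only named in the abstract; the fragment below (Python, with placeholder predicates standing in for the three layers) merely illustrates the overall flow of filtering DOA rays and accumulating the accepted ones into an occupancy-style evidence grid.

        import numpy as np

        GRID_RES = 0.1                        # metres per grid cell
        likelihood = np.zeros((200, 200))     # accumulated evidence of sound sources

        # Placeholder predicates for the three rejection layers; the paper's criteria
        # (reflection/occlusion checks, triangulation consistency, whole-history tests)
        # would replace these stubs.
        def sensory_reject(pose, bearing):    return False   # layer 1: frame-based
        def short_term_reject(pose, bearing): return False   # layer 2: triangulation
        def long_term_reject(pose, bearing):  return False   # layer 3: whole history

        def trace_ray(x0, y0, bearing, max_range=10.0):
            """Accumulate evidence along an accepted audio ray."""
            for r in np.arange(0.0, max_range, GRID_RES):
                i = int((x0 + r * np.cos(bearing)) / GRID_RES)
                j = int((y0 + r * np.sin(bearing)) / GRID_RES)
                if 0 <= i < likelihood.shape[0] and 0 <= j < likelihood.shape[1]:
                    likelihood[i, j] += 1.0

        def process_frame(pose, doa_bearings):
            """pose = (x, y, yaw); doa_bearings are robot-frame DOA angles (rad)."""
            for b in doa_bearings:
                if (sensory_reject(pose, b) or short_term_reject(pose, b)
                        or long_term_reject(pose, b)):
                    continue
                trace_ray(pose[0], pose[1], pose[2] + b)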