
    Radio Sensor for Monitoring of UMTS Mobile Terminals

    A relatively simple and low-cost radio sensor for monitoring 3rd-generation (3G) UMTS mobile terminals (i.e., phones) has been designed and practically tested. The main purpose of this sensor is to serve as an extension module that can be installed into systems used for monitoring standard 2nd-generation (2G) GSM and DCS mobile phones in highly guarded buildings and areas. Since the transmitted powers of UMTS mobile terminals can be very low relative to GSM and DCS specifications, the new UMTS sensor is based on a highly sensitive receiver and additional signal processing. The radio sensor was practically tested in several scenarios representing worst-case mobile-terminal-to-base-station relations. The measured detection ranges reach from approximately 11 m inside rooms to more than 30 m in corridors, which appears sufficient for the intended application. The results of all performed tests correspond fairly well with the presented theoretical descriptions. An extended version of the radio sensor can be used for monitoring mobile terminals of all existing voice or data formats.
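
    As a rough illustration of why a highly sensitive receiver is needed, the sketch below estimates a detection range from an assumed terminal transmit power and receiver sensitivity using a log-distance path loss model. Every numeric value here is an assumption for illustration, not a figure from the paper.

        import math

        # Illustrative link-budget estimate for a passive radio sensor.
        # All numeric parameters below are assumptions, not values
        # reported in the paper.
        def detection_range_m(p_tx_dbm, rx_sensitivity_dbm, f_mhz, n=2.0):
            """Largest distance at which the received power still exceeds
            the receiver sensitivity, using free-space loss at a 1 m
            reference plus a log-distance model with exponent n."""
            fspl_1m_db = 20 * math.log10(f_mhz) - 27.55   # FSPL at d = 1 m
            margin_db = p_tx_dbm - rx_sensitivity_dbm - fspl_1m_db
            return 10 ** (margin_db / (10 * n))

        # A power-controlled UMTS terminal may transmit around -50 dBm
        # (assumed), so detecting it demands a very sensitive receiver:
        print(detection_range_m(p_tx_dbm=-50, rx_sensitivity_dbm=-110,
                                f_mhz=1950, n=2.0))       # roughly 12 m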

    Symphony: Localizing Multiple Acoustic Sources with a Single Microphone Array

    Sound recognition is an important and popular function of smart devices. The location of a sound is basic information associated with the acoustic source. Apart from sound recognition, whether acoustic sources can be localized largely affects the capability and quality of a smart device's interactive functions. In this work, we study the problem of concurrently localizing multiple acoustic sources with a smart device (e.g., a smart speaker like Amazon Alexa). Existing approaches can either localize only a single source or require a distributed network of microphone arrays to function. Our proposal, called Symphony, is the first approach to tackle this problem with a single microphone array. The insight behind Symphony is that the geometric layout of microphones on the array determines the unique relationship among signals from the same source along the same arriving path, while the source's location determines the DoAs (directions of arrival) of signals along different arriving paths. Symphony therefore includes a geometry-based filtering module to distinguish signals from different sources along different paths and a coherence-based module to identify signals from the same source. We implement Symphony with different types of commercial off-the-shelf microphone arrays and evaluate its performance under different settings. The results show that Symphony has a median localization error of 0.694 m, which is 68% less than that of the state-of-the-art approach.
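
    The per-path DoA cue that such systems build on can be pictured with a standard GCC-PHAT estimator for a single microphone pair. This is a generic sketch of the underlying signal relationship, not Symphony's implementation.

        import numpy as np

        # Generic GCC-PHAT direction-of-arrival estimate for one
        # microphone pair; a sketch of the DoA cue, not Symphony itself.
        def gcc_phat_doa(sig_a, sig_b, fs, mic_dist_m, c=343.0):
            n = len(sig_a) + len(sig_b)
            A = np.fft.rfft(sig_a, n)
            B = np.fft.rfft(sig_b, n)
            cross = A * np.conj(B)
            cross /= np.abs(cross) + 1e-12        # PHAT weighting
            cc = np.fft.irfft(cross, n)
            max_shift = int(fs * mic_dist_m / c)  # physically possible lags
            cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
            tdoa = (np.argmax(np.abs(cc)) - max_shift) / fs
            # Angle relative to the array broadside, in radians.
            return np.arcsin(np.clip(tdoa * c / mic_dist_m, -1.0, 1.0))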

    Reflection-Aware Sound Source Localization

    We present a novel, reflection-aware method for 3D sound localization in indoor environments. Unlike prior approaches, which are mainly based on continuous sound signals from a stationary source, our formulation is designed to localize the position instantaneously from signals within a single frame. We consider direct sound and indirect sound signals that reach the microphones after reflecting off surfaces such as ceilings or walls. We then generate and trace direct and reflected acoustic paths using inverse acoustic ray tracing and utilize these paths with Monte Carlo localization to estimate a 3D sound source position. We have implemented our method on a robot with a cube-shaped microphone array and tested it against different settings with continuous and intermittent sound signals from a stationary or a mobile source. Across different settings, our approach can localize the sound with an average distance error of 0.8 m, tested in a 7 m by 7 m room with a 3 m ceiling, including mobile and non-line-of-sight sound sources. We also show that modeling indirect rays increases the localization accuracy by 40% compared to using only direct acoustic rays. Comment: submitted to ICRA 2018; a working video is available at https://youtu.be/TkQ36lMEC-M.
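
    A minimal sketch of the Monte Carlo localization step is given below: candidate source positions are weighted by how well the DoAs predicted for their traced direct and reflected paths agree with the measured ones. The predicted_doas helper stands in for the inverse acoustic ray tracer and is an assumed interface, not the authors' code.

        import numpy as np

        # Monte Carlo localization over candidate 3D source positions.
        # predicted_doas(p) -> (M, 3) unit DoA vectors for the direct and
        # reflected paths from position p; this helper abstracts the
        # inverse acoustic ray tracer and is an assumed interface.
        def mcl_update(particles, measured_doas, predicted_doas, sigma=0.2):
            weights = np.empty(len(particles))
            for i, p in enumerate(particles):
                pred = predicted_doas(p)
                cosines = np.clip((pred * measured_doas).sum(axis=1), -1.0, 1.0)
                angular_err = np.arccos(cosines)   # per-path angle error
                weights[i] = np.exp(-(angular_err ** 2).sum() / (2 * sigma ** 2))
            weights /= weights.sum()
            # Resample particles in proportion to their weights.
            idx = np.random.choice(len(particles), size=len(particles), p=weights)
            return particles[idx]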

    Feasibility of discriminating UAV propellers noise from distress signals to locate people in enclosed environments using MEMS microphone arrays

    Detecting and finding people are complex tasks when visibility is reduced, for example, when a fire occurs. In these situations, heat sources and large amounts of smoke are generated. Under these circumstances, locating survivors using thermal or conventional cameras is not possible, and alternative techniques are necessary. The challenge of this work was to analyze whether it is feasible to integrate an acoustic camera, developed at the University of Valladolid, on an unmanned aerial vehicle (UAV) to locate, by sound, people calling for help in enclosed environments with reduced visibility. The acoustic array, based on MEMS (micro-electro-mechanical system) microphones, locates acoustic sources in space, and the UAV navigates autonomously through enclosed spaces. This paper presents the first experimental results on locating the angles of arrival of multiple sound sources, including the cries for help of a person, in an enclosed environment. The results are promising, as the system proves able to discriminate the noise generated by the UAV's propellers while identifying the angles of arrival of the direct sound signal and its first echoes reflected off nearby surfaces. Junta de Castilla y León (project VA082G18).
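
    One way to discriminate propeller noise, sketched below, is to keep the speech band and notch out the propeller's blade-pass harmonics before estimating angles of arrival. The blade-pass frequency and band edges here are assumed values, not the parameters used in the paper.

        import numpy as np
        from scipy.signal import butter, iirnotch, sosfiltfilt, filtfilt

        # Suppress narrowband propeller harmonics so that the broadband
        # distress call dominates the subsequent DoA estimation.
        # blade_pass_hz and the 300-3400 Hz band are assumed values.
        def suppress_propeller(x, fs, blade_pass_hz=180.0, n_harmonics=8):
            sos = butter(4, [300, 3400], btype="bandpass", fs=fs, output="sos")
            y = sosfiltfilt(sos, x)                # keep the speech band
            for k in range(1, n_harmonics + 1):
                f0 = k * blade_pass_hz
                if 300 < f0 < 3400:                # notch in-band harmonics
                    b, a = iirnotch(f0, Q=30, fs=fs)
                    y = filtfilt(b, a, y)
            return y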

    SoundCompass: a distributed MEMS microphone array-based sensor for sound source localization

    Sound source localization is a well-researched subject, with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources because they rely on interpolation from a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of an array of 52 microelectromechanical-systems (MEMS) microphones, an inertial measurement unit, and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass's hardware and firmware design, together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25 m² anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000 m² open field.
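
    The fusion step can be pictured as intersecting bearing lines: each node reports its position and the measured direction toward the dominant source, and a least-squares solve recovers the source location. This is a minimal illustration of the idea, not the SoundCompass fusion algorithm.

        import numpy as np

        # Least-squares intersection of bearing lines from several nodes.
        # positions: (N, 2) node locations; bearings_rad: (N,) directions.
        def fuse_bearings(positions, bearings_rad):
            A = np.zeros((2, 2))
            b = np.zeros(2)
            for p, th in zip(np.asarray(positions, float), bearings_rad):
                d = np.array([np.cos(th), np.sin(th)])
                M = np.eye(2) - np.outer(d, d)   # projector orthogonal to d
                A += M
                b += M @ p
            return np.linalg.solve(A, b)

        # Three nodes all looking at a source at (2, 3):
        print(fuse_bearings([(0, 0), (5, 0), (0, 6)],
                            [np.arctan2(3, 2), np.arctan2(3, -3),
                             np.arctan2(-3, 2)]))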

    Acoustic Sensing: Mobile Applications and Frameworks

    Acoustic sensing has attracted significant attention from both academia and industry due to its ubiquity. Since smartphones and many IoT devices are already equipped with microphones and speakers, it requires nearly zero additional deployment cost. Acoustic sensing is also versatile. For example, it can detect obstacles for distracted pedestrians (BumpAlert), remember indoor locations through recorded echoes (EchoTag), and understand the touch force applied to mobile devices (ForcePhone). In this dissertation, we first propose three acoustic sensing applications, BumpAlert, EchoTag, and ForcePhone, and then introduce a cross-platform sensing framework called LibAS. LibAS is designed to facilitate the development of acoustic sensing applications. For example, LibAS lets developers prototype and validate their sensing ideas and apps on commercial devices without detailed knowledge of platform-dependent programming. LibAS is shown to require fewer than 30 lines of Matlab code to implement the prototype of ForcePhone on Android/iOS/Tizen/Linux devices. PhD thesis, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/143971/1/yctung_1.pd
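
    The sensing loop such applications share can be sketched generically: emit a known probe sound, record, and cross-correlate to find echo delays. The sketch below illustrates that principle only; it is not the LibAS API.

        import numpy as np

        # Generic active acoustic sensing: play a chirp, record, and turn
        # the echo delay into a distance. Illustrative only; not LibAS.
        def chirp(fs, dur=0.01, f0=10_000.0, f1=20_000.0):
            t = np.arange(int(fs * dur)) / fs
            return np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t ** 2 / (2 * dur)))

        def echo_distance_m(recording, probe, fs, c=343.0):
            cc = np.abs(np.correlate(recording, probe, mode="valid"))
            direct = int(np.argmax(cc))      # speaker-to-microphone leakage
            guard = 50                       # samples to skip past direct peak
            echo = direct + guard + int(np.argmax(cc[direct + guard:]))
            return (echo - direct) / fs * c / 2   # round trip -> one way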

    A system for room acoustic simulation for one's own voice

    The real-time simulation of room acoustical environments for one's own voice, using generic software, has been difficult until very recently due to the computational load involved: it requires real-time convolution of a person's voice with a potentially large number of long room impulse responses. This thesis presents a room acoustical simulation system with a software-based solution that performs real-time convolution with head tracking, simulating the effect of room acoustical environments on the sound of one's own voice using binaural technology. In order to gather data to implement head tracking in the system, human head movements are characterized while reading a text aloud. The rooms simulated with the system are actual rooms, characterized by measuring the room impulse response from the mouth to the ears of the same head (oral-binaural room impulse response, OBRIR). By repeating this process at 2° increments of the yaw angle on the horizontal plane, the rooms are binaurally scanned around a given position to obtain a collection of OBRIRs, which is then used by the software-based convolution system. In the rooms simulated with the system, a person equipped with a near-mouth microphone and near-ear loudspeakers can speak or sing and hear their voice as it would sound in the measured rooms, while physically being in an anechoic room. By continually updating the person's head orientation using head tracking, the corresponding OBRIR is chosen for convolution with their voice. The system described in this thesis achieves the low latency required to simulate nearby reflections, and it can perform convolution with long room impulse responses. The perceptual validity of the system is studied in two experiments involving human participants reading a set text aloud. The system presented in this thesis can be used to design experiments that study various aspects of the auditory perception of the sound of one's own voice in room environments. The system can also be adapted to incorporate a module that enables listening to the sound of one's own voice in commercial applications such as architectural acoustic room simulation software, teleconferencing systems, virtual reality, and gaming applications.
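
    The core rendering step can be sketched as: snap the tracked yaw to the 2° measurement grid, pick the corresponding OBRIR, and convolve the microphone signal with it. This is a simplified offline illustration under an assumed data layout; a real-time system would use low-latency partitioned convolution instead.

        import numpy as np
        from scipy.signal import fftconvolve

        # obrirs: dict mapping yaw in degrees (2-degree steps) to an
        # (ir_len, 2) binaural impulse response -- an assumed layout.
        def render_own_voice(voice_block, yaw_deg, obrirs):
            key = int(round(yaw_deg / 2.0) * 2) % 360   # snap to measured grid
            ir = obrirs[key]
            left = fftconvolve(voice_block, ir[:, 0])
            right = fftconvolve(voice_block, ir[:, 1])
            return np.stack([left, right], axis=1)      # stereo output block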