
    Acoustic detection in superconducting magnets for performance characterization and diagnostics

    Quench diagnostics in superconducting accelerator magnets is essential for understanding performance limitations and improving magnet design. The applicability of conventional quench diagnostic methods, such as voltage taps or quench antennas, is limited for long magnets or complex winding geometries, so alternative approaches are desirable. Here we discuss an acoustic sensing technique for detecting mechanical vibrations in superconducting magnets. Using the LARP high-field Nb3Sn quadrupole HQ01 [1], we show how acoustic data correlate with voltage instabilities measured simultaneously in the magnet windings during provoked extractions and current ramps to quench. Instrumentation and data-analysis techniques for acoustic sensing are reviewed. Comment: 5 pages, contribution to WAMSDO 2013: Workshop on Accelerator Magnet, Superconductor, Design and Optimization; 15-16 Jan 2013, CERN, Geneva, Switzerland
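One way to connect the acoustic and electrical channels described above is a simple coincidence check: threshold each recording, extract event onset times, and flag acoustic events that occur within a small time window of a voltage instability. The sketch below is a hypothetical illustration, not the HQ01 instrumentation; the sampling rate, thresholds, and coincidence tolerance are all assumed values.

```python
import numpy as np

def event_times(sig, fs, thresh):
    # onset times (s) where |signal| first crosses the threshold upward
    above = np.abs(sig) > thresh
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return onsets / fs

def coincident(t_acoustic, t_voltage, tol):
    # acoustic events that have a voltage event within +/- tol seconds
    return [t for t in t_acoustic
            if np.any(np.abs(np.asarray(t_voltage) - t) < tol)]

# synthetic demo: one coincident event pair, one isolated acoustic event
fs = 10_000                               # assumed sampling rate, Hz
acoustic = np.zeros(fs); acoustic[[1000, 5000]] = 1.0
voltage = np.zeros(fs); voltage[1005] = 1.0
hits = coincident(event_times(acoustic, fs, 0.5),
                  event_times(voltage, fs, 0.5), tol=1e-3)
```

In practice the acoustic channel would be band-passed and thresholds tuned per sensor; the point here is only the timing comparison.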

    Field testing of modular borehole monitoring with simultaneous distributed acoustic sensing and geophone vertical seismic profiles at Citronelle, Alabama

    A modular borehole monitoring concept has been implemented to provide a suite of well-based monitoring tools that can be deployed cost-effectively in a flexible and robust package. The initial modular borehole monitoring system was deployed as part of a CO2 injection test operated by the Southeast Regional Carbon Sequestration Partnership near Citronelle, Alabama. The Citronelle modular monitoring system transmits electrical power and signals, fibre-optic light pulses, and fluids between the surface and a reservoir. Additionally, a separate multi-conductor tubing-encapsulated line was used for borehole geophones, including a specialized clamp for casing clamping with tubing deployment. The deployment of geophones and fibre-optic cables allowed comparison testing of distributed acoustic sensing. We designed a large source effort (>64 sweeps per source point) to test fibre-optic vertical seismic profiling and acquired data in 2013. The native measurement of the specific distributed acoustic sensing unit used (an iDAS from Silixa Ltd) is described as a localized strain rate. Following a processing flow of adaptive noise reduction and rebalancing the signal to dimensionless strain, we observed improvement from repeated stacking of the source. Conversion of the rebalanced strain signal to equivalent velocity units, via a scaling by the local apparent velocity, allows quantitative comparison of distributed acoustic sensing and geophone data in units of velocity. We see a very good match of uncorrelated time series in both amplitude and phase, demonstrating that velocity-converted distributed acoustic sensing data can be analyzed equivalently to vertical-geophone data. We show that distributed acoustic sensing data, when averaged over an interval comparable to typical geophone spacing, can achieve signal-to-noise ratios 18 dB to 24 dB below those of clamped geophones, a result that varies with noise spectral amplitude because the noise characteristics are not identical.
    With vertical seismic profile processing, we demonstrate the effectiveness of downgoing deconvolution enabled by the large spatial sampling of distributed acoustic sensing data, along with improved upgoing reflection quality. We conclude that the extra source effort currently needed for tubing-deployed distributed acoustic sensing vertical seismic profiles, as part of a modular monitoring system, is well compensated by the extra spatial sampling and lower deployment cost compared with conventional borehole geophones.
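The strain-to-velocity conversion described above can be sketched in two steps: integrate the native strain-rate measurement over time to obtain strain, then scale by a local apparent velocity. This is a minimal illustration assuming a single uniform apparent velocity; the full processing chain (adaptive noise reduction, rebalancing to dimensionless strain) is not reproduced here.

```python
import numpy as np

def strain_rate_to_velocity(strain_rate, dt, apparent_velocity):
    """Convert a DAS strain-rate trace to equivalent particle velocity.

    Time integration of strain rate gives strain; scaling strain by the
    local apparent (phase) velocity gives velocity units, enabling
    quantitative comparison with geophone recordings.
    """
    strain = np.cumsum(strain_rate, axis=-1) * dt   # time integration
    return strain * apparent_velocity               # v ~ c * strain
```

For example, a constant strain rate of 1 s^-1 sampled at dt = 1 ms with an assumed apparent velocity of 2000 m/s produces a linearly ramping velocity trace.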

    Deep Room Recognition Using Inaudible Echos

    Recent years have seen an increasing need for location awareness in mobile applications. This paper presents a room-level indoor localization approach based on the measured room's echoes in response to a two-millisecond single-tone inaudible chirp emitted by a smartphone's loudspeaker. Unlike other acoustics-based room recognition systems that record full-spectrum audio for up to ten seconds, our approach records audio in a narrow inaudible band for only 0.1 seconds to preserve the user's privacy. However, the short-duration, narrowband audio signal carries limited information about the room's characteristics, presenting challenges to accurate room recognition. This paper applies deep learning to effectively capture the subtle fingerprints in the rooms' acoustic responses. Our extensive experiments show that a two-layer convolutional neural network fed with the spectrogram of the inaudible echoes achieves the best performance, compared with alternative designs using other raw data formats and deep models. Based on this result, we design a RoomRecognize cloud service and its mobile client library that enable mobile application developers to readily implement room recognition functionality without relying on any existing infrastructure or add-on hardware. Extensive evaluation shows that RoomRecognize achieves 99.7%, 97.7%, 99%, and 89% accuracy in differentiating 22 and 50 residential/office rooms, 19 spots in a quiet museum, and 15 spots in a crowded museum, respectively. Compared with state-of-the-art approaches based on support vector machines, RoomRecognize significantly improves the Pareto frontier of recognition accuracy versus robustness against interfering sounds (e.g., ambient music). Comment: 29 pages
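As a data-flow illustration of the spectrogram-plus-CNN pipeline described above, the numpy-only sketch below computes a magnitude spectrogram of a short near-ultrasonic recording and pushes it through two convolution+ReLU layers and a linear classifier. The weights are random, and the tone frequency, FFT size, kernel sizes, and room count are assumed values; this shows the tensor shapes, not the trained RoomRecognize model.

```python
import numpy as np

def spectrogram(x, n_fft=256, hop=64):
    # magnitude STFT with a Hann window; rows = frequency, cols = time
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

def conv2d(x, k):
    # 'valid' 2-D cross-correlation (naive loops, fine at this size)
    h, w = k.shape
    out = np.zeros((x.shape[0] - h + 1, x.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + h, j:j + w] * k)
    return out

rng = np.random.default_rng(0)
fs = 44100
t = np.arange(int(0.1 * fs)) / fs                 # 0.1 s recording
x = np.sin(2 * np.pi * 20000 * t)                 # ~20 kHz inaudible band
x += 0.01 * rng.standard_normal(t.size)           # ambient noise

spec = spectrogram(x)                             # (129 freq, 65 time)
h1 = np.maximum(conv2d(spec, rng.standard_normal((3, 3))), 0)  # conv+ReLU
h2 = np.maximum(conv2d(h1, rng.standard_normal((3, 3))), 0)    # conv+ReLU
n_rooms = 22
W = rng.standard_normal((n_rooms, h2.size))       # linear classifier
pred = int(np.argmax(W @ h2.ravel()))             # predicted room index
```

A trained system would learn the kernels and classifier weights from labeled room recordings rather than use random ones.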

    Acoustic Sensing: Mobile Applications and Frameworks

    Acoustic sensing has attracted significant attention from both academia and industry due to its ubiquity. Since smartphones and many IoT devices are already equipped with microphones and speakers, it requires nearly zero additional deployment cost. Acoustic sensing is also versatile. For example, it can detect obstacles for distracted pedestrians (BumpAlert), remember indoor locations through recorded echoes (EchoTag), and measure the touch force applied to mobile devices (ForcePhone). In this dissertation, we first propose three acoustic sensing applications, BumpAlert, EchoTag, and ForcePhone, and then introduce a cross-platform sensing framework called LibAS. LibAS is designed to facilitate the development of acoustic sensing applications. For example, LibAS lets developers prototype and validate their sensing ideas and apps on commercial devices without detailed knowledge of platform-dependent programming. LibAS is shown to require fewer than 30 lines of Matlab code to implement the prototype of ForcePhone on Android/iOS/Tizen/Linux devices. PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/143971/1/yctung_1.pd

    MEMS based hair flow-sensors as model systems for acoustic perception studies

    Arrays of MEMS-fabricated flow sensors, inspired by the acoustic flow-sensitive hairs found on the cerci of crickets, have been designed, fabricated, and characterized. The hairs consist of up to 1 mm long SU-8 structures mounted on suspended membranes with normal translational and rotational degrees of freedom. Electrodes on the membrane and on the substrate form variable capacitors, allowing for capacitive read-out. Capacitance-versus-voltage, frequency-dependence, and directional-sensitivity measurements have been successfully carried out on fabricated sensor arrays, showing the viability of the concept. The sensors form a model system for investigations in sensory acoustics through their arrayed nature, their adaptivity via electrostatic interaction (frequency tuning and parametric amplification), and their susceptibility to noise (stochastic resonance).
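The capacitive read-out principle can be illustrated with an ideal parallel-plate estimate: hair deflection tilts the membrane and changes the electrode gap, which changes the capacitance. The electrode area and gap below are hypothetical values chosen only to show the order of magnitude; a real device has a tilting (non-uniform) gap and fringe fields.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(area, gap):
    # ideal parallel-plate capacitor, C = eps0 * A / d
    return EPS0 * area / gap

# hypothetical membrane electrode: 100 um x 100 um, 1 um nominal gap
area = (100e-6) ** 2
c0 = plate_capacitance(area, 1.0e-6)   # nominal capacitance (~0.09 pF)
c1 = plate_capacitance(area, 0.9e-6)   # gap closed by 10% under deflection
ratio = c1 / c0                        # a 10% gap change gives ~11% more C
```

Capacitances at this scale are tens of femtofarads, which is why arrayed sensors and differential read-out circuits are attractive.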