Rotational CARS application to simultaneous and multiple-point temperature and concentration determination in a turbulent flow
Coherent anti-Stokes Raman scattering (CARS) from the pure rotational Raman lines of N2 is employed to measure the instantaneous (approximately 10 ns) rotational temperature of N2 gas at room temperature and below with good spatial resolution (0.2 x 0.2 x 3.0 mm^3). A broad-bandwidth dye laser is used to obtain the entire rotational spectrum from a single laser pulse; the CARS signal is then dispersed by a spectrograph and recorded on an optical multichannel analyzer. A best-fit temperature is found in several seconds for each experimental spectrum by a computer-aided least-squares comparison with calculated spectra. The model used to calculate the theoretical spectra incorporates the temperature and pressure dependence of the pressure-broadened rotational Raman lines, includes the nonresonant background susceptibility, and assumes that the pump laser has a finite linewidth. Temperatures are fit to experimental spectra recorded over the temperature range of 135 to 296 K and over the pressure range of 0.13 to 15.3 atm.
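The best-fit step described above can be sketched as follows. This is a deliberately simplified illustration, not the authors' code: it uses only rigid-rotor Boltzmann populations for the line intensities and a brute-force grid search, and omits the pressure broadening, nonresonant background, and pump-linewidth effects that the paper's model includes.

```python
import math

def rotational_intensity(J, T, B=1.99):
    # Toy line intensity: Boltzmann population of rotational level J.
    # B is the N2 rotational constant in cm^-1; 0.695 cm^-1/K is the
    # Boltzmann constant expressed in wavenumbers.
    E = B * J * (J + 1)
    return (2 * J + 1) * math.exp(-E / (0.695 * T))

def simulate_spectrum(T, n_lines=30):
    # Normalized stick spectrum over the first n_lines rotational lines.
    s = [rotational_intensity(J, T) for J in range(n_lines)]
    total = sum(s)
    return [x / total for x in s]

def best_fit_temperature(measured, t_grid):
    # Least-squares comparison of the measured spectrum with calculated
    # spectra over a grid of candidate temperatures.
    def sse(T):
        return sum((m - c) ** 2
                   for m, c in zip(measured, simulate_spectrum(T)))
    return min(t_grid, key=sse)

# Recover the temperature of a synthetic "measured" spectrum.
measured = simulate_spectrum(200.0)
t_grid = range(135, 297)   # candidate temperatures, 135..296 K
best = best_fit_temperature(measured, t_grid)
print(best)  # -> 200
```

A real fit would replace the stick intensities with pressure-broadened line shapes and minimize over a continuous temperature rather than an integer grid.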
Analysis of Dynamic Brain Imaging Data
Modern imaging techniques for probing brain function, including functional
Magnetic Resonance Imaging, intrinsic and extrinsic contrast optical imaging,
and magnetoencephalography, generate large data sets with complex content. In
this paper we develop appropriate techniques of analysis and visualization of
such imaging data, in order to separate the signal from the noise, as well as
to characterize the signal. The techniques developed fall into the general
category of multivariate time series analysis, and in particular we extensively
use the multitaper framework of spectral analysis. We develop specific
protocols for the analysis of fMRI, optical imaging and MEG data, and
illustrate the techniques by applications to real data sets generated by these
imaging modalities. In general, the analysis protocols involve two distinct
stages: `noise' characterization and suppression, and `signal' characterization
and visualization. An important general conclusion of our study is the utility
of a frequency-based representation, with short, moving analysis windows to
account for non-stationarity in the data. Of particular note are (a) the
development of a decomposition technique (`space-frequency singular value
decomposition') that is shown to be a useful means of characterizing the image
data, and (b) the development of an algorithm, based on multitaper methods, for
the removal of approximately periodic physiological artifacts arising from
cardiac and respiratory sources.Comment: 40 pages; 26 figures with subparts including 3 figures as .gif files.
Originally submitted to the neuro-sys archive which was never publicly
announced (was 9804003
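The multitaper idea at the core of the analysis protocols above is to average several direct spectral estimates computed with orthogonal tapers, trading a little resolution for much lower variance. The sketch below uses the simple sine-taper family as a stand-in for the Slepian (DPSS) tapers of the full multitaper framework, and a naive O(N^2) DFT for self-containedness.

```python
import cmath
import math

def sine_tapers(N, K):
    # Riedel-Sidorenko sine tapers: an orthonormal taper family used
    # here in place of Slepian (DPSS) tapers for simplicity.
    return [[math.sqrt(2.0 / (N + 1)) *
             math.sin(math.pi * (k + 1) * (n + 1) / (N + 1))
             for n in range(N)]
            for k in range(K)]

def multitaper_spectrum(x, K=4):
    # Average the direct spectral estimates from K orthogonal tapers.
    N = len(x)
    spec = [0.0] * N
    for h in sine_tapers(N, K):
        tx = [h[n] * x[n] for n in range(N)]
        X = [sum(tx[n] * cmath.exp(-2j * math.pi * f * n / N)
                 for n in range(N))
             for f in range(N)]
        spec = [s + abs(v) ** 2 / K for s, v in zip(spec, X)]
    return spec

# A pure tone at bin 8 of a short 64-sample analysis window.
N = 64
x = [math.cos(2 * math.pi * 8 * n / N) for n in range(N)]
S = multitaper_spectrum(x)
peak = max(range(N // 2), key=lambda f: S[f])
print(peak)  # peak lands at, or within a bin or two of, bin 8
```

Sliding such a window along the recording gives the short moving-window, frequency-based representation the abstract advocates for non-stationary data.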
Deep Learning for Audio Signal Processing
Given the recent surge in developments of deep learning, this article
provides a review of the state-of-the-art deep learning techniques for audio
signal processing. Speech, music, and environmental sound processing are
considered side-by-side, in order to point out similarities and differences
between the domains, highlighting general methods, problems, key references,
and potential for cross-fertilization between areas. The dominant feature
representations (in particular, log-mel spectra and raw waveform) and deep
learning models are reviewed, including convolutional neural networks, variants
of the long short-term memory architecture, as well as more audio-specific
neural network models. Subsequently, prominent deep learning application areas
are covered, i.e. audio recognition (automatic speech recognition, music
information retrieval, environmental sound detection, localization and
tracking) and synthesis and transformation (source separation, audio
enhancement, generative models for speech, sound, and music synthesis).
Finally, key issues and future questions regarding deep learning applied to
audio signal processing are identified.
Comment: 15 pages, 2 PDF figures
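The log-mel spectra named above as a dominant feature representation can be sketched compactly: warp the FFT frequency axis onto the perceptual mel scale with triangular filters, then take the log. The filter shapes and parameter values below (HTK-style mel formula, 40 filters, 512-point FFT at 16 kHz) are common illustrative choices, not prescriptions from the article.

```python
import math

def hz_to_mel(f):
    # HTK-style mel scale.
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    # Triangular filters with centers equally spaced on the mel scale.
    lo, hi = hz_to_mel(0.0), hz_to_mel(sr / 2.0)
    mel_pts = [lo + (hi - lo) * i / (n_mels + 1) for i in range(n_mels + 2)]
    bins = [int((n_fft + 1) * mel_to_hz(m) / sr) for m in mel_pts]
    fb = [[0.0] * (n_fft // 2 + 1) for _ in range(n_mels)]
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        for k in range(l, c):          # rising edge
            if c > l:
                fb[i][k] = (k - l) / (c - l)
        for k in range(c, r):          # falling edge
            if r > c:
                fb[i][k] = (r - k) / (r - c)
    return fb

def log_mel(power_spectrum, fb, eps=1e-10):
    # Project an FFT power spectrum onto the mel filters, then take log.
    return [math.log(sum(w * p for w, p in zip(f, power_spectrum)) + eps)
            for f in fb]

fb = mel_filterbank(n_mels=40, n_fft=512, sr=16000)
power = [1.0] * 257          # flat toy power spectrum for one frame
features = log_mel(power, fb)
print(len(features))  # -> 40
```

Stacking one such 40-dimensional vector per frame yields the time-frequency image that convolutional and recurrent models consume; raw-waveform models skip this stage entirely.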
Active Noise Cancellation of Drone Propeller Noise through Waveform Approximation and Pitch-Shifting
The use of drones introduces noise pollution from the rotation of their propellers. To mitigate this noise, this thesis explores a method using active noise cancellation (ANC). This thesis hypothesizes that, by analyzing the waveform of the drone propeller noise, an approximated wave function can be produced and used as an anti-noise signal that effectively nullifies the drone noise. To align the phase of the anti-noise signal and maximize drone noise reduction, this thesis presents a signal pitch-shifting approach that guides areas of destructive interference to a desired target, such as a microphone, at a desired location. Through experimental evaluation using a prototype of the proposed Pitch-Aligned Active Noise Cancellation (PA-ANC) system, this thesis shows that the proposed technique can achieve a 43.82% reduction of drone noise.
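The core cancellation idea can be illustrated numerically: emit a phase-inverted copy of the approximated noise waveform and measure the residual. Everything here is a toy (the 187 Hz blade-passage tone and the 10% amplitude mismatch are invented stand-ins); the thesis's actual contribution, pitch-shifting to steer the phase alignment toward a target location, is not modelled.

```python
import math

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

# Toy "propeller noise": one second of a hypothetical 187 Hz
# blade-passage tone at an 8 kHz sample rate.
sr, f0 = 8000, 187.0
noise = [math.sin(2 * math.pi * f0 * t / sr) for t in range(sr)]

# Approximated anti-noise: the same waveform, phase-inverted, with a
# deliberate 10% amplitude error to mimic imperfect waveform
# approximation.
anti = [-0.9 * v for v in noise]

# Destructive interference at the measurement point.
residual = [a + b for a, b in zip(noise, anti)]
reduction = 100.0 * (1.0 - rms(residual) / rms(noise))
print(f"{reduction:.1f}% noise reduction")  # prints 90.0% for this toy mismatch
```

In practice the residual is dominated by phase error rather than amplitude error, which is why the thesis devotes its effort to phase alignment.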
Communications Biophysics
Contains reports on eight research projects split into four sections. National Institutes of Health (Grant 5 P01 NS13126); National Institutes of Health (Grant 5 K04 NS00113); National Institutes of Health (Training Grant 5 T32 NS07047); National Science Foundation (Grant BNS80-06369); National Institutes of Health (Grant 5 R01 NS11153); National Institutes of Health (Fellowship 1 F32 NS06544); National Science Foundation (Grant BNS77-16861); National Institutes of Health (Grant 5 R01 NS10916); National Institutes of Health (Grant 5 R01 NS12846); National Science Foundation (Grant BNS77-21751); National Institutes of Health (Grant 1 R01 NS14092); National Institutes of Health (Grant 2 R01 NS11680); National Institutes of Health (Grant 5 R01 NS11080); National Institutes of Health (Training Grant 5 T32 GM07301)
Deep Learning for Environmentally Robust Speech Recognition: An Overview of Recent Developments
Eliminating the negative effect of non-stationary environmental noise is a
long-standing research topic for automatic speech recognition that still
remains an important challenge. Data-driven supervised approaches, including
ones based on deep neural networks, have recently emerged as potential
alternatives to traditional unsupervised approaches and with sufficient
training, can alleviate the shortcomings of the unsupervised methods in various
real-life acoustic environments. In this light, we review recently developed,
representative deep learning approaches for tackling non-stationary additive
and convolutional degradation of speech with the aim of providing guidelines
for those involved in the development of environmentally robust speech
recognition systems. We separately discuss single- and multi-channel techniques
developed for the front-end and back-end of speech recognition systems, as well
as joint front-end and back-end training frameworks.
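The degradation model underlying this review, non-stationary additive noise plus convolutional (channel/reverberation) distortion, can be written as y = x * h + n and sketched directly. The impulse response and noise signal below are invented toy values, not anything from the paper.

```python
import math

def convolve(x, h):
    # Convolutional degradation: filter x with a short channel/room
    # impulse response h.
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def snr_db(clean, degraded):
    # Signal-to-noise ratio of the total degradation, in dB.
    diff = [d - c for c, d in zip(degraded, clean)]
    return 10.0 * math.log10(sum(c * c for c in clean) /
                             sum(e * e for e in diff))

T = 200
clean = [math.sin(0.2 * t) for t in range(T)]             # toy "speech"
channel = [1.0, 0.3, 0.1]                                 # hypothetical impulse response
additive = [0.1 * math.sin(0.7 * t) for t in range(T)]    # additive noise

reverberant = convolve(clean, channel)[:T]                # convolutional part
noisy = [r + a for r, a in zip(reverberant, additive)]    # plus additive part

print(f"additive-only SNR: {snr_db(clean, [c + a for c, a in zip(clean, additive)]):.1f} dB")
print(f"combined SNR:      {snr_db(clean, noisy):.1f} dB")
```

Front-end enhancement methods try to invert this mapping (estimate x from y), while back-end methods instead train the recognizer to be insensitive to it; joint training optimizes both stages together.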