36,026 research outputs found
Dynamic Decomposition of Spatiotemporal Neural Signals
Neural signals are characterized by rich temporal and spatiotemporal dynamics
that reflect the organization of cortical networks. Theoretical research has
shown how neural networks can operate at different dynamic ranges that
correspond to specific types of information processing. Here we present a data
analysis framework that uses a linearized model of these dynamic states in
order to decompose the measured neural signal into a series of components that
capture both rhythmic and non-rhythmic neural activity. The method is based on
stochastic differential equations and Gaussian process regression. Through
computer simulations and analysis of magnetoencephalographic data, we
demonstrate the efficacy of the method in identifying meaningful modulations of
oscillatory signals corrupted by structured temporal and spatiotemporal noise.
These results suggest that the method is particularly suitable for the analysis
and interpretation of complex temporal and spatiotemporal neural signals.
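The linearized dynamic states this abstract refers to can be illustrated with a minimal state-space sketch: a damped stochastic oscillator driven by process noise, with a Kalman filter recovering the rhythmic component from a noisy one-channel measurement (the discrete-time counterpart of the Gaussian-process posterior mean for this model class). All parameter values are illustrative assumptions, not the authors' settings.

```python
import numpy as np

# Toy sketch (assumed parameters): a damped stochastic oscillator as a
# linear state-space model, tracked with a Kalman filter.
rng = np.random.default_rng(0)
dt, f0, damping = 1e-3, 10.0, 5.0          # step (s), frequency (Hz), decay (1/s)
w = 2 * np.pi * f0
A = np.eye(2) + dt * np.array([[-damping, -w], [w, -damping]])  # Euler-discretized dynamics
Q = 1.0 * dt * np.eye(2)                   # process noise covariance
R = np.array([[0.05]])                     # measurement noise variance
H = np.array([[1.0, 0.0]])                 # observe the oscillator's real part

# Simulate the latent oscillation and its noisy measurement
n = 2000
x = np.zeros((n, 2))
y = np.zeros(n)
for t in range(1, n):
    x[t] = A @ x[t - 1] + rng.multivariate_normal([0.0, 0.0], Q)
    y[t] = x[t, 0] + rng.normal(0.0, np.sqrt(R[0, 0]))

# Kalman filter: the posterior mean is the decomposed rhythmic component
m, P = np.zeros(2), np.eye(2)
est = np.zeros(n)
for t in range(n):
    m, P = A @ m, A @ P @ A.T + Q                    # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    m = m + K @ (y[t] - H @ m)                       # update mean
    P = (np.eye(2) - K @ H) @ P                      # update covariance
    est[t] = m[0]

rmse_raw = np.sqrt(np.mean((y - x[:, 0]) ** 2))
rmse_filt = np.sqrt(np.mean((est - x[:, 0]) ** 2))
print("raw RMSE:", rmse_raw, " filtered RMSE:", rmse_filt)
```

The filtered trace should track the latent oscillation more closely than the raw measurement, since the filter exploits the assumed oscillatory dynamics.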
Transfer function-noise modeling and spatial interpolation to evaluate the risk of extreme (shallow) water-table levels in the Brazilian Cerrados
Water regimes in the Brazilian Cerrados are sensitive to climatological disturbances and human intervention. The risk that critical water-table levels are exceeded over long periods of time can be estimated by applying stochastic methods in modeling the dynamic relationship between water levels and driving forces such as precipitation and evapotranspiration. In this study, a transfer function-noise model, the so-called PIRFICT model, is applied to estimate the dynamic relationship between water-table depth and precipitation surplus/deficit in a watershed with a groundwater monitoring scheme in the Brazilian Cerrados. Critical limits were defined for a period in the Cerrados agricultural calendar, the end of the rainy season, when extremely shallow levels
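As a toy illustration of the transfer function-noise idea (not the PIRFICT model itself), the sketch below convolves a synthetic precipitation surplus with an assumed exponential impulse response, adds AR(1) residual noise, and estimates the fraction of days on which an invented critical (shallow) level is exceeded. All parameters and the critical limit are made up for the example.

```python
import numpy as np

# Illustrative transfer function-noise (TFN) sketch with invented parameters:
# depth = base + (impulse response * precipitation surplus) + AR(1) noise.
rng = np.random.default_rng(1)
n_days = 3650
surplus = rng.normal(1.0, 3.0, n_days)             # daily precipitation surplus (mm)

# Assumed exponential impulse response: theta[k] = A * exp(-k / tau)
A_gain, tau = 0.8, 60.0
theta = A_gain * np.exp(-np.arange(400) / tau)

base = -250.0                                      # mean depth below surface (cm)
deterministic = base + np.convolve(surplus, theta)[:n_days]

# AR(1) residual noise component
phi, sigma = 0.97, 2.0
noise = np.zeros(n_days)
for t in range(1, n_days):
    noise[t] = phi * noise[t - 1] + rng.normal(0.0, sigma)

depth = deterministic + noise
critical = -150.0                                  # invented critical shallow level (cm)
risk = np.mean(depth > critical)                   # fraction of days exceeding it
print(f"fraction of days shallower than {critical} cm: {risk:.3f}")
```

In practice the exceedance risk would be derived from the fitted model's predictive distribution rather than a single simulated path; this sketch only shows the model structure.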
Blind MultiChannel Identification and Equalization for Dereverberation and Noise Reduction based on Convolutive Transfer Function
This paper addresses the problems of blind channel identification and
multichannel equalization for speech dereverberation and noise reduction. The
time-domain cross-relation method is not suitable for blind room impulse
response identification, due to the near-common zeros of the long impulse
responses. We extend the cross-relation method to the short-time Fourier
transform (STFT) domain, in which the time-domain impulse responses are
approximately represented by the convolutive transfer functions (CTFs) with
far fewer coefficients. The CTFs suffer from the common zeros caused by the
oversampled STFT. We propose to identify CTFs based on the STFT with the
oversampled signals and the critically sampled CTFs, which is a good compromise
between the frequency aliasing of the signals and the common zeros problem of
CTFs. In addition, a normalization of the CTFs is proposed to remove the gain
ambiguity across sub-bands. In the STFT domain, the identified CTFs are used for
multichannel equalization, in which the sparsity of speech signals is
exploited. We propose to perform inverse filtering by minimizing the
ℓ1-norm of the source signal with the relaxed ℓ2-norm fitting error
between the microphone signals and the convolution of the estimated source
signal and the CTFs used as a constraint. This method is advantageous in that
the noise can be reduced by relaxing the ℓ2-norm to a tolerance
corresponding to the noise power, and the tolerance can be automatically set.
The experiments confirm the efficiency of the proposed method even under
conditions with high reverberation levels and intense noise. Comment: 13 pages, 5 figures, 5 tables.
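The time-domain cross-relation identity that this paper generalizes to the STFT domain can be checked numerically: for two noiseless channels x_i = s * h_i, the convolutions x1 * h2 and x2 * h1 coincide, which is what blind identification exploits. The signal and filters below are random toys, not room impulse responses.

```python
import numpy as np

# Numerical check of the cross-relation identity underlying blind channel
# identification: x1 * h2 == x2 * h1 when x_i = s * h_i (toy random filters).
rng = np.random.default_rng(2)
s = rng.normal(size=500)                 # source signal
h1 = rng.normal(size=32)                 # stand-ins for room impulse responses
h2 = rng.normal(size=32)
x1 = np.convolve(s, h1)                  # "microphone" signals
x2 = np.convolve(s, h2)
lhs = np.convolve(x1, h2)
rhs = np.convolve(x2, h1)
print("max cross-relation mismatch:", np.max(np.abs(lhs - rhs)))
```

The mismatch is zero up to floating-point error; with real (noisy, long) impulse responses the identity only holds approximately, which motivates the paper's move to short CTFs.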
Aerospace medicine and biology: A continuing bibliography with indexes, supplement 130, July 1974
This special bibliography lists 291 reports, articles, and other documents introduced into the NASA scientific and technical information system in June 1974.
Lidar waveform based analysis of depth images constructed using sparse single-photon data
This paper presents a new Bayesian model and algorithm used for depth and
intensity profiling using full waveforms from the time-correlated single photon
counting (TCSPC) measurement in the limit of very low photon counts. The model
proposed represents each Lidar waveform as a combination of a known impulse
response, weighted by the target intensity, and an unknown constant background,
corrupted by Poisson noise. Prior knowledge about the problem is embedded in a
hierarchical model that describes the dependence structure between the model
parameters and their constraints. In particular, a gamma Markov random field
(MRF) is used to model the joint distribution of the target intensity, and a
second MRF is used to model the distribution of the target depth, which are
both expected to exhibit significant spatial correlations. An adaptive Markov
chain Monte Carlo algorithm is then proposed to compute the Bayesian estimates
of interest and perform Bayesian inference. This algorithm is equipped with a
stochastic optimization adaptation mechanism that automatically adjusts the
parameters of the MRFs by maximum marginal likelihood estimation. Finally, the
benefits of the proposed methodology are demonstrated through a series of
experiments using real data.
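The per-pixel observation model can be sketched independently of the MRF priors and adaptive MCMC machinery: photon counts per histogram bin are Poisson with rate r·f(t − t0) + b, and the depth bin t0 can be recovered by an exhaustive Poisson maximum-likelihood search. All shapes and parameters below are invented for the example.

```python
import numpy as np

# Observation-model sketch (illustrative parameters): counts are Poisson with
# rate r * f(t - t0) + b, for impulse response f, intensity r, depth bin t0,
# and constant background b. The paper's hierarchical priors are omitted.
rng = np.random.default_rng(3)
T = 300                                             # number of histogram bins
f = np.exp(-0.5 * (np.arange(-30, 31) / 6.0) ** 2)  # Gaussian stand-in for the IRF
f /= f.sum()

def rate(t0, r, b):
    """Poisson rate profile for a target at depth bin t0."""
    g = np.zeros(T)
    for i, k in enumerate(range(t0 - 30, t0 + 31)):
        if 0 <= k < T:
            g[k] = f[i]
    return r * g + b

true_t0, r_true, b_true = 120, 40.0, 0.02
y = rng.poisson(rate(true_t0, r_true, b_true))      # sparse photon-count data

def loglik(t0):
    lam = rate(t0, r_true, b_true)
    return np.sum(y * np.log(lam) - lam)            # Poisson log-likelihood (up to const.)

t0_hat = max(range(31, T - 31), key=loglik)
print("true depth bin:", true_t0, " estimated:", t0_hat)
```

With only a few tens of photons the likelihood already localizes the peak well here; the paper's gamma-MRF priors matter precisely when counts per pixel are far lower and spatial correlation must compensate.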
Optimising the assessment of cerebral autoregulation from black box models
Cerebral autoregulation (CA) mechanisms maintain blood flow approximately stable despite changes in arterial blood pressure. Mathematical models that characterise this system have been used extensively in the quantitative assessment of function/impairment of CA. Using spontaneous fluctuations in arterial blood pressure (ABP) as input and cerebral blood flow velocity (CBFV) as output, the autoregulatory mechanism can be modelled using linear and non-linear approaches, from which indexes can be extracted to provide an overall assessment of CA. Previous studies have considered only a single measure, or at most a couple, making it difficult to compare the performance of different CA parameters. We compare the performance of established autoregulatory parameters and propose novel measures. The key objective is to identify which model and index can best distinguish between normal and impaired CA. To this end 26 recordings of ABP and CBFV from normocapnia and hypercapnia (which temporarily impairs CA) in 13 healthy adults were analysed. In the absence of a ‘gold’ standard for the study of dynamic CA, lower inter- and intra-subject variability of the parameters in relation to the difference between normo- and hypercapnia was considered as the criterion for identifying improved measures of CA. Significantly improved performance compared to some conventional approaches was achieved, with the simplest method emerging as probably the most promising for future studies.
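A minimal "black box" example in the spirit of this abstract: a FIR model from ABP to CBFV fitted by least squares, with the steady-state gain as a crude index. The signals here are synthetic stand-ins, not the study's recordings, and the "true" response is an assumption of the sketch.

```python
import numpy as np

# Sketch of a black-box linear ABP -> CBFV model on synthetic data:
# fit a FIR filter by least squares, then read off the DC gain as an index.
rng = np.random.default_rng(4)
n, order = 2000, 20
abp = rng.normal(size=n)                       # toy spontaneous ABP fluctuations
true_h = np.exp(-np.arange(order) / 5.0)       # assumed "true" impulse response
cbfv = np.convolve(abp, true_h)[:n] + 0.1 * rng.normal(size=n)

# Lagged regression matrix: column k holds abp delayed by k samples
X = np.column_stack([np.roll(abp, k) for k in range(order)])
X[:order] = 0.0                                # discard wrapped-around samples
h_hat, *_ = np.linalg.lstsq(X, cbfv, rcond=None)

dc_gain = h_hat.sum()                          # steady-state gain index
print("true DC gain:", true_h.sum(), " estimated:", dc_gain)
```

Real CA indexes (e.g. phase or gain in specific frequency bands, or an autoregulation index from a model fit) follow the same pattern: fit an input-output model, then summarize it in one number whose variability across conditions can be compared.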
Direction of Arrival with One Microphone, a few LEGOs, and Non-Negative Matrix Factorization
Conventional approaches to sound source localization require at least two
microphones. It is known, however, that people with unilateral hearing loss can
also localize sounds. Monaural localization is possible thanks to the
scattering by the head, though it hinges on learning the spectra of the various
sources. We take inspiration from this human ability to propose algorithms for
accurate sound source localization using a single microphone embedded in an
arbitrary scattering structure. The structure modifies the frequency response
of the microphone in a direction-dependent way giving each direction a
signature. While knowing those signatures is sufficient to localize sources of
white noise, localizing speech is much more challenging: it is an ill-posed
inverse problem which we regularize by prior knowledge in the form of learned
non-negative dictionaries. We demonstrate a monaural speech localization
algorithm based on non-negative matrix factorization that does not depend on
sophisticated, designed scatterers. In fact, we show experimental results with
ad hoc scatterers made of LEGO bricks. Even with these rudimentary structures
we can accurately localize arbitrary speakers; that is, we do not need to learn
the dictionary for the particular speaker to be localized. Finally, we discuss
multi-source localization and the related limitations of our approach. Comment: This article has been accepted for publication in IEEE/ACM
Transactions on Audio, Speech, and Language Processing (TASLP).
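The white-noise case described in the abstract, where knowing the direction-dependent signatures suffices, can be sketched without the NMF machinery: match the observed spectrum to each candidate signature in the log domain, which removes the unknown overall source level. The signatures below are random stand-ins for a measured scatterer response.

```python
import numpy as np

# Monaural localization sketch for a white-noise source: each direction d
# imposes a known magnitude response ("signature") on the microphone, and we
# pick the direction whose signature best explains the observed spectrum.
rng = np.random.default_rng(5)
n_dirs, n_freqs = 8, 64
signatures = rng.uniform(0.2, 1.0, (n_dirs, n_freqs))   # |H_d(f)|, toy values

true_dir = 3
spectrum = signatures[true_dir] * rng.uniform(0.9, 1.1, n_freqs)  # noisy observation

# Score each direction by how flat the log-ratio is (invariant to source level)
log_s = np.log(spectrum)
scores = [-np.var(log_s - np.log(sig)) for sig in signatures]
est_dir = int(np.argmax(scores))
print("true direction:", true_dir, " estimated:", est_dir)
```

For speech rather than white noise the source spectrum is unknown and time-varying, which is exactly the ill-posedness the paper regularizes with learned non-negative dictionaries.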
MIT Space Engineering Research Center
The Space Engineering Research Center (SERC) at MIT, started in Jul. 1988, has completed two years of research. The Center is approaching the operational phase of its first testbed, is midway through the construction of a second testbed, and is in the design phase of a third. We presently have seven participating faculty, four participating staff members, ten graduate students, and numerous undergraduates. This report reviews the testbed programs, individual graduate research, other SERC activities not funded by the Center, interaction with non-MIT organizations, and SERC milestones. Published papers made possible by SERC funding are included at the end of the report.
Aerospace Medicine and Biology. A continuing bibliography with indexes
This bibliography lists 244 reports, articles, and other documents introduced into the NASA scientific and technical information system in February 1981. Aerospace medicine and aerobiology topics are included. Listings for physiological factors, astronaut performance, control theory, artificial intelligence, and cybernetics are included.