SimpleTrack: Adaptive Trajectory Compression with Deterministic Projection Matrix for Mobile Sensor Networks
Some mobile sensor network applications require the sensor nodes to transfer
their trajectories to a data sink. This paper proposes an adaptive, lossy
trajectory compression algorithm based on compressive sensing. The algorithm has
two innovative elements. First, we propose a method to compute a deterministic
projection matrix from a learnt dictionary. Second, we propose a method for the
mobile nodes to adaptively predict the number of projections needed based on
the speed of the mobile nodes. Extensive evaluation of the proposed algorithm
using 6 datasets shows that our proposed algorithm can achieve sub-metre
accuracy. In addition, our method of computing projection matrices outperforms
two existing methods. Finally, comparison of our algorithm against a
state-of-the-art trajectory compression algorithm shows that our algorithm
reduces the error by 10-60 cm for the same compression ratio.
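The compressive-sensing pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's algorithm: a DCT basis stands in for the learnt dictionary, a fixed random Gaussian matrix stands in for the deterministic projection matrix, and the number of projections m is constant rather than adapted to node speed; all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D trajectory coordinate, built to be exactly sparse in the dictionary
# below (illustrative; real GPS traces are only approximately sparse).
n = 128
i, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
D = np.cos(np.pi * (2 * i + 1) * k / (2 * n))  # DCT-II atoms as columns
D /= np.linalg.norm(D, axis=0)
s_true = np.zeros(n)
s_true[6], s_true[14] = 1.0, 0.5
x = D @ s_true

# Node side: compress by projecting onto m < n measurements. (The paper derives
# a deterministic Phi from the learnt dictionary and adapts m to node speed;
# a fixed random Gaussian Phi is used here as a stand-in.)
m = 32
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x  # only these m numbers are transmitted to the sink

# Sink side: recover sparse coefficients with orthogonal matching pursuit.
A = Phi @ D
residual, support = y.copy(), []
coeffs = np.zeros(0)
for _ in range(10):
    if np.linalg.norm(residual) < 1e-10 * np.linalg.norm(y):
        break  # measurements already explained
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coeffs
s_hat = np.zeros(n)
s_hat[support] = coeffs
x_hat = D @ s_hat

rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"compression ratio {n / m:.1f}x, relative error {rel_err:.2e}")
```

The node transmits m = 32 projections instead of n = 128 samples; the sink reconstructs the trajectory from those projections alone.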
Opportunistic and Context-aware Affect Sensing on Smartphones: The Concept, Challenges and Opportunities
Opportunistic affect sensing offers unprecedented potential for capturing
spontaneous affect ubiquitously, obviating biases inherent in the laboratory
setting. Facial expression and voice are two major affective displays; however,
most affect-sensing systems on smartphones avoid them because of their
extensive power requirements. Encouragingly, with the recent advent of low-power DSP (Digital
Signal Processing) co-processor and GPU (Graphics Processing Unit) technology,
audio and video sensing are becoming more feasible. To properly evaluate
opportunistically captured facial expression and voice, contextual information
about the dynamic audio-visual stimuli needs to be inferred. This paper
discusses recent advances in affect sensing on smartphones and identifies
the key barriers to, and potential solutions for, implementing opportunistic and
context-aware affect sensing on smartphone platforms.
Stimulus-invariant processing and spectrotemporal reverse correlation in primary auditory cortex
The spectrotemporal receptive field (STRF) provides a versatile and
integrated, spectral and temporal, functional characterization of single cells
in primary auditory cortex (AI). In this paper, we explore the origin of, and
relationship between, different ways of measuring and analyzing an STRF. We
demonstrate that STRFs measured using a spectrotemporally diverse array of
broadband stimuli -- such as dynamic ripples, spectrotemporally white noise,
and temporally orthogonal ripple combinations (TORCs) -- are very similar,
confirming earlier findings that the STRF is a robust linear descriptor of the
cell. We also present a new deterministic analysis framework that employs the
Fourier series to describe the spectrotemporal modulations contained in the
stimuli and responses. Additional insights into the STRF measurements,
including the nature and interpretation of measurement errors, are presented
using the Fourier transform, coupled to singular-value decomposition (SVD), and
variability analyses including the bootstrap. The results promote the utility of
the STRF as a core functional descriptor of neurons in AI.
Comment: 42 pages, 8 figures; to appear in Journal of Computational Neuroscience.
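A minimal reverse-correlation sketch in the spirit of this abstract, using a simulated linear-nonlinear-Poisson neuron rather than the authors' data or analysis code; the ground-truth STRF, the stimulus statistics, and all parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Spectrotemporally white-noise stimulus: n_f frequency channels x n_t time bins.
n_f, n_t, n_lags = 16, 5000, 20
stim = rng.standard_normal((n_f, n_t))

# Hypothetical ground-truth STRF: brief excitation followed by weaker
# inhibition in one frequency channel (purely illustrative).
strf_true = np.zeros((n_f, n_lags))
strf_true[8, 3] = 1.0   # excitatory peak at lag 3
strf_true[8, 8] = -0.5  # inhibitory dip at lag 8

# Linear-nonlinear-Poisson response: filter, half-wave rectify, draw spikes.
rate = np.zeros(n_t)
for t in range(n_lags, n_t):
    window = stim[:, t - n_lags + 1 : t + 1][:, ::-1]  # column lag 0 = now
    rate[t] = max(np.sum(strf_true * window), 0.0)
spikes = rng.poisson(rate)

# Reverse correlation: spike-triggered average of the preceding stimulus window.
sta = np.zeros((n_f, n_lags))
for t in range(n_lags, n_t):
    if spikes[t]:
        sta += spikes[t] * stim[:, t - n_lags + 1 : t + 1][:, ::-1]
sta /= spikes.sum()

# SVD of the measured STRF quantifies its spectrotemporal separability,
# as in the analyses the abstract mentions.
sv = np.linalg.svd(sta, compute_uv=False)
print("recovered peak at (channel, lag):", np.unravel_index(np.argmax(sta), sta.shape))
print(f"energy in first singular component: {sv[0]**2 / np.sum(sv**2):.2f}")
```

For Gaussian white-noise stimuli, the spike-triggered average recovers the linear filter of such a model up to a scale factor, which is why the recovered peak and dip land at the locations planted in `strf_true`.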
Sparse representation of sounds in the unanesthetized auditory cortex
How do neuronal populations in the auditory cortex represent acoustic stimuli? Although sound-evoked neural responses in the anesthetized auditory cortex are mainly transient, recent experiments in the unanesthetized preparation have emphasized subpopulations with other response properties. To quantify the relative contributions of these different subpopulations in the awake preparation, we have estimated the representation of sounds across the neuronal population using a representative ensemble of stimuli. We used cell-attached recording with a glass electrode, a method for which single-unit isolation does not depend on neuronal activity, to quantify the fraction of neurons engaged by acoustic stimuli (tones, frequency modulated sweeps, white-noise bursts, and natural stimuli) in the primary auditory cortex of awake head-fixed rats. We find that the population response is sparse, with stimuli typically eliciting high firing rates (>20 spikes/second) in less than 5% of neurons at any instant. Some neurons had very low spontaneous firing rates (<0.01 spikes/second). At the other extreme, some neurons had driven rates in excess of 50 spikes/second. Interestingly, the overall population response was well described by a lognormal distribution, rather than the exponential distribution that is often reported. Our results represent, to our knowledge, the first quantitative evidence for sparse representations of sounds in the unanesthetized auditory cortex. Our results are compatible with a model in which most neurons are silent much of the time, and in which representations are composed of small dynamic subsets of highly active neurons.
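The lognormal-versus-exponential contrast can be illustrated with a quick simulation. The distribution parameters here are invented for illustration, not fitted to the paper's recordings:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical lognormal rate distribution (parameters illustrative): most
# neurons fire at very low rates, with a heavy tail of highly active ones.
n, mu, sigma = 100_000, -1.0, 2.0
lognorm_rates = rng.lognormal(mean=mu, sigma=sigma, size=n)  # spikes/second

# Exponential distribution with the same mean, for comparison.
exp_rates = rng.exponential(scale=lognorm_rates.mean(), size=n)

for name, r in [("lognormal", lognorm_rates), ("exponential", exp_rates)]:
    print(f"{name:>11}: {np.mean(r > 20):.4f} above 20 sp/s, "
          f"{np.mean(r < 0.01):.4f} below 0.01 sp/s")
```

Relative to an exponential of the same mean, the lognormal puts more mass at both extremes, more nearly-silent neurons and more highly active ones, which matches the sparse picture the abstract describes: a small, highly active subset carrying the representation while most neurons stay quiet.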