14,276 research outputs found
Automatic Conflict Detection in Police Body-Worn Audio
Automatic conflict detection has grown in relevance with the advent of
body-worn technology, but existing metrics such as turn-taking and overlap are
poor indicators of conflict in police-public interactions. Moreover, standard
techniques to compute them fall short when applied to such diversified and
noisy contexts. We develop a pipeline catered to this task combining adaptive
noise removal, non-speech filtering and new measures of conflict based on the
repetition and intensity of phrases in speech. We demonstrate the effectiveness
of our approach on body-worn audio data collected by the Los Angeles Police
Department.
Comment: 5 pages, 2 figures, 1 table
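The abstract names phrase repetition and speech intensity as its conflict cues. A minimal sketch of how such cues might be combined into a single score is below; the function name, the log-energy intensity term, and the multiplicative combination are all illustrative assumptions, not the paper's actual formulation, and the upstream noise removal and non-speech filtering are assumed already done.

```python
import numpy as np

def conflict_score(frame_energy, phrase_counts, energy_floor=1e-8):
    """Toy conflict score from intensity and repetition cues (illustrative,
    not the paper's method).

    frame_energy  : per-frame energy of speech segments (noise removal and
                    non-speech filtering assumed done upstream)
    phrase_counts : occurrence counts of each detected phrase in a window
    """
    # Intensity term: mean log-energy relative to the quietest frame.
    log_e = np.log(np.maximum(frame_energy, energy_floor))
    intensity = float(np.mean(log_e - log_e.min()))
    # Repetition term: fraction of phrase tokens that are repeats.
    total = sum(phrase_counts)
    repeats = sum(c - 1 for c in phrase_counts if c > 1)
    repetition = repeats / total if total else 0.0
    # Loud, repetitive speech yields a higher score than calm, varied speech.
    return intensity * (1.0 + repetition)
```

Under this toy rule, a window of uniformly quiet frames with no repeated phrases scores zero, while louder windows containing repeated phrases score strictly higher.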
The role of anterior cingulate cortex in the affective evaluation of conflict
An influential theory of anterior cingulate cortex (ACC) function argues that this brain region plays a crucial role in the affective evaluation of performance monitoring and control demands. Specifically, control-demanding processes such as response conflict are thought to be registered as aversive signals by ACC, which in turn triggers processing adjustments to support avoidance learning. In support of conflict being treated as an aversive event, recent behavioral studies demonstrated that incongruent (i.e., conflict-inducing), relative to congruent, stimuli can speed up subsequent negative, relative to positive, affective picture processing. Here, we used fMRI to investigate directly whether ACC activity in response to negative versus positive pictures is modulated by preceding control demands, consisting of conflict and task-switching conditions. The results show that negative, relative to positive, pictures elicited higher ACC activation after congruent, relative to incongruent, trials, suggesting that ACC's response to negative (positive) pictures was indeed affectively primed by incongruent (congruent) trials. Interestingly, this pattern of results was observed on task repetitions but disappeared on task alternations. This study supports the proposal that conflict induces negative affect and is the first to show that this affective signal is reflected in ACC activation.
Age differences in fMRI adaptation for sound identity and location
We explored age differences in auditory perception by measuring fMRI adaptation of brain activity to repetitions of sound identity (what) and location (where), using meaningful environmental sounds. In one condition, both sound identity and location were repeated, allowing us to assess non-specific adaptation. In other conditions, only one feature was repeated (identity or location) to assess domain-specific adaptation. Both young and older adults showed comparable non-specific adaptation (identity and location) in bilateral temporal lobes, medial parietal cortex, and subcortical regions. However, older adults showed reduced domain-specific adaptation to location repetitions in a distributed set of regions, including frontal and parietal areas, and to identity repetition in anterior temporal cortex. We also re-analyzed data from a previously published 1-back fMRI study, in which participants responded to infrequent repetition of the identity or location of meaningful sounds. This analysis revealed age differences in domain-specific adaptation in a set of brain regions that overlapped substantially with those identified in the adaptation experiment. This converging evidence of reductions in the degree of auditory fMRI adaptation in older adults suggests that the processing of specific auditory “what” and “where” information is altered with age, which may influence cognitive functions that depend on this processing.
Unsupervised extraction of recurring words from infant-directed speech
To date, most computational models of infant word segmentation have worked from phonemic or phonetic input, or have used toy datasets. In this paper, we present an algorithm for word extraction that works directly from naturalistic acoustic input: infant-directed speech from the CHILDES corpus. The algorithm identifies recurring acoustic patterns that are candidates for identification as words or phrases, and then clusters together the most similar patterns. The recurring patterns are found in a single pass through the corpus using an incremental method, where only a small number of utterances are considered at once. Despite this limitation, we show that the algorithm is able to extract a number of recurring words, including some that infants learn earliest, such as Mommy and the child’s name. We also introduce a novel information-theoretic evaluation measure.
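The incremental, small-buffer design described above can be sketched as follows. This is a schematic only: the paper works on acoustic patterns, whereas here each utterance is reduced to a single feature vector, and the rolling buffer size, cosine-similarity criterion, and threshold are all assumptions for illustration.

```python
from collections import deque
import numpy as np

def extract_recurring(utterances, buffer_size=5, threshold=0.9):
    """Single-pass sketch of recurring-pattern extraction: each incoming
    utterance (a feature vector) is compared only against a small rolling
    buffer of recent ones; similar pairs seed or extend clusters.
    The similarity measure and all parameters are illustrative."""
    buffer = deque(maxlen=buffer_size)  # only a few utterances held at once
    clusters = []  # each cluster: indices of mutually similar utterances

    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    for i, u in enumerate(utterances):
        for j, v in buffer:
            if cos(u, v) >= threshold:
                # attach i to the cluster containing j, or start a new one
                for c in clusters:
                    if j in c:
                        c.append(i)
                        break
                else:
                    clusters.append([j, i])
                break
        buffer.append((i, u))
    return clusters
```

Because each item is compared only with the fixed-size buffer, the pass stays linear in corpus length, mirroring the "only a small number of utterances considered at once" constraint.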
Unsupervised discovery of temporal sequences in high-dimensional datasets, with applications to neuroscience.
Identifying low-dimensional features that describe large-scale neural recordings is a major challenge in neuroscience. Repeated temporal patterns (sequences) are thought to be a salient feature of neural dynamics, but are not succinctly captured by traditional dimensionality reduction techniques. Here, we describe a software toolbox, called seqNMF, with new methods for extracting informative, non-redundant sequences from high-dimensional neural data, testing the significance of these extracted patterns, and assessing the prevalence of sequential structure in data. We test these methods on simulated data under multiple noise conditions, and on several real neural and behavioral datasets. In hippocampal data, seqNMF identifies neural sequences that match those calculated manually by reference to behavioral events. In songbird data, seqNMF discovers neural sequences in untutored birds that lack stereotyped songs. Thus, by identifying temporal structure directly from neural data, seqNMF enables dissection of complex neural circuits without relying on temporal references from stimuli or behavioral outputs.
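seqNMF builds on convolutive non-negative matrix factorization, where each factor is a short spatiotemporal template that is convolved with its activation time course. A minimal sketch of that reconstruction model is below; the array shapes and function name are illustrative and do not reflect the toolbox's actual API.

```python
import numpy as np

def conv_reconstruct(W, H):
    """Reconstruction under the convolutive NMF model that underlies
    sequence extraction: X_hat[:, t] = sum_l W[:, :, l] @ H[:, t - l].

    W : (n_neurons, n_factors, L) array of sequence templates of length L
    H : (n_factors, T) array of factor activation time courses
    Shapes and names are illustrative, not the toolbox's API."""
    N, K, L = W.shape
    K2, T = H.shape
    assert K == K2, "factor counts of W and H must match"
    X_hat = np.zeros((N, T))
    for l in range(L):
        # shift H right by l steps and mix through the l-th slice of W
        X_hat[:, l:] += W[:, :, l] @ H[:, :T - l]
    return X_hat
```

A single activation of one factor thus paints an entire temporally extended template into the reconstruction, which is how repeated neural sequences are captured by a single low-dimensional factor.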
Linear-time Online Action Detection From 3D Skeletal Data Using Bags of Gesturelets
Sliding window is one direct way to extend a successful recognition system to
handle the more challenging detection problem. While action recognition decides
only whether or not an action is present in a pre-segmented video sequence,
action detection identifies the time interval where the action occurred in an
unsegmented video stream. Sliding window approaches for action detection can
however be slow as they maximize a classifier score over all possible
sub-intervals. Even though new schemes utilize dynamic programming to speed up
the search for the optimal sub-interval, they require offline processing on the
whole video sequence. In this paper, we propose a novel approach for online
action detection based on 3D skeleton sequences extracted from depth data. It
identifies the sub-interval with the maximum classifier score in linear time.
Furthermore, it is invariant to temporal scale variations and is suitable for
real-time applications with low latency.
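The abstract does not spell out how the maximum-score sub-interval is found in linear time, but the classic tool for this task is the maximum-sum subarray recurrence (Kadane's algorithm), which processes one per-frame classifier score at a time and so also fits the online, low-latency setting. The sketch below is that standard algorithm, offered as a plausible basis rather than the paper's exact method.

```python
def best_interval(scores):
    """Linear-time, single-pass search for the contiguous sub-interval with
    maximum summed per-frame classifier score (Kadane's algorithm).
    Returns (best score, (start frame, end frame) inclusive)."""
    best_sum, best_range = float("-inf"), (0, 0)
    cur_sum, cur_start = 0.0, 0
    for t, s in enumerate(scores):
        if cur_sum <= 0:
            # a non-positive running prefix can't help: restart here
            cur_sum, cur_start = s, t
        else:
            cur_sum += s
        if cur_sum > best_sum:
            best_sum, best_range = cur_sum, (cur_start, t)
    return best_sum, best_range
```

Each incoming frame is handled in O(1), so the detector can report the current best action interval at every time step without reprocessing the stream.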