Temporally coherent 4D reconstruction of complex dynamic scenes
This paper presents an approach for reconstruction of 4D temporally coherent
models of complex dynamic scenes. No prior knowledge is required of scene
structure or camera calibration, allowing reconstruction from multiple moving
cameras. Sparse-to-dense temporal correspondence is integrated with joint
multi-view segmentation and reconstruction to obtain a complete 4D
representation of static and dynamic objects. Temporal coherence is exploited
to overcome visual ambiguities resulting in improved reconstruction of complex
scenes. Robust joint segmentation and reconstruction of dynamic objects is
achieved by introducing a geodesic star convexity constraint. Comparative
evaluation is performed on a variety of unstructured indoor and outdoor dynamic
scenes with hand-held cameras and multiple people. This demonstrates
reconstruction of complete temporally coherent 4D scene models with improved
nonrigid object segmentation and shape reconstruction.
Comment: To appear in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016. Video available at: https://www.youtube.com/watch?v=bm_P13_-Ds
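The geodesic star convexity constraint mentioned above restricts a segmentation so that every foreground pixel is connected to a star centre by an all-foreground path. A minimal illustrative sketch, assuming numpy, substituting straight-line (Euclidean) paths for the paper's geodesic ones and using a single hypothetical centre:

```python
import numpy as np

def enforce_star_convexity(mask, center):
    """Project a binary mask onto the set of masks star-convex about
    `center`: a pixel stays foreground only if every pixel on the
    straight line from the centre to it is foreground.
    (The paper uses geodesic paths; straight lines are a toy stand-in.)"""
    h, w = mask.shape
    out = np.zeros_like(mask)
    cy, cx = center
    for y in range(h):
        for x in range(w):
            if not mask[y, x]:
                continue
            # Sample the segment from the centre to (y, x).
            n = max(abs(y - cy), abs(x - cx), 1)
            ys = np.linspace(cy, y, n + 1).round().astype(int)
            xs = np.linspace(cx, x, n + 1).round().astype(int)
            if mask[ys, xs].all():
                out[y, x] = True
    return out
```

In a full pipeline this constraint would be expressed inside the joint segmentation energy rather than applied as a post-hoc projection; the sketch only shows the geometric condition being enforced.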
Enhanced tracking and recognition of moving objects by reasoning about spatio-temporal continuity.
A framework for the logical and statistical analysis and annotation of dynamic scenes containing occlusion and other uncertainties is presented. This framework consists of three elements: an object tracker module, an object recognition/classification module, and a logical consistency, ambiguity and error reasoning engine. The principle behind the object tracker and object recognition modules is to reduce error by increasing ambiguity (by merging objects in close proximity and presenting multiple hypotheses). The reasoning engine deals with error, ambiguity and occlusion in a unified framework to produce a hypothesis that satisfies fundamental constraints on the spatio-temporal continuity of objects. Our algorithm finds a globally consistent model of an extended video sequence that is maximally supported by a voting function based on the output of a statistical classifier. The system produces an annotation that is significantly more accurate than that obtained by frame-by-frame evaluation of the classifier output. The framework has been implemented and applied successfully to the analysis of team sports with a single camera.
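The voting idea described above can be illustrated with a toy sketch: rather than trusting each frame's classifier argmax independently, sum the per-frame scores over a whole track and pick the label with maximal total support. The labels and scores here are hypothetical:

```python
from collections import Counter

def annotate_track(frame_scores):
    """Pick one label for an entire track by summing per-frame
    classifier scores (a simple voting function), instead of
    taking each frame's argmax independently."""
    totals = Counter()
    for scores in frame_scores:   # scores: dict of label -> probability
        for label, p in scores.items():
            totals[label] += p
    return max(totals, key=totals.get)
```

A frame where occlusion flips the per-frame winner is outvoted by the rest of the track, which is why track-level annotation beats frame-by-frame evaluation.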
Learning Behavioural Context
Mind over chatter: plastic up-regulation of the fMRI alertness network by EEG neurofeedback
EEG neurofeedback (NFB) is a brain-computer interface (BCI) approach used to shape brain oscillations by means of real-time feedback from the electroencephalogram (EEG), which is known to reflect neural activity across cortical networks. Although NFB is being evaluated as a novel tool for treating brain disorders, evidence is scarce on the mechanism of its impact on brain function. In this study with 34 healthy participants, we examined whether, during the performance of an attentional auditory oddball task, the functional connectivity strength of distinct fMRI networks would be plastically altered after a 30-min NFB session of alpha-band reduction (n=17) versus a sham-feedback condition (n=17). Our results reveal that compared to sham, NFB induced a specific increase of functional connectivity within the alertness/salience network (dorsal anterior and mid cingulate), which was detectable 30 minutes after termination of training. Crucially, these effects were significantly correlated with reduced mind-wandering 'on-task' and were coupled to NFB-mediated resting state reductions in the alpha-band (8-12 Hz). No such relationships were evident for the sham condition. Although group default-mode network (DMN) connectivity was not significantly altered following NFB, we observed a positive association between modulations of resting alpha amplitude and precuneal connectivity, both correlating positively with frequency of mind-wandering. Our findings demonstrate a temporally direct, plastic impact of NFB on large-scale brain functional networks, and provide promising neurobehavioral evidence supporting its use as a noninvasive tool to modulate brain function in health and disease.
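The alpha-band (8-12 Hz) quantity that such an alpha-reduction protocol feeds back can be estimated from a single EEG channel with a plain periodogram. A minimal sketch, assuming numpy; real NFB systems use sliding windows and artifact rejection:

```python
import numpy as np

def alpha_band_power(signal, fs):
    """Estimate power in the 8-12 Hz alpha band of one EEG channel.
    `signal` is a 1-D array of samples, `fs` the sampling rate in Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)  # periodogram
    band = (freqs >= 8) & (freqs <= 12)
    return psd[band].sum()
```

In a live session this value would be recomputed on short overlapping windows and mapped to the feedback display, with the trainee rewarded for driving it down.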
Temporally Coherent General Dynamic Scene Reconstruction
Existing techniques for dynamic scene reconstruction from multiple
wide-baseline cameras primarily focus on reconstruction in controlled
environments, with fixed calibrated cameras and strong prior constraints. This
paper introduces a general approach to obtain a 4D representation of complex
dynamic scenes from multi-view wide-baseline static or moving cameras without
prior knowledge of the scene structure, appearance, or illumination.
Contributions of the work are: An automatic method for initial coarse
reconstruction to initialize joint estimation; Sparse-to-dense temporal
correspondence integrated with joint multi-view segmentation and reconstruction
to introduce temporal coherence; and a general robust approach for joint
segmentation refinement and dense reconstruction of dynamic scenes by
introducing a shape constraint. Comparison with state-of-the-art approaches on a
variety of complex indoor and outdoor scenes demonstrates improved accuracy in
both multi-view segmentation and dense reconstruction. This paper demonstrates
unsupervised reconstruction of complete temporally coherent 4D scene models
with improved non-rigid object segmentation and shape reconstruction and its
application to free-viewpoint rendering and virtual reality.
Comment: Submitted to IJCV 2019. arXiv admin note: substantial text overlap with arXiv:1603.0338
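The sparse-to-dense temporal correspondence step in both reconstruction papers can be caricatured as propagating a handful of matched points to a dense flow field. A toy nearest-neighbour version, assuming numpy; the actual methods are considerably more sophisticated:

```python
import numpy as np

def densify_flow(sparse_pts, sparse_flow, shape):
    """Propagate sparse 2D correspondences to a dense flow field by
    nearest-neighbour assignment. `sparse_pts` is (k, 2) pixel
    coordinates, `sparse_flow` is (k, 2) displacements, `shape` is
    the (height, width) of the target image."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.stack([ys.ravel(), xs.ravel()], axis=1)      # (h*w, 2)
    # Squared distance from every pixel to every sparse point.
    d = ((grid[:, None, :] - sparse_pts[None, :, :]) ** 2).sum(-1)
    nearest = d.argmin(axis=1)                             # (h*w,)
    return sparse_flow[nearest].reshape(h, w, 2)
```

A real sparse-to-dense scheme would smooth and verify the propagated field against photometric evidence; the sketch only shows the propagation direction, from a few reliable matches outward to every pixel.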