
    Scalable analysis of movement data for extracting and exploring significant places

    Place-oriented analysis of movement data, i.e., recorded tracks of moving objects, includes finding places of interest in which certain types of movement events occur repeatedly, investigating the temporal distribution of event occurrences in these places, and, possibly, examining other characteristics of the places and links between them. For this class of problems, we propose a visual analytics procedure consisting of four major steps: 1) event extraction from trajectories; 2) extraction of relevant places based on event clustering; 3) spatiotemporal aggregation of events or trajectories; 4) analysis of the aggregated data. All steps can be performed in a scalable way with respect to the amount of data under analysis; the procedure is therefore not limited by the size of the computer's RAM and can be applied to very large data sets. We demonstrate the use of the procedure on two real-world problems requiring analysis at different spatial scales.
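The four-step procedure this abstract outlines can be illustrated with a toy pipeline. All data, thresholds, and grid-cell clustering below are hypothetical simplifications for illustration, not the paper's actual method:

```python
from collections import defaultdict

# Hypothetical trajectory: (timestamp_s, x, y) samples for one moving object.
track = [(0, 0.0, 0.0), (60, 0.1, 0.0), (120, 0.1, 0.1),   # slow: stop events
         (180, 5.0, 5.0), (240, 10.0, 10.0),                # fast: travelling
         (300, 10.1, 10.0), (360, 10.1, 10.1)]              # slow: stop events

def extract_stop_events(track, speed_threshold=0.01):
    """Step 1: emit (t, x, y) wherever the speed between samples is low."""
    events = []
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / (t1 - t0)
        if speed < speed_threshold:
            events.append((t1, x1, y1))
    return events

def cluster_to_places(events, cell=1.0):
    """Step 2: derive places by (grid-based) clustering of event locations."""
    places = defaultdict(list)
    for t, x, y in events:
        places[(int(x // cell), int(y // cell))].append(t)
    return places

def aggregate_by_hour(places):
    """Step 3: spatiotemporal aggregation -> event counts per place per hour."""
    counts = {}
    for place, times in places.items():
        hourly = defaultdict(int)
        for t in times:
            hourly[t // 3600] += 1
        counts[place] = dict(hourly)
    return counts
```

Each step streams over its input once, which is what makes a pipeline like this scalable beyond RAM limits; step 4 (analysis) would then work on the small aggregated table rather than the raw trajectories.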

    Spatiotemporal dynamics of feature-based attention spread: evidence from combined electroencephalographic and magnetoencephalographic recordings

    Attentional selection on the basis of nonspatial stimulus features induces a sensory gain enhancement by increasing the firing rate of individual neurons tuned to the attended feature, while responses of neurons tuned to opposite feature values are suppressed. Here we recorded event-related potentials (ERPs) and magnetic fields (ERMFs) in human observers to investigate the underlying neural correlates of feature-based attention at the population level. During the task, subjects attended to a moving transparent surface presented in the left visual field, while task-irrelevant probe stimuli executing brief movements in varying directions were presented in the opposite visual field. ERP and ERMF amplitudes elicited by the unattended task-irrelevant probes were modulated as a function of the similarity between their movement direction and the task-relevant movement direction in the attended visual field. These activity modulations, reflecting globally enhanced processing of the attended feature, were observed to start no earlier than 200 ms poststimulus and were localized to the motion-sensitive area hMT. The current results indicate that feature-based attention operates in a global manner but needs time to spread, and they provide strong support for the feature-similarity gain model.

    Dance-the-Music: an educational platform for the modeling, recognition and audiovisual monitoring of dance steps using spatiotemporal motion templates

    In this article, a computational platform entitled “Dance-the-Music” is presented that can be used in a dance-education context to explore and learn the basics of dance steps. By introducing a method based on spatiotemporal motion templates, the platform makes it possible to train basic step models from sequentially repeated dance figures performed by a dance teacher. Movements are captured with an optical motion capture system. The teacher's models can be visualized from a first-person perspective to instruct students how to perform the specific dance steps in the correct manner. Moreover, recognition algorithms based on a template-matching method can determine the quality of a student's performance in real time by means of multimodal monitoring techniques. The results of an evaluation study suggest that Dance-the-Music is effective in helping dance students master the basics of dance figures.
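Template matching of the kind the abstract mentions can be sketched roughly as below. The templates, step names, and mean-distance score are illustrative assumptions, not the platform's actual algorithm:

```python
# Hypothetical motion templates: each dance step is a fixed-length sequence
# of (x, y) foot positions sampled at regular intervals from the teacher.
templates = {
    "side_step":    [(0, 0), (1, 0), (1, 0), (0, 0)],
    "step_forward": [(0, 0), (0, 1), (0, 2), (0, 2)],
}

def distance(seq_a, seq_b):
    """Mean Euclidean distance between two equal-length position sequences."""
    return sum(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
               for (ax, ay), (bx, by) in zip(seq_a, seq_b)) / len(seq_a)

def recognize(performance, templates):
    """Return (best_step, score): the template closest to the performance.
    A lower score means a closer match to the teacher's model."""
    best = min(templates, key=lambda name: distance(performance, templates[name]))
    return best, distance(performance, templates[best])
```

The score itself could drive the real-time quality feedback the abstract describes: a noisy but recognizable performance still matches its template, just with a higher distance.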

    Predictive Encoding of Contextual Relationships for Perceptual Inference, Interpolation and Prediction

    We propose a new neurally inspired model that can learn to encode the global relationship context of visual events across time and space and to use this contextual information to modulate the analysis-by-synthesis process in a predictive coding framework. The model learns latent contextual representations by maximizing the predictability of visual events based on local and global contextual information through both top-down and bottom-up processes. In contrast to standard predictive coding models, the prediction error in this model is used to update the contextual representation but does not alter the feedforward input to the next layer, which is more consistent with neurophysiological observations. We establish the computational feasibility of this model by demonstrating its abilities in several respects. We show that our model can outperform state-of-the-art gated Boltzmann machines (GBMs) in the estimation of contextual information. Our model can also interpolate missing events or predict future events in image sequences while simultaneously estimating contextual information. It achieves state-of-the-art prediction accuracy in a variety of tasks and can interpolate missing frames, a capability that GBMs lack.
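A minimal toy illustration of the key contrast this abstract draws: prediction error updates only the latent context, never the feedforward signal. The scalar context (a global intensity offset between frames) and the gradient-descent dynamics below are invented for illustration and are far simpler than the paper's model:

```python
def predict(frame, context):
    """Top-down prediction of the next frame: the current frame plus a
    global offset that the scalar context encodes."""
    return [x + context for x in frame]

def update_context(frame, next_frame, context, lr=0.5, steps=20):
    """Use prediction error only to refine the context estimate.
    Note the feedforward signal (frame) is never modified."""
    for _ in range(steps):
        error = [n - p for n, p in zip(next_frame, predict(frame, context))]
        # Gradient descent on squared error w.r.t. the context.
        context += lr * sum(error) / len(error)
    return context
```

Once the context is estimated this way, the same `predict` function can extrapolate future frames or fill in a missing one, loosely mirroring the interpolation/prediction tasks the abstract reports.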