
    Detecting multineuronal temporal patterns in parallel spike trains

    We present a non-parametric and computationally efficient method that detects spatiotemporal firing patterns and pattern sequences in parallel spike trains and tests whether the observed numbers of repeating patterns and sequences on a given timescale are significantly different from those expected by chance. The method is generally applicable and uncovers coordinated activity with arbitrary precision by comparing it to appropriate surrogate data. The analysis of coherent patterns of spatially and temporally distributed spiking activity on various timescales enables the immediate tracking of diverse qualities of coordinated firing related to neuronal state changes and information processing. We apply the method to simulated data and multineuronal recordings from rat visual cortex and show that it reliably discriminates between data sets with random pattern occurrences and with additional exactly repeating spatiotemporal patterns and pattern sequences. Multineuronal cortical spiking activity appears to be precisely coordinated and exhibits a sequential organization beyond the cell assembly concept.
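    As a rough illustration of the surrogate-comparison idea in this abstract, the sketch below counts exactly repeating spatiotemporal windows in a binned spike matrix and compares the observed count against circularly shifted surrogates. The function names, the circular-shift surrogate, and the window length are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def count_repeating_patterns(spikes, window):
    """Count spatiotemporal windows (patterns) that occur more than once.

    spikes : binary array of shape (n_neurons, n_bins)
    window : pattern length in time bins
    """
    n_neurons, n_bins = spikes.shape
    seen = {}
    for t in range(n_bins - window + 1):
        key = spikes[:, t:t + window].tobytes()
        seen[key] = seen.get(key, 0) + 1
    return sum(c for c in seen.values() if c > 1)

def surrogate_counts(spikes, window, n_surrogates=1000, rng=None):
    """Repeat the count on surrogates in which each neuron's spike train is
    circularly shifted, destroying precise coordination but keeping rates."""
    rng = np.random.default_rng() if rng is None else rng
    counts = np.empty(n_surrogates)
    for s in range(n_surrogates):
        surrogate = np.zeros_like(spikes)
        for i in range(spikes.shape[0]):
            surrogate[i] = np.roll(spikes[i], rng.integers(spikes.shape[1]))
        counts[s] = count_repeating_patterns(surrogate, window)
    return counts

# Usage sketch: p-value for the observed number of repeating patterns
# observed = count_repeating_patterns(spikes, window=5)
# null = surrogate_counts(spikes, window=5)
# p = (np.sum(null >= observed) + 1) / (len(null) + 1)
```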

    Segmental Spatiotemporal CNNs for Fine-grained Action Segmentation

    Joint segmentation and classification of fine-grained actions is important for applications such as human-robot interaction, video surveillance, and human skill evaluation. However, despite substantial recent progress in large-scale action classification, the performance of state-of-the-art fine-grained action recognition approaches remains low. We propose a model for action segmentation which combines low-level spatiotemporal features with a high-level segmental classifier. Our spatiotemporal CNN comprises a spatial component that uses convolutional filters to capture information about objects and their relationships, and a temporal component that uses large 1D convolutional filters to capture information about how object relationships change over time. These features are used in tandem with a semi-Markov model that models transitions from one action to another. We introduce an efficient constrained segmental inference algorithm for this model that is orders of magnitude faster than the current approach. We highlight the effectiveness of our Segmental Spatiotemporal CNN on cooking and surgical action datasets, for which we observe substantially improved performance relative to recent baseline methods. Comment: Updated from the ECCV 2016 version. We fixed an important mathematical error and made the section on segmental inference clearer.
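    A minimal PyTorch-style sketch of the two-component design described in the abstract: a small spatial CNN applied per frame, followed by a large 1D temporal convolution over the resulting feature sequence. The layer sizes are assumptions and the semi-Markov segmental inference step is omitted; this is not the authors' released architecture.

```python
import torch
import torch.nn as nn

class SpatiotemporalCNN(nn.Module):
    """Spatial conv features per frame, then large 1D temporal filters over
    the feature sequence (semi-Markov segmental inference omitted)."""

    def __init__(self, in_channels=3, n_classes=10, temporal_kernel=25):
        super().__init__()
        # Spatial component: objects and their relationships within a frame
        self.spatial = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # -> (batch*time, 64, 1, 1)
        )
        # Temporal component: large 1D filters over how features change in time
        self.temporal = nn.Conv1d(64, 64, kernel_size=temporal_kernel,
                                  padding=temporal_kernel // 2)
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, video):
        # video: (batch, time, channels, height, width)
        b, t, c, h, w = video.shape
        feats = self.spatial(video.reshape(b * t, c, h, w)).reshape(b, t, -1)
        feats = self.temporal(feats.transpose(1, 2)).transpose(1, 2)
        return self.classifier(torch.relu(feats))  # per-frame action scores
```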

    Quasi-periodic spatiotemporal models of brain activation in single-trial MEG experiments

    Magneto-encephalography (MEG) is an imaging technique which measures neuronal activity in the brain. Even when a subject is in a resting state, MEG data show characteristic spatial and temporal patterns, resulting from electrical current at specific locations in the brain. The key pattern of interest is a ‘dipole’, consisting of two adjacent regions of high and low activation which oscillate over time in an out-of-phase manner. Standard approaches are based on averages over large numbers of trials in order to reduce noise. In contrast, this article addresses the issue of dipole modelling for single-trial data, as this is of direct interest in many application areas. There is also clear evidence that the frequency of this oscillation in single trials generally changes over time and so exhibits quasi-periodic rather than periodic behaviour. A framework for the modelling of dipoles is proposed through estimation of a spatiotemporal smooth function, constructed as a parametric function of space and a smooth function of time. Quasi-periodic behaviour is expressed in phase functions which are allowed to evolve smoothly over time. The model is fitted in two stages. First, the spatial location of the dipole is identified and the smooth signals characterizing the amplitude functions for each separate pole are estimated. Second, the phase and frequency of the amplitude signals are estimated as smooth functions. The model is applied to data from a real MEG experiment focusing on motor and visual brain processes. In contrast to existing standard approaches, the model allows the variability across trials and subjects to be identified. The nature of this variability is informative about the resting state of the brain.
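    As a rough sketch of the quasi-periodic modelling idea, the code below decomposes a single-trial signal at an already-located dipole into a smooth amplitude envelope and a smoothly evolving phase, from which a drifting instantaneous frequency follows. The Hilbert-transform and Savitzky-Golay smoothing choices are illustrative assumptions, not the paper's estimator.

```python
import numpy as np
from scipy.signal import hilbert, savgol_filter

def fit_quasi_periodic(signal, fs, smooth_window=101):
    """Decompose a single-trial pole signal into a smooth amplitude envelope
    and a smoothly evolving phase, allowing the frequency to drift over time.

    signal : 1D array, time course at the estimated dipole location
    fs     : sampling frequency in Hz
    """
    analytic = hilbert(signal)                  # analytic signal
    amplitude = np.abs(analytic)                # raw envelope
    phase = np.unwrap(np.angle(analytic))       # raw (unwrapped) phase

    # Second stage: smooth envelope and phase so they evolve slowly in time
    amplitude = savgol_filter(amplitude, smooth_window, polyorder=3)
    phase = savgol_filter(phase, smooth_window, polyorder=3)
    frequency = np.gradient(phase) * fs / (2 * np.pi)  # instantaneous freq (Hz)

    fitted = amplitude * np.cos(phase)          # quasi-periodic reconstruction
    return fitted, amplitude, phase, frequency
```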

    Going Deeper into Action Recognition: A Survey

    Understanding human actions in visual data is tied to advances in complementary research areas including object recognition, human dynamics, domain adaptation and semantic segmentation. Over the last decade, human action analysis evolved from earlier schemes that were often limited to controlled environments to advanced solutions that can learn from millions of videos and apply to almost all daily activities. Given the broad range of applications from video surveillance to human-computer interaction, scientific milestones in action recognition are achieved more rapidly, quickly rendering what used to be the state of the art obsolete. This motivated us to provide a comprehensive review of the notable steps taken towards recognizing human actions. To this end, we start our discussion with the pioneering methods that use handcrafted representations, and then navigate into the realm of deep learning based approaches. We aim to remain objective throughout this survey, touching upon encouraging improvements as well as inevitable setbacks, in the hope of raising fresh questions and motivating new research directions for the reader.