233 research outputs found
Persistent Homology of Attractors For Action Recognition
In this paper, we propose a novel framework for dynamical analysis of human
actions from 3D motion capture data using topological data analysis. We model
human actions using the topological features of the attractor of the dynamical
system. We reconstruct the phase-space of time series corresponding to actions
using time-delay embedding, and compute the persistent homology of the
phase-space reconstruction. In order to better represent the topological
properties of the phase-space, we incorporate the temporal adjacency
information when computing the homology groups. The persistence of these
homology groups, encoded as persistence diagrams, is used as the feature
representation of the actions. Our experiments on action recognition using
these features demonstrate that the proposed approach outperforms other
baseline methods.
Comment: 5 pages, under review at the International Conference on Image Processing
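The phase-space reconstruction step described above can be sketched as a plain time-delay (Takens) embedding. The embedding dimension and delay below are illustrative assumptions, not values from the paper; persistent homology would then be computed on the resulting point cloud with a TDA library.

```python
# Minimal sketch of time-delay (Takens) embedding: a scalar time series is
# mapped to points in R^dim built from delayed copies of itself. dim and tau
# here are arbitrary illustrative choices, not the paper's settings.

def delay_embed(series, dim=3, tau=2):
    """Return the delay-embedded point cloud of a scalar time series."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[i + j * tau] for j in range(dim)) for i in range(n)]

# Example: a short oscillating trajectory becomes a point cloud whose loops
# and voids could then be summarized by persistent homology.
points = delay_embed([0, 1, 2, 1, 0, -1, -2, -1, 0, 1], dim=2, tau=1)
```

For a periodic action, the embedded trajectory traces a closed loop in phase space, which is exactly the kind of topological feature a persistence diagram records.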
Life, Law, and Abandonment in Giorgio Agamben
The present article deals with the political philosophy of Giorgio Agamben, exploring his seminal concepts of "homo sacer" and the "state of exception" to examine the relationship between law and human life. It probes the philosopher's thoughts on how the biopolitical machine of the modern state allocates positions of terror vis-à-vis legality and the function of sovereignty. Working through Agamben's body of thought and relating it to other political thinkers such as Schmitt and Mbembe, it sketches out a fundamental definition of politics and what it means to stand in relation to it in our modern times.
State Space Approaches for Modeling Activities in Video Streams
The objective is to discern events and behaviors in activities from video sequences, in a manner that conforms to common human experience. This has several applications, such as recognition, temporal segmentation, video indexing and anomaly detection. Activity modeling offers compelling challenges to computational vision systems at several levels, ranging from low-level vision tasks for detection and segmentation to high-level models for extracting perceptually salient information. With a focus on the latter, the following approaches are presented: event detection in discrete state space, epitomic representation in continuous state space, temporal segmentation using mixed-state models, key frame detection using antieigenvalues, and spatio-temporal activity volumes.
Significant changes in motion properties are treated as events. We present an event probability sequence representation in which the probability of event occurrence is computed using stable changes at the state level of the discrete-state hidden Markov model that generates the observed trajectories. Reliance on a trained model, however, can be a limitation. A data-driven antieigenvalue-based approach is therefore proposed for detecting changes: antieigenvalues are sensitive to turnings, whereas eigenvalues capture directions of maximum variance in the data. In both these approaches, events are assumed to be instantaneous quantities. This assumption is relaxed using an epitomic representation in continuous state space.
Video sequences are segmented using a sliding window within which the dynamics of each object is assumed to be linear. The system matrix, the initial state value and the input signal statistics form an epitome. The system matrices are decomposed using the Iwasawa matrix decomposition to isolate the effects of rotation, scaling and projection of the state vector, which yields physically meaningful distances between epitomes. Epitomes reveal dominant primitives of activities that have an abstracted interpretation. A mixed-state approach for activities is also presented, in which higher-level primitives of behavior are encoded in the discrete state component and the observed dynamics in the continuous state component. The effectiveness of mixed-state models is demonstrated using temporal segmentation. In addition to motion trajectories, the volume carved out in an xyt cube by a moving object is characterized using Morse functions.
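As a rough illustration of the antieigenvalue cue mentioned above: for a symmetric positive-definite matrix A, the first antieigenvalue is the minimum cosine of the angle between x and Ax, and Gustafson's formula gives it in closed form from the extreme eigenvalues. The sketch below checks the formula numerically on a hypothetical 2x2 matrix; it is not the paper's change detector, only the underlying quantity.

```python
import math

# First antieigenvalue of a symmetric positive-definite matrix with extreme
# eigenvalues lmin, lmax (Gustafson): the smallest cosine of the angle
# between x and Ax over all directions x. Directions that "turn" the most
# under A make this cosine small, which is the change cue exploited above.

def antieigenvalue(lmin, lmax):
    return 2.0 * math.sqrt(lmin * lmax) / (lmin + lmax)

# Numeric check for the hypothetical matrix A = diag(1, 4), scanning unit
# directions x = (cos t, sin t) and measuring cos(angle(x, Ax)).
def cos_angle(t, lmin=1.0, lmax=4.0):
    x = (math.cos(t), math.sin(t))
    ax = (lmin * x[0], lmax * x[1])
    dot = ax[0] * x[0] + ax[1] * x[1]
    return dot / math.hypot(ax[0], ax[1])

numeric = min(cos_angle(k * math.pi / 1000) for k in range(1000))
```

For A = diag(1, 4) the formula gives 2·√4/(1+4) = 0.8, and the grid scan recovers the same minimum.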
A Depth Video-based Human Detection and Activity Recognition using Multi-features and Embedded Hidden Markov Models for Health Care Monitoring Systems
The increasing number of elderly people living independently calls for special care in the form of healthcare monitoring systems. Recent advances in depth video technology have made human activity recognition (HAR) realizable for elderly healthcare applications. In this paper, a novel depth video-based method for HAR is presented, using robust multi-features and embedded Hidden Markov Models (HMMs) to recognize the daily life activities of elderly people living alone in indoor environments such as smart homes. In the proposed HAR framework, depth maps are first analyzed by a temporal motion identification method to segment human silhouettes from the noisy background and to compute the depth silhouette area for each activity, tracking human movements in a scene. Several representative features, including invariant, multi-view differentiation and spatiotemporal body-joint features, are fused to capture gradient orientation change, intensity differentiation, temporal variation and local motion of specific body parts. These features are then processed according to the dynamics of their respective class and learned, modeled, trained and recognized with a specific embedded HMM having active feature values. Furthermore, we construct a new online human activity dataset with a depth sensor to evaluate the proposed features. Our experiments on three depth datasets demonstrate that the proposed multi-features are efficient and robust compared with state-of-the-art features for human action and activity recognition.
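The per-class HMM scoring described above reduces, in its simplest discrete form, to comparing forward-algorithm likelihoods across per-activity models and picking the best. The sketch below uses toy two-state models with made-up parameters; it is a generic discrete HMM forward pass, not the paper's embedded HMMs or its fused feature set.

```python
# Hedged sketch of HMM-based activity scoring: each activity class has its
# own HMM, and a new observation sequence is assigned to the class whose
# model yields the highest likelihood. All parameters here are hypothetical.

def forward_likelihood(obs, pi, A, B):
    """P(obs | model) via the forward algorithm (discrete emissions)."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]          # initialization
    for o in obs[1:]:                                          # recursion
        alpha = [sum(alpha[s] * A[s][t] for s in range(n)) * B[t][o]
                 for t in range(n)]
    return sum(alpha)                                          # termination

# Two hypothetical 2-state activity models sharing transitions; "walk" has
# informative emissions, "idle" emits uniformly at random.
pi = [0.5, 0.5]
A = [[0.9, 0.1], [0.1, 0.9]]
B_walk = [[0.8, 0.2], [0.2, 0.8]]
B_idle = [[0.5, 0.5], [0.5, 0.5]]
obs = [0, 0, 0, 0]  # a short symbol sequence to classify
```

Classification is then just `max` over the per-class likelihoods; real systems compare log-likelihoods to avoid underflow on long sequences.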
Learning long-range spatial dependencies with horizontal gated-recurrent units
Progress in deep learning has spawned great successes in many engineering
applications. As a prime example, convolutional neural networks, a type of
feedforward neural networks, are now approaching -- and sometimes even
surpassing -- human accuracy on a variety of visual recognition tasks. Here,
however, we show that these neural networks and their recent extensions
struggle in recognition tasks where co-dependent visual features must be
detected over long spatial ranges. We introduce the horizontal gated-recurrent
unit (hGRU) to learn intrinsic horizontal connections -- both within and across
feature columns. We demonstrate that a single hGRU layer matches or outperforms
all tested feedforward hierarchical baselines including state-of-the-art
architectures which have orders of magnitude more free parameters. We further
discuss the biological plausibility of the hGRU in comparison to anatomical
data from the visual cortex as well as human behavioral data on a classic
contour detection task.
Comment: Published at NeurIPS 2018
https://papers.nips.cc/paper/7300-learning-long-range-spatial-dependencies-with-horizontal-gated-recurrent-unit
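The core idea of horizontal connections (units exchanging gated information with neighbors in the same layer across recurrent timesteps, rather than only passing activity upward) can be caricatured in a few lines. The toy below is a hypothetical 1-D row of units with hand-set scalar weights; it is emphatically not the hGRU of the paper, which uses learned convolutional kernels and a richer gating scheme.

```python
import math

# Toy illustration of within-layer (horizontal) gated recurrence: each unit
# mixes its feedforward drive x[i] with gated evidence from its left/right
# neighbors in h, and the step is iterated so evidence propagates laterally.
# All weights are arbitrary hand-set constants for illustration only.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def horizontal_step(h, x, w_gate=1.0, w_neigh=0.5):
    n = len(h)
    out = []
    for i in range(n):
        neigh = w_neigh * (h[i - 1] + h[(i + 1) % n])  # wrap-around neighbors
        g = sigmoid(w_gate * neigh)                    # gate driven laterally
        out.append((1 - g) * h[i] + g * (x[i] + neigh))
    return out

# A single active input unit; iterating the step spreads activity along the
# row, which is how a recurrent layer can bridge long spatial ranges.
h = [0.0] * 8
x = [1.0] + [0.0] * 7
for _ in range(4):
    h = horizontal_step(h, x)
```

After four steps, activity has reached units three positions away from the source but not yet the far side of the ring, mirroring how lateral propagation trades timesteps for spatial range.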