Object movement identification via sparse representation
Object movement identification from videos is very challenging and has numerous applications in sports evaluation, video surveillance, elder/child care, etc. In this research, a model using sparse representation is presented for human activity detection from video data. Each activity is expressed as a linear combination of atoms from a dictionary weighted by a sparse coefficient matrix. The dictionary is created using the Spatio-Temporal Interest Points (STIP) algorithm: spatio-temporal features are extracted for both the training and the testing video data. The K-Singular Value Decomposition (K-SVD) algorithm is used to learn dictionaries for the training video dataset. Finally, a human action is classified using the minimum residual value (subject to a threshold) of the corresponding action class in the testing video dataset. Experiments are conducted on the KTH dataset, which contains a number of actions. The current approach performed well, classifying activities with a success rate of 90%.
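The residual-based classification step described above can be sketched in a few lines. The following is only a minimal illustration, not the paper's implementation: `omp` is a bare-bones orthogonal matching pursuit standing in for the sparse coding stage, and the random per-class dictionaries stand in for K-SVD-learned ones.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: approximate y with k atoms of D."""
    residual, idx = y.astype(float), []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        idx.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def classify(dicts, y, k=3):
    """Assign y to the class whose dictionary gives the smallest reconstruction residual."""
    res = {c: np.linalg.norm(y - D @ omp(D, y, k)) for c, D in dicts.items()}
    return min(res, key=res.get)

rng = np.random.default_rng(0)
dicts = {c: rng.standard_normal((20, 15)) for c in ("walk", "run")}
y = dicts["walk"][:, 0] + 0.5 * dicts["walk"][:, 1]  # built from two "walk" atoms
print(classify(dicts, y))  # expected to pick "walk", whose atoms generated y
```

With learned (rather than random) dictionaries, the per-class residual gap is what makes the minimum-residual rule effective.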
Using basic image features for texture classification
Representing texture images statistically as histograms over a discrete vocabulary of local features has proven widely effective for texture classification tasks. Images are described locally by vectors of, for example, responses to some filter bank; and a visual vocabulary is defined as a partition of this descriptor-response space, typically based on clustering. In this paper, we investigate the performance of an approach which represents textures as histograms over a visual vocabulary which is defined geometrically, based on the Basic Image Features (BIFs) of Griffin and Lillholm (Proc. SPIE 6492(09):1-11, 2007), rather than by clustering. BIFs provide a natural mathematical quantisation of a filter-response space into qualitatively distinct types of local image structure. We also extend our approach to deal with intra-class variations in scale. Our algorithm is simple: there is no need for a pre-training step to learn a visual dictionary, as in methods based on clustering, and no tuning of parameters is required to deal with different datasets. We have tested our implementation on three popular and challenging texture datasets and find that it produces consistently good classification results on each, including what we believe to be the best reported for the KTH-TIPS and equal best reported for the UIUCTex databases.
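The histogram-over-vocabulary pipeline can be illustrated with a toy quantiser. The three local-structure labels below (flat, horizontal gradient, vertical gradient) are a crude stand-in for the seven BIF classes, and the chi-squared nearest-neighbour step mirrors the general bag-of-features classification scheme; none of this is the paper's actual quantisation.

```python
import numpy as np

def feature_histogram(image, n_types=3):
    """Label each pixel as flat (0), horizontal-gradient (1) or vertical-gradient (2)
    and return a normalised label histogram (toy stand-in for BIF classes)."""
    gy, gx = np.gradient(image.astype(float))  # np.gradient returns axis-0 first
    strong = np.hypot(gx, gy) > 1e-6
    labels = np.zeros(image.shape, dtype=int)
    labels[strong & (np.abs(gx) >= np.abs(gy))] = 1
    labels[strong & (np.abs(gy) > np.abs(gx))] = 2
    h = np.bincount(labels.ravel(), minlength=n_types).astype(float)
    return h / h.sum()

def chi2(h1, h2, eps=1e-9):
    """Chi-squared distance between two histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def nearest(train_hists, class_names, h):
    """Nearest-neighbour classification over texture histograms."""
    return class_names[int(np.argmin([chi2(h, t) for t in train_hists]))]

stripes = np.tile(((np.arange(32) // 2) % 2).astype(float), (32, 1))  # vertical stripes
train_hists = [feature_histogram(stripes), feature_histogram(stripes.T)]
print(nearest(train_hists, ["vertical", "horizontal"], feature_histogram(stripes)))
```

The geometric quantisation is what removes the clustering/pre-training step: the label of each pixel is computed directly from its filter responses.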
Keyframe-based visual–inertial odometry using nonlinear optimization
Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual–inertial odometry or simultaneous localization and mapping (SLAM). While historically the problem has been addressed with filtering, advancements in visual estimation suggest that nonlinear optimization offers superior accuracy while remaining tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, thus ensuring real-time operation, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals, while still related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual–inertial hardware that accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and monocular version of our algorithm with and without online extrinsics estimation is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual–inertial odometry. While our approach admittedly demands more computation, we show its superior performance in terms of accuracy.
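The combined cost function has the general shape sketched below: a sum of Mahalanobis-weighted visual (reprojection) and inertial error terms. This is only an illustration of the cost's structure; the residual vectors and information matrices are placeholders, and the marginalization and keyframe windowing described in the abstract are omitted.

```python
import numpy as np

def vio_cost(reproj_residuals, inertial_residuals, W_vis, W_imu):
    """Sum of weighted visual and inertial error terms, the general shape of the
    cost minimised in keyframe-based visual-inertial optimisation.
    W_vis / W_imu are placeholder information (inverse-covariance) matrices."""
    cost = sum(float(r @ W_vis @ r) for r in reproj_residuals)   # landmark terms
    cost += sum(float(e @ W_imu @ e) for e in inertial_residuals)  # IMU terms
    return cost

# two 2-D reprojection errors (pixels) and one toy 3-D inertial error
reproj = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
inertial = [np.array([1.0, 2.0, 3.0])]
print(vio_cost(reproj, inertial, np.eye(2), np.eye(3)))  # 1 + 4 + 14 = 19.0
```

In a real system each residual depends on the estimated keyframe poses, speeds and biases, and a nonlinear least-squares solver iterates on this cost over the bounded keyframe window.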
A biologically inspired spiking model of visual processing for image feature detection
To enable fast, reliable feature matching or tracking in scenes, features need to be discrete and meaningful, and hence edge or corner features, commonly called interest points, are often used for this purpose. Experimental research has illustrated that biological vision systems use neuronal circuits to extract particular features such as edges or corners from visual scenes. Inspired by this biological behaviour, this paper proposes a biologically inspired spiking neural network for the purpose of image feature extraction. Standard digital images are processed and converted to spikes in a manner similar to the processing that transforms light into spikes in the retina. Using a hierarchical spiking network, various types of biologically inspired receptive fields are used to extract progressively complex image features. The performance of the network is assessed by examining the repeatability of extracted features, with visual results presented using both synthetic and real images.
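The light-to-spike conversion stage can be sketched as simple rate coding, where brighter pixels fire more often. The Bernoulli spiking model and its parameters below are illustrative assumptions, not the paper's retina model.

```python
import numpy as np

def encode_spikes(image, n_steps=1000, max_rate=0.5, seed=0):
    """Rate-code pixel intensities as Bernoulli spike trains: each pixel spikes
    at each time step with probability proportional to its brightness
    (a crude sketch of the retina's light-to-spike conversion)."""
    rng = np.random.default_rng(seed)
    p = image / image.max() * max_rate              # per-step firing probability
    return rng.random((n_steps,) + image.shape) < p  # boolean spike raster

img = np.array([[0.1, 1.0], [0.5, 0.0]])
spikes = encode_spikes(img)
print(spikes.mean(axis=0))  # empirical firing rates track the pixel intensities
```

Downstream spiking layers would then apply receptive-field weights to such rasters to extract progressively more complex features.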
Corner detection on hexagonal pixel based images
Corner detection is used in many computer vision applications that require fast and efficient feature matching. In addition, hexagonal pixel based images have recently been investigated for image capture and processing due to their ability to represent curved structures, which are common in real images, better than traditional rectangular pixel based images. Therefore, we present an approach to corner detection on hexagonal images and demonstrate that its accuracy is comparable to well-known existing corner detectors applied to rectangular pixel based images.
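On a hexagonal grid each pixel has six equidistant neighbours, which simplifies ring-based corner tests. The sketch below uses standard axial coordinates for the hexagonal lattice and a FAST-like contiguous-arc test on the six-neighbour ring; the threshold and arc length are illustrative assumptions, not the paper's detector.

```python
# axial-coordinate offsets from a hexagonal pixel to its six neighbours
HEX_OFFSETS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def hex_neighbours(q, r):
    """The six cells adjacent to hexagonal pixel (q, r) in axial coordinates."""
    return [(q + dq, r + dr) for dq, dr in HEX_OFFSETS]

def longest_arc(flags):
    """Length of the longest circular run of True values around the ring."""
    if all(flags):
        return len(flags)
    best = run = 0
    for f in flags + flags:  # walk the ring twice to handle wrap-around
        run = run + 1 if f else 0
        best = max(best, run)
    return min(best, len(flags))

def is_hex_corner(centre, ring, thresh=0.2, arc=3):
    """FAST-like test: corner when a contiguous arc of the six-neighbour ring is
    much brighter or much darker than the centre (parameters illustrative)."""
    brighter = [v > centre + thresh for v in ring]
    darker = [v < centre - thresh for v in ring]
    return longest_arc(brighter) >= arc or longest_arc(darker) >= arc

print(hex_neighbours(0, 0))                    # the six axial neighbours
print(is_hex_corner(0.0, [1, 1, 1, 0, 0, 0]))  # True: a 3-long bright arc
print(is_hex_corner(0.0, [1, 0, 1, 0, 1, 0]))  # False: no contiguous arc
```

The uniform six-neighbour ring is one reason hexagonal lattices suit such tests: unlike the square grid, no distinction between edge and diagonal neighbours is needed.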