11,289 research outputs found

    Comparison of classifiers for human activity recognition

    Human activity recognition in video sequences is a field in which many types of classifiers have been used, along with a wide range of input features to feed them. This work has two goals. First, we extract the most relevant features for activity recognition using only the motion features provided by a simple tracker, namely the 2D centroid coordinates and the height and width of each person's blob. Second, we present a performance comparison among seven classifiers (two Hidden Markov Models (HMM), a J.48 tree, two Bayesian classifiers, a rule-based classifier and a Neuro-Fuzzy system). The video sequences under study contain four human activities (inactive, active, walking and running) that were manually labeled beforehand. The results show that the classifiers perform differently depending on the number of features employed and the set of classes to be distinguished. Moreover, the basic motion features alone are not enough to describe the problem completely and to obtain a good classification. © Springer-Verlag Berlin Heidelberg 2007
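    The blob-level motion features described above can be illustrated with a minimal sketch. The speed feature and thresholds below are hypothetical choices for illustration, not the paper's actual features or rules; they merely show how a rule-based classifier might map tracker output (centroid, width, height per frame) to the four activity classes.

```python
# Hypothetical sketch: deriving a simple motion feature from per-frame blob
# tracks (centroid x, centroid y, width, height) and classifying the activity
# with hand-tuned speed thresholds, in the spirit of a rule-based classifier.

def motion_features(track):
    """Given a list of (cx, cy, w, h) blob tuples, return per-frame centroid speeds."""
    speeds = []
    for (x0, y0, _, _), (x1, y1, _, _) in zip(track, track[1:]):
        speeds.append(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5)
    return speeds

def classify(track, slow=1.0, fast=6.0):
    """Label a track as inactive/active/walking/running.

    The thresholds are illustrative, not those of the paper."""
    speeds = motion_features(track)
    mean_speed = sum(speeds) / len(speeds)
    if mean_speed < 0.1:
        return "inactive"
    if mean_speed < slow:
        return "active"
    if mean_speed < fast:
        return "walking"
    return "running"
```

A stationary blob yields near-zero speeds and is labeled inactive, while a fast-moving centroid is labeled running; real systems would also use the blob's width/height dynamics, which this sketch ignores.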

    Fast Fight Detection

    Action recognition has become a hot topic within computer vision. However, the action recognition community has focused mainly on relatively simple actions such as clapping, walking and jogging. The detection of specific events with direct practical use, such as fights or aggressive behavior in general, has been comparatively less studied. Such a capability could be extremely useful in video surveillance scenarios such as prisons and psychiatric centers, or even embedded in camera phones. As a consequence, there is growing interest in developing violence detection algorithms. Recent work applied the well-known Bag-of-Words framework to the specific problem of fight detection: spatio-temporal features are extracted from the video sequences and used for classification. Despite encouraging results with high accuracy rates, the computational cost of extracting such features is prohibitive for practical applications. This work proposes a novel method for detecting violent sequences. Features extracted from motion blobs are used to discriminate fight from non-fight sequences. Although the method is outperformed in accuracy by the state of the art, its significantly faster computation time makes it amenable to real-time applications.
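    As a rough, hypothetical illustration of why motion-based features are cheap compared with spatio-temporal descriptors: a frame-differencing score needs only a single pass over each pair of frames. The thresholds and decision rule below are invented for illustration and are not the paper's method.

```python
# Hypothetical sketch of the motion-blob intuition: threshold the absolute
# difference between consecutive grayscale frames and measure how much of the
# image is moving; large, sustained motion areas suggest a fight.
# Frames are plain lists of lists of intensities; thresholds are illustrative.

def motion_ratio(prev, curr, thresh=25):
    """Fraction of pixels whose intensity changed by more than `thresh`."""
    total = moving = 0
    for row_p, row_c in zip(prev, curr):
        for p, c in zip(row_p, row_c):
            total += 1
            if abs(c - p) > thresh:
                moving += 1
    return moving / total

def is_fight(frames, ratio_thresh=0.3):
    """Flag a clip when average motion exceeds a threshold (a stand-in for
    the paper's blob-based features and classifier)."""
    ratios = [motion_ratio(a, b) for a, b in zip(frames, frames[1:])]
    return sum(ratios) / len(ratios) > ratio_thresh
```

Each frame pair costs one linear scan, which is the kind of budget that makes real-time operation plausible, whereas dense spatio-temporal descriptors typically cost orders of magnitude more.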

    Online real-time crowd behavior detection in video sequences

    Automatically detecting events in crowded scenes is a challenging task in computer vision. A number of offline approaches have been proposed for crowd behavior detection; however, the offline assumption limits their applicability in real-world video surveillance systems. In this paper, we propose an online, real-time method for detecting events in crowded video sequences. The proposed approach combines visual feature extraction with image segmentation and works without the need for a training phase. A quantitative experimental evaluation was carried out on multiple publicly available video sequences, covering various crowd scenarios and different types of events, to demonstrate the effectiveness of the approach.
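    A training-free, online detector can be sketched with a running mean and variance (Welford's algorithm) over a per-frame motion score, flagging frames that deviate strongly from recent history. This is an illustrative stand-in, not the paper's feature-extraction-plus-segmentation pipeline; the warm-up length and deviation threshold are arbitrary.

```python
# Hypothetical illustration of training-free online anomaly detection: keep a
# running mean and variance of a per-frame motion score (Welford's algorithm)
# and raise an alert when a frame deviates by more than k standard deviations.

class OnlineDetector:
    def __init__(self, k=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0      # sum of squared deviations (Welford)
        self.k = k         # number of standard deviations for an alert

    def update(self, score):
        """Feed one frame's motion score; return True if it looks anomalous."""
        self.n += 1
        delta = score - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (score - self.mean)
        if self.n < 10:    # warm-up: not enough history to judge yet
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return abs(score - self.mean) > self.k * std
```

Because the statistics are updated incrementally per frame, the detector runs in constant time and memory and never needs an offline training phase, which is the key property the abstract emphasizes.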

    Embedding Robotic Agents in the Social Environment

    This paper discusses the interactive vision approach, which advocates using knowledge from the human sciences about the structure and dynamics of human-human interaction in the development of machine vision systems and interactive robots. While the approach is discussed in general terms, particular attention is given to the system being developed for the Aurora project, which aims to produce a robot to be used as a tool in the therapy of children with autism. The design of the machine vision system being employed is described, and ideas from the human sciences are discussed with particular reference to the Aurora system. An example architecture for a simple interactive agent, which will likely form the basis of the first implementation of this system, is briefly described, and a description of the hardware used for the Aurora system is given. Peer reviewed.

    Detecting Hands in Egocentric Videos: Towards Action Recognition

    Recently, there has been growing interest in analyzing human daily activities from data collected by wearable cameras. Since the hands are involved in a vast range of daily tasks, detecting hands in egocentric images is an important step towards the recognition of a variety of egocentric actions. However, besides the extreme illumination changes in egocentric images, hand detection is not a trivial task because of the intrinsically large variability of hand appearance. We propose a hand detector that exploits skin modeling for fast hand proposal generation and Convolutional Neural Networks for hand recognition. We tested our method on the UNIGE-HANDS dataset and show that the proposed approach achieves competitive hand detection results.
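    The skin-modeling step for fast hand proposals can be sketched with a classic rule-based RGB skin test. The specific rule below is a common heuristic chosen for illustration, not necessarily the skin model used in the paper, and the CNN recognition stage is not reproduced here.

```python
# Hypothetical sketch of cheap skin-based proposal generation: a rule-based
# RGB test marks candidate skin pixels; in a full pipeline a CNN would then
# decide whether each skin region is actually a hand.

def is_skin(r, g, b):
    """Simple RGB skin rule (a common heuristic, not the paper's model)."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

def skin_mask(image):
    """Binary mask of candidate skin pixels for an RGB image given as
    rows of (r, g, b) tuples."""
    return [[is_skin(r, g, b) for (r, g, b) in row] for row in image]
```

The per-pixel test is branch-only arithmetic, so proposal generation stays fast; the expensive classifier then only runs on the few regions the mask produces.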

    Automatic Recognition of Light Microscope Pollen Images

    This paper is a progress report on a project aimed at the realization of "AutoStage", a low-cost, automatic, trainable system for the recognition and counting of pollen. Previous work on image feature selection and classification has been extended by the design and integration of an XY stage that allows slides to be scanned, an autofocus system, and segmentation software. The results of a series of classification tests are reported and verified by comparison with the classification performance of expert palynologists. A number of technical issues are addressed, including pollen slide preparation and slide sampling protocols.

    Low complexity object detection with background subtraction for intelligent remote monitoring

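    Background subtraction of the kind named in the title above can be sketched with a running-average background model: pixels that differ from the slowly updated background by more than a threshold are marked as foreground objects. The learning rate and threshold below are illustrative, not taken from the paper.

```python
# Hypothetical sketch of low-complexity background subtraction: maintain an
# exponential running-average background image and mark pixels that differ
# from it by more than a threshold as foreground.

def update_background(bg, frame, alpha=0.05):
    """Blend the new frame into the background model (learning rate alpha)."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(br, fr)]
            for br, fr in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=30):
    """Pixels far from the background model are treated as moving objects."""
    return [[abs(f - b) > thresh for b, f in zip(br, fr)]
            for br, fr in zip(bg, frame)]
```

One multiply-add and one comparison per pixel is about as low-complexity as detection gets, which is why this family of methods suits intelligent remote monitoring on constrained hardware.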