
    Hierarchical Subquery Evaluation for Active Learning on a Graph

    To train good supervised and semi-supervised object classifiers, it is critical that we not waste the time of the human experts who are providing the training labels. Existing active learning strategies can have uneven performance, being efficient on some datasets but wasteful on others, or inconsistent just between runs on the same dataset. We propose perplexity-based graph construction and a new hierarchical subquery evaluation algorithm to combat this variability, and to release the potential of Expected Error Reduction. Under some specific circumstances, Expected Error Reduction has been one of the strongest-performing informativeness criteria for active learning. Until now, it has also been prohibitively costly to compute for sizeable datasets. We demonstrate our highly practical algorithm, comparing it to other active learning measures on classification datasets that vary in sparsity, dimensionality, and size. Our algorithm is consistent over multiple runs and achieves high accuracy, while querying the human expert for labels at a frequency that matches their desired time budget.
    Comment: CVPR 201
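    The abstract builds on the Expected Error Reduction (EER) criterion: query the point whose hypothetical labeling most reduces the classifier's expected error on the remaining unlabeled pool. Below is a minimal sketch of plain EER using scikit-learn logistic regression as a stand-in probabilistic classifier; the paper's perplexity-based graph construction and hierarchical subquery evaluation, which make EER tractable at scale, are not reproduced here.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def expected_error_reduction(X, y, labeled, pool):
        """Return the pool index whose hypothetical labeling minimizes the
        expected residual error (1 - max class probability) on the rest of
        the pool, weighted by the current model's label posterior."""
        clf = LogisticRegression().fit(X[labeled], y[labeled])
        best_idx, best_risk = None, np.inf
        for i in pool:
            p_i = clf.predict_proba(X[[i]])[0]      # P(label of i = c)
            risk = 0.0
            for c, p_c in zip(clf.classes_, p_i):
                y_aug = y.copy()
                y_aug[i] = c                        # hypothetical label
                aug = labeled + [i]
                clf_c = LogisticRegression().fit(X[aug], y_aug[aug])
                rest = [j for j in pool if j != i]
                proba = clf_c.predict_proba(X[rest])
                risk += p_c * np.sum(1.0 - proba.max(axis=1))
            if risk < best_risk:
                best_idx, best_risk = i, risk
        return best_idx
    ```

    Each candidate query requires retraining once per class, which is why naive EER is costly and why the paper's subquery hierarchy matters.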

    Semi-wildlife gait patterns classification using Statistical Methods and Artificial Neural Networks

    Several studies have focused on classifying behavioral patterns in wildlife and captive species to monitor their activities and thus to understand animal interactions and safeguard their welfare, for biological research or commercial purposes. The use of pattern recognition techniques, statistical methods, and Overall Dynamic Body Acceleration (ODBA) is well established for animal behavior recognition tasks. The reconfigurability and scalability of these methods are not trivial, since a new study has to be carried out whenever any of the configuration parameters changes. In recent years, the use of Artificial Neural Networks (ANNs) has increased for this purpose because they can be easily adapted when new animals or patterns are required. In this context, a comparative study is presented between a theoretical approach, in which statistical and spectral analyses were performed, and an embedded ANN implementation on a smart collar device placed on semi-wild animals. This system is part of a project whose main aim is to monitor wildlife in real time using a wireless sensor network infrastructure. Different classifiers were tested and compared for three different horse gaits. Experimental results in a real-time scenario achieved an accuracy of up to 90.7%, proving the efficiency of the embedded ANN implementation.
    Junta de Andalucía P12-TIC-1300
    Ministerio de Economía y Competitividad TEC2016-77785-
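    ODBA, mentioned above, is a standard accelerometry feature: the per-axis running mean is taken as the static (gravity) component, and ODBA is the sum of the absolute dynamic residuals over the three axes. A minimal sketch follows; the window length is an illustrative choice, not a value taken from the paper.

    ```python
    import numpy as np

    def odba(acc, win=25):
        """Overall Dynamic Body Acceleration for a (T, 3) accelerometer
        stream: subtract a per-axis running mean (the static component)
        and sum the absolute dynamic components across the three axes."""
        kernel = np.ones(win) / win
        static = np.vstack([np.convolve(acc[:, k], kernel, mode="same")
                            for k in range(3)]).T
        return np.abs(acc - static).sum(axis=1)
    ```

    Per-window ODBA statistics can then feed either the statistical classifiers or the ANN compared in the study.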

    Take an Emotion Walk: Perceiving Emotions from Gaits Using Hierarchical Attention Pooling and Affective Mapping

    We present an autoencoder-based semi-supervised approach to classify perceived human emotions from walking styles obtained from videos or motion-captured data and represented as sequences of 3D poses. Given the motion of each joint in the pose at each time step extracted from 3D pose sequences, we hierarchically pool these joint motions in a bottom-up manner in the encoder, following the kinematic chains in the human body. We also constrain the latent embeddings of the encoder to contain the space of psychologically-motivated affective features underlying the gaits. We train the decoder to reconstruct the motions per joint per time step in a top-down manner from the latent embeddings. For the annotated data, we also train a classifier to map the latent embeddings to emotion labels. Our semi-supervised approach achieves a mean average precision of 0.84 on the Emotion-Gait benchmark dataset, which contains both labeled and unlabeled gaits collected from multiple sources. We outperform current state-of-the-art algorithms for both emotion recognition and action recognition from 3D gaits by 7%--23% on the absolute. More importantly, we improve the average precision by 10%--50% on the absolute on classes that each make up less than 25% of the labeled part of the Emotion-Gait benchmark dataset.
    Comment: In proceedings of the 16th European Conference on Computer Vision, 2020. Total pages 18. Total figures 5. Total tables
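    The bottom-up pooling over kinematic chains can be illustrated with a fixed-pooling sketch. The 16-joint layout, the chain groupings, and the use of simple averaging are assumptions for illustration only; in the paper the pooling is learned inside the encoder of an autoencoder.

    ```python
    import numpy as np

    # Hypothetical 16-joint skeleton, grouped by kinematic chain.
    CHAINS = {
        "spine":     [0, 1, 8, 15],
        "left_arm":  [2, 3, 4],
        "right_arm": [5, 6, 7],
        "left_leg":  [9, 10, 11],
        "right_leg": [12, 13, 14],
    }

    def hierarchical_pool(joint_feats):
        """Bottom-up pooling: aggregate per-joint motion features within
        each kinematic chain, then concatenate the chain embeddings into
        a single body-level embedding."""
        parts = [joint_feats[idx].mean(axis=0) for idx in CHAINS.values()]
        return np.concatenate(parts)
    ```

    With per-joint feature vectors of dimension 8, the body embedding here has dimension 5 chains x 8 = 40; a learned variant would replace the mean with trainable pooling layers.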

    Cross-domain self-supervised complete geometric representation learning for real-scanned point cloud based pathological gait analysis

    Accurate lower-limb pose estimation is a prerequisite of skeleton-based pathological gait analysis. To achieve this goal in free-living environments for long-term monitoring, a single depth sensor has been proposed in research. However, the depth map acquired from a single viewpoint encodes only partial geometric information of the lower limbs and exhibits large variations across different viewpoints. Existing off-the-shelf three-dimensional (3D) pose tracking algorithms and public datasets for depth-based human pose estimation are mainly targeted at activity recognition applications. They are relatively insensitive to skeleton estimation accuracy, especially at the foot segments. Furthermore, acquiring ground-truth skeleton data for detailed biomechanics analysis also requires considerable effort. To address these issues, we propose a novel cross-domain self-supervised complete geometric representation learning framework, with knowledge transfer from the unlabelled synthetic point clouds of full lower-limb surfaces. The proposed method can significantly reduce the number of ground-truth skeletons (to only 1%) in the training phase, while ensuring accurate and precise pose estimation and capturing discriminative features across different pathological gait patterns compared to other methods
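    A common training objective when a network must recover a complete surface point cloud from a partial single-view scan is the symmetric Chamfer distance between the predicted and the full synthetic cloud. The abstract does not state the paper's exact loss, so this is only an illustrative sketch of that standard choice.

    ```python
    import numpy as np

    def chamfer_distance(a, b):
        """Symmetric Chamfer distance between point sets a (N, 3) and
        b (M, 3): mean nearest-neighbor distance in both directions, a
        common reconstruction loss for point-cloud completion."""
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return d.min(axis=1).mean() + d.min(axis=0).mean()
    ```

    The O(N*M) pairwise matrix is fine for a sketch; practical implementations use batched GPU kernels or KD-trees.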

    SmartWheels: Detecting urban features for wheelchair users’ navigation

    People with mobility impairments have heterogeneous needs and abilities while moving in an urban environment, and hence they require personalized navigation instructions. Providing these instructions requires knowledge of urban features like curb ramps, steps, or other obstacles along the way. Since these urban features are not available from maps and change over time, crowdsourcing this information from end-users is a scalable and promising solution. However, it is inconvenient for wheelchair users to input data while on the move, so an automatic crowdsourcing mechanism is needed. In this contribution we present SmartWheels, a solution to detect urban features by analyzing inertial sensor data produced by wheelchair movements. Activity recognition techniques are used to process the sensor data stream. SmartWheels is evaluated on data collected from 17 real wheelchair users navigating in a controlled environment (10 users) and in the wild (7 users). Experimental results show that SmartWheels is a viable solution to detect urban features, in particular by applying specific strategies based on the confidence assigned to predictions by the classifier
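    The pipeline described above, sliding-window features over the inertial stream followed by confidence-filtered classification, can be sketched as follows. The window length, the feature set, and the confidence threshold are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def windowed_features(signal, win=128, step=64):
        """Slice a 1-D inertial stream into overlapping windows and
        compute simple per-window features (mean, std, peak-to-peak)."""
        feats = []
        for start in range(0, len(signal) - win + 1, step):
            w = signal[start:start + win]
            feats.append([w.mean(), w.std(), w.max() - w.min()])
        return np.array(feats)

    def confident_predictions(proba, threshold=0.8):
        """Keep only windows whose top-class probability exceeds the
        threshold, mirroring the confidence-based strategy described
        in the abstract; returns a keep mask and the argmax labels."""
        keep = proba.max(axis=1) >= threshold
        return keep, proba.argmax(axis=1)
    ```

    Discarding low-confidence windows trades coverage for precision, which suits crowdsourced map annotation where false urban features are costly.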