    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the first-person perspective. Nowadays, this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Given this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches present particular combinations of image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition, and user-machine interaction. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges, and opportunities within the field.
    Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction

    Lost in spatial translation - A novel tool to objectively assess spatial disorientation in Alzheimer's disease and frontotemporal dementia

    Spatial disorientation is a prominent feature of early Alzheimer's disease (AD), attributed to degeneration of medial temporal and parietal brain regions, including the retrosplenial cortex (RSC). By contrast, frontotemporal dementia (FTD) syndromes show generally intact spatial orientation at presentation. However, no clinical tasks are currently administered routinely to assess spatial orientation objectively in these neurodegenerative conditions. In this study we investigated spatial orientation in 58 dementia patients and 23 healthy controls using a novel virtual supermarket task as well as voxel-based morphometry (VBM). We compared performance on this task with visual and verbal memory function, which has traditionally been used to discriminate between AD and FTD. Participants viewed a series of videos from a first-person perspective travelling through a virtual supermarket and were required to maintain orientation to a starting location. Analyses revealed significantly impaired spatial orientation in AD compared to FTD patient groups. Spatial orientation performance was found to discriminate AD and FTD patient groups to a very high degree at presentation. More importantly, integrity of the RSC was identified as a key neural correlate of orientation performance. These findings confirm that i) it is feasible to assess spatial orientation objectively via our novel supermarket task; ii) impaired orientation is a prominent feature that can be applied clinically to discriminate between AD and FTD; and iii) the RSC emerges as a critical biomarker for assessing spatial orientation deficits in these neurodegenerative conditions.

    Covariate conscious approach for Gait recognition based upon Zernike moment invariants

    Gait recognition, i.e. the identification of an individual from his or her walking pattern, is an emerging field. While existing gait recognition techniques perform satisfactorily under normal walking conditions, their performance tends to suffer drastically with variations in clothing and carrying conditions. In this work, we propose a novel covariate-cognizant framework to deal with the presence of such covariates. We describe gait motion by forming a single 2D spatio-temporal template from the video sequence, called the Average Energy Silhouette Image (AESI). Zernike moment invariants (ZMIs) are then computed to screen the parts of the AESI affected by covariates. Following this, features are extracted using the Spatial Distribution of Oriented Gradients (SDOGs) and the novel Mean of Directional Pixels (MDPs) methods. The obtained features are fused to form the final feature set. Experimental evaluation of the proposed framework on three publicly available datasets, i.e. CASIA dataset B, OU-ISIR Treadmill dataset B, and the USF Human-ID challenge dataset, against recently published gait recognition approaches demonstrates its superior performance.
    Comment: 11 pages
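    The core of the template step above is simple: average a sequence of aligned binary silhouettes so that each pixel's intensity reflects how often it is foreground across the gait cycle. A minimal sketch of that averaging, assuming silhouette extraction and size normalization are done upstream (the paper's exact preprocessing may differ):

    ```python
    import numpy as np

    def average_energy_silhouette(frames):
        """Average a sequence of binary silhouette frames into a single
        2D spatio-temporal template (a sketch of the AESI idea).

        frames: iterable of 2D arrays with values in {0, 1}, assumed
                pre-aligned and size-normalized.
        """
        frames = np.asarray(frames, dtype=float)
        # Pixel intensity = fraction of frames in which the pixel is foreground.
        return frames.mean(axis=0)

    # Toy example: three 4x4 silhouettes of a blob shifting rightwards.
    f1 = np.zeros((4, 4)); f1[1:3, 0:2] = 1
    f2 = np.zeros((4, 4)); f2[1:3, 1:3] = 1
    f3 = np.zeros((4, 4)); f3[1:3, 2:4] = 1
    aesi = average_energy_silhouette([f1, f2, f3])
    ```

    Pixels covered in every frame get intensity 1, transient pixels get fractional values; the ZMI screening and SDOG/MDP feature extraction described in the abstract would then operate on this template.
    
    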

    Automated Face Recognition: Challenges and Solutions

    Automated face recognition (AFR) aims to identify people in images or videos using pattern recognition techniques. It is widely used in applications ranging from social media to advanced authentication systems. While techniques for face recognition are well established, the automatic recognition of faces captured by digital cameras in unconstrained, real-world environments remains very challenging, since it involves significant variations in acquisition conditions as well as in facial expressions and pose. This chapter therefore introduces the topic of computer-automated face recognition in light of the main challenges in that research field and the solutions and applications developed using image processing and artificial intelligence methods.

    Extracting discriminative features using task-oriented gaze maps measured from observers for personal attribute classification

    We discuss how to reveal and use the gaze locations of observers who view pedestrian images for personal attribute classification. Observers look at informative regions when attempting to classify the attributes of pedestrians in images. Thus, we hypothesize that the regions in which observers' gaze locations cluster will contain discriminative features for classifiers of personal attributes. Our method acquires the distribution of gaze locations from several observers while they perform the task of manually classifying each personal attribute. We term this distribution a task-oriented gaze map. To extract discriminative features, we assign large weights to regions with a cluster of gaze locations in the task-oriented gaze map. In our experiments, observers mainly looked at different body-part regions when classifying each personal attribute. Furthermore, our experiments show that the gaze-based feature extraction method significantly improved the performance of personal attribute classification when combined with a convolutional neural network or a metric learning technique.
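    The weighting scheme described above can be sketched as a two-step operation: accumulate pooled fixation points into a smooth density map, then multiply a feature map by it so regions where gaze clusters dominate. The Gaussian-bump construction and the `sigma` parameter below are illustrative assumptions, not the paper's exact formulation:

    ```python
    import numpy as np

    def gaze_weight_map(fixations, shape, sigma=8.0):
        """Build a task-oriented gaze map by summing Gaussian bumps at
        observers' fixation points (hypothetical construction).

        fixations: list of (row, col) gaze locations pooled over observers.
        shape: (H, W) of the pedestrian image.
        """
        rr, cc = np.mgrid[0:shape[0], 0:shape[1]]
        gmap = np.zeros(shape, dtype=float)
        for r, c in fixations:
            gmap += np.exp(-((rr - r) ** 2 + (cc - c) ** 2) / (2 * sigma ** 2))
        # Normalize so weights lie in (0, 1], with 1 at the densest gaze cluster.
        return gmap / gmap.max()

    def weight_features(feature_map, gmap):
        """Emphasize features where gaze locations cluster."""
        return feature_map * gmap
    ```

    With a single fixation at (10, 10) on a 32x32 image, the map peaks at that pixel and decays smoothly away from it, so downstream features near gaze clusters keep their magnitude while peripheral ones are attenuated.
    
    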