
    Analysis of the hands in egocentric vision: A survey

    Egocentric vision (a.k.a. first-person vision, FPV) applications have thrived over the past few years, thanks to the availability of affordable wearable cameras and large annotated datasets. The position of the wearable camera (usually mounted on the head) allows recording exactly what the camera wearer has in front of them, in particular the hands and manipulated objects. This intrinsic advantage enables the study of the hands from multiple perspectives: localizing hands and their parts within the images; understanding what actions and activities the hands are involved in; and developing human-computer interfaces that rely on hand gestures. In this survey, we review the literature that focuses on the hands using egocentric vision, categorizing the existing approaches into: localization (where are the hands or parts of them?); interpretation (what are the hands doing?); and application (e.g., systems that use egocentric hand cues for solving a specific problem). Moreover, a list of the most prominent datasets with hand-based annotations is provided.

    Examining the use of visualisation methods for the design of interactive systems

    Human-Computer Interaction (HCI) design has historically involved people from different fields. Designing HCI systems with people of varying backgrounds and expertise can bring different perspectives and ideas, but discipline-specific language and design methods can hinder such collaborations. The application of visualisation methods is a way to overcome these challenges, but to date selection tools tend to focus on a single facet of HCI design methods, and no research has attempted to assemble a collection of HCI visualisation methods. To fill this gap, this research seeks to establish an inventory of HCI visualisation methods and to identify ways of selecting amongst them. Creating the inventory of HCI methods enables designers to discover and learn about methods that they may not have used before or be familiar with. Categorising the methods provides a structure for new and experienced designers to determine appropriate methods for their design project. The aim of this research is to support designers in the development of HCI systems through better selection and application of visualisation methods. This is achieved through four phases. In the first phase, three case studies are conducted to investigate the challenges and obstacles that influence the choice of a design approach in the development of HCI systems. The findings from the three case studies helped to form the design requirements for a visualisation methods selection and application guide. In the second phase, the Guide is developed. In the third phase, the Guide is evaluated: it is employed in the development of a serious training game to demonstrate its applicability. In the fourth phase, a user study is designed to evaluate the serious training game; through this evaluation, the Guide is validated. This research has contributed to the knowledge surrounding visualisation tools used in the design of interactive systems.
    The compilation of HCI visualisation methods establishes an inventory of methods for interaction design. The identification of Selection Approaches brings together the ways in which visualisation methods are organised and grouped. By mapping visualisation methods to Selection Approaches, this study provides a way for practitioners to select a visualisation method to support their design practice. The Selection Guide provides five filters, which help designers to identify suitable visualisation methods based on the nature of the design challenge. The Application Guide presents the methodology of each visualisation method in a consistent format, which eases method comparison and ensures there is comprehensive information for each method. A user study evaluating the serious training game is presented. Two learning objectives were identified and mapped to Bloom's Taxonomy to support a like-for-like comparison with future studies.

    People Identification Based on Person Image and Additional Physical Parameters Comparison

    This paper proposes an approach to people identification based on comparing a person's image together with two additional physical parameters: height and step length. People identification is very important in many areas of human life. There is a large number of identification (biometric) methods covering a wide scope, for example fingerprint identification, hand geometry identification, facial recognition, methods based on the human eye (retina and iris), and gait recognition. Most of these methods require some kind of interaction with the person, which can be a problem in many practical applications. Gait recognition is a method that does not require any such interaction. An approach to people identification based on gait recognition, which uses a person's silhouettes together with height and step length parameters, is proposed and presented in this paper.
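The comparison of height and step length described above can be sketched as a simple nearest-neighbour match against an enrolled gallery. This is a minimal illustrative sketch, not the paper's implementation: the gallery entries, the Euclidean distance metric, and the acceptance threshold are all assumptions.

```python
import math

# Hypothetical enrolled gallery: identity -> (height in metres, step length in metres).
# These values are illustrative assumptions, not data from the paper.
GALLERY = {
    "alice": (1.68, 0.71),
    "bob":   (1.82, 0.78),
    "carol": (1.75, 0.66),
}

def identify(height_m, step_len_m, threshold=0.06):
    """Return the closest gallery identity, or None if no entry is within threshold."""
    best_id, best_dist = None, float("inf")
    for person, (h, s) in GALLERY.items():
        # Euclidean distance in the (height, step length) feature space.
        dist = math.hypot(height_m - h, step_len_m - s)
        if dist < best_dist:
            best_id, best_dist = person, dist
    return best_id if best_dist <= threshold else None

print(identify(1.81, 0.77))  # close to the "bob" entry -> bob
print(identify(1.50, 0.90))  # far from every entry -> None
```

A real system would extract these measurements from the person's silhouette sequence and would likely normalise features and calibrate the threshold against impostor scores; the point here is only that two coarse physical parameters already define a usable comparison space.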

    4D Unconstrained Real-time Face Recognition Using a Commodity Depth Camera

    Robust unconstrained real-time face recognition still remains a challenge today. The recent addition to the market of lightweight commodity depth sensors brings new possibilities for human-machine interaction and therefore face recognition. This article accompanies the reader through a succinct survey of the current literature on face recognition in general and 3D face recognition using depth sensors in particular. Following an assessment of experiments performed using implementations of the most established algorithms, it can be concluded that the majority are biased towards qualitative performance and lack speed. A novel method which uses noisy data from such a commodity sensor to build dynamic internal representations of faces is proposed. Distances to a surface normal to the face are measured in real-time and used as input to a specific type of recurrent neural network, namely long short-term memory. This enables the prediction of facial structure in linear time and also increases robustness towards partial occlusions.
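The linear-time claim follows from the recurrent structure: an LSTM consumes one frame's distance measurements per step with constant work per frame, so processing a sequence costs time proportional to its length. A minimal sketch of a single LSTM cell driven by a sequence of per-frame distance vectors is shown below; all shapes, weights, and data are illustrative assumptions, not the article's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 8     # hidden/cell state size (assumed)
FEATURES = 4   # distance measurements per frame (assumed)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One weight matrix per gate (input, forget, output, candidate),
# each acting on the concatenation [h_prev, x_t].
W = {g: rng.standard_normal((HIDDEN, HIDDEN + FEATURES)) * 0.1 for g in "ifog"}
b = {g: np.zeros(HIDDEN) for g in "ifog"}

def lstm_step(x_t, h_prev, c_prev):
    """One LSTM time step: constant work regardless of sequence length."""
    z = np.concatenate([h_prev, x_t])
    i = sigmoid(W["i"] @ z + b["i"])   # input gate
    f = sigmoid(W["f"] @ z + b["f"])   # forget gate
    o = sigmoid(W["o"] @ z + b["o"])   # output gate
    g = np.tanh(W["g"] @ z + b["g"])   # candidate cell state
    c = f * c_prev + i * g             # updated cell state
    h = o * np.tanh(c)                 # updated hidden state
    return h, c

# Toy stand-in for a stream of noisy per-frame distance measurements.
sequence = rng.standard_normal((20, FEATURES))
h = np.zeros(HIDDEN)
c = np.zeros(HIDDEN)
for x_t in sequence:
    h, c = lstm_step(x_t, h, c)   # one constant-cost update per frame

print(h.shape)  # (8,)
```

In the article's setting, `h` would feed a downstream predictor of facial structure; because the state summarises all past frames, missing measurements from partial occlusions degrade rather than break the representation.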