
    Analysis of the hands in egocentric vision: A survey

    Egocentric vision (a.k.a. first-person vision - FPV) applications have thrived over the past few years, thanks to the availability of affordable wearable cameras and large annotated datasets. The position of the wearable camera (usually mounted on the head) allows recording exactly what the camera wearers have in front of them, in particular hands and manipulated objects. This intrinsic advantage enables the study of the hands from multiple perspectives: localizing hands and their parts within the images; understanding what actions and activities the hands are involved in; and developing human-computer interfaces that rely on hand gestures. In this survey, we review the literature that focuses on the hands using egocentric vision, categorizing the existing approaches into: localization (where are the hands or parts of them?); interpretation (what are the hands doing?); and application (e.g., systems that use egocentric hand cues to solve a specific problem). Moreover, a list of the most prominent datasets with hand-based annotations is provided.

    An Effective and Efficient Method for Detecting Hands in Egocentric Videos for Rehabilitation Applications

    Objective: Individuals with spinal cord injury (SCI) report upper limb function as their top recovery priority. To accurately represent the true impact of new interventions on patient function and independence, evaluation should occur in a natural setting. Wearable cameras can be used to monitor hand function at home, using computer vision to automatically analyze the resulting videos (egocentric video). A key step in this process, hand detection, is difficult to do robustly and reliably, hindering deployment of a complete monitoring system in the home and community. We propose an accurate and efficient hand detection method that uses a simple combination of existing detection and tracking algorithms. Methods: Detection, tracking, and combination methods were evaluated on a new hand detection dataset, consisting of 167,622 frames of egocentric videos collected from 17 individuals with SCI performing activities of daily living in a home simulation laboratory. Results: The F1-scores for the best detector and tracker alone (SSD and Median Flow) were 0.90 ± 0.07 and 0.42 ± 0.18, respectively. The best combination method, in which a detector was used to initialize and reset a tracker, resulted in an F1-score of 0.87 ± 0.07 while being two times faster than the fastest detector alone. Conclusion: The combination of the fastest detector and best tracker improved the accuracy over online trackers while improving the speed of detectors. Significance: The method proposed here, in combination with wearable cameras, will help clinicians directly measure hand function in a patient's daily life at home, enabling independence after SCI.
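    The combination strategy described in this abstract, a detector that initializes and resets a fast tracker, can be illustrated with a short sketch. The sketch below uses OpenCV's Median Flow tracker (available via opencv-contrib-python) and a hypothetical detect_hand callback standing in for the SSD detector; it is an assumption-laden illustration of the idea, not the authors' implementation.

    import cv2

    def create_medianflow_tracker():
        # Median Flow lives under cv2.legacy in recent OpenCV builds and
        # directly under cv2 in older ones (requires opencv-contrib-python).
        if hasattr(cv2, "legacy"):
            return cv2.legacy.TrackerMedianFlow_create()
        return cv2.TrackerMedianFlow_create()

    def detect_and_track_hands(video_path, detect_hand, redetect_every=10):
        # detect_hand(frame) -> (x, y, w, h) or None is a placeholder for any
        # detector (e.g., an SSD model); the tracker carries the box between
        # detections and is re-initialized whenever it fails or on schedule.
        cap = cv2.VideoCapture(video_path)
        tracker, frame_idx, boxes = None, 0, []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            box = None
            if tracker is not None:
                ok_trk, box = tracker.update(frame)
                if not ok_trk:
                    tracker, box = None, None  # tracking failed: force re-detection
            if tracker is None or frame_idx % redetect_every == 0:
                det = detect_hand(frame)       # detector initializes / resets the tracker
                if det is not None:
                    tracker = create_medianflow_tracker()
                    tracker.init(frame, tuple(int(v) for v in det))
                    box = det
            boxes.append(box)
            frame_idx += 1
        cap.release()
        return boxes

    Running the detector only every few frames, or when the tracker fails, is what yields the reported speed-up over running a detector on every frame.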

    Detecting Hands in Egocentric Videos: Towards Action Recognition

    Recently, there has been a growing interest in analyzing human daily activities from data collected by wearable cameras. Since the hands are involved in a vast set of daily tasks, detecting hands in egocentric images is an important step towards the recognition of a variety of egocentric actions. However, besides extreme illumination changes in egocentric images, hand detection is not a trivial task because of the intrinsically large variability of hand appearance. We propose a hand detector that exploits skin modeling for fast hand proposal generation and Convolutional Neural Networks for hand recognition. We tested our method on the UNIGE-HANDS dataset and showed that the proposed approach achieves competitive hand detection results.
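    The two-stage idea in this abstract, skin-colour modeling to generate fast proposals followed by a CNN that accepts or rejects each proposal, can be sketched roughly as follows. The fixed YCrCb skin range and the cnn_is_hand scoring callback are assumptions standing in for the paper's tuned skin model and trained network.

    import cv2
    import numpy as np

    # Rough YCrCb skin range; in practice these bounds would be tuned per dataset.
    SKIN_LOW = np.array([0, 133, 77], dtype=np.uint8)
    SKIN_HIGH = np.array([255, 173, 127], dtype=np.uint8)

    def skin_proposals(frame_bgr, min_area=900):
        # Generate candidate hand boxes from a simple skin-colour mask.
        ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
        mask = cv2.inRange(ycrcb, SKIN_LOW, SKIN_HIGH)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

    def detect_hands(frame_bgr, cnn_is_hand, score_thresh=0.5):
        # cnn_is_hand(crop_bgr) -> probability the crop contains a hand
        # (placeholder for any trained classifier).
        hands = []
        for (x, y, w, h) in skin_proposals(frame_bgr):
            crop = frame_bgr[y:y + h, x:x + w]
            if cnn_is_hand(crop) >= score_thresh:
                hands.append((x, y, w, h))
        return hands

    Restricting the CNN to skin-coloured proposals is what keeps this kind of detector fast compared with sliding-window or dense detection.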

    Designing an Egocentric Video-Based Dashboard to Report Hand Performance Measures for Outpatient Rehabilitation of Cervical Spinal Cord Injury

    Background: Functional use of the upper extremities (UEs) is a top recovery priority for individuals with cervical spinal cord injury (cSCI), but the inability to monitor recovery at home and limitations in hand function outcome measures impede optimal recovery. Objectives: We developed a framework using wearable cameras to monitor hand use at home and aimed to identify the best way to report information to clinicians. Methods: A dashboard was iteratively developed with clinician (n = 7) input through focus groups and interviews, creating low-fidelity prototypes based on recurring feedback until no new information emerged. Affinity diagramming was used to identify themes and subthemes from interview data. User stories were developed and mapped to specific features to create a high-fidelity prototype. Results: Useful elements identified for a dashboard reporting hand performance included summaries to interpret graphs, a breakdown of hand posture and activity to provide context, video snippets to qualitatively view hand use at home, patient notes to understand patient satisfaction or struggles, and time series graphing of metrics to measure trends over time. Conclusion: Involving end-users in the design process and breaking down user requirements into user stories helped identify the necessary interface elements for reporting hand performance metrics to clinicians. Clinicians recognized the dashboard's potential to monitor rehabilitation progress, provide feedback on hand use, and track progress over time. Concerns were raised about implementation into clinical practice; therefore, further inquiry is needed to determine the tool's feasibility and usefulness in clinical practice for individuals with UE impairments.

    Are You "Tilting at Windmills" or Undertaking a Valid Clinical Trial?

    In this review, several aspects surrounding the choice of a therapeutic intervention and the conduct of clinical trials are discussed. Some of the background for why human studies have evolved to their current state is also included. Specifically, the following questions have been addressed: 1) What criteria should be used to determine whether a scientific discovery or invention is worthy of translation to human application? 2) What recent scientific advance warrants a deeper understanding of clinical trials by everyone? 3) What are the different types and phases of a clinical trial? 4) What characteristics of a human disorder should be noted, tracked, or stratified for a clinical trial, and what inclusion/exclusion criteria are important to enrolling appropriate trial subjects? 5) What are the different study designs that can be used in a clinical trial program? 6) What confounding factors can alter the accurate interpretation of clinical trial outcomes? 7) What are the success rates of clinical trials and what can we learn from previous clinical trials? 8) What are the essential principles for the conduct of valid clinical trials?

    Hand contour detection in wearable camera video using an adaptive histogram region of interest

    BACKGROUND: Monitoring hand function at home is needed to better evaluate the effectiveness of rehabilitation interventions. Our objective is to develop wearable computer vision systems for hand function monitoring. The specific aim of this study is to develop an algorithm that can identify hand contours in video from a wearable camera that records the user's point of view, without the need for markers. METHODS: The two-step image processing approach for each frame consists of: (1) Detecting a hand in the image, and choosing one seed point that lies within the hand. This step is based on a priori models of skin colour. (2) Identifying the contour of the region containing the seed point. This is accomplished by adaptively determining, for each frame, the region within a colour histogram that corresponds to hand colours, and backprojecting the image using the reduced histogram. RESULTS: In four test videos relevant to activities of daily living, the hand detector classification accuracy was 88.3%. The contour detection results were compared to manually traced contours in 97 test frames, and the median F-score was 0.86. CONCLUSION: This algorithm will form the basis for a wearable computer-vision system that can monitor and log the interactions of the hand with its environment.
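    Step (2) above is closely related to standard histogram backprojection as implemented in OpenCV. The sketch below is a simplified stand-in: it builds the hue/saturation histogram from a small patch around the detected seed point rather than reproducing the paper's adaptive histogram-region selection, so the patch size and thresholds are assumptions.

    import cv2
    import numpy as np

    def hand_contour_from_seed(frame_bgr, seed_xy, patch=25):
        # Build a hue/saturation histogram from a patch around the seed point,
        # backproject it over the frame, and return the contour containing the seed.
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        x, y = seed_xy
        roi = hsv[max(0, y - patch):y + patch, max(0, x - patch):x + patch]
        hist = cv2.calcHist([roi], [0, 1], None, [30, 32], [0, 180, 0, 256])
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
        backproj = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)
        _, mask = cv2.threshold(backproj, 50, 255, cv2.THRESH_BINARY)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.pointPolygonTest(c, (float(x), float(y)), False) >= 0:
                return c  # contour that contains the seed point
        return None

    Recomputing the histogram on every frame mirrors the per-frame adaptive step described in the abstract, rather than relying on a single fixed skin model.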

    A comparison of extraneural approaches for selective recording in the peripheral nervous system

    The peripheral nervous system is a key target for the development of neural interfaces. However, recording from the peripheral nerves can be challenging, especially when chronic implantation is desired. Nerve cuffs are frequently employed, using either two or three contacts to provide a single recording channel. Advancements in manufacturing technology have enabled multi-contact cuffs, allowing measurement of both temporal (i.e., velocity) and spatial (i.e., location) information. Selective techniques have been developed with different time resolutions, but it is unclear how the number of contacts and their spatial configuration affect their performance. Thus, this paper investigates two extraneural recording techniques (LDA and spatiotemporal signatures) and compares them using recordings made from the sciatic nerve of rats with high-density (HD, 56 contacts), reduced-HD (16 contacts), and low-density (LD, 16 contacts) datasets. Performance of the two techniques was evaluated using classification accuracy and F1-score. Both techniques showed the expected improvement in classification accuracy with increasing contact density: the spatiotemporal signature approach improved by 21.6% (LD to HD) to 24.6% (reduced-HD to HD), the LDA approach improved by 2.9% (reduced-HD to HD) to 41.3% (LD to HD), and the two techniques had comparable results in both the LD and HD datasets.
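    The evaluation protocol described here, classifying multi-contact cuff recordings and reporting classification accuracy and F1-score, can be sketched with scikit-learn's LDA implementation. The random feature matrix below merely stands in for per-event features across the 56 HD contacts, since the abstract does not describe the actual feature extraction; only the cross-validated accuracy/F1 reporting pattern is illustrated.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_validate

    # Hypothetical data: one row per detected neural event, one column per cuff
    # contact (56 for the HD configuration), with labels giving the pathway each
    # event is attributed to.
    rng = np.random.default_rng(0)
    n_events, n_contacts, n_classes = 600, 56, 3
    X = rng.normal(size=(n_events, n_contacts))
    y = rng.integers(0, n_classes, size=n_events)

    scores = cross_validate(LinearDiscriminantAnalysis(), X, y, cv=5,
                            scoring=("accuracy", "f1_macro"))
    print("accuracy: %.3f  F1: %.3f" % (scores["test_accuracy"].mean(),
                                        scores["test_f1_macro"].mean()))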

    Tutorial: A guide to techniques for analysing recordings from the peripheral nervous system

    Get PDF
    The nervous system, through a combination of conscious and automatic processes, enables the regulation of the body and its interactions with the environment. The peripheral nervous system is an excellent target for technologies that seek to modulate, restore or enhance these abilities, as it carries sensory and motor information that most directly relates to a target organ or function. However, many applications require a combination of both an effective peripheral nerve interface and effective signal processing techniques to provide selective and stable recordings. While there are many reviews on the design of peripheral nerve interfaces, reviews of data analysis techniques and translational considerations are limited. Thus, this tutorial aims to support new and existing researchers in understanding the general guiding principles, and introduces a taxonomy of electrode configurations, techniques and translational models to consider.