An Effective and Efficient Method for Detecting Hands in Egocentric Videos for Rehabilitation Applications
Objective: Individuals with spinal cord injury (SCI) report upper limb
function as their top recovery priority. To accurately represent the true
impact of new interventions on patient function and independence, evaluation
should occur in a natural setting. Wearable cameras can be used to monitor hand
function at home, using computer vision to automatically analyze the resulting
videos (egocentric video). A key step in this process, hand detection, is
difficult to do robustly and reliably, hindering deployment of a complete
monitoring system in the home and community. We propose an accurate and
efficient hand detection method that uses a simple combination of existing
detection and tracking algorithms. Methods: Detection, tracking, and
combination methods were evaluated on a new hand detection dataset, consisting
of 167,622 frames of egocentric video collected from 17 individuals with SCI
performing activities of daily living in a home simulation laboratory. Results:
The F1-scores for the best detector and tracker alone (SSD and Median Flow)
were 0.90 ± 0.07 and 0.42 ± 0.18, respectively. The best combination
method, in which a detector was used to initialize and reset a tracker,
resulted in an F1-score of 0.87 ± 0.07 while being two times faster than the
fastest detector alone. Conclusion: The combination of the fastest detector and
best tracker improved the accuracy over online trackers while improving the
speed of detectors. Significance: The method proposed here, in combination with
wearable cameras, will help clinicians directly measure hand function in a
patient's daily life at home, enabling independence after SCI.
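A minimal sketch of the detect-then-track pattern this abstract describes, assuming an opencv-contrib build of OpenCV (which exposes Median Flow via cv2.legacy) and a hypothetical detect_hands() stub standing in for the trained SSD detector: the detector initializes the tracker and resets it whenever tracking fails or a refresh interval elapses.

```python
# Sketch of the detect-then-track combination described above. The
# detect_hands() stub is hypothetical; plug in a trained SSD model.
# Assumes opencv-contrib-python, which provides cv2.legacy trackers.
import cv2


def detect_hands(frame):
    """Hypothetical stand-in for the SSD hand detector.

    Should return a bounding box (x, y, w, h), or None if no hand is found.
    """
    raise NotImplementedError("plug in a trained hand detector here")


def track_hands(video_path, refresh_every=30):
    cap = cv2.VideoCapture(video_path)
    tracker = None
    frames_since_detect = 0
    boxes = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        box = None
        if tracker is not None and frames_since_detect < refresh_every:
            ok, box = tracker.update(frame)   # cheap per-frame tracking
            if ok:
                frames_since_detect += 1
            else:
                box = None                    # tracker lost the hand
        if box is None:                       # (re)initialize from the detector
            box = detect_hands(frame)
            if box is not None:
                tracker = cv2.legacy.TrackerMedianFlow_create()
                tracker.init(frame, tuple(int(v) for v in box))
                frames_since_detect = 0
            else:
                tracker = None                # no hand visible this frame
        boxes.append(box)
    cap.release()
    return boxes
```

Because a tracker update is far cheaper than a detector pass, running the detector only for initialization and resets is what yields the reported speedup over running the detector on every frame.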
A Wearable Computer Vision System for Monitoring Hand Use at Home
Impairments in hand function lead to reduced independence and quality of life after cervical spinal cord injury (SCI). In order to develop effective rehabilitation interventions for individuals with cervical SCI, it is important to assess hand function throughout the rehabilitation process. Currently, the efficacy of new treatments is measured by assessments limited to a controlled setting or based on self-report; there is no viable method to collect quantitative information once the patient has returned to the community. This thesis attempts to address this gap by developing a computer vision-based wearable camera system for monitoring hand use. Our research involved (1) the collection of egocentric video that represents activities of daily living, (2) the development of an algorithm that captures interactions between the hand and objects in the environment, and (3) the evaluation of the system in both laboratory and home settings.
Four studies were conducted, involving three video datasets (20 able-bodied participants and 17 participants with SCI in a home simulation laboratory, as well as 3 participants with SCI in their homes). We introduced the concept of hand-object interaction detection, defined as a binary decision about whether or not the hand is manipulating an object for a functional purpose. The datasets were used in the development and evaluation of an algorithmic pipeline consisting of hand detection and segmentation, followed by hand-object interaction detection. For this last step, a random forest classifier was trained on hand motion, hand shape and scene colour features. The frame-by-frame binary output over time was further analysed to extract three functional hand-use metrics: (1) the amount of total interaction as a percentage of testing time, (2) the average duration of interactions in seconds, and (3) the number of interactions per hour. The final study investigated the views of participants with SCI on the use of wearable cameras.
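A minimal sketch (not the thesis code) of how the three hand-use metrics can be derived from a frame-by-frame binary interaction signal; the function and parameter names are mine.

```python
# Derive the three functional hand-use metrics from per-frame 0/1 labels.
import numpy as np


def hand_use_metrics(interaction, fps):
    """interaction: 1-D array of per-frame 0/1 labels; fps: video frame rate."""
    interaction = np.asarray(interaction, dtype=int)
    total_time_s = len(interaction) / fps

    # Interaction bouts are runs of consecutive 1s; find their edges.
    padded = np.concatenate(([0], interaction, [0]))
    edges = np.diff(padded)
    starts = np.where(edges == 1)[0]
    ends = np.where(edges == -1)[0]
    durations_s = (ends - starts) / fps

    return {
        "percent_interaction": 100.0 * interaction.mean(),               # metric (1)
        "mean_duration_s": float(durations_s.mean()) if len(starts) else 0.0,  # metric (2)
        "interactions_per_hour": len(starts) / (total_time_s / 3600.0),  # metric (3)
    }
```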
With appropriate strategies determined by input from individuals with SCI, this thesis demonstrates the potential of a wearable egocentric camera as a unique tool to allow researchers and clinicians to gauge the user's level of independence at home in activities involving upper limb function.
Interaction Detection in Egocentric Video: Toward a Novel Outcome Measure for Upper Extremity Function
In order to develop effective interventions for restoring upper extremity function after cervical spinal cord injury, tools are needed to accurately measure hand function throughout the rehabilitation process. However, there is currently no suitable method to collect information about hand function in the community, when patients are not under direct observation of a clinician. We propose a wearable system that can monitor functional hand use using computer vision techniques applied to egocentric camera videos. To this end, in this study we demonstrate the feasibility of detecting interactions of the hand with objects in the environment from egocentric video. The system consists of a preprocessing step where the hand is segmented out from the background. The algorithm then extracts features associated with hand-object interactions. This includes comparing motion cues in the region near the hand (i.e., where the object is most likely to be located) to the motion of the hand itself, as well as to the motion of the background. Features representing hand shape are also extracted. The features serve as inputs to a random forest classifier, which was tested with a dataset of 14 activities of daily living as well as noninteractive tasks in five environments (total video duration of 44.16 min). The average F-score for the classifier was 0.85 for leave-one-activity-out evaluation on our dataset and 0.91 for a publicly available set (1.72 min) when filtered with a moving average. These results suggest that using egocentric video to monitor functional hand use at home is feasible. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada (RGPIN-2014-05498) and the Rick Hansen Institute (G2015-30).
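A hedged sketch of the classification stage only: a random forest over precomputed per-frame motion, shape and colour features, with the binary output smoothed by a moving-average filter as described. Feature extraction is assumed done elsewhere, and the window length and re-binarization threshold are assumptions.

```python
# Classify per-frame interaction features, then smooth the binary output
# with a moving average as the abstract describes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def train_and_smooth(X_train, y_train, X_test, window=15):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)                 # y: per-frame 0/1 interaction labels
    raw = clf.predict(X_test).astype(float)   # frame-by-frame binary decisions
    kernel = np.ones(window) / window
    smoothed = np.convolve(raw, kernel, mode="same")  # moving-average filter
    return (smoothed >= 0.5).astype(int)      # re-binarize the smoothed signal
```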
Influence of upper limb movement patterns on accelerometer measurements: a pediatric case series
Objective: Previous studies showed success using wrist-worn accelerometers to monitor upper-limb activity in adults and children with hemiparesis. However, a knowledge gap exists regarding which specific joint movements are reflected in accelerometry readings. We conducted a case series intended to enrich data interpretation by characterizing the influence of different pediatric upper-limb movements on accelerometry data. Approach: The study recruited six typically developing children and five children with hemiparetic cerebral palsy. The participants performed unilateral and bilateral activities, and their upper limb movements were measured with wrist-worn accelerometers and the Microsoft Kinect, a markerless motion-capture system that tracks skeletal data. The Kinect data were used to quantify specific upper limb movements through joint angle calculations (trunk, shoulder, elbow and wrist). Correlation coefficients (r) were calculated to quantify the influence of individual joint movements on accelerometry data. Regression analyses were performed to examine multi-joint patterns and explain variability across different activities and participants. Main results: Single-joint correlation results suggest that pediatric wrist-worn accelerometry data are not biased to particular individual joint movements. Rather, the accelerometry data could best be explained by the movements of the joints with the most functional relevance to the performed activity. Significance: This case series provides deeper insight into the interpretation of wrist-worn accelerometry data, and supports the use of this tool in quantifying functional upper-limb movements in pediatric populations. This work was supported by funding from the Toronto Rehabilitation Institute – University Health Network and an Undergraduate Summer Research Award from the Natural Sciences and Engineering Research Council of Canada.
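A minimal sketch of the single-joint correlation analysis, assuming the Kinect joint angles and wrist accelerometer signals have already been time-synchronized and resampled to a common rate; the accelerometer magnitude as an activity proxy is my assumption, not the study's exact pipeline.

```python
# Pearson r between one joint's angular speed and wrist accelerometer activity.
import numpy as np
from scipy.stats import pearsonr


def joint_accel_correlation(joint_angle_deg, accel_xyz, fs):
    """joint_angle_deg: (T,) angles; accel_xyz: (T, 3) accelerations; fs: Hz."""
    angular_speed = np.abs(np.gradient(joint_angle_deg)) * fs  # deg/s
    accel_mag = np.linalg.norm(accel_xyz, axis=1)              # activity proxy
    return pearsonr(angular_speed, accel_mag)                  # (r, p-value)
```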
Views of individuals with spinal cord injury on the use of wearable cameras to monitor upper limb function in the home and community
Objective: Hand function impairment after cervical spinal cord injury (SCI) can significantly reduce independence. Unlike current hand function assessments, wearable camera systems could potentially measure functional hand usage at home, and thus benefit the development of neurorehabilitation strategies. The objective of this study was to understand the views of individuals with SCI on the use of wearable cameras to track neurorehabilitation progress and outcomes in the community.
Design: Questionnaires.
Setting: Home simulation laboratory.
Participants: 15 individuals with cervical SCI.
Outcome Measures: After using wearable cameras in the simulated home environment, participants completed custom questionnaires, comprising open-ended and structured questions.
Results: Participants showed relatively low concerns related to data confidentiality when first-person videos are used by clinicians (1.93 ± 1.28 on a 5-point Likert scale) or researchers (2.00 ± 1.31). Storing only automatically extracted metrics reduced privacy concerns. Though participants reported moderate privacy concerns (2.53 ± 1.51) about wearing a camera in daily life due to certain sensitive situations (e.g., washrooms), they felt that information about their hand usage at home is useful for researchers (4.73 ± 0.59), clinicians (4.47 ± 0.83), and themselves (4.40 ± 0.83). Participants found the system moderately comfortable (3.27 ± 1.44), but expressed low desire to use it frequently (2.87 ± 1.36).
Conclusion: Despite some privacy and comfort concerns, participants believed that the information obtained would be useful. With appropriate strategies to minimize the data stored and recording duration, wearable cameras can be a well-accepted tool to track function in the home and community after SCI. This study was supported by the Rick Hansen Institute (G2015-30) and the Natural Sciences and Engineering Research Council of Canada (RGPIN-2014-05498). The authors wish to thank the study participants.
Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home
Abstract
Background
Current upper extremity outcome measures for persons with cervical spinal cord injury (cSCI) lack the ability to directly collect quantitative information in home and community environments. A wearable first-person (egocentric) camera system is presented that aims to monitor functional hand use outside of clinical settings.
Methods
The system is based on computer vision algorithms that detect the hand, segment the hand outline, distinguish the user’s left or right hand, and detect functional interactions of the hand with objects during activities of daily living. The algorithm was evaluated using egocentric video recordings from 9 participants with cSCI, obtained in a home simulation laboratory. The system produces a binary hand-object interaction decision for each video frame, based on features reflecting motion cues of the hand, hand shape and colour characteristics of the scene.
Results
The output from the algorithm was compared with a manual labelling of the video, yielding F1-scores of 0.74 ± 0.15 for the left hand and 0.73 ± 0.15 for the right hand. From the resulting frame-by-frame binary data, functional hand use measures were extracted: the amount of total interaction as a percentage of testing time, the average duration of interactions in seconds, and the number of interactions per hour. Moderate and significant correlations were found when comparing these output measures to the results of the manual labelling, with ρ = 0.40, 0.54 and 0.55 respectively.
Conclusions
These results demonstrate the potential of a wearable egocentric camera for capturing quantitative measures of hand use at home.
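A brief sketch of the evaluation reported above, assuming frame-level agreement is scored with F1 and the derived hand-use measures are compared across videos with Spearman's rank correlation (the ρ values reported).

```python
# Compare algorithm output against the manual labelling: frame-level F1,
# plus rank correlation of the derived hand-use measures across videos.
from scipy.stats import spearmanr
from sklearn.metrics import f1_score


def evaluate(pred_frames, true_frames, pred_metric, true_metric):
    """pred/true_frames: binary per-frame labels; *_metric: one value per video."""
    f1 = f1_score(true_frames, pred_frames)
    rho, p = spearmanr(pred_metric, true_metric)
    return f1, rho, p
```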
What is the prevalence of prosopagnosia? An empirical assessment of different diagnostic cutoffs
The prevalence of developmental prosopagnosia (DP), a lifelong deficit in face recognition, is widely reported to be 2-2.5%. However, DP has been diagnosed in different ways across studies, resulting in differing prevalence rates. In the current investigation, we estimated the range of DP prevalence by administering well-validated objective and subjective face recognition measures to an unselected web-based sample of 3,116 18-55-year-olds and applying DP diagnostic cutoffs from the last 13 years. Estimated prevalence rates ranged from 0.64-5.42% when using a z-score approach and 0.13-2.95% when using a percentile approach, with the cutoffs most commonly used by researchers yielding a prevalence of 0.93% (z-score approach; 0.45% with percentiles). We next used multiple cluster analyses to examine whether there was a natural grouping of poorer face recognizers, but failed to find consistent grouping beyond those with generally above- versus below-average face recognition. Lastly, we investigated whether DP studies with more relaxed diagnostic cutoffs were associated with better performance on the Cambridge Face Perception Test. In a sample of 43 studies, there was no significant association between diagnostic strictness and DP face perception accuracy (Kendall's tau-b correlation: τb = .176 for z-score cutoffs; τb = .111 for percentile cutoffs). Together, these results suggest that researchers have used more conservative DP diagnostic cutoffs than the widely reported 2-2.5% prevalence implies. We discuss the strengths and weaknesses of using more inclusive cutoffs, such as identifying mild and major forms of DP based on the DSM-5.
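A small sketch contrasting the two diagnostic approaches compared above, applied to a vector of face-recognition scores (lower = worse). The cutoff values are illustrative, not those of any particular DP study, and in practice thresholds are usually derived from a separate normative sample rather than the study sample itself.

```python
# Estimate DP prevalence under a z-score cutoff and a percentile cutoff.
import numpy as np


def prevalence_estimates(scores, norm_scores=None,
                         z_cutoff=-2.0, percentile_cutoff=2.0):
    scores = np.asarray(scores, dtype=float)
    norms = scores if norm_scores is None else np.asarray(norm_scores, dtype=float)
    # z-score approach: standardize against the normative sample.
    z = (scores - norms.mean()) / norms.std(ddof=1)
    prev_z = 100.0 * np.mean(z <= z_cutoff)
    # Percentile approach: threshold at a low percentile of the norms.
    threshold = np.percentile(norms, percentile_cutoff)
    prev_pct = 100.0 * np.mean(scores <= threshold)
    return prev_z, prev_pct
```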
Deep Learning-based Detection of Intravenous Contrast Enhancement on CT Scans
Identifying the presence of intravenous contrast material on CT scans is an important component of data curation for medical imaging-based artificial intelligence model development and deployment. Use of intravenous contrast material is often poorly documented in imaging metadata, necessitating impractical manual annotation by clinician experts. The authors developed a convolutional neural network (CNN)-based deep learning platform to identify intravenous contrast enhancement on CT scans. For model development and validation, the authors used six independent datasets of head and neck (HN) and chest CT scans, totaling 133 480 axial two-dimensional sections from 1979 scans, which were manually annotated by clinical experts. Five CNN models were first trained on HN scans for contrast enhancement detection. Model performances were evaluated at the patient level on a holdout set and an external test set. Models were then fine-tuned on chest CT data and externally validated. This study found that Digital Imaging and Communications in Medicine metadata tags for intravenous contrast material were missing or erroneous for 1496 scans (75.6%). An EfficientNetB4-based model showed the best performance, with areas under the curve (AUCs) of 0.996 and 1.0 in the HN holdout (n = 216) and external (n = 595) sets, respectively, and AUCs of 1.0 and 0.980 in the chest holdout (n = 53) and external (n = 402) sets, respectively. This automated, scan-to-prediction platform is highly accurate at CT contrast enhancement detection and may be helpful for artificial intelligence model development and clinical application. Keywords: CT, Head and Neck, Supervised Learning, Transfer Learning, Convolutional Neural Network (CNN), Machine Learning Algorithms, Contrast Material. Supplemental material is available for this article. © RSNA, 2022
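A hedged sketch of the transfer-learning setup the study describes: an EfficientNetB4 backbone with a binary sigmoid head classifying each axial section, with section probabilities pooled to a scan-level score. The mean-probability pooling and all hyperparameters here are assumptions, not the authors' published configuration.

```python
# EfficientNetB4 transfer learning for per-section contrast detection.
import tensorflow as tf


def build_contrast_model(input_shape=(380, 380, 3)):
    base = tf.keras.applications.EfficientNetB4(
        include_top=False, weights="imagenet", input_shape=input_shape)
    base.trainable = False  # freeze backbone; unfreeze upper blocks to fine-tune
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])
    return model


def scan_level_score(model, sections):
    """sections: (N, H, W, 3) preprocessed axial sections from one scan."""
    probs = model.predict(sections, verbose=0)
    return float(probs.mean())  # assumed pooling: mean section probability
```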