
    Deep Learning based Real-time Recognition of Dynamic Finger Gestures using a Data Glove

    In this article, real-time dynamic finger gesture recognition using a soft-sensor-embedded data glove is presented; the glove measures the metacarpophalangeal (MCP) and proximal interphalangeal (PIP) joint angles of the five fingers. In the field of gesture recognition, a challenging problem is separating meaningful dynamic gestures from a continuous data stream. Unconscious hand motions or sudden tremors, which can easily lead to segmentation ambiguity, make this problem difficult. Furthermore, hand shapes and speeds differ between users performing the same dynamic gesture, and even gestures made by a single user often vary. To separate meaningful dynamic gestures, we propose a deep learning-based gesture spotting algorithm that detects the start and end of a gesture sequence in a continuous data stream. The gesture spotting algorithm takes window data and estimates a scalar value named the gesture progress sequence (GPS), a quantity that represents how far a gesture has progressed. Moreover, to address the gesture variation problem, we propose a sequence simplification algorithm and a deep learning-based gesture recognition algorithm. The three proposed algorithms (gesture spotting, sequence simplification, and gesture recognition) are unified into a real-time gesture recognition system, which was tested with 11 dynamic finger gestures in real time. The proposed system took only 6 ms to estimate a GPS value and no more than 12 ms to recognize a completed gesture in real time.
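
    A minimal sketch of the window-based spotting loop described above, written in Python. The window length, the GPS thresholds, and the GPS model itself are placeholders and assumptions, not values or architectures taken from the article.

    import numpy as np

    WINDOW = 30                 # assumed number of joint-angle samples per window
    START_T, END_T = 0.1, 0.9   # assumed GPS thresholds for gesture start/end

    def spot_gestures(stream, gps_model):
        """Yield (start, end) index pairs for spotted gestures.

        stream:    (T, 10) array of MCP/PIP joint angles from the glove
        gps_model: callable mapping a (WINDOW, 10) block to a scalar GPS in
                   [0, 1]; stands in for the trained deep network
        """
        state, start = "idle", 0
        for t in range(WINDOW, len(stream) + 1):
            gps = gps_model(stream[t - WINDOW:t])
            if state == "idle" and gps > START_T:
                state, start = "active", t - WINDOW   # gesture start spotted
            elif state == "active" and gps > END_T:
                state = "done"                        # gesture end spotted
                yield start, t
            elif state == "done" and gps < START_T:
                state = "idle"                        # re-arm once GPS falls back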

    Does Order Matter? Investigating the Effect of Sequence on Glance Duration During On-Road Driving

    Previous literature has shown that vehicle crash risk increases as drivers' off-road glance duration increases. Many factors influence drivers' glance duration, such as individual differences, the driving environment, and task characteristics. Theories and past studies suggest that glance duration increases as a task progresses, but the exact relationship between glance sequence and glance duration is not fully understood. The purpose of this study was to examine the effect of glance sequence on glance duration among drivers completing a visual-manual radio tuning task and an auditory-vocal based multi-modal navigation entry task. Eighty participants drove a vehicle on urban highways while completing radio tuning and navigation entry tasks. Forty participants drove under an experimental protocol that required three button presses followed by rotation of a tuning knob to complete the radio tuning task, while the other forty completed the task with one fewer button press. Multiple statistical analyses were conducted to measure the effect of glance sequence on glance duration. Results showed that, across both tasks and a variety of statistical tests, glance sequence had inconsistent effects on glance duration: the effects varied with the number of glances, the task type, and the data set being evaluated. Results suggest that other aspects of the task, as well as interface design, affect glance duration and should be considered when examining driver attention or the lack thereof. All in all, interface design and task characteristics have a more influential impact on glance duration than glance sequence, suggesting that classical design considerations affecting driver attention, such as the size and location of buttons, remain fundamental in designing in-vehicle interfaces.
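
    As a purely illustrative sketch (not the authors' analysis code), one conventional way to test whether glance order predicts glance duration while accounting for repeated measures per driver is a mixed-effects model; the file and column names below are assumptions.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical glance-level data: one row per glance with columns
    # driver_id, task ('radio' or 'nav'), glance_order, duration_s
    glances = pd.read_csv("glance_data.csv")

    # Glance duration regressed on glance order and task, with a per-driver
    # random intercept to account for repeated measures
    model = smf.mixedlm("duration_s ~ glance_order * task",
                        data=glances, groups=glances["driver_id"])
    print(model.fit().summary())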

    A soft sensor-based three-dimensional (3-D) finger motion measurement system

    In this study, a soft sensor-based three-dimensional (3-D) finger motion measurement system is proposed. The sensors, made of the soft material Ecoflex, comprise embedded microchannels filled with a conductive liquid metal (EGaIn). The superior elasticity, light weight, and sensitivity of soft sensors allow them to be embedded in environments in which conventional sensors cannot. Complicated finger joints, such as the carpometacarpal (CMC) joint of the thumb, are modeled to specify the locations of the sensors. Algorithms to decouple the signals from the soft sensors are proposed to extract the pure flexion, extension, abduction, and adduction joint angles. The performance of the proposed system and algorithms is verified by comparison with a camera-based motion capture system.
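
    A minimal sketch of one possible decoupling step, assuming the mapping from coupled sensor readings to joint angles is approximately linear; the study's actual kinematic model and decoupling algorithm may differ.

    import numpy as np

    def calibrate(readings, angles):
        """Least-squares fit of a matrix A such that angles ~= readings @ A.

        readings: (N, n_sensors) coupled soft-sensor signals
        angles:   (N, n_joints) reference joint angles, e.g. from a
                  camera-based motion capture system
        """
        A, *_ = np.linalg.lstsq(readings, angles, rcond=None)
        return A

    def decouple(reading, A):
        """Map one coupled sensor sample to flexion/extension and
        abduction/adduction joint angles."""
        return np.asarray(reading) @ A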

    Analysis of Drivers' Head and Eye Movement Correspondence: Predicting Drivers' Glance Location Using Head Rotation Data

    The relationship between a driver's glance pattern and corresponding head rotation is not clearly defined. Head rotation and eye glance data drawn from a study conducted by the Virginia Tech Transportation Institute in support of methods development for the Strategic Highway Research Program (SHRP 2) naturalistic driving study were assessed. The data were utilized as input to classifiers that predicted glance allocation to the road and the center stack. A predictive accuracy of 83% was achieved with Hidden Markov Models. Results suggest that although there are individual differences in head-eye correspondence while driving, head-rotation data may be a useful predictor of glance location. Future work needs to investigate the correspondence across a wider range of individuals, traffic conditions, secondary tasks, and areas of interest.
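
    A hedged sketch of the idea in Python (not the study's actual pipeline): fit a two-state Gaussian HMM over head-rotation features and interpret the hidden states as 'road' versus 'center stack' glances. The feature layout and the state-to-label mapping rule are assumptions.

    import numpy as np
    from hmmlearn import hmm

    # Hypothetical input: (T, 2) array of head yaw/pitch angles for one drive
    X = np.load("head_rotation.npy")

    model = hmm.GaussianHMM(n_components=2, covariance_type="full", n_iter=50)
    model.fit(X)               # unsupervised fit to the head-rotation stream
    states = model.predict(X)  # Viterbi-decoded hidden-state sequence

    # Assumed mapping: the state with the larger mean yaw magnitude is taken
    # to correspond to center-stack glances
    center_stack = int(np.argmax(np.abs(model.means_[:, 0])))
    glance_is_center_stack = states == center_stack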

    The Relation Between the Driver Behavior Questionnaire, Demographics, and Driving History

    This paper presents an analysis of responses to the Driver Behavior Questionnaire (DBQ) and self-reported history of the frequency of crashes, citations, and warnings in a sample of 562 drivers. The sample was closely balanced by gender and distributed in a broadly proportional manner across an age range of 20 to 69 years. As has been previously reported, age and gender were found to be related to both DBQ scores and crash rates. The size and demographic distribution of the sample allowed an analysis of the relationships of DBQ subscale scores with crashes, citations, and warnings while controlling for age and gender. The results show that higher violation scores are positively associated with increases in self-reported crash and citation likelihoods; the less serious but apparently more common experience of receiving a warning for one's driving behavior has a significant positive association with both violation and lapse scores. Given the sample size and age/gender balance, these findings can be considered more relevant to the overall driving population than those of previous research.
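
    Illustrative only (not the paper's analysis): one common way to relate DBQ subscale scores to self-reported crash counts while controlling for age and gender is a Poisson regression; the file and column names are assumptions.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical per-respondent data with columns: crashes, violations,
    # lapses, errors, age, gender
    dbq = pd.read_csv("dbq_responses.csv")

    crash_model = smf.poisson(
        "crashes ~ violations + lapses + errors + age + C(gender)", data=dbq
    ).fit()
    print(crash_model.summary())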

    Human Rights Versus National Security in Public Opinion on Foreign Affairs: South Korean Views of North Korea, 2008-2019

    While human rights are an integral part of democratic rule, the extent to which public opinion in democracies prioritizes human rights in foreign countries relative to other competing foreign policy priorities is not clear. This is particularly the case when a country poses a serious security threat and there are incentives to improve relations with the regime in power. To assess whether and how the public values human rights vis-a-vis national security in foreign affairs, this paper utilizes survey questions that capture the public's relative preferences between the two in South Korean public opinion regarding relations with North Korea. The findings shed light on the trade-off involved in attempts to improve relations with a regime that is both a serious security threat and a perpetrator of grave human rights violations.

    Predicting Secondary Task Involvement and Differences in Task Modality Using Field Highway Driving Data

    This study examined differences in the impact of visual-manual and auditory-vocal based radio tuning tasks on field driving performance. Engagement in visual-manual tuning tasks was associated with higher steering wheel reversal rates than baseline driving. Both visual-manual and auditory-vocal based tuning tasks were associated with higher variance in speed maintenance compared with baseline driving. Models were built that used driving performance measurements as input to a classifier aiming to distinguish between three states (i.e., baseline driving, visual-manual tuning, and auditory-vocal tuning). Baseline driving could be classified from visual-manual tuning at an accuracy of over 99% and from auditory-vocal based tuning at an accuracy of 93.3%. Models could differentiate between the modalities at an accuracy of 75.2% and between the three classes at an accuracy of 81.2%. Results suggest that changes in driving performance associated with visual-manual tuning are statistically distinguishable from those associated with auditory-vocal tuning. While not free of visual-manual demand, tasks that involve auditory-vocal interactions appear to differ from visual-manual tasks in how they impact driving performance.
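
    A sketch of the three-class setup under stated assumptions: the abstract does not name the classifier, so a random forest stands in here, and the file, feature, and label names are hypothetical.

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Hypothetical per-segment data with driving-performance features and a
    # label column: 'baseline', 'visual_manual', or 'auditory_vocal'
    data = pd.read_csv("driving_segments.csv")
    X = data[["steering_reversal_rate", "speed_variance"]]
    y = data["state"]

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)
    print("three-class accuracy: %.1f%%" % (100 * scores.mean()))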