    WearPut: Designing Dexterous Wearable Input based on the Characteristics of Human Finger Motions

    Department of Biomedical Engineering (Human Factors Engineering)

    Powerful microchips for computing and networking allow a wide range of wearable devices to be miniaturized with high fidelity and availability. In particular, commercially successful smartwatches worn on the wrist drive market growth by sharing the roles of smartphones and health management. The emerging Head-Mounted Displays (HMDs) for Augmented Reality (AR) and Virtual Reality (VR) likewise impact application areas such as video games, education, simulation, and productivity tools. However, these powerful wearables face challenges in interaction: their form factors are specialized to fit body parts, which inevitably limits the space available for input and output. To complement the constrained interaction experience, many wearable devices still rely on other large-form-factor devices (e.g., smartphones or hand-held controllers). Despite their usefulness, these additional devices can constrain the viability of wearables in many usage scenarios by tethering users' hands to physical hardware. This thesis argues that developing novel human-computer interaction techniques for specialized wearable form factors is vital for wearables to become reliable standalone products.

    This thesis seeks to address the constrained interaction experience with novel interaction techniques that exploit finger motions during input on the specialized form factors of wearable devices. Several characteristics of finger input motions promise to increase the expressiveness of input on the physically limited input space of wearable devices. First, finger-based input techniques are prevalent on many large-form-factor devices (e.g., touchscreens or physical keyboards) owing to their fast, accurate performance and high familiarity. Second, many commercial wearable products provide built-in sensors (e.g., a touchscreen or a hand-tracking system) to detect finger motions, enabling novel interaction systems without any additional sensors or devices. Third, the specialized form factors of wearable devices create unique input contexts as the fingers approach their locations, shapes, and components. Finally, the dexterity of the fingers, with their distinctive appearance, high degrees of freedom, and high sensitivity of joint-angle perception, can widen the range of available input through various movement features on the surface and in the air. Accordingly, the general claim of this thesis is that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices.

    This thesis demonstrates the general claim with evidence from various wearable scenarios involving smartwatches and HMDs. First, it explored the comfortable range of static and dynamic angle-based touch input on smartwatch touchscreens. The results showed specific comfort ranges across fingers, finger regions, and poses, owing to the unique input context in which the touching hand approaches a small, fixed touchscreen within a limited range of angles. Finger region-aware systems that recognize the flat and the side of the finger were then constructed from the contact areas on the touchscreen to enhance the expressiveness of angle-based touch input.

    In the second scenario, this thesis revealed distinctive touch profiles of different fingers caused by the unique input context of the smartwatch touchscreen. The results led to finger identification systems that distinguish two or three fingers; two virtual keyboards with 12 and 16 keys showed the feasibility of touch-based finger identification, which increases the expressiveness of touch input techniques. In addition, this thesis supports the general claim in a further range of wearable scenarios by exploring finger input motions in the air. In the third scenario, it investigated the motions of in-air finger stroking during unconstrained in-air typing for HMDs. The observation study revealed details of in-air finger motions during fast sequential input, such as strategies, kinematics, correlated movements, inter-finger-stroke relationships, and individual in-air keys. The in-depth analysis led to a practical guideline for developing robust in-air typing systems based on finger stroking. Lastly, this thesis examined viable locations for in-air thumb touch input on virtual targets above the palm. Fast and accurate sequential thumb touches were confirmed at a total of 8 key locations using the built-in hand-tracking system of a commercial HMD, and final typing studies with a novel in-air thumb typing system verified increases in the expressiveness of virtual target selection on HMDs.

    This thesis argues that the objective and subjective results and the novel interaction techniques across these wearable scenarios support the general claim that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. Finally, the thesis concludes with its contributions, design considerations, and the scope of future research, to help future researchers and developers implement robust finger-based interaction systems on various types of wearable devices.
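
    As a concrete illustration of the contact-area reasoning described above, the minimal Python sketch below classifies a single touch as coming from the flat or the side of a finger. The feature set (contact area and touch-ellipse axes), the training values, and the classifier choice are illustrative assumptions, not details taken from the thesis.

```python
# Minimal sketch of touch-based finger-region classification, assuming the
# touchscreen reports a contact ellipse (major/minor axis) per touch event.
# Features, training values, and thresholds are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each sample: [contact_area_mm2, ellipse_major_mm, ellipse_minor_mm].
# Labels: 0 = flat of the finger (larger, rounder contact),
#         1 = side of the finger (smaller, elongated contact).
X_train = np.array([
    [55.0, 11.0, 7.5],
    [60.2, 11.8, 7.9],
    [18.5, 9.0, 3.1],
    [16.9, 8.4, 2.8],
])
y_train = np.array([0, 0, 1, 1])

clf = LogisticRegression().fit(X_train, y_train)

def classify_touch(area_mm2, major_mm, minor_mm):
    """Return 'flat' or 'side' for one touch event."""
    label = clf.predict([[area_mm2, major_mm, minor_mm]])[0]
    return "flat" if label == 0 else "side"

print(classify_touch(52.0, 10.5, 7.0))  # expected: flat
print(classify_touch(17.0, 8.8, 3.0))   # expected: side
```

    In a real system, the same per-touch decision could switch keyboard layers or remap gestures, as the finger region-aware systems described in the thesis do.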

    A real-time and convex model for the estimation of muscle force from surface electromyographic signals in the upper and lower limbs

    Surface electromyography (sEMG) is a signal consisting of different motor unit action potential trains, recorded from the surface of the muscles. One application of sEMG is the estimation of muscle force. We propose a new real-time, convex, and interpretable model for sEMG-force estimation. We validated it on the upper limb during isometric voluntary flexions-extensions at 30%, 50%, and 70% of Maximum Voluntary Contraction in five subjects, and on the lower limbs during standing tasks in thirty-three volunteers, all without a history of neuromuscular disorders. Moreover, the performance of the proposed method was statistically compared with that of the state of the art (13 methods, including linear-in-the-parameter models, Artificial Neural Networks, Support Vector Machines, and non-linear models). The envelope of the sEMG signals was estimated, and the representative envelope of each muscle was used in our analysis. The convex form of an exponential EMG-force model was derived, and each muscle's coefficient was estimated using the Least Squares method. Goodness-of-fit indices, residual signal analysis (bias and Bland-Altman plots), and running-time analysis are provided. Across the entire dataset, 30% of the data was used for estimation, while the remaining 20% and 50% were used for validation and testing, respectively. The average R-squared (%) of the proposed method was 96.77 ± 1.67 [94.38, 98.06] on the test sets of the upper limb and 91.08 ± 6.84 [62.22, 96.62] on the lower-limb dataset (MEAN ± SD [min, max]). The output of the proposed method was not significantly different from the recorded force signal (p-value = 0.610); that was not the case for the other tested models. The proposed method significantly outperformed the other methods (adj. p-value < 0.05). The average running times for training and testing on each 250 ms signal segment were 25.7 ± 4.0 [22.3, 40.8] and 11.0 ± 2.9 [4.7, 17.8] microseconds, respectively, over the entire dataset. The proposed convex model is thus a promising method for estimating force at the joints of the upper and lower limbs, with applications in load sharing, robotics, rehabilitation, and prosthesis control.
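
    To make the convexity claim concrete, the sketch below fits one plausible linear-in-the-parameters exponential EMG-force model with ordinary least squares, which is a convex problem. The specific model form F(t) = Σ_m c_m (exp(λ·e_m(t)) − 1) and the fixed shape parameter λ are assumptions for illustration; the paper's exact formulation may differ.

```python
# Convex, linear-in-the-parameters EMG-force fit (illustrative sketch).
# With a fixed shape parameter LAM, the assumed model
#   F(t) = sum_m c_m * (exp(LAM * e_m(t)) - 1)
# is linear in the coefficients c_m, so least squares is convex and fast.
import numpy as np

LAM = 1.5  # assumed shape parameter; in practice tuned on validation data

def fit_emg_force(envelopes, force):
    """envelopes: (T, M) sEMG envelopes, one column per muscle;
    force: (T,) measured force. Returns per-muscle coefficients (M,)."""
    basis = np.exp(LAM * envelopes) - 1.0            # (T, M) design matrix
    coeffs, *_ = np.linalg.lstsq(basis, force, rcond=None)
    return coeffs

def predict_force(envelopes, coeffs):
    return (np.exp(LAM * envelopes) - 1.0) @ coeffs

# Synthetic sanity check: recover known coefficients from noiseless data.
rng = np.random.default_rng(0)
env = rng.uniform(0.0, 1.0, size=(1000, 3))          # 3 muscles, 1000 samples
c_true = np.array([2.0, 0.5, 1.2])
force = predict_force(env, c_true)
print(np.allclose(fit_emg_force(env, force), c_true))  # True
```

    Because the fit reduces to a small linear least-squares solve per window, microsecond-scale training and testing times of the kind reported above are plausible.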

    Proceedings XXIII Congresso SIAMOC 2023

    The annual congress of the Italian Society of Clinical Movement Analysis (SIAMOC), now in its twenty-third edition, returns to Rome. As every year, the SIAMOC congress is an opportunity for all professionals working in movement analysis to meet, present the results of their research, and stay up to date on the latest innovations in procedures and technologies for movement analysis in clinical practice. The SIAMOC 2023 congress in Rome aims to give further impetus to the already excellent Italian research activity in the field of movement analysis and to lend it further international scope and impact. In addition to the traditional core topics of basic and applied research in clinical and sports settings, the SIAMOC 2023 congress intends to explore further themes of particular scientific interest and social impact. These include the integration of people with disabilities into the workplace, aided by the exponential spread of collaborative robotic technologies in clinical-occupational settings, and innovative prosthetics in support of people with amputations. Finally, the congress will address new artificial intelligence algorithms for optimizing the real-time classification of motor patterns in various fields of application.

    Design and Development of Biofeedback Stick Technology (BfT) to Improve the Quality of Life of Walking Stick Users

    Biomedical engineering has seen rapid growth in recent times, as the aim of facilitating and equipping humans with the latest technology has spread globally. From high-tech equipment such as CT scanners, MRI machines, and laser treatments to the design, creation, and implementation of artificial body parts, the field of biomedical engineering has contributed significantly to mankind. Biomedical engineering has driven many of the latest developments in human mobility, with advances in mobility aids improving movement for people whose mobility is compromised by an injury or health condition. A review of the literature indicated that mobility aids, especially walking sticks, together with appropriate training in their use, are generally prescribed by allied health professionals (AHPs) for rehabilitation and activities of daily living (ADL). However, feedback from AHPs is limited to the clinical environment, leaving walking stick users vulnerable to falls and injuries due to incorrect usage. Hence, to mitigate the risk of falls and injuries, and to facilitate a routine appraisal of each patient's usage, a simple, portable, robust, and reliable tool was developed that gives walking stick users real-time feedback on incorrect usage during their ADL. This thesis aimed to design and develop a smart walking stick technology: Biofeedback Stick Technology (BfT). The design incorporated patient and public involvement (PPI) to ensure that BfT was developed according to the requirements of walking stick users and AHP recommendations. The newly developed system was tested quantitatively for validity, reliability, and reproducibility of orientation, weight bearing, and step count against gold-standard equipment such as a 3D motion capture system, force plates, and an optical measurement system. The system was also tested qualitatively for usability through semi-informal interviews with AHPs and walking stick users. The results of these studies showed that the system has good accuracy, above 95%, with a maximum orientation inaccuracy of 1°, and good reproducibility. The angles, weight, and steps recorded by the system during the experiments fall within the values published in the literature. From these studies, it was concluded that BfT has the potential to improve the lives of walking stick users and that, with a few additional improvements, appropriate approval from the relevant regulatory bodies, and robust clinical testing, the technology has huge potential to reach the commercial market.
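
    For illustration, the sketch below shows the kind of real-time usage check such a biofeedback stick could run: stick tilt is estimated from a 3-axis accelerometer and combined with a load reading to trigger feedback. The thresholds, sensor conventions, and function names are hypothetical, not taken from the thesis.

```python
# Hypothetical real-time usage check for a biofeedback walking stick.
# Assumes a static 3-axis accelerometer (z-axis along the stick, m/s^2)
# and a load cell in the tip (newtons); all thresholds are illustrative.
import math

MAX_TILT_DEG = 15.0                      # assumed safe tilt from vertical
MIN_LOAD_N, MAX_LOAD_N = 20.0, 200.0     # assumed weight-bearing band

def tilt_from_accel(ax, ay, az):
    """Tilt of the stick axis from vertical, in degrees."""
    return math.degrees(math.atan2(math.hypot(ax, ay), az))

def check_usage(ax, ay, az, load_n):
    """Return feedback messages for one sensor sample (empty if OK)."""
    issues = []
    if tilt_from_accel(ax, ay, az) > MAX_TILT_DEG:
        issues.append("stick tilted too far: plant it closer to the body")
    if not MIN_LOAD_N <= load_n <= MAX_LOAD_N:
        issues.append("weight bearing outside the recommended range")
    return issues

print(check_usage(1.0, 0.5, 9.7, 150.0))  # near-vertical, in-range -> []
print(check_usage(4.0, 3.0, 8.0, 260.0))  # tilted and overloaded -> 2 alerts
```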

    Multimodal assessment of emotional responses by physiological monitoring: novel auditory and visual elicitation strategies in traditional and virtual reality environments

    This doctoral thesis explores novel strategies to quantify emotions and listening effort through the monitoring of physiological signals. Emotions are a complex aspect of the human experience, playing a crucial role in our survival and adaptation to the environment. The study of emotions fosters important applications, such as human-computer and human-robot interaction, or the clinical assessment and treatment of mental health conditions such as depression, anxiety, stress, chronic anger, and mood disorders. Listening effort is also an important area of study, as it provides insight into listeners' challenges that traditional audiometric measures usually fail to identify.

    The research is divided into three lines of work, each with a distinct emphasis on the methods of emotion elicitation and on the stimuli most effective in producing emotional responses, with a specific focus on auditory stimuli. The research led to the creation of three experimental protocols, as well as the use of an available online protocol, for studying emotional responses through the monitoring of both peripheral and central physiological signals: skin conductance, respiration, pupil dilation, electrocardiogram, blood volume pulse, and electroencephalography.

    An experimental protocol was created for the study of listening effort using a speech-in-noise test designed to be short and not induce fatigue. The results revealed that listening effort is a complex problem that cannot be studied with a univariate approach; multiple physiological markers are needed to capture its different physiological dimensions. Specifically, the findings demonstrate a strong association between the level of auditory exertion and the amount of attention and involvement directed towards the stimuli, with readily comprehensible stimuli differing clearly from those that demand greater exertion.

    Continuing in the auditory domain, peripheral physiological signals were studied to discriminate four emotions elicited in a subject who listened to music for 21 days, using a previously designed and publicly available protocol. Strikingly, the processed physiological signals clearly separated the four emotions at the physiological level, demonstrating that music, though not typically studied extensively in the literature, can be an effective stimulus for eliciting emotions. Following these results, a flat-screen protocol was created to compare physiological responses to purely visual, purely auditory, and combined audiovisual emotional stimuli. The results show that auditory stimuli are more effective in separating emotions at the physiological level, and the subjects were found to be much more attentive during the audio-only phase.

    To overcome the limitations of emotional protocols carried out in a laboratory environment, which may elicit fewer emotions because the setting is unnatural for the subjects under study, a final elicitation protocol was created using virtual reality. Scenes resembling reality were created to elicit four distinct emotions, and at the physiological level this environment proved more effective in eliciting them. To our knowledge, this is the first protocol specifically designed for virtual reality that elicits diverse emotions. Furthermore, in terms of classification, virtual reality has been shown to be superior to traditional flat-screen protocols, opening the door to virtual reality for the study of conditions related to emotional control.
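
    As a minimal illustration of the multivariate approach argued for above, the sketch below trains a standard classifier on several physiological features per trial rather than on any single marker. The feature set, the synthetic data, and the emotion labels are assumptions for demonstration only.

```python
# Multivariate emotion classification from physiological features (sketch).
# Data are synthetic; channel names and emotion labels are illustrative.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

EMOTIONS = ["happiness", "sadness", "fear", "relaxation"]  # assumed labels

# Each row: [skin conductance, heart rate, respiration rate, pupil diameter]
# (normalized units); 20 synthetic trials per emotion, shifted per class.
rng = np.random.default_rng(1)
y = np.repeat(np.arange(4), 20)
X = rng.normal(size=(80, 4)) + y[:, None]

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print(EMOTIONS[clf.predict([[3.1, 3.0, 2.8, 3.2]])[0]])  # -> relaxation
```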

    Internet and Biometric Web Based Business Management Decision Support

    MICROBE MOOC material prepared under IO1/A5, "Development of the MICROBE personalized MOOCs content and teaching materials". Prepared by: A. Kaklauskas, A. Banaitis, I. Ubarte, Vilnius Gediminas Technical University, Lithuania. Project No: 2020-1-LT01-KA203-07810

    Toward Optimized VR/AR Ergonomics: Modeling and Predicting User Neck Muscle Contraction

    Ergonomic efficiency is essential to the mass adoption and prolonged use of VR/AR experiences. While VR/AR head-mounted displays unlock users' natural wide-range head movements during viewing, neck muscle comfort is inevitably compromised by the added hardware weight. Unfortunately, little quantitative knowledge is available so far for understanding and addressing this issue. Leveraging electromyography devices, we measure, model, and predict VR users' neck muscle contraction levels (MCL) while they move their heads to interact with the virtual environment. Specifically, by learning from collected physiological data, we develop a bio-physically inspired computational model to predict neck MCL under diverse head kinematic states. Beyond quantifying the cumulative MCL of completed head movements, our model can also predict potential MCL requirements from target head poses alone. A series of objective evaluations and user studies demonstrates the model's prediction accuracy and generality, as well as its ability to reduce users' neck discomfort by optimizing the layout of visual targets. We hope this research will motivate new ergonomics-centered designs for VR/AR and interactive graphics applications. Source code is released at: https://github.com/NYU-ICL/xr-ergonomics-neck-comfort
    Comment: ACM SIGGRAPH 2023 Conference Proceedings
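
    To build rough intuition for why added headset weight raises neck muscle demand, the back-of-the-envelope sketch below computes the static gravitational torque the neck extensors must counter as the head pitches forward. This is not the paper's learned bio-physical model; all masses and lever arms are rough assumptions.

```python
# Static torque balance about the neck pivot (rough illustrative numbers).
# A mass m with its center of mass a distance d along the head axis
# contributes torque m * g * d * sin(pitch) when pitched from vertical.
import math

G = 9.81
HEAD_MASS, HEAD_DIST = 5.0, 0.10   # kg, m: assumed head mass and CoM offset
HMD_MASS, HMD_DIST = 0.6, 0.18     # kg, m: assumed headset mass and CoM offset

def neck_extensor_torque(pitch_deg):
    """Gravitational torque (N*m) to counter at a forward pitch angle."""
    s = math.sin(math.radians(pitch_deg))
    return G * s * (HEAD_MASS * HEAD_DIST + HMD_MASS * HMD_DIST)

for pitch in (0, 15, 30, 45):
    print(f"pitch {pitch:>2} deg -> {neck_extensor_torque(pitch):.2f} N*m")
# The HMD term grows with pitch, showing the added hardware's muscle cost.
```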