
    Enhanced Virtuality: Increasing the Usability and Productivity of Virtual Environments

    With steadily increasing display resolution, more accurate tracking, and falling prices, Virtual Reality (VR) systems are on the verge of establishing themselves on the market. Various tools help developers create complex multi-user interactions within adaptive virtual environments. However, the spread of VR systems also brings additional challenges: diverse input devices with unfamiliar shapes and button layouts hinder intuitive interaction. Moreover, the limited feature set of existing software forces users to fall back on conventional PC- or touch-based systems. Collaborating with other users at the same location poses challenges regarding the calibration of different tracking systems and collision avoidance. In remote collaboration, interaction is further affected by latency and connection losses. Finally, users have different requirements for the visualization of content within virtual worlds, e.g., size, orientation, color, or contrast. Strictly replicating real environments in VR wastes potential and cannot account for users' individual needs. To address these problems, this thesis presents solutions in the areas of input, collaboration, and augmentation of virtual worlds and users, aimed at increasing the usability and productivity of VR. First, PC-based hardware and software are carried over into the virtual world to preserve the familiarity and feature set of existing applications in VR. Virtual stand-ins for physical devices, e.g., keyboard and tablet, and a VR mode for applications allow users to transfer real-world skills into the virtual world.
Furthermore, an algorithm is presented that enables the calibration of multiple co-located VR devices with high accuracy, low hardware requirements, and little effort. Since VR headsets block out the user's real surroundings, the relevance of full-body avatar visualization for collision avoidance and remote collaboration is demonstrated. In addition, personalized spatial and temporal modifications are presented that increase users' usability, work performance, and social presence. Discrepancies between the virtual worlds that arise from these personal adaptations are compensated by avatar redirection methods. Finally, some of the methods and findings are integrated into an exemplary application to demonstrate their practical applicability. This thesis shows that virtual environments can build on real-world skills and experiences to ensure familiar and easy interaction and collaboration between users. Moreover, individual augmentations of virtual content and avatars make it possible to overcome real-world limitations and enhance the experience of VR environments.
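The co-located calibration problem described above amounts to estimating the rigid transform between the coordinate frames of two tracking systems. The abstract does not give the algorithm itself; as an illustrative sketch only (not the author's method), corresponding points observed by both systems can be aligned with the standard Kabsch algorithm:

```python
import numpy as np

def kabsch_align(src, dst):
    """Estimate rotation R and translation t such that dst ≈ R @ src + t.

    src, dst: (N, 3) arrays of corresponding points recorded by two
    tracking systems observing the same physical markers.
    """
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

Once R and t are known, every pose reported by one system can be expressed in the other system's frame, which is the essence of co-located multi-device calibration.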

    Proceedings XXIII Congresso SIAMOC 2023

    The annual congress of the Italian Society of Clinical Movement Analysis (Società Italiana di Analisi del Movimento in Clinica, SIAMOC), now in its twenty-third edition, returns to Rome. As every year, the SIAMOC congress is an occasion for all professionals working in movement analysis to meet, present the results of their research, and stay up to date on the most recent innovations in procedures and technologies for movement analysis in clinical practice. SIAMOC 2023 in Rome aims to give further impulse to an already excellent Italian research effort in movement analysis and to lend it further international scope and impact. Beyond the traditional core topics of basic and applied research in clinical and sports settings, SIAMOC 2023 also intends to explore further themes of particular scientific interest and societal impact. Among these are the workplace inclusion of people with disabilities, aided by the exponential spread of collaborative robotic technologies in clinical-occupational settings, and innovative prosthetics in support of people with amputations. Finally, the congress will address new artificial-intelligence algorithms for optimizing the real-time classification of motor patterns in their various fields of application.

    Novel Bidirectional Body-Machine Interface to Control Upper Limb Prosthesis

    Objective. The journey of a bionic prosthetic user is characterized by the opportunities and limitations involved in adopting a device (the prosthesis) that should enable activities of daily living (ADL). Within this context, experiencing a bionic hand as a functional (and, possibly, embodied) limb constitutes the premise for mitigating the risk of its abandonment through the continuous use of the device. To achieve such a result, different aspects must be considered for making the artificial limb an effective support for carrying out ADLs. Among them, intuitive and robust control is fundamental to improving amputees’ quality of life using upper limb prostheses. Still, as artificial proprioception is essential to perceive the prosthesis movement without constant visual attention, a good control framework may not be enough to restore practical functionality to the limb. To overcome this, bidirectional communication between the user and the prosthesis has been recently introduced and is a requirement of utmost importance in developing prosthetic hands. Indeed, closing the control loop between the user and a prosthesis by providing artificial sensory feedback is a fundamental step towards the complete restoration of the lost sensory-motor functions. Within my PhD work, I proposed the development of a more controllable and sensitive human-like hand prosthesis, i.e., the Hannes prosthetic hand, to improve its usability and effectiveness. Approach. To achieve the objectives of this thesis work, I developed a modular and scalable software and firmware architecture to control the Hannes prosthetic multi-Degree of Freedom (DoF) system and to fit all users’ needs (hand aperture, wrist rotation, and wrist flexion in different combinations). On top of this, I developed several Pattern Recognition (PR) algorithms to translate electromyographic (EMG) activity into complex movements. 
However, stability and repeatability were still unmet requirements in multi-DoF upper limb systems; hence, I started by investigating different strategies to produce more robust control. To do this, EMG signals were collected from trans-radial amputees using an array of up to six sensors placed on the skin. Secondly, I developed a vibrotactile system implementing haptic feedback to restore proprioception and create a bidirectional connection between the user and the prosthesis. Similarly, I implemented object-stiffness detection to restore a tactile sensation connecting the user with the external world. This closed loop between EMG control and vibration feedback is essential to implementing a Bidirectional Body-Machine Interface with a strong impact on amputees' daily lives. For each of these three activities: (i) implementation of robust pattern recognition control algorithms, (ii) restoration of proprioception, and (iii) restoration of the feeling of the grasped object's stiffness, I performed a study in which data from healthy subjects and amputees were collected in order to demonstrate the efficacy and usability of my implementations. In each study, I evaluated both the algorithms and the subjects' ability to use the prosthesis by means of the F1 score (offline) and the Target Achievement Control (TAC) test (online). With this test, I analyzed the error rate, path efficiency, and time efficiency in completing different tasks. Main results. Among the several pattern recognition methods tested, Non-Linear Logistic Regression (NLR) proved to be the best algorithm in terms of F1 score (99%, robustness), and the minimum number of electrodes needed for it to function was determined to be four in the offline analyses. Further, I demonstrated that its low computational burden allowed its implementation and integration on a microcontroller running at a sampling frequency of 300 Hz (efficiency).
Finally, the online implementation allowed subjects to simultaneously control the DoFs of the Hannes prosthesis in a bioinspired, human-like way. In addition, I performed further tests with the same NLR-based control, endowing it with closed-loop proprioceptive feedback. In this scenario, the TAC test yielded an error rate of 15% and a path efficiency of 60% in experiments where no other sources of information were available (no visual and no audio feedback). These results demonstrated an improvement in the controllability of the system, with an impact on user experience. Significance. The obtained results confirmed the hypothesis that the robustness and efficiency of prosthetic control improve thanks to the implemented closed-loop approach. The bidirectional communication between the user and the prosthesis can restore the lost sensory functionality, with promising implications for direct translation into clinical practice.
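The abstract does not spell out the NLR implementation. The following is a minimal sketch under the assumption that "non-linear logistic regression" means logistic regression over a non-linear (here quadratic) expansion of per-channel EMG features, together with the F1 score used for the offline evaluation; it is illustrative, not the thesis code:

```python
import numpy as np

def expand(X):
    # Quadratic feature expansion: the "non-linear" part of this sketch.
    return np.hstack([X, X ** 2])

def train_nlr(X, y, lr=0.1, epochs=500):
    """Fit logistic regression on non-linearly expanded EMG features."""
    Phi = expand(X)
    w, b = np.zeros(Phi.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(Phi @ w + b)))  # sigmoid probabilities
        err = p - y                               # gradient of the log-loss
        w -= lr * Phi.T @ err / len(y)
        b -= lr * err.mean()
    return w, b

def predict(X, w, b):
    # Decision at logit 0, i.e. probability 0.5.
    return (expand(X) @ w + b > 0).astype(int)

def f1_score(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return 2 * tp / (2 * tp + fp + fn)
```

In the thesis the classifier runs online on a microcontroller at 300 Hz; a fixed-size feature expansion like the one above keeps the per-sample cost to a handful of multiply-accumulates, which is what makes such embedded deployment plausible.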

    Understanding Hand Interactions and Mid-Air Haptic Responses within Virtual Reality and Beyond.

    Hand tracking has long been seen as a futuristic interaction, firmly situated in the realms of sci-fi. Recent developments and technological advancements have brought that dream into reality, allowing real-time interaction by naturally moving and positioning your hand. While these developments have enabled numerous research projects, only recently have businesses and devices truly started to implement and integrate the technology into their sectors. Numerous devices are shifting towards fully self-contained ecosystems, where the removal of controllers could significantly reduce barriers to entry. Prior studies have focused on the effects or possible areas of application of hand tracking, but rarely on direct comparisons between technologies, nor do they attempt to reproduce lost capabilities. Against this background, the work presented in this thesis aims to understand the benefits and drawbacks of hand tracking when treated as the primary interaction method within virtual reality (VR) environments. Coupled with this, the implementation and use of novel mid-air ultrasound-based haptics attempts to reintroduce the feedback that conventional controller interactions would have provided. Two user studies were undertaken, testing core underlying interactions within VR that represent common instances found throughout simulations. The first study focuses on interactions within 3D VR user interfaces, centred on buttons, while the second directly compares input and haptic modalities in two different fine motor skill tasks. These studies are coupled with the development and implementation of a real-time user-study recording toolkit, allowing significantly heightened user analysis and visual evaluation of interactions.
Results from these studies and developments make valuable contributions to research and business knowledge of hand tracking interactions, and provide a uniquely valuable open-source toolkit for other researchers to use. This thesis covers work undertaken at Ultraleap on various projects between 2018 and 2021.

    Robust and Accurate Hand Motion Tracking for Human-Machine Interaction

    Doctoral dissertation -- Seoul National University Graduate School: Department of Mechanical and Aerospace Engineering, College of Engineering, August 2021. Dongjun Lee. A hand-based interface is promising for realizing intuitive, natural, and accurate human-machine interaction (HMI), as the human hand is the main source of dexterity in our daily activities. For this, the thesis begins with a human perception study on the detection threshold of visuo-proprioceptive conflict (i.e., allowable tracking error) with and without cutaneous haptic feedback, and suggests a tracking-error specification for realistic and fluid hand-based HMI. The thesis then proceeds to propose a novel wearable hand tracking module which, to remain compatible with cutaneous haptic devices that emit magnetic noise, opportunistically employs heterogeneous sensors (an IMU/compass module and a soft sensor) reflecting the anatomical properties of the human hand, making it suitable for a specific application (finger-based interaction with fingertip haptic devices). This hand tracking module, however, loses tracking when interacting with, or near, electrical machines or ferromagnetic materials. For this, the thesis presents its main contribution, a novel visual-inertial skeleton tracking (VIST) framework that can provide accurate and robust hand (and finger) motion tracking even in many challenging real-world scenarios and environments in which state-of-the-art technologies are known to fail due to their respective fundamental limitations (e.g., severe occlusions for tracking purely with vision sensors; electromagnetic interference for tracking purely with IMUs (inertial measurement units) and compasses; and mechanical contacts for tracking purely with soft sensors).
The proposed VIST framework comprises a sensor glove with multiple IMUs and passive visual markers, as well as a head-mounted stereo camera, and a tightly-coupled filtering-based visual-inertial fusion algorithm that estimates hand/finger motion and auto-calibrates hand/glove-related kinematic parameters simultaneously while taking the hand's anatomical constraints into account. The VIST framework exhibits good tracking accuracy and robustness, affordable material cost, light hardware and software weight, and enough ruggedness and durability to permit washing. Quantitative and qualitative experiments validate the advantages and properties of the VIST framework, clearly demonstrating its potential for real-world applications.
Contents: 1 Introduction; 2 Detection Threshold of Hand Tracking Error; 3 Wearable Finger Tracking Module for Haptic Interaction; 4 Visual-Inertial Skeleton Tracking for Human Hand; 5 Conclusion
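Tightly-coupled filtering-based fusion of the kind VIST describes builds on the standard Kalman predict/correct cycle: propagate the state with inertial (IMU) information, then correct it with visual measurements. As an illustrative sketch only, and not the actual VIST implementation, the linear Kalman filter shows the structure of that cycle:

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Propagate state and covariance with the motion model (IMU-style prediction)."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    """Correct the prediction with an exteroceptive (e.g., visual) measurement z."""
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ (z - H @ x)               # state correction
    P = (np.eye(len(x)) - K @ H) @ P      # covariance correction
    return x, P
```

VIST replaces the linear models with an EKF over the hand skeleton and glove calibration parameters, and (per the thesis outline) applies additional correction steps that treat the hand's anatomical constraints as extra pseudo-measurements.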

    Affective state recognition in Virtual Reality from electromyography and photoplethysmography using head-mounted wearable sensors.

    The three core components of Affective Computing (AC) are emotion expression recognition, emotion processing, and emotional feedback. Affective states are typically characterized in a two-dimensional space consisting of arousal, i.e., the intensity of the emotion felt, and valence, i.e., the degree to which the current emotion is pleasant or unpleasant. These fundamental properties of emotion can be measured not only with users' subjective ratings but also with physiological and behavioural measures, which potentially provide an objective evaluation across users. Multiple combinations of measures are used in AC for a range of applications, including education, healthcare, marketing, and entertainment. As the use of immersive Virtual Reality (VR) technologies grows, there is a rapidly increasing need for robust affect recognition in VR settings. However, integrating affect-detection methodologies with VR remains an unmet challenge due to constraints posed by current VR technologies, such as head-mounted displays. This EngD project is designed to overcome some of these challenges by effectively integrating valence and arousal recognition methods into VR technologies and by testing their reliability in seated and room-scale fully immersive VR conditions. The aim of this EngD research project is to identify how affective states are elicited in VR and how they can be efficiently measured without constraining movement or decreasing the sense of presence in the virtual world. Through a three-year collaboration with Emteq labs Ltd, a wearable technology company, we assisted in the development of a novel multimodal affect-detection system specifically tailored to the requirements of VR. This thesis describes the architecture of the system, the research studies that enabled this development, and the future challenges.
The studies conducted validated the reliability of the proposed system, including the VR stimuli design, data measures, and processing pipeline. This work could inform future studies in the field of AC in VR and assist in the development of novel applications and healthcare interventions.
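The two-dimensional arousal-valence description above maps naturally onto the four quadrants of the circumplex model of affect. As a toy sketch only (the labels are illustrative and are not the classification system described in the thesis):

```python
def affect_quadrant(valence, arousal):
    """Map signed valence/arousal scores (e.g., in -1..1) to a circumplex quadrant."""
    if arousal >= 0:
        return "high-arousal positive" if valence >= 0 else "high-arousal negative"
    return "low-arousal positive" if valence >= 0 else "low-arousal negative"
```

Real affect-recognition pipelines estimate continuous valence and arousal from physiological signals first; a discretization like this is only the final, optional step.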

    Proceedings of the 3rd International Mobile Brain/Body Imaging Conference : Berlin, July 12th to July 14th 2018

    The 3rd International Mobile Brain/Body Imaging (MoBI) conference in Berlin 2018 brought together researchers from various disciplines interested in understanding the human brain in its natural environment and during active behavior. MoBI is a new imaging modality that employs mobile brain imaging methods, such as the electroencephalogram (EEG) or near-infrared spectroscopy (NIRS), synchronized to motion capture and other data streams, to investigate brain activity while participants actively move in and interact with their environment. Mobile Brain/Body Imaging makes it possible to investigate the brain dynamics accompanying more natural cognitive and affective processes, as it allows humans to interact with the environment without restrictions on physical movement. Overcoming the movement restrictions of established imaging modalities like functional magnetic resonance imaging (fMRI), MoBI can provide new insights into human brain function in mobile participants. This imaging approach will lead to new insights into the brain functions underlying active behavior and into the impact of behavior on brain dynamics, and vice versa; it can also be used for the development of more robust human-machine interfaces, as well as for state assessment in mobile humans. DFG, GR2627/10-1, 3rd International MoBI Conference 201
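The synchronization of EEG, motion capture, and other data streams mentioned above requires bringing differently-clocked recordings onto a common time base. A minimal sketch, assuming each stream carries its own sample timestamps, resamples one stream onto the other's clock by linear interpolation (real MoBI setups typically rely on hardware triggers or dedicated stream-synchronization middleware instead):

```python
import numpy as np

def resample_to_reference(t_ref, t_stream, samples):
    """Linearly interpolate a timestamped 1-D stream onto reference timestamps.

    t_ref:    timestamps of the reference stream (e.g., EEG samples)
    t_stream: timestamps of the other stream (e.g., a motion-capture channel)
    samples:  values of the other stream at t_stream (must be sorted by time)
    """
    return np.interp(t_ref, t_stream, samples)
```

After resampling, every motion-capture channel has one value per EEG sample, so brain activity and body movement can be analyzed on a shared timeline.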

    Literacy for digital futures : Mind, body, text

    The unprecedented rate of global, technological, and societal change calls for a radical, new understanding of literacy. This book offers a nuanced framework for making sense of literacy by addressing knowledge as contextualised, embodied, multimodal, and digitally mediated. In today’s world of technological breakthroughs, social shifts, and rapid changes to the educational landscape, literacy can no longer be understood through established curriculum and static text structures. To prepare teachers, scholars, and researchers for the digital future, the book is organised around three themes – Mind and Materiality; Body and Senses; and Texts and Digital Semiotics – to shape readers’ understanding of literacy. Opening up new interdisciplinary themes, Mills, Unsworth, and Scholes confront emerging issues for next-generation digital literacy practices. The volume helps new and established researchers rethink dynamic changes in the materiality of texts and their implications for the mind and body, and features recommendations for educational and professional practice

    Smart Sensors for Healthcare and Medical Applications

    This book focuses on new sensing technologies, measurement techniques, and their applications in medicine and healthcare. Specifically, it describes the potential of smart sensors in these applications, collecting 24 articles selected and published in the Special Issue “Smart Sensors for Healthcare and Medical Applications”. We proposed this topic aware of the pivotal role that smart sensors can play in improving healthcare services, in both acute and chronic conditions, as well as in prevention for a healthy life and active aging. The articles selected for this book cover a variety of topics related to the design, validation, and application of smart sensors to healthcare.