5,947 research outputs found

    Deep gesture interaction for augmented anatomy learning

    Augmented reality is very useful in medical education because real body organs are rarely available in a regular classroom. In this paper, we propose to apply augmented reality to improve the way of teaching in medical schools and institutes. We propose a novel convolutional neural network (CNN) for gesture recognition, which recognizes human gestures as specific instructions. We use augmented reality technology for anatomy learning, simulating scenarios in which students can learn anatomy with the HoloLens instead of rare specimens. We have used mesh reconstruction to build 3D models of the specimens. An augmented-reality user interface has been designed that fits the common process of anatomy learning. To improve the interaction services, we have applied gestures as an input source and improved the accuracy of gesture recognition with an updated deep convolutional neural network. Our proposed learning method includes several separate training procedures using cloud computing. Each training model and its related inputs are sent to our cloud, and the results are returned to the server. The suggested cloud includes Windows and Android devices, which are able to install deep convolutional learning libraries. Compared with previous gesture recognition approaches, ours is not only more accurate but also has more potential for adding new gestures. Furthermore, we have shown that neural networks can be combined with augmented reality as a rising field, and that augmented reality and neural networks have great potential to be employed in medical learning and education systems.
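    As a rough illustration of the kind of gesture classifier this abstract describes, the forward pass of a small CNN (conv, ReLU, max-pool, linear softmax head) can be sketched in plain NumPy. The gesture labels, kernel sizes, and random weights below are our own illustrative assumptions, not details from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def conv2d(x, k):
        """Valid 2-D cross-correlation of a single-channel image with kernel k."""
        h, w = x.shape
        kh, kw = k.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
        return out

    def max_pool(x, size=2):
        """Non-overlapping max pooling; trims edges that don't divide evenly."""
        h, w = x.shape
        h, w = h - h % size, w - w % size
        return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def predict(frame, kernels, weights):
        """Forward pass: conv -> ReLU -> pool per kernel, then a linear softmax head."""
        feats = [max_pool(np.maximum(conv2d(frame, k), 0)).ravel() for k in kernels]
        return softmax(weights @ np.concatenate(feats))

    GESTURES = ["select", "rotate", "zoom", "reset"]   # hypothetical instruction set
    frame = rng.random((16, 16))                       # stand-in for a hand-image crop
    kernels = rng.standard_normal((4, 3, 3)) * 0.1
    # 16x16 conv 3x3 -> 14x14, pooled -> 7x7 = 49 features per kernel, 4 kernels -> 196
    weights = rng.standard_normal((len(GESTURES), 4 * 49)) * 0.01
    probs = predict(frame, kernels, weights)
    print(GESTURES[int(np.argmax(probs))])
    ```

    A trained network would learn the kernels and head weights from labeled gesture frames; the sketch only shows how a frame maps to a probability over instructions.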

    Investigating Real-time Touchless Hand Interaction and Machine Learning Agents in Immersive Learning Environments

    The recent surge in the adoption of new technologies and innovations in connectivity, interaction technology, and artificial realities can fundamentally change the digital world. eXtended Reality (XR), with its potential to bridge virtual and real environments, creates new possibilities for developing more engaging and productive learning experiences. Evidence is emerging that this sophisticated technology offers new ways to improve the learning process for better student interaction and engagement. Recently, immersive technology has garnered much attention as an interactive technology that facilitates direct interaction with virtual objects in the real world. Furthermore, these virtual objects can be surrogates for real-world teaching resources, allowing for virtual labs. Thus, XR could enable learning experiences that would not be possible in impoverished educational systems worldwide. Interestingly, concepts such as virtual hand interaction and techniques such as machine learning are still not widely investigated in immersive learning. Hand interaction technologies in virtual environments can support the kinesthetic learning pedagogical approach, and the need for their touchless mode of interaction has increased markedly in the post-COVID world. By implementing and evaluating real-time hand interaction technology for kinesthetic learning and machine learning agents for self-guided learning, this research has addressed these underutilized technologies to demonstrate the efficiency of immersive learning. This thesis has explored different hand-tracking APIs and devices to integrate real-time hand interaction techniques. These hand interaction techniques and integrated machine learning agents using reinforcement learning are evaluated with different display devices to test compatibility. The proposed approach aims to provide self-guided, more productive, and interactive learning experiences.
Further, this research has investigated ethics, privacy, and security issues in XR and covered the future of immersive learning in the Metaverse.
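The reinforcement-learning agents mentioned in this abstract can be illustrated with a minimal tabular Q-learning loop. The toy "lesson" environment, reward values, and hyperparameters below are hypothetical stand-ins, not details from the thesis:

```python
import random

random.seed(0)

N_STEPS = 6          # hypothetical linear lesson: states 0..5, goal at state 5
ACTIONS = [0, 1]     # 0 = review current step, 1 = advance to the next step

def step(state, action):
    """Toy environment: advancing moves forward; reaching the goal pays +1."""
    if action == 1:
        state += 1
    if state == N_STEPS - 1:
        return state, 1.0, True
    return state, -0.01, False   # small per-step cost encourages progress

Q = [[0.0, 0.0] for _ in range(N_STEPS)]
alpha, gamma, eps = 0.5, 0.9, 0.1    # learning rate, discount, exploration

for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: Q[s][a])
        s2, r, done = step(s, a)
        # standard Q-learning update
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STEPS - 1)]
print(policy)
```

After training, the greedy policy prefers "advance" in every non-terminal state; a real guidance agent would of course use a much richer state space (learner progress, interaction history) and reward signal.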

    Computer Vision in the Surgical Operating Room

    Background: Multiple types of surgical cameras are used in modern surgical practice and provide a rich visual signal that surgeons use to visualize the clinical site and make clinical decisions. This signal can also be used by artificial intelligence (AI) methods to support the identification of instruments, structures, or activities, both in real time during procedures and postoperatively for analytics and understanding of surgical processes. Summary: In this paper, we provide a succinct perspective on the use of AI, and especially computer vision, to power solutions for the surgical operating room (OR). The synergy between data availability and technical advances in computational power and AI methodology has led to rapid developments in the field and promising advances. Key Messages: With the increasing availability of surgical video sources and the convergence of technologies around video storage, processing, and understanding, we believe clinical solutions and products leveraging vision are going to become an important component of modern surgical capabilities. However, both technical and clinical challenges remain to be overcome before vision-based approaches can be used efficiently in the clinic.

    A Microsoft HoloLens Mixed Reality Surgical Simulator for Patient-Specific Hip Arthroplasty Training

    Surgical simulation can offer novice surgeons an opportunity to practice skills outside the operating theatre in a safe, controlled environment. According to the literature, very few training simulators are currently available for Hip Arthroplasty (HA). In a previous study we presented a physical simulator based on a lower-torso phantom including a patient-specific hemi-pelvis replica embedded in a soft synthetic foam. This work explores the use of Microsoft HoloLens technology to enrich the physical patient-specific simulation with wearable mixed reality functionalities. We describe our multimodal mixed-reality HA simulator based on the HoloLens by illustrating the overall system and summarizing the main phases of its design and development. Finally, we present a preliminary qualitative study with seven subjects (five medical students and two orthopedic surgeons) showing encouraging results that suggest the suitability of the HoloLens for the proposed application. However, further studies are needed to quantitatively test the registration accuracy of the virtual content and to confirm the qualitative results in a larger cohort of subjects.