1,410 research outputs found

    Substitutional reality: using the physical environment to design virtual reality experiences

    Experiencing Virtual Reality in domestic and other uncontrolled settings is challenging due to the presence of physical objects and furniture that are not usually defined in the Virtual Environment. To address this challenge, we explore the concept of Substitutional Reality in the context of Virtual Reality: a class of Virtual Environments where every physical object surrounding a user is paired, with some degree of discrepancy, to a virtual counterpart. We present a model of potential substitutions and validate it in two user studies. In the first study, we investigated factors that affect participants' suspension of disbelief and ease of use. We systematically altered the virtual representation of a physical object and recorded responses from 20 participants. The second study investigated users' levels of engagement as the physical proxy for a virtual object varied. From the results, we derive a set of guidelines for the design of future Substitutional Reality experiences.
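    As a concrete illustration of the pairing idea, the sketch below (not from the paper; the object attributes, mismatch score, and threshold are illustrative assumptions) pairs a physical object with the virtual counterpart that minimizes a simple discrepancy measure, echoing the substitution model described above.

```python
from dataclasses import dataclass

@dataclass
class PhysicalObject:
    name: str
    size_m: float   # characteristic dimension in metres
    material: str   # e.g. "wood", "fabric", "stone"

@dataclass
class VirtualProxy:
    name: str
    size_m: float
    material: str

def discrepancy(phys: PhysicalObject, virt: VirtualProxy) -> float:
    """Toy mismatch score: relative size difference plus a penalty for a material change."""
    size_term = abs(phys.size_m - virt.size_m) / max(phys.size_m, 1e-6)
    material_term = 0.0 if phys.material == virt.material else 0.5
    return size_term + material_term

def choose_substitute(phys: PhysicalObject, candidates: list[VirtualProxy],
                      max_discrepancy: float = 0.8) -> VirtualProxy | None:
    """Pick the candidate with the lowest mismatch, provided it stays under the threshold."""
    best = min(candidates, key=lambda v: discrepancy(phys, v), default=None)
    if best is not None and discrepancy(phys, best) <= max_discrepancy:
        return best
    return None

# Example: a living-room armchair could stand in for a virtual rock or a virtual throne.
chair = PhysicalObject("armchair", size_m=0.9, material="fabric")
options = [VirtualProxy("rock", 1.0, "stone"), VirtualProxy("throne", 0.95, "fabric")]
print(choose_substitute(chair, options))  # -> the throne, the closer match
```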

    Robotic simulators for tissue examination training with multimodal sensory feedback

    Tissue examination by hand remains an essential technique in clinical practice. Its effective application depends on skills in sensorimotor coordination, mainly involving haptic, visual, and auditory feedback. The skills clinicians have to learn can be as subtle as regulating finger pressure with breathing, choosing palpation actions, monitoring involuntary facial and vocal expressions in response to palpation, and using pain expressions both as a source of information and as a constraint on physical examination. Patient simulators can provide a safe learning platform for novice physicians before they examine real patients. This paper presents a first review of state-of-the-art medical simulators for tissue examination training, with particular attention to multimodal feedback that allows trainees to learn as many manual examination techniques as possible. The study summarizes current advances in tissue examination training devices that simulate different medical conditions and provide different types of feedback modalities. Opportunities in the development of pain expression, tissue modeling, actuation, and sensing are also analyzed to support the future design of effective tissue examination simulators.

    Social touch in human–computer interaction

    Touch is our primary non-verbal communication channel for conveying intimate emotions and as such is essential for our physical and emotional wellbeing. In our digital age, human social interaction is often mediated. However, even though there is increasing evidence that mediated touch affords affective communication, current communication systems (such as videoconferencing) still do not support communication through the sense of touch. As a result, mediated communication does not provide the intense affective experience of co-located communication. The need for ICT-mediated or generated touch as an intuitive way of social communication is even further emphasized by the growing interest in the use of touch-enabled agents and robots for healthcare, teaching, and telepresence applications. Here, we review the important role of social touch in our daily life and the available evidence that affective touch can be mediated reliably between humans and between humans and digital agents. We base our observations on evidence from psychology, computer science, sociology, and neuroscience, with a focus on the first two. Our review shows that mediated affective touch can modulate physiological responses, increase trust and affection, help to establish bonds between humans and avatars or robots, and initiate pro-social behavior. We argue that ICT-mediated or generated social touch can (a) intensify the perceived social presence of remote communication partners and (b) enable computer systems to convey affective information more effectively. However, this research field at the crossroads of ICT and psychology is still embryonic, and we identify several topics that can help to mature the field in the following areas: establishing an overarching theoretical framework, employing better research methodologies, developing basic social touch building blocks, and solving specific ICT challenges.

    Development and Validation of a Hybrid Virtual/Physical Nuss Procedure Surgical Trainer

    With continuous advancements in and adoption of minimally invasive surgery, proficiency with the nontrivial surgical skills involved is becoming a greater concern. Consequently, the use of surgical simulation has been increasingly embraced by many for training and skill transfer purposes. Some systems utilize haptic feedback within a high-fidelity, anatomically correct virtual environment, whereas others use manikins, synthetic components, or box trainers to mimic primary components of a corresponding procedure. Surgical simulation development for some minimally invasive procedures is still, however, suboptimal or otherwise embryonic. This is true for the Nuss procedure, a minimally invasive surgery for correcting pectus excavatum (PE), a congenital chest wall deformity. This work aims to address this gap by exploring the challenges of developing both a purely virtual and a purely physical simulation platform for the Nuss procedure and their implications in a training context. This work then describes the development of a hybrid mixed-reality system that integrates virtual and physical constituents, as well as an augmentation of the haptic interface, to reproduce the primary steps of the Nuss procedure and satisfy clinically relevant prerequisites for its training platform. Furthermore, this work carries out a user study investigating the system’s face, content, and construct validity to establish its faithfulness as a training platform.

    Simulating dynamic facial expressions of pain from visuo-haptic interactions with a robotic patient

    Medical training simulators can provide a safe and controlled environment for medical students to practice their physical examination skills. An important source of information for physicians is the visual feedback of involuntary pain facial expressions in response to physical palpation on an affected area of a patient. However, most existing robotic medical training simulators that can capture physical examination behaviours in real time cannot display facial expressions, and they comprise a limited range of patient identities in terms of ethnicity and gender. Together, these limitations restrict the utility of medical training simulators because they do not provide medical students with a representative sample of pain facial expressions and face identities, which could result in biased practices. Further, these limitations restrict the ability of such medical simulators to detect and correct early signs of bias in medical training. Here, for the first time, we present a robotic system that can simulate facial expressions of pain in response to palpations, displayed on a range of patient face identities. We use the unique approach of modelling dynamic pain facial expressions using a data-driven, perception-based psychophysical method combined with the visuo-haptic inputs of users performing palpations on a robotic medical simulator. Specifically, participants performed palpation actions on the abdomen phantom of a simulated patient, which triggered the real-time display of six pain-related facial Action Units (AUs) on a robotic face (MorphFace), each controlled by two pseudo-randomly generated transient parameters: rate of change β and activation delay τ. Participants then rated the appropriateness of the facial expression displayed in response to their palpations on a 4-point scale from “strongly disagree” to “strongly agree”. Each participant (n=16; 4 Asian females, 4 Asian males, 4 White females and 4 White males) performed 200 palpation trials on 4 patient identities (Black female, Black male, White female and White male) simulated using MorphFace. Results showed that the facial expressions rated as most appropriate by all participants comprised a higher rate of change and a shorter delay from upper-face AUs (around the eyes) to those in the lower face (around the mouth). In contrast, the transient parameter values of the most appropriately rated pain facial expressions, the palpation forces, and the delays between palpation actions varied across participant-simulated patient pairs according to gender and ethnicity. These findings suggest that gender and ethnicity biases affect palpation strategies and the perception of pain facial expressions displayed on MorphFace. We anticipate that our approach will be used to generate physical examination models with diverse patient demographics to reduce erroneous judgments in medical students and to provide focused training to address these errors.
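    The abstract does not specify the activation function behind the two transient parameters; as a hedged sketch, the code below assumes a simple logistic ramp in which each Action Unit's intensity rises around its activation delay τ at a rate set by β. The AU names and parameter values are illustrative and are not taken from the MorphFace implementation.

```python
import math

def au_activation(t: float, beta: float, tau: float, peak: float = 1.0) -> float:
    """Illustrative activation curve for one Action Unit: intensity ramps up
    around the activation delay tau at a rate set by beta, saturating at peak."""
    return peak / (1.0 + math.exp(-beta * (t - tau)))

# Hypothetical per-AU parameters: upper-face AUs (around the eyes) given a faster
# rise and shorter delay than a lower-face AU (around the mouth), mirroring the
# pattern the study reports for the expressions rated most appropriate.
aus = {
    "AU4_brow_lowerer":  {"beta": 12.0, "tau": 0.15},
    "AU6_cheek_raiser":  {"beta": 10.0, "tau": 0.20},
    "AU25_lips_part":    {"beta": 6.0,  "tau": 0.45},
}

for t in (0.0, 0.25, 0.5, 1.0):   # seconds after palpation onset
    frame = {name: round(au_activation(t, **params), 2) for name, params in aus.items()}
    print(f"t={t}s", frame)
```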

    A Modular and Extensible Architecture Integrating Sensors, Dynamic Displays of Anatomy and Physiology, and Automated Instruction for Innovations in Clinical Education

    Adoption of simulation in healthcare education has increased tremendously over the past two decades. However, the resources necessary to perform simulation are immense. Simulators are large capital investments and require specialized training for both instructors and simulation support staff, both to develop curricula using the simulator and to use the simulator to train students. Simulators require staff to run them, and instructors must always be present to guide and assess student performance. Current simulators do not support self-learning by students. As a result, the expensive simulators sit idle most of the day. Furthermore, simulators are minimally customizable, so programs are often required to purchase simulators that have more functions and features than needed or that cannot be upgraded as needs change. This dissertation presents the development of BodyExplorer, a system designed to address the limitations of current simulators by reducing the resources required to support simulation in healthcare education, enabling self-use by students, and providing an architecture that supports modular and extensible simulator development and upgrades. This dissertation discusses BodyExplorer’s initial prototype design, integration, and verification, as well as the development of the modular architecture for integrating sensors, dynamic displays of anatomy and physiology, and automated instruction. Novel sensor systems were integrated to measure user actions while performing (simulated) medication administration, cricoid pressure application, and endotracheal intubation on a simulation mannequin. Dynamic displays of anatomy and physiology, showing animations of breathing lungs or a beating heart, for example, were developed and integrated into BodyExplorer. Projected augmented reality is used to show users the underlying anatomy and physiology on the surface of the mannequin, allowing them to see the internal consequences of their actions. An interface supporting self-use and showing additional views of anatomy was incorporated using a mobile display. Using the projected images, mobile display, and audio output, a virtual instructor was developed to provide automated instructions to users based upon real-time sensor measurements. Development of BodyExplorer was performed iteratively and included feedback from end-users throughout, following user-centered design principles. The mixed-methods results from three usability testing sessions with end-users at two academic institutions will be presented, along with the rationale for the design decisions that were derived from those results. Building upon feedback received during usability testing, results from two scenarios of automated instruction will be provided, demonstrating examples of learning to apply cricoid pressure and learning to administer (simulated) medications in order to control heart rate. Discussion will also be provided regarding how the automated instruction techniques can be extended to provide training in other healthcare applications.
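    To picture how a modular, extensible simulator architecture of this kind might be organized, here is a minimal sketch assuming hypothetical Sensor and Display interfaces and a toy cricoid-pressure scenario; it is not BodyExplorer's actual API, and the signal names and target value are illustrative.

```python
from abc import ABC, abstractmethod

class Sensor(ABC):
    """A pluggable sensor module, e.g. a cricoid-pressure force sensor."""
    @abstractmethod
    def read(self) -> dict: ...

class Display(ABC):
    """A pluggable output module, e.g. projected anatomy or a mobile view."""
    @abstractmethod
    def render(self, state: dict) -> None: ...

class Instructor:
    """Minimal automated-instruction loop: compares sensor readings with a target
    and pushes feedback through whichever display modules are registered."""
    def __init__(self, sensors: list[Sensor], displays: list[Display]):
        self.sensors = sensors
        self.displays = displays

    def step(self, target_force_n: float = 30.0) -> None:
        state: dict = {}
        for sensor in self.sensors:
            state.update(sensor.read())
        force = state.get("cricoid_force_n", 0.0)
        state["feedback"] = ("increase pressure" if force < target_force_n
                             else "hold pressure")
        for display in self.displays:
            display.render(state)

class FakeForceSensor(Sensor):
    def read(self) -> dict:
        return {"cricoid_force_n": 22.0}   # stand-in for a real force reading

class ConsoleDisplay(Display):
    def render(self, state: dict) -> None:
        print(state["feedback"], state)

Instructor([FakeForceSensor()], [ConsoleDisplay()]).step()
# -> "increase pressure" (22 N is below the illustrative 30 N target)
```

    Keeping sensors and displays behind small interfaces is what lets modules be added or swapped without touching the instruction logic, which is the extensibility property the dissertation emphasizes.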

    Wright State University's Symposium of Student Research, Scholarship & Creative Activities from Thursday, October 26, 2023

    The student abstract booklet is a compilation of abstracts from students' oral and poster presentations at Wright State University's Symposium of Student Research, Scholarship & Creative Activities on October 26, 2023.

    A HoloLens Framework for Augmented Reality Applications in Breast Cancer Surgery

    This project aims to support oncologic breast-conserving surgery by creating a platform for better surgical planning, through the development of a framework capable of displaying a virtual model of the tumour(s) requiring surgery on a patient's breast. Breast-conserving surgery is the first clear option when it comes to tackling cases of breast cancer, but the surgery comes with risks. The surgeon wants to maintain clean margins while performing the procedure so that the disease does not resurface. This underscores the importance of surgical planning, in which the surgeon consults with radiologists and pre-surgical imaging such as Magnetic Resonance Imaging (MRI). The MRI prior to the surgical procedure, however, is taken with the patient in the prone position (face-down), but the surgery happens in the supine position (face-up). Thus, mapping the location of the tumour(s) to the corresponding anatomical position from the MRI is a tedious task that requires a large amount of expertise and time, given that the organ is soft and flexible. For this project, the tumour is visualized in the corresponding anatomical position to assist in surgical planning. Augmented Reality is the best option for this problem, and this, in turn, led to an investigation of the capability of the Microsoft HoloLens to solve it. Given its multitude of sensors and its display resolution, the device is a fine candidate for this process. However, the HoloLens is still under development and has a large number of limitations in its use. This work tries to compensate for these limitations using the existing hardware and software in the device's arsenal. Within this master's thesis, the principal questions answered relate to acquiring data from breast-mimicking objects at acceptable resolutions, discriminating between the information based on photometry, offloading the data to a computer for post-processing to create a correspondence between the MRI data and the acquired data, and finally retrieving the processed information so that the MRI information can be used to visualize the tumour in the anatomically precise position. Unfortunately, time limitations for this project led to an incomplete system that is not fully synchronized; however, our work has solidified the groundwork for the software aspects of the final goals, such that extensive further exploration need only be done on the imaging side of this problem.
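    The thesis abstract outlines an acquire, offload, register, and visualize pipeline without implementation detail. As a minimal sketch of the offline registration step only, the code below assumes known point correspondences and uses a standard Kabsch rigid alignment; the data, function names, and tolerances are illustrative, and the on-device HoloLens components would be implemented separately (typically in Unity/C#).

```python
import numpy as np

def rigid_registration(source: np.ndarray, target: np.ndarray):
    """Kabsch-style rigid alignment of corresponding (N x 3) point sets:
    returns rotation R and translation t with R @ source_i + t ~= target_i."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Illustrative use: surface points captured by the headset (acquired_pts) aligned
# with fiducials derived from the MRI (mri_pts); the resulting transform would be
# returned to the device to place the tumour model in the anatomical position.
rng = np.random.default_rng(0)
acquired_pts = rng.random((50, 3))
true_t = np.array([0.10, -0.05, 0.02])
mri_pts = acquired_pts + true_t                  # synthetic, perfectly corresponding data
R, t = rigid_registration(acquired_pts, mri_pts)
print(np.allclose(R @ acquired_pts.T + t[:, None], mri_pts.T, atol=1e-6))  # True
```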