
    The Application of Mixed Reality Within Civil Nuclear Manufacturing and Operational Environments

    This thesis documents the design and application of Mixed Reality (MR) within a nuclear manufacturing cell through the creation of a Digitally Assisted Assembly Cell (DAAC). The DAAC is a proof-of-concept system combining full-body tracking within a room-sized environment with a bi-directional feedback mechanism to allow communication between users within the Virtual Environment (VE) and a manufacturing cell. This allows for training, remote assistance, delivery of work instructions, and data capture within a manufacturing cell. The research underpinning the DAAC encompasses four main areas: the nuclear industry, Virtual Reality (VR) and MR technology, MR within manufacturing, and finally the Fourth Industrial Revolution (IR4.0). Using an array of Kinect sensors, the DAAC was designed to capture user movements within a real manufacturing cell, which can be transferred in real time to a VE, creating a digital twin of the real cell. Users can interact with each other via digital assets and laser pointers projected into the cell, accompanied by a built-in Voice over Internet Protocol (VoIP) system. This allows for the capture of implicit knowledge from operators within the real manufacturing cell, as well as the transfer of that knowledge to future operators. Additionally, users can connect to the VE from anywhere in the world. In this way, experts are able to communicate with the users in the real manufacturing cell and assist with their training. The human tracking data fills an identified gap in the IR4.0 network of Cyber Physical Systems (CPS), and could allow for future optimisations within manufacturing systems, Material Resource Planning (MRP) and Enterprise Resource Planning (ERP). This project is a demonstration of how MR could prove valuable within nuclear manufacture. The DAAC is designed to be low cost, in the hope that this will allow its use by groups who have traditionally been priced out of MR technology. This could help Small to Medium Enterprises (SMEs) close the double digital divide between themselves and larger global corporations. For larger corporations it offers the benefit of being low cost and consequently easier to roll out across the value chain. Skills developed in one area can also be transferred to others across the internet, as users from one manufacturing cell can watch and communicate with those in another. However, as a proof of concept, the DAAC is at Technology Readiness Level (TRL) five or six and, prior to its wider application, further testing is required to assess and improve the technology. The work was patented in the UK (S. Reddish et al., 2017a), the US (S. Reddish et al., 2017b) and China (S. Reddish et al., 2017c). The patents are owned by Rolls-Royce and cover the methods of bi-directional feedback through which users can interact from the digital to the real and vice versa.
    Stephen Reddish, Mixed Mode Realities in Nuclear Manufacturing. Key words: Mixed Mode Reality, Virtual Reality, Augmented Reality, Nuclear, Manufacture, Digital Twin, Cyber Physical System
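
    To make the data-capture idea concrete, the sketch below shows one way a tracked skeleton frame could be serialised and streamed to a VE server over TCP. This is a minimal illustration only: the thesis does not specify a wire format, and every name here (the JSON message layout, send_skeleton, the ve-server.local endpoint) is a hypothetical stand-in, not the DAAC's actual protocol.

        import json
        import socket
        import time
        from dataclasses import dataclass, asdict

        @dataclass
        class Joint:
            name: str   # e.g. "head", "hand_left"
            x: float    # position in the cell's coordinate frame (metres)
            y: float
            z: float

        def send_skeleton(sock: socket.socket, user_id: str, joints: list) -> None:
            """Serialise one tracked skeleton frame and stream it to the VE server."""
            message = {
                "user": user_id,
                "timestamp": time.time(),
                "joints": [asdict(j) for j in joints],
            }
            sock.sendall((json.dumps(message) + "\n").encode("utf-8"))

        # Usage: stream a single frame to the hypothetical VE endpoint.
        if __name__ == "__main__":
            with socket.create_connection(("ve-server.local", 9000)) as sock:
                frame = [Joint("head", 0.1, 1.7, 2.4), Joint("hand_left", -0.3, 1.1, 2.2)]
                send_skeleton(sock, "operator-01", frame)

    A newline-delimited JSON stream like this is one plausible low-cost transport; the real system would also need the reverse channel for the bi-directional feedback the patents describe.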

    Optical Methods in Sensing and Imaging for Medical and Biological Applications

    The recent advances in optical sources and detectors have opened up new opportunities for sensing and imaging techniques which can be successfully used in biomedical and healthcare applications. This book, entitled ‘Optical Methods in Sensing and Imaging for Medical and Biological Applications’, focuses on various aspects of the research and development related to these areas. The book will be a valuable source of information for anyone interested in this subject, presenting recent advances in optical methods and novel techniques as well as their applications in the fields of biomedicine and healthcare.

    Projection-based Spatial Augmented Reality for Interactive Visual Guidance in Surgery

    Ph.D. (Doctor of Philosophy)

    Augmented reality supported order picking using projected user interfaces

    Order picking is one of the most important tasks in modern warehouses. Since most of the work is still done manually, new methods to improve the efficiency of the task are being researched. While the most widely used approaches, Pick-by-Paper and Pick-by-Light, are either prone to error or scalable only at high cost, other methods are under consideration. These include Pick-by-Vision systems based on Augmented Reality, although such systems mostly rely on head-mounted displays. To evaluate a new method, we developed OrderPickAR, which uses an order picking cart together with projected user interfaces. OrderPickAR is part of the motionEAP project at the University of Stuttgart and relies on in-situ projection as well as motion recognition to guide the user and present feedback. The intuitive feedback provided by the in-situ projection and the motion recognition gives OrderPickAR the chance to effectively eliminate errors while lowering task completion time. With the use of a mobile workstation we also address the scalability of OrderPickAR. Since development alone is not sufficient, we also conducted a study in which we compared OrderPickAR to currently used approaches. In addition, we included a Pick-by-Vision approach developed in a related project by Sebastian Pickl. We analysed and compared different error types as well as task completion time.
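
    As a rough illustration of the guidance loop described above, the sketch below highlights the target bin via projection, waits for a reach event from the motion recogniser, and records wrong-bin picks. The project and detect_reach callables are hypothetical placeholders; the abstract does not describe motionEAP's actual APIs, so this is a sketch of the general pick-by-projection pattern only.

        from dataclasses import dataclass

        @dataclass
        class Bin:
            item: str
            x: int  # projector-space coordinates of the bin (pixels)
            y: int

        def guide_pick(order, bins, project, detect_reach):
            """Walk through an order: highlight each target bin via in-situ
            projection, wait for the motion recogniser to report a reach,
            and log any wrong-bin picks."""
            errors = []
            for item in order:
                target = bins[item]
                project(target.x, target.y, colour="green")  # project the pick cue
                # detect_reach blocks until a hand enters some bin and must
                # return one of the Bin objects from `bins`.
                reached = detect_reach()
                if reached is not target:
                    errors.append(f"picked from {reached.item} instead of {item}")
                project(target.x, target.y, colour="off")    # clear the cue
            return errors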

    Beyond reality - extending a presentation trainer with an immersive VR module

    The development of multimodal sensor-based applications designed to support learners in improving their skills is expensive, since most of these applications are tailor-made and built from scratch. In this paper, we show how the Presentation Trainer (PT), a multimodal sensor-based application designed to support the development of public speaking skills, can be modularly extended with a Virtual Reality real-time feedback module (VR module), which makes usage of the PT more immersive and comprehensive. The described study consists of a formative evaluation and has two main objectives. Firstly, a technical objective is concerned with the feasibility of extending the PT with an immersive VR module. Secondly, a user experience objective focuses on the level of satisfaction of interacting with the VR-extended PT. To study these objectives, we conducted user tests with 20 participants. Results from our tests show the feasibility of modularly extending existing multimodal sensor-based applications, and in terms of learning and user experience, results indicate a positive attitude of the participants towards using the application (PT + VR module).

    Augmented Reality Assistance for Surgical Interventions using Optical See-Through Head-Mounted Displays

    Augmented Reality (AR) offers an interactive user experience by enhancing the real-world environment with computer-generated visual cues and other perceptual information. It has been applied in different domains, e.g. manufacturing, entertainment and healthcare, through different AR media. An Optical See-Through Head-Mounted Display (OST-HMD) is specialized hardware for AR, where the computer-generated graphics can be overlaid directly onto the user's normal vision via optical combiners. Using an OST-HMD for surgical intervention has many potential perceptual advantages. As a novel concept, many technical and clinical challenges exist for OST-HMD-based AR to be clinically useful, which motivates the work presented in this thesis. From the technical aspects, we first investigate the display calibration of the OST-HMD, which is an indispensable procedure to create accurate AR overlay. We propose various methods to reduce the user-related error, improve robustness of the calibration, and remodel the calibration as a 3D-3D registration problem. Secondly, we devise methods and develop a hardware prototype to increase the user's visual acuity of both real and virtual content through the OST-HMD, to aid them in tasks that require high visual acuity, e.g. dental procedures. Thirdly, we investigate the occlusion caused by the OST-HMD hardware, which limits the user's peripheral vision. We propose to use alternative indicators to remind the user of unattended environment motion. From the clinical perspective, we identified many clinical use cases where OST-HMD-based AR is potentially helpful, developed applications integrated with current clinical systems, and conducted proof-of-concept evaluations. We first present a "virtual monitor" for image-guided surgery. It can replace real radiology monitors in the operating room with easier user control and more flexibility in positioning. We evaluated the "virtual monitor" for simulated percutaneous spine procedures. Secondly, we developed ARssist, an application for the bedside assistant in robotic surgery. The assistant can see the robotic instruments and endoscope within the patient body with ARssist. We evaluated the efficiency, safety and ergonomics of the assistant during two typical tasks: instrument insertion and manipulation. The performance for inexperienced users is significantly improved with ARssist, and for experienced users, the system significantly enhanced their confidence level. Lastly, we developed ARAMIS, which utilizes real-time 3D reconstruction and visualization to aid the laparoscopic surgeon. It demonstrates the concept of "X-ray see-through" surgery. Our preliminary evaluation validated the application via a peg transfer task, and also showed significant improvement in hand-eye coordination. Overall, we have demonstrated that OST-HMD-based AR applications provide ergonomic improvements, e.g. hand-eye coordination. In challenging situations or for novice users, the improvements in ergonomic factors lead to improvement in task performance. With continuous effort as a community, optical see-through augmented reality technology will be a useful interventional aid in the near future.
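
    The thesis recasts display calibration as a 3D-3D registration problem. The standard closed-form least-squares solver for such a problem (the SVD-based method of Arun et al., also known as the Kabsch algorithm) is sketched below; this shows the generic technique only and is not claimed to be the thesis's exact formulation.

        import numpy as np

        def register_3d_3d(P, Q):
            """Closed-form least-squares rigid registration (Arun/Kabsch).
            P, Q: (N, 3) arrays of corresponding points.
            Returns R, t such that R @ P[i] + t ~= Q[i]."""
            p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
            H = (P - p_mean).T @ (Q - q_mean)  # 3x3 cross-covariance matrix
            U, _, Vt = np.linalg.svd(H)
            # Guard against a reflection solution (det = -1).
            D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
            R = Vt.T @ D @ U.T
            t = q_mean - R @ p_mean
            return R, t

    The determinant guard keeps R a proper rotation; without it, noisy or degenerate point sets can yield a reflection rather than a valid rigid transform.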

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the center of focus in most devices remains on improving end-effector dexterity and precision, as well as on improving access to minimally invasive surgeries. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions for increased user perception of the augmented world. Researchers in the field have long faced innumerable issues with low accuracy in tool placement around complex trajectories, pose estimation, and difficulty in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We attempt to outline the shortcomings in current optimization algorithms for surgical robots (such as YOLO and LSTM) whilst providing mitigating solutions to internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remain promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.

    Robotic Implant Modification for Neuroplastic Surgery

    Neuroplastic surgery, which combines neurosurgery with plastic surgery, is a novel field that has not been rigorously studied. It has crucial clinical potential in implanting instrumented devices for brain imaging, targeted drug delivery, deep brain stimulation, shunt placement, and so on. A specific application of neuroplastic surgery is single-stage cranioplasty. Current practice involves resizing a prefabricated oversized customized cranial implant (CCI). This method provides intraoperative flexibility for skull resection. However, surgeons need to manually resize the CCI to fit the craniofacial bone defect based on their judgment and estimation. This manual modification can be time-consuming and imprecise, resulting in large bone gaps between the skull and the resized implant. This work investigates the possibility of applying robotic and computer-integrated techniques to improve the procedure. This dissertation describes the development and examination of several systems to address the challenges that emerged from the CCI resizing process: (i) To assist the manual modification, a portable projection mapping device (PPMD) provides precise real-time visual guidance for surgeons to outline the defect boundary on the oversized CCI. (ii) Even with the assistance of a projection system, the subsequent manual resizing may still be imprecise and prone to failure. This work introduces an automated workflow for intraoperative CCI modification using a robotic system. (iii) A 2-scan method accomplishes the patient-to-CT registration using a handheld 3D scanner and addresses the challenges posed by the soft tissues and the surgical draping requirement using reattachable fiducial markers. (iv) A toolpath algorithm generates a cutting toolpath for the robot to resize the implant based on the defect geometry. (v) Due to certain limitations associated with mechanical cutting, this work presents a 5-axis CO₂ laser cutting system that achieves fast and precise implant modification, ideal for fabricating instrumented implants. The evaluation of the automated workflow shows a significant improvement in CCI resizing accuracy. This indicates a lower risk of implant failure causing post-surgical complications. Furthermore, the functions provided by these systems can be expanded to other neuroplastic applications.
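
    As a simplified, hypothetical illustration of the toolpath step (iv): for a planar defect outline, offsetting each boundary vertex outward by the tool radius yields a contour for the cutter centre such that the finished cut edge lands on the defect boundary. The dissertation's actual algorithm handles 5-axis robot and laser geometry and is more involved; the 2D sketch below conveys only the core offset idea.

        import numpy as np

        def cutting_toolpath(boundary, tool_radius):
            """Offset a closed, counter-clockwise planar contour outward by
            the tool radius, so the cutter's edge (not its centre) follows
            the defect boundary. boundary: (N, 2) array of vertices."""
            prev_v = np.roll(boundary, 1, axis=0)
            next_v = np.roll(boundary, -1, axis=0)
            tangents = next_v - prev_v  # central-difference tangents
            tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
            # Outward normal for a CCW contour: rotate tangent by -90 degrees.
            normals = np.stack([tangents[:, 1], -tangents[:, 0]], axis=1)
            return boundary + tool_radius * normals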