
    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion of technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
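    Among the optical techniques such a review covers, passive stereo is one of the most common. The sketch below (an illustration, not taken from the paper) shows how a dense disparity map from a calibrated stereo laparoscope can be converted to depth with OpenCV; the file paths, focal length, and baseline are placeholder values.

        import cv2
        import numpy as np

        # Placeholder calibration values for an assumed stereo laparoscope.
        FOCAL_PX = 700.0    # focal length in pixels (hypothetical)
        BASELINE_MM = 4.0   # stereo baseline in millimetres (hypothetical)

        left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
        right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

        # Block-matching stereo: OpenCV returns disparity in fixed-point
        # format (multiplied by 16), so divide before triangulating.
        matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disparity = matcher.compute(left, right).astype(np.float32) / 16.0

        # Depth from disparity: Z = f * B / d, valid only where d > 0.
        valid = disparity > 0
        depth_mm = np.zeros_like(disparity)
        depth_mm[valid] = FOCAL_PX * BASELINE_MM / disparity[valid]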

    Discrete event simulation and virtual reality use in industry: new opportunities and future trends

    This paper reviews the combined use of discrete event simulation (DES) and virtual reality (VR) within industry. While establishing the state of the art for progress in this area, the paper makes the case for VR DES as the vehicle of choice for complex data analysis through interactive simulation models, highlighting both its advantages and current limitations. It reviews active research topics such as real-time VR and DES integration, communication protocols, system design considerations, model validation, and applications of VR and DES. While summarizing future research directions for this technology combination, the case is made for smart-factory adoption of VR DES as a new platform for scenario testing and decision making. It is argued that for VR DES to fully meet the visualization requirements of both the Industry 4.0 and Industrial Internet visions of digital manufacturing, further research is required in lower-latency image processing, DES delivery as a service, gesture recognition for VR DES interaction, and the linkage of DES to real-time data streams and Big Data sets.
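    The core of any DES engine that a VR front end consumes is a time-ordered event queue. The minimal sketch below (an illustration, not from the paper) advances simulated time event by event and pushes each state change to a stand-in publish function where a real-time VR client could subscribe.

        import heapq
        from typing import Callable

        class DES:
            """Minimal discrete event simulator: events fire in timestamp order."""
            def __init__(self) -> None:
                self.now = 0.0
                self._queue: list[tuple[float, int, Callable[[], None]]] = []
                self._seq = 0  # tie-breaker so equal-time events stay ordered

            def schedule(self, delay: float, action: Callable[[], None]) -> None:
                heapq.heappush(self._queue, (self.now + delay, self._seq, action))
                self._seq += 1

            def run(self, until: float) -> None:
                while self._queue and self._queue[0][0] <= until:
                    self.now, _, action = heapq.heappop(self._queue)
                    action()

        def publish_to_vr(message: str) -> None:
            # Stand-in for a real-time link (e.g. a socket) to a VR scene.
            print(message)

        sim = DES()
        sim.schedule(1.0, lambda: publish_to_vr(f"t={sim.now}: part enters machine"))
        sim.schedule(3.5, lambda: publish_to_vr(f"t={sim.now}: part leaves machine"))
        sim.run(until=10.0)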

    A Comparison of Visualisation Methods for Disambiguating Verbal Requests in Human-Robot Interaction

    Picking up objects requested by a human user is a common task in human-robot interaction. When multiple objects match the user's verbal description, the robot needs to clarify which object the user is referring to before executing the action. Previous research has focused on perceiving the user's multimodal behaviour to complement verbal commands, or on minimising the number of follow-up questions to reduce task time. In this paper, we propose a system for reference disambiguation based on visualisation and compare three methods for disambiguating natural language instructions. In a controlled experiment with a YuMi robot, we investigated real-time augmentations of the workspace in three conditions -- mixed reality, augmented reality, and a monitor as the baseline -- using objective measures such as time and accuracy, and subjective measures such as engagement, immersion, and display interference. Significant differences were found in accuracy and engagement between the conditions, but not in task time. Despite the higher error rates in the mixed reality condition, participants found that modality more engaging than the other two, but overall preferred the augmented reality condition over the monitor and mixed reality conditions.
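    Reference disambiguation of this kind reduces to filtering the scene's objects by the attributes mentioned in the command and triggering a clarification step only when more than one candidate survives. A minimal sketch under assumed attribute names (not the paper's implementation):

        from dataclasses import dataclass

        @dataclass
        class SceneObject:
            name: str
            color: str
            shape: str

        def resolve(command: dict[str, str], scene: list[SceneObject]) -> list[SceneObject]:
            """Return every object matching all attributes mentioned in the command."""
            return [
                obj for obj in scene
                if all(getattr(obj, attr) == value for attr, value in command.items())
            ]

        scene = [
            SceneObject("cup_1", "red", "cylinder"),
            SceneObject("cup_2", "red", "cylinder"),
            SceneObject("box_1", "blue", "cube"),
        ]

        candidates = resolve({"color": "red"}, scene)
        if len(candidates) > 1:
            # Ambiguous: highlight candidates in AR/MR and ask the user to pick.
            print("Which one?", [c.name for c in candidates])
        elif candidates:
            print("Picking", candidates[0].name)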

    Development of a dynamic virtual reality model of the inner ear sensory system as a learning and demonstrating tool

    In order to keep track of the position and motion of our body in space, nature has given us a fascinating and very ingenious organ, the inner ear. Each inner ear includes five biological sensors - three angular and two linear accelerometers - which provide the body with the ability to sense angular and linear motion of the head with respect to inertial space. The aim of this paper is to present a dynamic virtual reality model of these sensors. This model, implemented in Matlab/Simulink, simulates the rotary chair test, one of the tests carried out during a diagnosis of the vestibular system. High-quality 3D animations linked to the Simulink model are created by exporting CAD models into Virtual Reality Modeling Language (VRML) files. This virtual environment shows not only the test but also the state of each sensor (excited or inhibited) in real time. Virtual reality is used as a tool for integrated learning of the dynamic behavior of the inner ear, using an ergonomic paradigm of user interactivity (zoom, rotation, mouse interaction, ...). It can be used as a learning and demonstration tool either in the medical field - to understand the behavior of the sensors during any kind of motion - or in the aeronautical field, to relate inner ear functioning to some sensory illusions.
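    The abstract does not reproduce the sensor dynamics, but the classical model behind Simulink implementations of the semicircular canals is the torsion-pendulum model from the vestibular literature (given here for orientation, not quoted from the paper). Cupula-endolymph displacement xi responds to head angular acceleration as a heavily over-damped second-order system:

        \ddot{\xi} + \frac{\Pi}{\Theta}\,\dot{\xi} + \frac{\Delta}{\Theta}\,\xi = \ddot{\phi}
        \qquad\Longrightarrow\qquad
        \frac{\Xi(s)}{\dot{\Phi}(s)} \approx \frac{\tau_1 \tau_2\, s}{(\tau_1 s + 1)(\tau_2 s + 1)}

    Here phi is head angular position, Theta the moment of inertia of the endolymph, Pi the viscous damping, Delta the cupula stiffness, and tau_1 = Pi/Delta, tau_2 = Theta/Pi are the long and short time constants (on the order of seconds and milliseconds, respectively), so the canal behaves approximately as an angular-velocity transducer over the frequency range of natural head movements.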

    Supporting visualization analysis in industrial process tomography by using augmented reality—A case study of an industrial microwave drying system

    Industrial process tomography (IPT)-based process control is a promising approach for improving system efficiency and quality in industrial heating processes. When using it, appropriate dataflow pipelines and visualizations are key for domain users to implement precise data acquisition and analysis. In this article, we propose a complete data processing and visualization workflow for a specific case: a microwave tomography (MWT)-controlled industrial microwave drying system. Furthermore, we present an up-to-date augmented reality (AR) technique to support the corresponding data visualization and on-site analysis. As a pioneering study of using AR to benefit IPT systems, the proposed AR module provides straightforward and comprehensible visualizations of the process data to the related users. The dataflow of the case includes a time reversal imaging algorithm, a post-imaging segmentation, and a volumetric visualization module. For the time reversal algorithm, we introduce each step of MWT image reconstruction in detail and then present the simulated results. For the post-imaging segmentation, an automatic tomographic segmentation algorithm is used to reveal the significant information contained in the reconstructed images. For volumetric visualization, the 3D generated information is displayed. Finally, the proposed AR system is integrated with the ongoing process data, including reconstructed, segmented, and volumetric images, to facilitate interactive on-site data analysis for domain users. The central part of the AR system is implemented as a mobile app that is currently supported on iOS/Android platforms.
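    The abstract does not specify the automatic tomographic segmentation algorithm; as a hedged stand-in, the sketch below applies Otsu thresholding to a reconstructed MWT slice to separate the salient region from the background, a common baseline for this kind of post-imaging step. The simulated slice data is hypothetical.

        import numpy as np
        from skimage.filters import threshold_otsu

        def segment_slice(reconstruction: np.ndarray) -> np.ndarray:
            """Binary mask of the salient region in one reconstructed slice."""
            level = threshold_otsu(reconstruction)
            return reconstruction > level

        # Hypothetical reconstructed slice: background noise plus a warm region.
        rng = np.random.default_rng(0)
        slice_img = rng.normal(0.1, 0.02, (64, 64))
        slice_img[20:40, 20:40] += 0.5  # simulated high-contrast target

        mask = segment_slice(slice_img)
        print("segmented pixels:", int(mask.sum()))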

    Virtual Meeting Rooms: From Observation to Simulation

    Much working time is spent in meetings and, as a consequence, meetings have become the subject of multidisciplinary research. Virtual Meeting Rooms (VMRs) are 3D virtual replicas of meeting rooms in which various modalities, such as speech, gaze, distance, gestures and facial expressions, can be controlled. This allows VMRs to be used to improve remote meeting participation, to visualize multimedia data, and as an instrument for research into social interaction in meetings. This paper describes how these three uses can be realized in a VMR. We describe the process from observation through annotation to simulation, and a model that describes the relations between the annotated features of verbal and non-verbal conversational behavior. As an example of social perception research in the VMR, we describe an experiment assessing the accuracy with which human observers judge head orientation.
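    The observation-annotation-simulation pipeline amounts to recording timestamped, per-participant behaviour labels and replaying them onto virtual avatars. A minimal, hypothetical data layout is sketched below; the paper's actual annotation scheme is richer.

        from dataclasses import dataclass

        @dataclass
        class Annotation:
            """One annotated behaviour interval for one meeting participant."""
            participant: str
            modality: str      # e.g. "gaze", "gesture", "speech"
            value: str         # e.g. gaze target, gesture label, utterance id
            start: float       # seconds from meeting start
            end: float

        def active_at(annotations: list[Annotation], t: float) -> list[Annotation]:
            """Annotations driving the avatars at simulation time t (replay step)."""
            return [a for a in annotations if a.start <= t < a.end]

        track = [
            Annotation("A", "gaze", "participant_B", 0.0, 2.5),
            Annotation("B", "gesture", "nod", 1.0, 1.6),
        ]
        for ann in active_at(track, 1.2):
            print(f"{ann.participant}: {ann.modality} -> {ann.value}")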

    From Industry to Practice: Can Users Tackle Domain Tasks with Augmented Reality?

    Augmented Reality (AR) is a cutting-edge interactive technology. While Virtual Reality (VR) is based on completely virtual and immersive environments, AR superimposes virtual objects onto the real world. The value of AR has been demonstrated and applied within numerous industrial application areas due to its capability of providing interactive interfaces for visualized digital content. AR can provide functional tools that support users in undertaking domain-related tasks, especially facilitating data visualization and interaction by jointly augmenting physical space and user perception. Making effective use of the advantages of AR, especially its ability to augment human vision to help users perform different domain-related tasks, is the central part of my PhD research.

    Industrial process tomography (IPT), a non-intrusive and commonly used imaging technique, has been effectively harnessed in many manufacturing components for inspection, monitoring, product quality control, and safety. IPT underpins and facilitates the extraction of qualitative and quantitative data about the related industrial processes, which is usually visualized in various ways for users to understand its nature, measure the critical process characteristics, and implement process control in a complete feedback network. The adoption of AR to benefit IPT and its related fields is currently still scarce, leaving a gap between AR techniques and industrial applications. This thesis establishes a bridge between AR practitioners and IPT users in four stages. The first is a need-finding study of how IPT users can harness AR techniques. The second is a conceptualized AR framework, together with an implemented mobile AR application developed for an optical see-through (OST) head-mounted display (HMD). The third is a complete approach for IPT users to interact with tomographic visualizations, together with the corresponding user study.

    Building on the shared technologies from industry, the fourth stage proposes and examines an AR approach for visual search tasks that provides visual hints, audio hints, and gaze-assisted instant post-task feedback. The target case was a book-searching task, in which we explored the effect of the hints and the feedback under two hypotheses: that both visual and audio hints positively affect AR search tasks, with their combination outperforming either alone; and that instant post-task feedback positively affects AR search tasks. The proof of concept was demonstrated by an AR app in an HMD with a two-stage user evaluation. The first stage was a pilot study (n=8), which identified the impact of the visual hint on search task performance. The second was a comprehensive user study (n=96) consisting of two sub-studies, Study I (n=48) and Study II (n=48). Following quantitative and qualitative analysis, our results partially verified the first hypothesis and completely verified the second, leading to the conclusion that the synthesis of visual and audio hints conditionally improves AR search task efficiency when coupled with task feedback.
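    As a hedged illustration of the kind of logic behind such hints (not the thesis implementation), the sketch below decides when to show a visual hint and when to add an audio cue for a search target, based on the angle between the user's gaze ray and the direction to the target; the threshold values and positions are placeholders.

        import numpy as np

        def angle_to_target(gaze_dir: np.ndarray, head_pos: np.ndarray,
                            target_pos: np.ndarray) -> float:
            """Angle (degrees) between the gaze ray and the direction to the target."""
            to_target = target_pos - head_pos
            cos_a = np.dot(gaze_dir, to_target) / (
                np.linalg.norm(gaze_dir) * np.linalg.norm(to_target))
            return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

        def choose_hints(angle_deg: float) -> dict[str, bool]:
            # Placeholder thresholds: draw an arrow when the target is far from
            # the gaze direction, add an audio cue when it is behind the user.
            return {"visual_arrow": angle_deg > 15.0, "audio_cue": angle_deg > 90.0}

        angle = angle_to_target(np.array([0.0, 0.0, 1.0]),   # gaze forward
                                np.array([0.0, 1.6, 0.0]),   # head position
                                np.array([2.0, 1.0, -1.0]))  # book position
        print(choose_hints(angle))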