
    Potential applications for virtual and augmented reality technologies in sensory science

    Sensory science has advanced significantly in the past decade and is quickly evolving to become a key tool for predicting food product success in the marketplace. Increasingly, sensory data techniques are moving towards more dynamic aspects of sensory perception, taking account of the various stages of user-product interaction. Recent technological advancements in virtual reality and augmented reality have unlocked the potential for new immersive and interactive systems which could be applied as powerful tools for capturing and deciphering the complexities of human sensory perception. This paper reviews recent advancements in virtual and augmented reality technologies and identifies and explores their potential application within the field of sensory science. The paper also considers the possible benefits for the food industry as well as key challenges posed for widespread adoption. The findings indicate that these technologies have the potential to alter the research landscape in sensory science by facilitating promising innovations in five principal areas: consumption context, biometrics, food structure and texture, sensory marketing, and augmenting sensory perception. Although the advent of augmented and virtual reality in sensory science offers exciting new developments, the exploitation of these technologies is in its infancy, and future research is needed to understand how they can be fully integrated with food and human responses.

    Industrial relevance: The need for sensory evaluation within the food industry is becoming increasingly complex as companies continuously compete for consumer product acceptance in today's highly innovative and global food environment. Recent technological developments in virtual and augmented reality offer the food industry new opportunities for generating more reliable insights into consumer sensory perceptions of food and beverages, contributing to the design and development of new products with optimised consumer benefits. These technologies also hold significant potential for improving the predictive validity of newly launched products within the marketplace.

    A Modular and Open-Source Framework for Virtual Reality Visualisation and Interaction in Bioimaging

    Life science today involves computational analysis of a large amount and variety of data, such as volumetric data acquired by state-of-the-art microscopes, or mesh data from the analysis of such data or from simulations. The advent of new imaging technologies, such as lightsheet microscopy, has confronted users with an ever-growing amount of data, with even terabytes of imaging data created within a day. With the possibility of gentler and higher-performance imaging, the spatiotemporal complexity of the model systems or processes of interest is increasing as well. Visualisation is often the first step in making sense of this data, and a crucial part of building and debugging analysis pipelines. It is therefore important that visualisations can be quickly prototyped, as well as developed into or embedded in full applications. In order to better judge spatiotemporal relationships, immersive hardware, such as Virtual or Augmented Reality (VR/AR) headsets and their associated controllers, is becoming an invaluable tool. In this work we present scenery, a modular and extensible visualisation framework for the Java VM that can handle mesh and large volumetric data containing multiple views, timepoints, and colour channels. scenery is free and open-source software, works on all major platforms, and uses the Vulkan or OpenGL rendering APIs. We introduce scenery's main features, and discuss its use with VR/AR hardware and in distributed rendering. In addition to the visualisation framework, we present a series of case studies where scenery provides tangible benefit in developmental and systems biology. With Bionic Tracking, we demonstrate a new technique for tracking cells in 4D volumetric datasets via eye-gaze tracking in a virtual reality headset, with the potential to speed up manual tracking tasks by an order of magnitude. We further introduce ideas for moving towards virtual reality-based laser ablation and perform a user study to gain insight into performance, acceptance and issues when performing ablation tasks with virtual reality hardware in fast-developing specimens. To tame the amount of data originating from state-of-the-art volumetric microscopes, we present ideas on how to render the highly efficient Adaptive Particle Representation, and finally, we present sciview, an ImageJ2/Fiji plugin making the features of scenery available to a wider audience.
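
    The Bionic Tracking case study rests on casting the viewer's gaze ray into the volume and snapping it to the nearest detected cell centre. Below is a minimal, hypothetical sketch of that selection step in Python; the function name, the toy centroid array and the distance threshold are illustrative assumptions, not scenery's actual API.

```python
import numpy as np

def pick_cell_along_gaze(origin, direction, centroids, max_dist=5.0):
    """Return the index of the cell centroid closest to a gaze ray, or None.

    origin, direction : 3-vectors describing the gaze ray.
    centroids         : (N, 3) array of detected cell centres in the volume.
    max_dist          : reject candidates farther than this from the ray.
    """
    d = direction / np.linalg.norm(direction)
    rel = centroids - origin                 # vectors from the eye to each centroid
    t = np.clip(rel @ d, 0.0, None)          # projection onto the ray, ignoring points behind the viewer
    closest = origin + np.outer(t, d)        # nearest point on the ray to each centroid
    dist = np.linalg.norm(centroids - closest, axis=1)
    best = int(np.argmin(dist))
    return best if dist[best] <= max_dist else None

# Toy usage: three fake centroids, gaze looking straight down the z-axis.
cells = np.array([[0.0, 0.0, 10.0], [4.0, 4.0, 8.0], [0.5, -0.2, 20.0]])
print(pick_cell_along_gaze(np.zeros(3), np.array([0.0, 0.0, 1.0]), cells))
```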

    Immersive analytics for oncology patient cohorts

    This thesis proposes a novel interactive immersive analytics tool and methods to interrogate cancer patient cohorts in an immersive virtual environment, namely Virtual Reality to Observe Oncology data Models (VROOM). The overall objective is to develop an immersive analytics platform which includes a data analytics pipeline from raw gene expression data to immersive visualisation on virtual and augmented reality platforms utilising a game engine; Unity3D has been used to implement the visualisation. Work in this thesis could provide oncologists and clinicians with an interactive visualisation and visual analytics platform that helps them drive their analysis of treatment efficacy and achieve the goal of evidence-based personalised medicine. The thesis integrates the latest discoveries and developments in cancer patient prognosis, immersive technologies, machine learning, decision support systems and interactive visualisation to form an immersive analytics platform for complex genomic data. The experimental paradigm followed in this thesis is the understanding of transcriptomics in cancer samples. It specifically investigates gene expression data to determine the biological similarity revealed by patients' tumour transcriptomic profiles, which reveal the genes active in different patients. In summary, the thesis contributes: i) a novel immersive analytics platform for patient cohort data interrogation in a similarity space based on the patients' biological and genomic similarity; ii) an effective immersive environment optimisation design based on a usability study of exocentric and egocentric visualisation, audio and sound design optimisation; iii) an integration of trusted and familiar 2D biomedical visual analytics methods into the immersive environment; iv) a novel use of game theory as the decision-making engine to assist the analytics process, and an application of optimal transport theory to missing data imputation to ensure the preservation of the data distribution; and v) case studies showcasing the real-world application of the visualisation and its effectiveness.
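
    As an illustration of the kind of similarity space such a platform builds on, the sketch below derives a patient-by-patient similarity matrix from a gene expression table and projects it to 2D coordinates that an immersive scene could use for layout. The variable names, the use of Pearson correlation and the SVD-based projection are assumptions for illustration, not the thesis' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy expression matrix: 6 patients x 50 genes (rows would normally be real tumour samples).
expr = rng.lognormal(mean=2.0, sigma=1.0, size=(6, 50))

# Log-transform and z-score each gene so highly expressed genes do not dominate the similarity.
logged = np.log1p(expr)
z = (logged - logged.mean(axis=0)) / (logged.std(axis=0) + 1e-9)

# Patient-by-patient similarity: Pearson correlation of transcriptomic profiles.
similarity = np.corrcoef(z)

# Simple 2D embedding of the similarity matrix (SVD of the centred rows),
# the kind of layout a VR scene could use to position patient markers.
centred = similarity - similarity.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
coords = centred @ vt[:2].T
print(np.round(coords, 3))
```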

    Immersive technology and medical visualisation: a user's guide

    The immersive technologies of Virtual and Augmented Reality offer a new medium for visualisation. Where previous technologies allowed us only two-dimensional representations, constrained by a surface or a screen, these new immersive technologies will soon allow us to experience three-dimensional environments that can occupy our entire field of view. This is a technological breakthrough for any field that requires visualisation, and in this chapter I explore the implications for medical visualisation in the near-to-medium future. First, I introduce Virtual Reality and Augmented Reality respectively, and identify the essential characteristics and the current state of the art of each. I then survey some prominent applications already in use within the medical field, and suggest potential use cases that remain under-explored. Finally, I offer practical advice for those seeking to exploit these new tools.

    Machine learning and interactive real-time simulation for training on relevant total hip replacement skills.

    Virtual Reality simulators have proven to be an excellent tool in the medical sector, helping trainees master surgical skills by providing them with unlimited training opportunities. Total Hip Replacement (THR) is a procedure that can benefit significantly from VR/AR training, given its non-reversible nature. Of all the steps required to perform a THR, doctors agree that correct fitting of the acetabular component of the implant has the highest relevance for ensuring successful outcomes. Acetabular reaming is the step during which the acetabulum is resurfaced and prepared to receive the acetabular implant, and its success is directly related to the success of fitting the acetabular component. This thesis therefore focuses on developing digital tools that can be used to assist training in acetabular reaming. Devices such as navigation systems and robotic arms have been shown to improve the final accuracy of the procedure. However, surgeons must learn to adapt their instrument movements so that they are recognised by infrared cameras. When surgeons are first introduced to these systems, surgical times can be extended by up to 20 minutes, increasing surgical risk. Training opportunities are sparse, given the high investment required to purchase these devices. As a cheaper alternative, we developed an Augmented Reality (AR) simulator for training in the calibration of imageless navigation systems (INS). At the time, no alternative simulators used head-mounted displays to train users in the steps required to calibrate such systems. Our simulator replicates the presence of an infrared camera and its interaction with the reflective markers located on the surgical tools. A group of six hip surgeons was invited to test the simulator. All of them expressed satisfaction with the ease of use and attractiveness of the simulator, as well as the similarity of the interaction to the real procedure. The study confirmed that our simulator represents a cheaper and faster option for training multiple surgeons simultaneously in the use of Imageless Navigation Systems (INS) than learning exclusively in the surgical theatre. Current reviews of simulators for orthopaedic surgical procedures lack objective assessment metrics based on a standard set of design requirements; instead, most rely exclusively on the level of interaction and functionality provided. We propose a comparative assessment rubric based on three evaluation criteria: immersion, interaction fidelity, and the applied learning theories. After our assessment, we found that none of the simulators available for THR provides an accurate interactive representation of resurfacing procedures such as acetabular reaming based on the force inputs exerted by the user. This feature is indispensable for an orthopaedics simulator, given that hand-eye coordination skills are essential skills to be trained before performing non-reversible bone removal on real patients. Based on the findings of our comparative assessment, we decided to develop a model to simulate the physically-based deformation expected during traditional acetabular reaming, given the user's interaction with a volumetric mesh. Current interactive deformation methods on high-resolution meshes are based on geometric collision detection and do not consider the contribution of the materials' physical properties.
    By ignoring the effect of the material mechanics and the force exerted by the user, they become inadequate for training hand-eye coordination skills transferable to the surgical theatre. Volumetric meshes are preferred over geometric ones in surgical simulation, given that they are able to represent the internal evolution of deformable solids resulting from cutting and shearing operations. Existing numerical methods for representing linear and corotational FEM cuts can only maintain interactive framerates at low mesh resolutions. Therefore, we decided to train a machine-learning model to learn the continuum-mechanics laws relevant to acetabular reaming and predict deformations at interactive framerates. To the best of our knowledge, no previous research has trained a machine learning model on non-elastic FEM data to achieve results at interactive framerates. As training data, we used the results of XFEM simulations precomputed over 5000 frames for plastic deformations on tetrahedral meshes with 20406 elements each. We selected XFEM simulation as the physically-based ground truth given its accuracy and fast convergence in representing cuts, discontinuities and large strain rates. Our interactive machine-learning model was built from Graph Neural Network (GNN) blocks. GNNs were selected to learn on tetrahedral meshes because other supervised-learning architectures, such as the multilayer perceptron (MLP) and convolutional neural networks (CNN), are unable to learn relationships between entities with an arbitrary number of neighbours. The learned simulator identifies the elements to be removed in each frame and describes the evolution of the accumulated stress in the whole machined piece. Using data generated from the XFEM results allowed us to embed the effects of non-linearities in our interactive simulations without extra processing time. The trained model executed the prediction task on our tetrahedral mesh with unseen reamer orientations faster per frame than the time required to generate the training FEM dataset. Given an unseen orientation of the reamer, the trained GNN model updates the value of accumulated stress on each of the 20406 tetrahedral elements that constitute our mesh. Once this value is updated, the tetrahedra to be removed from the mesh are identified using a threshold condition. Feeding each single-frame output back in as the input for the next prediction for up to 60 iterations, our model maintains an accuracy of up to 90.8% in identifying the status of each element from its value of accumulated stress. Finally, we demonstrate how the developed estimator can easily be connected to any game engine and included in the development of a fully functional hip arthroplasty simulator.
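
    A hedged sketch of the autoregressive rollout described above, with a stand-in `predict_stress` in place of the trained GNN; the stand-in function, the threshold value and the random increments are assumptions, and only the loop structure (stress accumulation, thresholding, element removal, feeding each frame's output back in) mirrors the described approach.

```python
import numpy as np

N_ELEMENTS = 20406          # tetrahedral elements in the mesh, as stated in the abstract
STRESS_THRESHOLD = 1.0      # assumed removal threshold, arbitrary units here

rng = np.random.default_rng(1)

def predict_stress(stress, removed, reamer_pose):
    """Stand-in for the trained GNN: returns a per-element stress increment.

    A real model would take the mesh graph, the current stress field and the
    reamer pose; here we just add small random increments to intact elements.
    """
    increment = rng.random(stress.shape) * 0.05
    increment[removed] = 0.0            # removed elements accumulate no further stress
    return increment

stress = np.zeros(N_ELEMENTS)
removed = np.zeros(N_ELEMENTS, dtype=bool)
reamer_pose = np.eye(4)                 # an unseen reamer orientation would go here

# Feed each single-frame output back in as the next input, as in the 60-iteration rollout.
for frame in range(60):
    stress += predict_stress(stress, removed, reamer_pose)
    removed |= stress > STRESS_THRESHOLD  # threshold condition marks elements for removal

print(f"removed {removed.sum()} of {N_ELEMENTS} elements after 60 frames")
```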

    Exploring the visualisation of hierarchical cybersecurity data within the Metaverse

    A prototype Metaverse experience was created in which users could explore hierarchical cybersecurity data. A small group of participants was surveyed on their attitudes towards the Metaverse. They then completed a short series of tasks in the environment, and questions were asked to assess whether they were suffering from cybersickness. After completing further tasks, they were surveyed on their attitudes towards future uses of the Metaverse in the organisation. A second cohort of participants attended an online seminar. They completed a survey about their attitudes towards the Metaverse and then watched a short video of the Metaverse experience. Afterwards, they answered questions about their attitudes towards future uses of the Metaverse in the organisation. The results of these questionnaires were assessed to see whether participants were receptive to the idea of working with data inside the Metaverse in the future.
    Comment: MSc Dissertation

    The selection and evaluation of a sensory technology for interaction in a warehouse environment

    In recent years, Human-Computer Interaction (HCI) has become a significant part of modern life, as it has improved human performance in the completion of daily tasks using computerised systems. The increase in the variety of bio-sensing and wearable technologies on the market has propelled designers towards designing more efficient, effective and fully natural User Interfaces (UI), such as the Brain-Computer Interface (BCI) and the Muscle-Computer Interface (MCI). BCI and MCI have been used for various purposes, such as controlling wheelchairs, piloting drones, providing alphanumeric inputs into a system and improving sports performance. Workers in a warehouse environment experience various challenges. Because they often have to carry objects (referred to as hands-full), it is difficult for them to interact with traditional devices. Noise exists in many industrial environments and is a major cause of communication problems; this has reduced the popularity of using verbal interfaces with computer applications such as Warehouse Management Systems. Another factor that affects the performance of workers is action slips caused by a lack of concentration during, for example, routine picking activities. These can have a negative impact on job performance and lead a worker to execute a task incorrectly in a warehouse environment. This research project investigated the current challenges workers experience in a warehouse environment and the technologies utilised in this environment. The latest automation and identification systems and technologies are identified and discussed, specifically those which have addressed known problems. Sensory technologies were identified that enable interaction between a human and a computerised warehouse environment. Biological and natural human behaviours applicable to interaction with a computerised environment were described and discussed. The interactive behaviours included visual, auditory, speech-production and physiological-movement behaviours, and other natural human behaviours such as paying attention, action slips and the action of counting items were also investigated. A number of modern sensory technologies, devices and techniques for HCI were identified with the aim of selecting and evaluating an appropriate sensory technology for MCI. MCI technologies enable a computer system to recognise hand and other gestures of a user, creating a means of direct interaction between a user and a computer, as they are able to detect specific features extracted from a specific biological or physiological activity. Thereafter, Machine Learning (ML) is applied in order to train a computer system to detect these features and convert them into a computer interface. An application of biomedical signals (bio-signals) in HCI using a MYO Armband for MCI is presented. An MCI prototype (MCIp) was developed and implemented to allow a user to provide input to an HCI in both hands-free and hands-full situations. The MCIp was designed and developed to recognise the hand and finger gestures of a person when both hands are free or when holding an object, such as a cardboard box. The MCIp applies an Artificial Neural Network (ANN) to classify features extracted from the surface electromyography (sEMG) signals acquired by the MYO Armband around the forearm muscles. Employing the ANN, the MCIp classified gestures to an accuracy level of 34.87% in the hands-free situation.
    Furthermore, the MCIp enabled users to provide numeric inputs to the system hands-full with an accuracy of 59.7% after a training session of only 10 seconds per gesture. The results were obtained using eight participants. Similar experimentation with the MYO Armband had not been found in the literature at the time of submission of this document. Based on this novel experimentation, the main contribution of this research study is the suggestion that a MYO Armband, as a commercially available muscle-sensing device, has potential as an MCI for recognising finger gestures both hands-free and hands-full. An accurate MCI can increase the efficiency and effectiveness of an HCI tool when applied to different applications in a warehouse, where noise and hands-full activities pose a challenge. Future work to improve its accuracy is proposed.
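
    The pipeline outlined above (windowed sEMG features from the eight MYO channels fed to an ANN classifier) could be sketched as follows. The feature choice (mean absolute value and waveform length), the window size and the synthetic stand-in data are assumptions for illustration, not the MCIp's actual implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
N_CHANNELS, WINDOW = 8, 200   # the MYO Armband has 8 sEMG channels; a 200-sample window is assumed

def features(window):
    """Per-channel mean absolute value and waveform length, a common sEMG feature pair."""
    mav = np.mean(np.abs(window), axis=1)
    wl = np.sum(np.abs(np.diff(window, axis=1)), axis=1)
    return np.concatenate([mav, wl])

# Synthetic stand-in dataset: 300 windows for each of 5 finger gestures
# (real data would be streamed from the armband during the 10-second training session).
X, y = [], []
for label in range(5):
    for _ in range(300):
        window = rng.normal(scale=1.0 + 0.3 * label, size=(N_CHANNELS, WINDOW))
        X.append(features(window))
        y.append(label)
X, y = np.array(X), np.array(y)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```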

    Interactive Three-Dimensional Simulation and Visualisation of Real Time Blood Flow in Vascular Networks

    One of the challenges in cardiovascular disease management is the clinical decision-making process. When a clinician is dealing with complex and uncertain situations, the decision on whether or how to intervene is made based on distinct information from diverse sources. Several variables can affect how the vascular system responds to treatment, including the extent of the damage and scarring, the efficiency of blood flow remodelling, and any associated pathology. Moreover, an intervention may lead to further unforeseen complications (e.g. another stenosis may be "hidden" further along the vessel). Currently, there is no tool for predicting or exploring such scenarios. This thesis explores the development of a highly adaptive real-time simulation of blood flow that considers patient-specific data and clinician interaction. The simulation should model blood realistically and accurately through complex vascular networks in real time, providing robust flow scenarios that can be incorporated into the medical decision-making and planning tool set. The focus is on specific regions of the anatomy where accuracy is of the utmost importance and where the flow can develop into specific patterns, with the aim of better understanding the patient's condition and predicting factors of its future evolution. Results from the validation of the simulation showed promising comparisons with the literature and demonstrated viability for clinical use.
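
    One common way to make such simulations run in real time is to reduce the vascular network to a lumped-parameter (0D) model, in which each vessel segment acts as a Poiseuille resistance and the nodal pressures follow from a small linear system. The sketch below illustrates that general idea with an entirely hypothetical three-segment network; it is not the method developed in the thesis.

```python
import numpy as np

MU = 3.5e-3  # blood viscosity, Pa.s (typical value)

def poiseuille_resistance(length, radius):
    """Hydraulic resistance of a cylindrical vessel segment: R = 8*mu*L / (pi*r^4)."""
    return 8.0 * MU * length / (np.pi * radius**4)

# Hypothetical network: node 0 is the inlet, node 3 the outlet, nodes 1-2 are internal.
# Each segment is (node_a, node_b, length [m], radius [m]).
segments = [(0, 1, 0.05, 2.0e-3), (1, 2, 0.04, 1.5e-3), (2, 3, 0.05, 1.8e-3)]
n_nodes = 4

# Assemble the nodal conductance matrix (flow balance at each node).
G = np.zeros((n_nodes, n_nodes))
for a, b, length, radius in segments:
    g = 1.0 / poiseuille_resistance(length, radius)
    G[a, a] += g; G[b, b] += g
    G[a, b] -= g; G[b, a] -= g

# Boundary conditions: fixed pressures at the inlet (13 kPa ~ 100 mmHg) and outlet (10 kPa).
p = np.full(n_nodes, np.nan)
p[0], p[3] = 13_000.0, 10_000.0
free = np.isnan(p)

# Solve G_ff * p_f = -G_fb * p_b for the unknown internal pressures.
A = G[np.ix_(free, free)]
b = -G[np.ix_(free, ~free)] @ p[~free]
p[free] = np.linalg.solve(A, b)

flows = [(p[a] - p[b]) / poiseuille_resistance(length, radius) for a, b, length, radius in segments]
print("node pressures [Pa]:", np.round(p, 1))
print("segment flows [m^3/s]:", [f"{q:.2e}" for q in flows])
```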