
    The Analysis of design and manufacturing tasks using haptic and immersive VR - Some case studies

    The use of virtual reality in interactive design and manufacture has been researched extensively, but the practical application of this technology in industry is still very much in its infancy. This is surprising, as one would have expected that, after some 30 years of research, commercial applications of interactive design or manufacturing planning and analysis would be widespread throughout the product design domain. One of the major but less well known advantages of VR technology is that logging the user yields a great deal of rich data which can be used to automatically generate designs or manufacturing instructions, analyse design and manufacturing tasks, map engineering processes and, tentatively, acquire expert knowledge. The authors feel that the benefits of VR in these areas have not been fully disseminated to the wider industrial community, and, with the advent of cheaper PC-based VR solutions, a wider appreciation of the capabilities of this type of technology may encourage companies to adopt VR solutions for some of their product design processes. With this in mind, this paper describes in detail applications of haptics in assembly, demonstrating how user task logging can lead to the analysis of design and manufacturing tasks at a level of detail not previously possible, as well as giving usable engineering outputs. The haptic 3D VR study involves the use of a Phantom haptic device and a 3D display system to analyse this technology and compare it against real-world user performance. This work demonstrates that the detailed logging of tasks in a virtual environment gives considerable potential for understanding how virtual tasks can be mapped onto their real-world equivalents, and shows how haptic process plans can be generated in a similar manner to the conduit design and assembly planning HMD VR tool reported in Part A. The paper concludes with the authors' view of how the use of VR systems in product design and manufacturing should evolve in order to enable the industrial adoption of this technology in the future.
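    The task-logging idea above lends itself to a simple illustration. The Python sketch below shows how timestamped user interactions in a virtual assembly session might be recorded and then collapsed into an ordered process plan; the class names, event types and plan format are illustrative assumptions, not the authors' actual system.

```python
# Minimal sketch of user task logging in a virtual assembly session.
# All class and event names are illustrative assumptions, not the authors' API.
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class TaskEvent:
    timestamp: float   # seconds since session start
    action: str        # e.g. "grasp", "move", "release"
    part_id: str       # identifier of the manipulated part
    position: tuple    # tool-tip position in the virtual scene

@dataclass
class SessionLog:
    events: list = field(default_factory=list)
    t0: float = field(default_factory=time.monotonic)

    def record(self, action, part_id, position):
        self.events.append(TaskEvent(time.monotonic() - self.t0,
                                      action, part_id, position))

    def to_process_plan(self):
        # Collapse the raw event stream into an ordered assembly sequence:
        # one step per release of a part at its final position.
        return [f"Place part {ev.part_id} at {ev.position}"
                for ev in self.events if ev.action == "release"]

    def dump(self, path):
        with open(path, "w") as f:
            json.dump([asdict(e) for e in self.events], f, indent=2)

# Usage: the VR/haptic loop would call log.record(...) on each interaction.
log = SessionLog()
log.record("grasp", "bolt_3", (0.10, 0.25, 0.05))
log.record("release", "bolt_3", (0.12, 0.40, 0.05))
print(log.to_process_plan())
```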

    Exploring individual user differences in the 2D/3D interaction with medical image data

    User-centered design is often performed without regard to individual user differences. In this paper, we report the results of an empirical study aimed at evaluating whether computer experience and demographic user characteristics have an effect on the way people interact with visualized medical data in a 3D virtual environment using 2D and 3D input devices. We analyzed the interaction through performance data, questionnaires and observations. The results suggest that differences in gender, age and game experience have an effect on people's behavior and task performance, as well as on subjective user preferences.
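    As a rough illustration of how such performance data might be analysed, the sketch below compares task completion times between two hypothetical user groups with a non-parametric test; the data, grouping variable and test choice are assumptions for illustration, not the study's actual analysis.

```python
# Illustrative sketch (not the authors' analysis): comparing task completion
# times between user groups, e.g. high vs. low game experience.
import pandas as pd
from scipy import stats

# Hypothetical per-participant performance data.
df = pd.DataFrame({
    "completion_time_s": [41, 38, 52, 47, 35, 60, 55, 44],
    "game_experience":   ["high", "high", "low", "low",
                          "high", "low", "low", "high"],
})

high = df.loc[df.game_experience == "high", "completion_time_s"]
low = df.loc[df.game_experience == "low", "completion_time_s"]

# Non-parametric test, appropriate for small samples of timing data.
stat, p = stats.mannwhitneyu(high, low, alternative="two-sided")
print(f"U={stat:.1f}, p={p:.3f}")
```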

    A Software Framework to Create 3D Browser-Based Speech Enabled Applications

    Advances in automatic speech recognition have pushed human-computer interface researchers to adopt speech as one means of input. It is natural to humans and complements other input interfaces very well. However, integrating an automatic speech recognizer into a complex system (such as a 3D visualization system or a virtual reality system) can be a difficult and time-consuming task. In this paper we present our approach to the problem: a software framework requiring minimum additional coding from the application developer. The framework combines voice commands with existing interaction code, automating the task of creating a new speech grammar (to be used by the recognizer). A new listener component for the Xj3D browser was created, which makes the integration between the 3D browser and the recognizer transparent to the user. We believe this is a desirable feature for virtual reality system developers, and that the framework can also serve as a rapid prototyping tool when experimenting with speech technology.
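    To make the grammar-generation idea concrete, the sketch below shows one plausible way to derive a speech grammar from the commands an application already exposes and to dispatch recognised phrases back to the existing interaction callbacks. The class name, JSGF output and dispatch logic are illustrative assumptions, not the framework's actual API.

```python
# Sketch: derive a speech grammar from already-registered interaction
# commands and dispatch recognised phrases. Names are assumptions.

class SpeechCommandRegistry:
    def __init__(self):
        self._handlers = {}   # phrase -> existing interaction callback

    def register(self, phrase, handler):
        self._handlers[phrase.lower()] = handler

    def to_jsgf(self, grammar_name="appCommands"):
        # Build a JSGF grammar covering every registered phrase.
        alternatives = " | ".join(self._handlers)
        return (f"#JSGF V1.0;\n"
                f"grammar {grammar_name};\n"
                f"public <command> = {alternatives};\n")

    def dispatch(self, recognised_text):
        handler = self._handlers.get(recognised_text.lower().strip())
        if handler:
            handler()
        else:
            print(f"No command bound to: {recognised_text!r}")

# Usage: reuse existing interaction callbacks as voice commands.
registry = SpeechCommandRegistry()
registry.register("rotate left", lambda: print("rotating model left"))
registry.register("zoom in", lambda: print("zooming in"))

print(registry.to_jsgf())       # grammar handed to the recogniser
registry.dispatch("ZOOM IN")    # text returned by the recogniser
```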

    A haptic-enabled multimodal interface for the planning of hip arthroplasty

    Multimodal environments help fuse a diverse range of sensory modalities, which is particularly important when integrating the complex data involved in surgical preoperative planning. The authors present a multimodal interface for the preoperative planning of hip arthroplasty that integrates immersive stereo displays and haptic modalities. This article overviews this multimodal application framework and discusses the benefits of incorporating the haptic modality in this area.

    A novel haptic model and environment for maxillofacial surgical operation planning and manipulation

    This paper presents a practical method and a new haptic model to support the manipulation of bones and their segments during the planning of a surgical operation in a virtual environment using a haptic interface. To perform an effective dental surgery, it is important to have all operation-related information about the patient available beforehand in order to plan the operation and avoid complications. A haptic interface with an accurate virtual patient model to support the planning of bone cuts is therefore critical and useful for surgeons. The proposed system uses DICOM images taken from a digital tomography scanner and creates a mesh model of the filtered skull, from which the jaw bone can be isolated for further use. A novel solution for cutting the bones has been developed: the haptic tool is used to determine and define the bone-cutting plane in the bone, and this approach creates three new meshes from the original model. In this way the computational load is kept low and real-time feedback can be achieved during all bone manipulations. During mesh cutting, a friction profile predefined in the haptic system simulates the force-feedback feel of different bone densities.
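    The plane-based cut can be illustrated with a short sketch: the haptic tool pose supplies a point and a normal, and mesh vertices are partitioned by their signed distance to that plane. This is a minimal, assumption-laden illustration of the geometric idea, not the paper's meshing or cutting implementation.

```python
# Sketch: partition mesh vertices by a cutting plane defined from the
# haptic tool pose. Variable names and tolerances are illustrative.
import numpy as np

def split_by_plane(vertices, plane_point, plane_normal, tol=1e-6):
    """Partition vertex indices into the two sides of the cutting plane
    and the set lying (within tolerance) on the cut surface itself."""
    n = plane_normal / np.linalg.norm(plane_normal)
    signed_dist = (vertices - plane_point) @ n
    above = np.where(signed_dist > tol)[0]
    below = np.where(signed_dist < -tol)[0]
    on_plane = np.where(np.abs(signed_dist) <= tol)[0]
    return above, below, on_plane

# Hypothetical jaw-bone vertices and a cutting plane from the tool pose.
verts = np.random.rand(1000, 3)
tool_point = np.array([0.5, 0.5, 0.5])
tool_normal = np.array([0.0, 0.0, 1.0])

upper, lower, cut = split_by_plane(verts, tool_point, tool_normal)
print(len(upper), len(lower), len(cut))
```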

    Interactive Visualization of Multimodal Brain Connectivity: Applications in Clinical and Cognitive Neuroscience

    Magnetic resonance imaging (MRI) has become a readily available prognostic and diagnostic method, providing invaluable information for the clinical treatment of neurological diseases. Multimodal neuroimaging allows the integration of complementary data on various aspects such as functional and anatomical properties, and thus has the potential to overcome the limitations of each individual modality. Specifically, functional and diffusion MRI are two non-invasive neuroimaging techniques designed to capture brain activity and microstructural properties, respectively. Data from these two modalities is inherently complex, and interactive visualization can assist with data comprehension. This thesis presents the design, development, and validation of visualization and computation approaches that address the need to integrate brain connectivity from the functional and structural domains. Two contexts were considered in developing these approaches: neuroscience exploration and minimally invasive neurosurgical planning. The goal was to provide novel visualization algorithms and gain new insights into large and complex data (e.g., brain networks) through visual analytics. This goal was achieved in three steps:
    1. 3D Graphical Collision Detection: one of the primary challenges was the timely rendering of grey matter (GM) regions and white matter (WM) fibers based on their 3D spatial maps. The objects are pre-scanned to generate a memory array containing their intersections with memory units, which allows faster retrieval of the GM and WM virtual models during user interaction.
    2. Neuroscience Enquiry (MultiXplore): a software interface was developed to display and react to user input by means of a connectivity matrix. The matrix displays connectivity information, accepts selections from users and shows the selected connections in a 3D anatomical view together with the associated anatomical elements. In addition, the package can load multiple matrices produced by dynamic connectivity methods and annotate brain fibers.
    3. Neurosurgical Planning (NeuroPathPlan): a computational method was provided to map network measures onto GM and WM, so that a subject-specific eloquence metric can be derived from the related resting-state networks and used in the objective assessment of cortical and subcortical tissue. This metric was then compared with a priori, knowledge-based decisions from neurosurgeons. Preliminary results show that the eloquence metric has significant similarities with expert decisions.
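    The pre-scanning step in the collision-detection work can be illustrated with a simple uniform-grid index: objects are bucketed by the voxels their points fall into, so that candidates near the interaction point can be retrieved quickly. Cell size, object names and the grid structure are illustrative assumptions rather than the thesis's actual data structure.

```python
# Sketch: pre-scan 3D objects (e.g. GM region surfaces, WM fibre points)
# into a uniform voxel grid for fast lookup during interaction.
import numpy as np
from collections import defaultdict

class VoxelIndex:
    def __init__(self, cell_size=5.0):
        self.cell_size = cell_size
        self.cells = defaultdict(set)   # voxel coords -> object ids

    def _key(self, point):
        return tuple((np.asarray(point) // self.cell_size).astype(int))

    def add_object(self, obj_id, points):
        # Pre-scan: register every voxel the object's points fall into.
        for p in points:
            self.cells[self._key(p)].add(obj_id)

    def query(self, point):
        # Fast retrieval during interaction: objects sharing the voxel.
        return self.cells.get(self._key(point), set())

# Hypothetical usage with random fibre and region points.
index = VoxelIndex(cell_size=5.0)
index.add_object("fibre_12", np.random.rand(200, 3) * 100)
index.add_object("region_SMA", np.random.rand(50, 3) * 100)
print(index.query([42.0, 17.0, 63.0]))
```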

    Anomaly detection and virtual reality visualisation in supercomputers

    Anomaly detection is the identification of events or observations that deviate from the expected behaviour of a given set of data. Its main application is the prediction of possible technical failures. Anomaly detection on supercomputers is a particularly difficult problem due to the large scale of the systems and the large number of components. Most research in this field employs machine learning methods and regression models in a supervised fashion, which implies the need for a large amount of labelled data to train such systems. This work proposes the use of autoencoder models, allowing the problem to be approached with semi-supervised learning techniques. Two different model training approaches are compared. In the first, a single model is trained with data from all the nodes of a supercomputer. In the second, after observing significant differences between nodes, one model is trained for each node. The results are analysed by evaluating the positive and negative aspects of each approach. In addition, a replica of the Marconi 100 supercomputer was developed in a virtual reality environment that allows the data from every node to be visualised at the same time. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. We would like to thank the "A way of making Europe" European Regional Development Fund (ERDF) and MCIN/AEI/10.13039/501100011033 for supporting this work under the MoDeaAS project (grant PID2019-104818RB-I00). Furthermore, we would like to thank the University of Skövde and the ASSAR Innovation Arena for their support in developing this work.
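    A minimal sketch of the semi-supervised autoencoder idea is given below: a small autoencoder is trained on (mostly normal) node telemetry, and samples with high reconstruction error are flagged as anomalies. Feature count, layer sizes, threshold and the use of PyTorch are assumptions for illustration, not the paper's configuration.

```python
# Sketch: autoencoder-based anomaly detection on node telemetry.
import torch
import torch.nn as nn

class NodeAutoencoder(nn.Module):
    def __init__(self, n_features=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(),
                                     nn.Linear(16, 8))
        self.decoder = nn.Sequential(nn.Linear(8, 16), nn.ReLU(),
                                     nn.Linear(16, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, data, epochs=50, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(data), data)   # reconstruction loss
        loss.backward()
        opt.step()
    return model

def flag_anomalies(model, data, threshold):
    with torch.no_grad():
        err = ((model(data) - data) ** 2).mean(dim=1)
    return err > threshold   # True where reconstruction error is high

# One model per node (the second approach described above) would simply
# repeat this training with that node's telemetry only.
telemetry = torch.randn(1024, 32)            # hypothetical normal samples
model = train(NodeAutoencoder(32), telemetry)
print(flag_anomalies(model, torch.randn(4, 32), threshold=1.5))
```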

    Ubiquitous Integration and Temporal Synchronisation (UbiITS) framework: a solution for building complex multimodal data capture and interactive systems

    Contemporary data capture and interactive systems (DCIS) involve various technical complexities, such as multimodal data types, diverse hardware and software components, time synchronisation issues and distributed deployment configurations. Building these systems is inherently difficult and requires these complexities to be addressed before the intended and purposeful functionalities can be attained. The technical issues are often common and similar among diverse applications. This thesis presents the Ubiquitous Integration and Temporal Synchronisation (UbiITS) framework, a generic solution to address the technical complexities in building DCISs. The proposed solution is an abstract software framework that can be extended and customised to any application's requirements. UbiITS includes all fundamental software components, techniques, system-level layer abstractions and a reference architecture as a collection that enables the systematic construction of complex DCISs. This work details four case studies to showcase the versatility and extensibility of the UbiITS framework and to demonstrate how it was employed to successfully solve a range of technical requirements. In each case UbiITS operated as the core element of the application. Additionally, the case-study systems are themselves novel in their respective domains. Longstanding technical issues, such as flexibly integrating and interoperating multimodal tools and precise time synchronisation, were resolved in each application by employing UbiITS. The framework enabled a functional system infrastructure to be established in these cases, essentially opening up new lines of research in each discipline, research that would not have been possible without the infrastructure provided by the framework. The thesis further presents a sample implementation of the framework as device firmware, exhibiting its capability to be implemented directly on a hardware platform. Summary metrics are also produced to establish the complexity, reusability, extendibility, implementation and maintainability characteristics of the framework. Engineering and Physical Sciences Research Council (EPSRC) grants EP/F02553X/1, 114433 and 11394.
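    One of the recurring issues the framework addresses, time synchronisation across multimodal streams, can be sketched as follows: each stream's samples are mapped onto a common reference timeline using a per-device clock offset, and samples are then matched by nearest reference time. The offset model and class names are assumptions for illustration, not UbiITS's actual implementation.

```python
# Sketch: align samples from multiple capture streams onto a common timeline.
from bisect import bisect_left

class Stream:
    def __init__(self, name, clock_offset_s=0.0):
        self.name = name
        self.clock_offset_s = clock_offset_s   # device clock vs. reference clock
        self.samples = []                      # (reference_time, value), kept in order

    def add(self, device_time, value):
        # Convert the device timestamp to the shared reference timeline.
        self.samples.append((device_time - self.clock_offset_s, value))

    def nearest(self, t):
        # Return the sample whose reference time is closest to t.
        times = [s[0] for s in self.samples]
        i = bisect_left(times, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
        best = min(candidates, key=lambda j: abs(times[j] - t))
        return self.samples[best]

# Usage: fuse an eye-tracker event with the nearest EEG sample in reference time.
eye = Stream("eye_tracker", clock_offset_s=0.120)
eeg = Stream("eeg", clock_offset_s=-0.045)
eye.add(10.250, "fixation")
eeg.add(10.020, [0.3, 0.1])
eeg.add(10.120, [0.4, 0.2])
print(eeg.nearest(eye.samples[0][0]))
```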