1,369 research outputs found

    Implementation of Virtual Reality (VR) simulators in Norwegian maritime pilotage training

    With millions of tons of cargo transported to and from Norwegian ports every year, the maritime waterways in Norway are heavily used. The high consequences of accidents and mishaps require well-trained seafarers and safe operating practices. Vessels that do not meet specific regulations must be supported by the Norwegian Coastal Administration (NCA) pilot service. Simulator training is part of the toolset used to educate, train, and advance the knowledge of maritime pilots, and thereby improve their operational capability. The NCA is running an internal project to distribute Virtual Reality (VR) simulators to selected pilot stations along the coast and to train and familiarize maritime pilots with the tool. Research on VR simulators, and on how they are implemented in maritime organizations, has been scarce. The goal of this research is to determine whether a VR simulator can be used as a training tool within the NCA's pilot service. The findings of this study also contribute to the understanding of VR simulators in the field of Maritime Education and Training (MET). The thesis addresses two research questions: 1. Is Virtual Reality training useful in the competence-development process of Norwegian maritime pilots? 2. How can Virtual Reality simulators improve the training outcomes of today's maritime pilot education? The data gathered from the systematic literature review corresponds with the findings of the interviews. Given the similarities with previous findings from sectors such as healthcare, construction, and education, it is concluded that the interview results can be generalized. For maritime pilots, the simulator offers recurrent scenario-based training and a high level of immersion. Thanks to the system's mobility and user-friendliness, pilots can train at home, onboard a vessel, at the pilot station, and in group settings.
In terms of motivation and training effectiveness, the study finds VR simulators to be effective and beneficial, and the technology received positive reviews from the pilots. According to the findings, the simulator can be used to introduce both novice and experienced maritime pilots to new operations, larger tonnage, and new operational areas. Once the NCA has used VR simulators for some time, further research could assess their success through a training-evaluation study and investigate the impact of VR training on the organization.

    Advanced Visualization and Intuitive User Interface Systems for Biomedical Applications

    Modern scientific research produces data at rates that far outpace our ability to comprehend and analyze it. Sources include medical imaging data and computer simulations, where advances in technology and spatiotemporal resolution generate ever larger amounts of data from each scan or simulation. A bottleneck has developed whereby medical professionals and researchers are unable to fully use the advanced information available to them. Scientific visualization of medical data, integrating computer science, computer graphics, artistic ability, and medical expertise, has emerged as a new field of study. The objective of this thesis is to develop two visualization systems that use advanced visualization, natural user-interface technologies, and the large amount of biomedical data available to produce results of clinical utility and to overcome the data bottleneck that has developed. Computational Fluid Dynamics (CFD) is a tool for studying the quantities associated with the movement of blood by computer simulation. We developed methods for processing spatiotemporal CFD data and displaying it in stereoscopic 3D with the ability to spatially navigate through the data. We used this method with two sets of display hardware: a full-scale visualization environment and a small-scale desktop system. The advanced display and data-navigation abilities give the user the means to better understand the relationship between the vessel's form and function. Low-cost 3D, depth-sensing cameras capture and process user body motion to recognize motions and gestures. Such devices allow users to use hand motions as an intuitive interface to computer applications. We developed algorithms to process and prepare the biomedical and scientific data for use with a custom control application. The application interprets user gestures as commands to a visualization tool and allows the user to control the visualization of multi-dimensional data.
The intuitive interface allows the user to control the visualization of data without manual contact with an interaction device. In developing these methods and software tools, we have leveraged recent trends in advanced visualization and intuitive interfaces to visualize biomedical data efficiently, in a way that provides meaningful information and furthers understanding of the data.
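The core pattern of interpreting recognized gestures as commands to a visualization tool can be sketched as a simple dispatch table. The gesture names, event fields, and command effects below are hypothetical illustrations, not the thesis's actual API:

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical gesture event as a depth-camera recognizer might emit it.
@dataclass
class GestureEvent:
    name: str         # e.g. "swipe_left", "pinch"
    magnitude: float  # normalized gesture extent, 0.0-1.0

# Minimal stand-in for the visualization's camera/view state.
@dataclass
class ViewState:
    rotation_deg: float = 0.0
    zoom: float = 1.0

def make_dispatcher(view: ViewState) -> Dict[str, Callable[[GestureEvent], None]]:
    """Map each recognized gesture name to a visualization command."""
    return {
        "swipe_left":  lambda e: setattr(view, "rotation_deg", view.rotation_deg - 45 * e.magnitude),
        "swipe_right": lambda e: setattr(view, "rotation_deg", view.rotation_deg + 45 * e.magnitude),
        "pinch":       lambda e: setattr(view, "zoom", max(0.1, view.zoom * (1 - 0.5 * e.magnitude))),
        "spread":      lambda e: setattr(view, "zoom", view.zoom * (1 + 0.5 * e.magnitude)),
    }

view = ViewState()
dispatch = make_dispatcher(view)
for event in [GestureEvent("swipe_right", 1.0), GestureEvent("spread", 0.5)]:
    dispatch[event.name](event)

print(view.rotation_deg)  # 45.0
print(view.zoom)          # 1.25
```

Decoupling recognition from command execution this way lets the same recognizer drive different visualization back-ends, which matches the abstract's pairing of one control application with both full-scale and desktop displays.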

    A new approach to study gait impairments in Parkinson’s disease based on mixed reality

    Master's dissertation in Integrated Biomedical Engineering (specialization in Medical Electronics). Parkinson's disease (PD) is the second most common neurodegenerative disorder after Alzheimer's disease. PD onset occurs at 55 years of age on average, and its incidence increases with age. The disease results from the degeneration of dopamine-producing neurons in the basal ganglia and is characterized by various motor symptoms, such as freezing of gait, bradykinesia, hypokinesia, akinesia, and rigidity, which negatively impact patients' quality of life. To monitor and improve these PD-related gait disabilities, several technology-based methods have emerged in recent decades. However, these solutions still require more customization to patients' daily-living tasks in order to provide more objective, reliable, long-term data about patients' motor condition in home-related contexts. Providing this quantitative data to physicians will enable more personalized and better treatments. Motor-rehabilitation sessions supported by assistive devices also need to include quotidian tasks to train patients for their daily motor challenges. One of the most promising technology-based methods is virtual, augmented, and mixed reality (VR/AR/MR), which immerses patients in virtual environments and provides sensory stimuli (cues) to assist with these disabilities. However, further research is needed to conceptualize and improve efficient, patient-centred VR/AR/MR approaches and to increase their clinical evidence. With this in mind, the main goal of this dissertation was to design, develop, test, and validate virtual environments for assessing and training PD-related gait impairments using mixed-reality smart glasses integrated with a motion-tracking device.
Using specific virtual environments that trigger PD-related gait impairments (turning, doorways, and narrow spaces), it is hypothesized that patients can be assessed and trained in their daily walking-related challenges. The tool also integrates on-demand visual cues to provide visual biofeedback and foster motor training. The solution was validated with end-users to test this hypothesis. The results showed that mixed reality can indeed recreate, by placing virtual objects on top of the real world, the real-life environments that often provoke PD-related gait disabilities. By contrast, the biofeedback strategies did not significantly improve the patients' motor performance. The user-experience evaluation showed that participants enjoyed the activity and felt that the tool can help their motor performance.

    FEeSU - A Framework for Evaluating eHealth Systems Usability: A Case of Tanzania Health Facilities

    Adopting eHealth systems in the health sector has changed the way health services are provided and has increased the quality of service in many countries. The usability of these systems needs to be evaluated from time to time to reduce or entirely avoid the risk of jeopardizing patients' data, medication errors, and similar failures. However, the existing frameworks are not sensitive to country context, since they are designed with the practices of developed countries in mind. Developed countries have different cultures, resource settings, and levels of computer literacy compared to developing countries such as Tanzania. This paper presents the Framework for Evaluating eHealth System Usability (FEeSU), which is designed with a focus on developing-country contexts and was tested in Tanzania. Healthcare professionals, including doctors, nurses, laboratory technologists, and pharmacists, were the main participants in this research, providing practice-oriented requirements based on their experience, best practices, and healthcare norms. The framework comprises six steps to be followed in the evaluation process. These steps are associated with key components, including usability metrics, stakeholders, usability evaluation methods, and contextual issues necessary for usability evaluation. The proposed framework can serve as a guideline for different eHealth-system stakeholders when preparing, designing, and performing usability evaluations. Keywords: usability metrics; usability evaluation; contextual issues; eHealth systems; framework for usability evaluation; FEeSU. DOI: 10.7176/CEIS/10-1-01. Publication date: September 30th 202
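The abstract names the framework's component categories (usability metrics, stakeholders, evaluation methods, contextual issues) but not the six steps themselves. A data-structure sketch of how an evaluation plan built with such a framework might be represented could look as follows; the step names and example values are hypothetical, only the component categories come from the paper:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: the component categories mirror the abstract,
# everything else (step names, example values) is invented for illustration.
@dataclass
class EvaluationStep:
    name: str
    usability_metrics: List[str] = field(default_factory=list)
    stakeholders: List[str] = field(default_factory=list)
    evaluation_methods: List[str] = field(default_factory=list)
    contextual_issues: List[str] = field(default_factory=list)

@dataclass
class EvaluationPlan:
    system: str
    steps: List[EvaluationStep] = field(default_factory=list)

plan = EvaluationPlan(
    system="Hospital eHealth records system",
    steps=[
        EvaluationStep(
            name="Define evaluation scope",
            stakeholders=["doctors", "nurses", "pharmacists"],
            contextual_issues=["computer literacy", "resource settings"],
        ),
        EvaluationStep(
            name="Collect usability data",
            usability_metrics=["task completion rate", "error rate"],
            evaluation_methods=["observation", "questionnaire"],
        ),
    ],
)

print(len(plan.steps))  # 2
```

Making each step carry its own metrics, stakeholders, methods, and contextual issues keeps the plan auditable per step, which fits the paper's point that context must be considered throughout the evaluation, not bolted on at the end.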

    A Novel Virtual Reality Curriculum Improves Laparoscopic Skill in Novices

    Michael Joel Martinez, Andrew John Duffy. Department of Surgery, Yale School of Medicine, New Haven, CT. Surgical skills training, facing work-hour restrictions and a growing number of procedural skills to master, requires an innovative approach to ensure success. We developed a novel basic laparoscopic-skill, virtual reality-based simulator curriculum on the LapSim (Surgical Science, Goteborg, Sweden), with a training module and a skills exam enabling trainees to reach a minimum skill level. We hypothesized that unskilled trainees' laparoscopic performance would improve compared to controls, and that those who successfully completed the training curriculum and passed the exam would demonstrate higher skill levels than non-passers during the training period. We anticipated that skills would begin to degrade after a period of 30 days without repeated training, and expected individual trainee performance to correlate with past experience with video games, sports, or musical instruments. Thirty-two novice, pre-clinical medical students were randomized to various training schedules. All students trained on the curriculum with the goal of completing the practice drills and passing the skills exam. Students' laparoscopic skills were assessed at baseline and at monthly intervals using two tasks from the Fundamentals of Laparoscopic Surgery (FLS) curriculum that are known to correlate with operative laparoscopic skill. Additional FLS testing was performed after a one-month layoff to evaluate short-term skill degradation. Objective FLS skill scores were compared between training and non-training groups, and between passing and non-passing groups, at the completion of the study. All participants' prior experience with video games, sports, and musical instruments was correlated with study performance. Training improved FLS performance for all participants.
There was significantly greater skill development in passers versus non-passers (p < 0.05). Skills did not degrade after a 30-day layoff; instead they continued to improve for all participants, reaching a statistically significant improvement on one task. Performance was not correlated with past video game, sports, or musical-instrument experience. Trainees who successfully completed our curriculum demonstrated significantly higher laparoscopic skills, which should translate to improved operative performance. Skills were retained after the last training session and showed continued improvement at 30 days. We found no performance correlation with prior video game, sports, or musical experience.

    Using a Bayesian Framework to Develop 3D Gestural Input Systems Based on Expertise and Exposure in Anesthesia

    Interactions with a keyboard and mouse fall short of human capabilities; what is lacking in the technological revolution is a surge of new and natural ways of interacting with computers. In-air gestures are a promising input modality, as they are expressive, quick, easy to use, and natural for users. Gestural systems should be developed within a particular context, as gesture choice depends on that context; however, there is little research investigating other individual factors, such as expertise and exposure, that may influence gesture choice. Anesthesia providers' hands have been linked to bacterial transmission; this research therefore investigates gestural technology in the context of anesthetic tasks. The objective of this research is to understand how expertise and exposure influence gestural behavior and to develop Bayesian statistical models that can accurately predict which intuitive gestures users would choose in anesthesia, based on expertise and exposure. Expertise and exposure may influence individuals' gesture responses, yet there is little to no work investigating how these factors shape intuitive gesture choice or how to use this information to predict intuitive gestures for system design. If researchers can capture users' gesture variability within a particular context based on expertise and exposure, then statistical models can be developed to predict how users will gesturally respond to a computer system, and those predictions can be used to design a gestural system that anticipates the user's response and thus affords intuitiveness to multiple user groups. This allows designers to understand the end user more completely and to implement intuitive gesture systems based on expected natural responses.
Ultimately, this dissertation investigates the human-factors challenges associated with gestural system development within a specific context and offers statistical approaches to understanding and predicting human behavior in a gestural system. Two experimental studies and two Bayesian analyses were completed. The first experimental study investigated the effect of expertise within the context of anesthesiology; its main finding was that domain expertise matters when developing 3D gestural systems, as novices and experts differ both in their intuitive gesture-function mappings and in the reaction times needed to generate an intuitive mapping. The second study investigated the effect of exposure when controlling a computer-based presentation and found a learning effect of gestural control: participants generated intuitive mappings significantly faster as they gained exposure to the system. The two Bayesian analyses took the form of Bayesian multinomial logistic regression models in which intuitive gesture choice was predicted from the contextual task and either expertise or exposure. These analyses generated posterior predictive probabilities for all combinations of task, expertise level, and exposure level, and showed that gesture choice can be predicted to some degree. This work provides further insight into how 3D gestural input systems should be designed and how Bayesian statistics can be used to model human behavior.
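The dissertation's models are Bayesian multinomial logistic regressions; the core idea of a posterior predictive distribution over gesture choices per (task, expertise) group can be sketched more simply with a conjugate Dirichlet-multinomial model. The gesture names and observation data below are hypothetical, and this is a deliberate simplification of the regression approach, not the dissertation's actual model:

```python
from collections import Counter
from typing import Dict, List, Tuple

def posterior_predictive(
    observations: List[Tuple[str, str, str]],  # (task, expertise, chosen gesture)
    gestures: List[str],
    alpha: float = 1.0,  # symmetric Dirichlet prior concentration
) -> Dict[Tuple[str, str], Dict[str, float]]:
    """Posterior predictive P(gesture | task, expertise) under a
    Dirichlet-multinomial model: (count + alpha) / (total + alpha * K)."""
    counts: Dict[Tuple[str, str], Counter] = {}
    for task, expertise, gesture in observations:
        counts.setdefault((task, expertise), Counter())[gesture] += 1
    k = len(gestures)
    return {
        group: {g: (c[g] + alpha) / (sum(c.values()) + alpha * k) for g in gestures}
        for group, c in counts.items()
    }

# Hypothetical elicitation data: which gesture users chose for "zoom in".
data = [
    ("zoom_in", "novice", "spread"),
    ("zoom_in", "novice", "spread"),
    ("zoom_in", "novice", "push"),
    ("zoom_in", "expert", "spread"),
]
probs = posterior_predictive(data, gestures=["spread", "push", "swipe"])
print(probs[("zoom_in", "novice")]["spread"])  # (2+1)/(3+3) = 0.5
```

The prior keeps every gesture's probability nonzero even for sparse groups, which mirrors the dissertation's point that predictions must cover all combinations of task and expertise level, including those with few observations.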

    A white paper: NASA virtual environment research, applications, and technology

    Research support for Virtual Environment technology development has been part of NASA's human-factors research program since 1985. Under the auspices of the Office of Aeronautics and Space Technology (OAST), initial funding was provided to the Aerospace Human Factors Research Division at Ames Research Center, where this technology originated. Since 1985, other Centers have begun using and developing the technology, and at each research and space-flight center NASA missions have been its major drivers. This White Paper is the joint effort of all the Centers that have been involved in developing the technology and applying it to their unique missions. Appendix A lists those who prepared the document, directed by Dr. Cynthia H. Null, Ames Research Center, and Dr. James P. Jenkins, NASA Headquarters. The White Paper describes the technology and its applications at NASA Centers (Chapters 1, 2, and 3), the potential roles it can play in NASA (Chapters 4 and 5), and a roadmap for the next 5 years (FY 1994-1998). The intended audience consists of managers, engineers, scientists, and members of the general public with an interest in Virtual Environment technology. Readers will determine whether this roadmap, or others, are to be followed.