
    The neurocognitive gains of diagnostic reasoning training using simulated interactive veterinary cases

    The present longitudinal study ascertained training-associated transformations in the neural underpinnings of diagnostic reasoning, using a simulation game named “Equine Virtual Farm” (EVF). Twenty participants underwent structural, EVF/task-based and resting-state MRI and diffusion tensor imaging (DTI) before and after completing their training on diagnosing simulated veterinary cases. Comparing “playing veterinarian” with viewing a colorful image across training sessions revealed a transition of brain activity from regions associated with scientific creativity pre-training (left middle frontal and temporal gyrus) to regions associated with insight problem-solving post-training (right cerebellum, middle cingulate and medial superior gyrus and left postcentral gyrus). Further, applying linear mixed-effects modelling to graph centrality metrics revealed the central roles of the creative semantic system (inferior frontal, middle frontal and angular gyrus and parahippocampus) and the reward system (orbital gyrus, nucleus accumbens and putamen) in driving pre-training diagnostic reasoning, whereas regions implicated in inductive reasoning (superior temporal and medial postcentral gyrus and parahippocampus) were the main post-training hubs. Lastly, resting-state and DTI analyses revealed post-training effects within the occipitotemporal semantic processing region. Altogether, these results suggest that simulation-based training shifts diagnostic reasoning in novices from regions implicated in creative semantic processing to regions implicated in improvised rule-based problem-solving.
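
    For readers unfamiliar with the method named in the abstract, the sketch below illustrates, in broad strokes, what fitting a linear mixed-effects model to graph centrality metrics involves: one centrality value per region, participant, and session, with session (pre/post) as a fixed effect and participant as a random effect. All data, thresholds, and package choices are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal sketch: compute a graph centrality metric per region from a
# functional connectivity matrix, then fit a linear mixed-effects model with
# session as a fixed effect and participant as a random effect.
import numpy as np
import pandas as pd
import networkx as nx
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_regions, participants, sessions = 10, range(20), ("pre", "post")

rows = []
for pid in participants:
    for session in sessions:
        # Stand-in for a region-by-region functional connectivity matrix.
        fc = np.abs(rng.standard_normal((n_regions, n_regions)))
        fc = (fc + fc.T) / 2
        np.fill_diagonal(fc, 0.0)
        graph = nx.from_numpy_array((fc > 0.8).astype(int))  # arbitrary threshold
        for region, value in nx.degree_centrality(graph).items():
            rows.append({"participant": pid, "session": session,
                         "region": region, "centrality": value})

df = pd.DataFrame(rows)
# One model per region: does centrality change from pre- to post-training?
region0 = df[df.region == 0]
model = smf.mixedlm("centrality ~ session", region0, groups=region0["participant"])
print(model.fit().summary())
```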

    Evaluation of objective tools and artificial intelligence in robotic surgery technical skills assessment: a systematic review

    BACKGROUND: There is a need to standardize training in robotic surgery, including objective assessment for accreditation. This systematic review aimed to identify objective tools for technical skills assessment, providing evaluation statuses to guide research and inform implementation into training curricula. METHODS: A systematic literature search was conducted in accordance with the PRISMA guidelines. Ovid Embase/Medline, PubMed, and Web of Science were searched. Inclusion criterion: tools for robotic surgery technical skills assessment. Exclusion criteria: tools covering only non-technical skills, laparoscopy, or open surgery. Manual tools and automated performance metrics (APMs) were analysed using Messick's concept of validity and the Oxford Centre for Evidence-Based Medicine (OCEBM) Levels of Evidence and Recommendation (LoR). A bespoke tool was used to analyse artificial intelligence (AI) studies, and the Modified Downs-Black checklist was used to assess risk of bias. RESULTS: Two hundred and forty-seven studies were analysed, identifying 8 global rating scales, 26 procedure-/task-specific tools, 3 main error-based methods, 10 simulators, 28 studies analysing APMs, and 53 AI studies. The Global Evaluative Assessment of Robotic Skills and the da Vinci Skills Simulator were the most evaluated tools, at OCEBM LoR 1. Three procedure-specific tools, 3 error-based methods, and 1 non-simulator APM reached LoR 2. AI models estimated outcomes (skill or clinical), with higher reported accuracy in the laboratory, where 60 per cent of methods reported accuracies over 90 per cent, than in real surgery, where accuracies ranged from 67 to 100 per cent. CONCLUSIONS: Manual and automated assessment tools for robotic surgery are not well validated and require further evaluation before use in accreditation processes. PROSPERO registration ID: CRD42022304901.
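
    As context for the APM and AI findings above, the following hedged sketch shows what an APM-driven skill classifier typically looks like: per-trial kinematic metrics feeding a supervised model. The metrics, data, and model choice are hypothetical stand-ins, not tools from the review.

```python
# Illustrative sketch (not from the review): automated performance metrics
# (APMs) per trial used to predict expert vs. novice with a simple classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials = 80
# Hypothetical APMs: path length (mm), movement count,
# economy-of-motion ratio, completion time (s).
experts = rng.normal([900, 120, 0.8, 180], [100, 15, 0.05, 20], (n_trials // 2, 4))
novices = rng.normal([1500, 200, 0.6, 300], [150, 25, 0.05, 40], (n_trials // 2, 4))
X = np.vstack([experts, novices])
y = np.array([1] * (n_trials // 2) + [0] * (n_trials // 2))

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)  # cross-validated accuracy
print(f"mean CV accuracy: {scores.mean():.2f}")
```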

    Modular framework for a breast biopsy smart navigation system

    Master's dissertation in Informatics Engineering. Breast cancer is currently one of the most commonly diagnosed cancers and the fifth leading cause of cancer-related deaths. Treatment has a higher survival rate when the disease is diagnosed in its early stages. The screening procedure uses medical imaging techniques, such as mammography or ultrasound, to discover possible lesions. When a physician finds a lesion that is likely to be malignant, a biopsy is performed to obtain a sample and determine its characteristics. Currently, real-time ultrasound is the preferred medical imaging modality for this procedure. The breast biopsy procedure is highly reliant on the operator’s skill and experience, owing to the difficulty of interpreting ultrasound images and correctly aiming the needle. Robotic solutions, together with automatic lesion segmentation in ultrasound imaging and advanced visualization techniques such as augmented reality, can potentially make this process simpler, safer, and faster. The OncoNavigator project, of which this dissertation is part, aims to improve the precision of current breast cancer interventions. To accomplish this objective, several medical training and robotic biopsy aids were developed. An augmented reality ultrasound training solution was created, and the device’s tracking capabilities were validated by comparison with an electromagnetic tracking device. Another solution, for ultrasound-guided breast biopsy assisted by augmented reality, was also developed; it displays real-time ultrasound video, automatic lesion segmentation, and the biopsy needle trajectory in the user’s field of view, and was validated by comparing its usability with the traditional procedure. A modular software framework was also developed, focusing on the integration of a collaborative medical robot with real-time ultrasound imaging and automatic lesion segmentation. Overall, the developed solutions offered good results: the augmented reality glasses’ tracking proved as capable as the electromagnetic system, and the augmented-reality-assisted breast biopsy made the procedure more accurate and precise than the traditional approach.
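
    The modular framework mentioned above lends itself to an interface-per-capability design. The sketch below is a hypothetical illustration of that idea, with invented module names; it is not the dissertation's actual architecture.

```python
# Hypothetical sketch of a modular navigation pipeline: each capability sits
# behind a small interface so implementations (simulated or hardware-backed)
# can be swapped independently.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Lesion:
    x_mm: float
    y_mm: float
    depth_mm: float

class UltrasoundSource(ABC):
    @abstractmethod
    def next_frame(self) -> bytes: ...

class LesionSegmenter(ABC):
    @abstractmethod
    def segment(self, frame: bytes) -> list[Lesion]: ...

class NeedleGuide(ABC):
    @abstractmethod
    def aim_at(self, lesion: Lesion) -> None: ...

class BiopsyNavigator:
    """Coordinates the modules for one guidance step."""
    def __init__(self, source: UltrasoundSource,
                 segmenter: LesionSegmenter, guide: NeedleGuide):
        self.source, self.segmenter, self.guide = source, segmenter, guide

    def step(self) -> None:
        frame = self.source.next_frame()
        lesions = self.segmenter.segment(frame)
        if lesions:
            # A real system would rank candidates and confirm with the clinician.
            self.guide.aim_at(lesions[0])
```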

    An Overview of Self-Adaptive Technologies Within Virtual Reality Training

    This overview presents the current state of the art of self-adaptive technologies within virtual reality (VR) training. Virtual reality training and assessment are increasingly used in five key areas: medical, industrial and commercial training, serious games, rehabilitation, and remote training such as Massive Open Online Courses (MOOCs). Adaptation can be applied to five core VR technologies: haptic devices, stereo graphics, adaptive content, assessment, and autonomous agents. Automation of VR training can contribute to the automation of actual procedures, including remote and robot-assisted surgery, which reduces injury and improves procedural accuracy. Automated haptic interaction can enable tele-presence and tactile interaction with virtual artefacts from either remote or simulated environments. Automation, machine learning, and data-driven features play an important role in providing trainee-specific adaptive training content. Data from trainee assessment can feed autonomous systems that customise training and automate difficulty levels to match individual requirements. Self-adaptive technology has previously been developed within individual VR training technologies. One conclusion of this research is that an enhanced portable framework does not yet exist and is needed; it would be beneficial to combine the automation of these core technologies into a reusable automation framework for VR training.
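
    To make the adaptation loop concrete, here is a minimal sketch in which rolling assessment scores drive an automated difficulty level. The thresholds, step sizes, and window length are arbitrary assumptions.

```python
# Illustrative adaptive-difficulty loop: recent assessment scores drive the
# next difficulty level, as in the trainee-specific adaptation described above.
from collections import deque

class DifficultyAdapter:
    def __init__(self, level: float = 0.5, window: int = 5):
        self.level = level                  # 0.0 (easiest) .. 1.0 (hardest)
        self.scores = deque(maxlen=window)  # rolling window of recent scores

    def record(self, score: float) -> float:
        """Record a 0..1 assessment score and return the new difficulty."""
        self.scores.append(score)
        mean = sum(self.scores) / len(self.scores)
        if mean > 0.8:       # consistently strong: raise difficulty
            self.level = min(1.0, self.level + 0.1)
        elif mean < 0.5:     # struggling: ease off
            self.level = max(0.0, self.level - 0.1)
        return self.level

adapter = DifficultyAdapter()
for s in (0.9, 0.85, 0.95):
    print(adapter.record(s))  # difficulty ramps up toward 1.0
```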

    Multidisciplinary Techniques for the Simulation of the Contact Between the Foot and the Shoe Upper in Gait: Virtual Reality, Computational Biomechanics, and Artificial Neural Networks

    This thesis proposes the use of multidisciplinary techniques as a viable alternative to current footwear evaluation procedures, which typically consume substantial human and technical resources. These techniques are Virtual Reality, Computational Biomechanics, and Artificial Neural Networks. The framework of this thesis is the virtual analysis of mechanical comfort in footwear, that is, the analysis of comfort pressures in footwear; its main objective is to predict the pressures exerted by the shoe on the foot surface during gait by simulating the contact at this interface. In particular, a software application was developed that uses the Finite Element Method to simulate shoe deformation. A preliminary model describing the behaviour of the shoe upper was developed, an automatic foot-shoe fitting process was implemented, and a methodology for obtaining a generic animation of each individual's step was presented. Furthermore, to improve the developed application, new models were proposed to simulate the behaviour of the shoe upper during gait. Artificial Neural Networks were also applied to predict the force exerted by a sphere that, simulating a bone, pushes against a material sample, and to predict the pressures exerted by the shoe upper on the foot surface (dorsal pressures) over a complete step. The main contributions of this thesis are the development of an innovative simulator that will allow footwear manufacturers to virtually evaluate the characteristics of their designs without building a physical prototype, and the development of an equally innovative tool that will allow them to predict the dorsal pressures exerted by the footwear on the foot surface during gait. Rupérez Moreno, MJ. (2011). Multidisciplinary techniques for the simulation of the contact between the foot and the shoe upper in gait: virtual reality, computational biomechanics, and artificial neural networks [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/11235
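
    As a rough illustration of the thesis's neural-network component, the sketch below trains a small regression network to map indentation inputs to a contact force. The features, data-generating rule, and architecture are assumptions for illustration only.

```python
# Hypothetical sketch: a small regression network mapping indentation state
# (sphere displacement, material thickness) to contact force. Data are
# synthetic stand-ins from a made-up quasi-Hertzian rule, not the thesis's.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
displacement = rng.uniform(0.0, 5.0, 500)   # mm of sphere indentation
thickness = rng.uniform(1.0, 4.0, 500)      # mm of material sample
force = 2.0 * displacement**1.5 / thickness + rng.normal(0, 0.1, 500)

X = np.column_stack([displacement, thickness])
X_train, X_test, y_train, y_test = train_test_split(X, force, random_state=0)

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
net.fit(X_train, y_train)
print(f"R^2 on held-out samples: {net.score(X_test, y_test):.3f}")
```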

    Serious Games and Mixed Reality Applications for Healthcare

    Virtual reality (VR) and augmented reality (AR) have long histories in the healthcare sector, offering the opportunity to develop a wide range of tools and applications aimed at improving the quality of care and efficiency of services for professionals and patients alike. The best-known examples of VR–AR applications in the healthcare domain include surgical planning and medical training by means of simulation technologies. Techniques used in surgical simulation have also been applied to cognitive and motor rehabilitation, pain management, and patient and professional education. Serious games are games whose main goal is not entertainment but a serious purpose, ranging from the acquisition of knowledge to interactive training. These games are attracting growing attention in healthcare because of their several benefits: motivation, interactivity, adaptation to user competence level, flexibility in time, repeatability, and continuous feedback. Recently, healthcare has also become one of the biggest adopters of mixed reality (MR), which merges real and virtual content to generate novel environments, where physical and digital objects not only coexist, but are also capable of interacting with each other in real time, encompassing both VR and AR applications. This Special Issue aims to gather and publish original scientific contributions exploring opportunities and addressing challenges in both the theoretical and applied aspects of VR–AR and MR applications in healthcare.

    Neural Network Driven Eye Tracking Metrics and Data Visualization in Metaverse and Virtual Reality Maritime Safety Training

    Understanding the human brain, predicting human performance, and proactively planning, strategizing, and acting on such information have initiated a multidisciplinary scientific alliance to address modern management challenges. This paper integrates several advanced information technologies, such as eye tracking, virtual reality, and neural networks, for cognitive task analysis leading to behavioral analysis of humans performing specific activities. The technology developed and presented in this paper has been tested on a maritime safety training application for command bridge communication and collision avoidance procedures. It integrates metaverse and virtual reality environments with eye tracking to collect behavioral data, which are analyzed by a neural network to indicate the mental and physical state, attention, and readiness of a seafarer to perform such a critical task. The paper presents the technology architecture, the data collection process, indicative results, and areas for further research.
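
    A hedged sketch of such a pipeline: derive simple metrics from raw gaze samples, then classify trainee readiness with a small neural network. The features, threshold, labels, and synthetic data are assumptions, not the paper's implementation.

```python
# Illustrative eye-tracking-to-classifier pipeline: simple gaze metrics feed
# a small neural network that labels readiness. All values are hypothetical.
import numpy as np
from sklearn.neural_network import MLPClassifier

def gaze_features(xy: np.ndarray, hz: float = 60.0) -> np.ndarray:
    """xy: (n, 2) gaze positions in degrees. Returns [saccade_rate, dispersion]."""
    speed = np.linalg.norm(np.diff(xy, axis=0), axis=1) * hz  # deg/s
    saccade_rate = float((speed > 30.0).mean())               # crude threshold
    dispersion = float(xy.std(axis=0).mean())
    return np.array([saccade_rate, dispersion])

rng = np.random.default_rng(3)
# Synthetic sessions: "ready" trainees scan widely; "overloaded" ones fixate.
ready = [gaze_features(rng.normal(0, 5.0, (600, 2))) for _ in range(40)]
overloaded = [gaze_features(rng.normal(0, 1.0, (600, 2))) for _ in range(40)]
X = np.array(ready + overloaded)
y = np.array([1] * 40 + [0] * 40)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict([gaze_features(rng.normal(0, 4.0, (600, 2)))]))  # likely 1
```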

    Facial Expression Rendering in Medical Training Simulators: Current Status and Future Directions

    Recent technological advances in robotic sensing and actuation methods have prompted the development of a range of new medical training simulators with multiple feedback modalities. Learning to interpret a patient's facial expressions during medical examinations or procedures has been one of the key focus areas in medical training. This paper reviews the facial expression rendering systems in medical training simulators reported to date. Facial expression rendering approaches in other domains are also summarized, so that knowledge from those works can inform the development of systems for medical training simulators. Classifications and comparisons of medical training simulators with facial expression rendering are presented, and important design features, merits, and limitations are outlined. Medical educators, students, and developers are identified as the three key stakeholders involved with these systems, and their considerations and needs are presented. Physical-virtual (hybrid) approaches provide multimodal feedback, render facial expressions accurately, and can simulate patients of different ages, genders, and ethnic groups, making them more versatile than purely virtual or purely physical systems. The overall findings of this review and the proposed future directions are beneficial to researchers interested in initiating or developing facial expression rendering systems for medical training simulators. This work was supported by the Robopatient project, funded by EPSRC Grant No EP/T00519X/
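
    One common way virtual systems render expressions, useful context for the hybrid approaches discussed above, is to map facial action unit (AU) intensities to blendshape weights. The sketch below is a generic, hypothetical illustration of that mapping, not a method from the paper.

```python
# Generic sketch of virtual facial-expression rendering: facial action unit
# (AU) intensities mapped to blendshape weights. The AU-to-blendshape table
# holds assumed, illustrative values.
from typing import Dict

# Each AU drives one or more blendshapes with a fixed gain (assumed values).
AU_TO_BLENDSHAPES: Dict[str, Dict[str, float]] = {
    "AU4_brow_lowerer": {"browDownLeft": 1.0, "browDownRight": 1.0},
    "AU12_lip_corner_puller": {"mouthSmileLeft": 0.9, "mouthSmileRight": 0.9},
    "AU43_eyes_closed": {"eyeBlinkLeft": 1.0, "eyeBlinkRight": 1.0},
}

def blendshape_weights(au_intensities: Dict[str, float]) -> Dict[str, float]:
    """Convert AU intensities (0..1) into clamped blendshape weights."""
    weights: Dict[str, float] = {}
    for au, intensity in au_intensities.items():
        for shape, gain in AU_TO_BLENDSHAPES.get(au, {}).items():
            weights[shape] = min(1.0, weights.get(shape, 0.0) + gain * intensity)
    return weights

# A pained expression: lowered brows plus partially closed eyes.
print(blendshape_weights({"AU4_brow_lowerer": 0.8, "AU43_eyes_closed": 0.4}))
```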