
    Communication Bandwidth Considerations for Exploration Medical Care During Space Missions

    Destinations beyond low Earth orbit, especially Mars, impose several important constraints, including limited resupply, little to no possibility of medical evacuation, and delayed communication with ground support teams. Medical care is therefore driven towards greater autonomy and requires a medical system that supports this paradigm, including the potential for high medical data transfer rates so that medical information can be shared and care coordinated with the ground intermittently, as communication allows. The medical data transfer needs of a Martian exploration mission were estimated by defining two medical scenarios that would require high-data-rate communications between the spacecraft and Earth. One scenario involves a case of hydronephrosis (outflow obstruction of the kidney) caused by a kidney stone that evolves into pyelonephritis (kidney infection) and then urosepsis (systemic infection originating from the kidney). The second scenario involves the death of a crewmember's child back on Earth, which requires behavioral health care. For each scenario, a data communications timeline was created following the medical care the scenario describes, and from these timelines total medical data transfers and burst transmission rates were estimated. Total data transferred from the vehicle to the ground was estimated to be 94 gigabytes (GB) and 835 GB for the hydronephrosis and behavioral health scenarios, respectively. Data burst rates were estimated to be 7.7 megabytes per second (MB/s) and 15 MB/s for the hydronephrosis and behavioral health scenarios, respectively. Even though any crewed Mars mission should be capable of functioning autonomously, as long as communication between Earth and Mars is possible, Earth-based subject matter experts will be relied upon to augment mission medical capability. Setting an upper boundary for medical communication rates can therefore help factor medical system needs into total vehicle communication requirements.
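    The abstract's figures make the scale of the link budget easy to check. The sketch below uses only the numbers quoted above (94 GB, 835 GB, 7.7 MB/s, 15 MB/s); the helper name and the decimal-unit convention (1 GB = 1000 MB) are assumptions, not from the source.

```python
# Rough link-budget arithmetic for the two scenarios described above.
# Figures come from the abstract; everything else is illustrative.

def transmission_hours(total_gb: float, rate_mb_s: float) -> float:
    """Hours of continuous downlink needed to move total_gb at rate_mb_s."""
    total_mb = total_gb * 1000  # assuming decimal units: 1 GB = 1000 MB
    return total_mb / rate_mb_s / 3600

for name, gb, rate in [("hydronephrosis", 94, 7.7), ("behavioral health", 835, 15)]:
    print(f"{name}: {transmission_hours(gb, rate):.1f} h of continuous downlink")
# hydronephrosis: ~3.4 h; behavioral health: ~15.5 h
```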

    Medics: Medical Decision Support System for Long-Duration Space Exploration

    The Autonomous Medical Operations (AMO) group at NASA Ames is developing a medical decision support system to enable astronauts on long-duration exploration missions to operate autonomously. The system will support clinical actions by providing medical interpretation advice and procedural recommendations during emergent care and clinical work performed by crew. The current state of development of the system, called MedICS (Medical Interpretation Classification and Segmentation), comprises two separate aspects: a set of machine learning diagnostic models trained to analyze organ images and patient health records, and an interface to ultrasound diagnostic hardware and to medical repositories. Three sets of organ images and a set of medical records were used to train machine learning models for the following analyses: 1. Pneumothorax (collapsed lung): the trained model provides a positive or negative diagnosis of the condition. 2. Carotid artery occlusion: the trained model produces a diagnosis across 5 occlusion levels (including normal). 3. Ocular retinal images: the model extracts optic disc pixels (image segmentation), a precursor step for advanced autonomous fundus clinical evaluation algorithms to be implemented in FY20. 4. Medical health records: the model produces a differential diagnosis for a particular individual, based on symptoms and other health and demographic information, calculating a probability for each of the 25 most common conditions; the same model provides the likelihood of survival. All results are provided with a confidence level. Item 1 images were provided by the US Army and were part of a data set for the clinical treatment of injured battlefield soldiers; this condition is relevant to possible space mishaps due to pressure management issues. Item 2 images were provided by Houston Methodist Hospital, and item 4 health records were acquired from the MIT Laboratory of Computational Physiology. The machine learning technology utilized is deep multilayer networks (deep learning), and new models will continue to be produced as relevant data is made available and specific health needs of astronaut crews are identified. The interfacing aspects of the system include a GUI for running the different models and retrieving and storing data, as well as support for integration with an augmented reality (AR) system deployed at JSC by Tietronix Software Inc. (HoloLens). The AR system provides guidance for the placement of an ultrasound transducer that captures images to be sent to the MedICS system for diagnosis. The captured image and the associated diagnosis appear in the technician's AR visual display.
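    The abstract notes that every MedICS result comes with a confidence level. MedICS itself is not public, so the following is only a minimal sketch of that output pattern, modeled on the carotid-occlusion task (5 classes including normal); the class labels, logits, and function names are invented for illustration.

```python
# Illustrative only: maps raw network outputs to a diagnosis plus confidence,
# mirroring "all results are provided with a confidence level" above.
# Not the actual NASA/AMO model.
import numpy as np

OCCLUSION_LEVELS = ["normal", "mild", "moderate", "severe", "occluded"]  # assumed labels

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify(logits: np.ndarray) -> tuple[str, float]:
    """Return the most probable class and its probability as the confidence."""
    probs = softmax(logits)
    k = int(probs.argmax())
    return OCCLUSION_LEVELS[k], float(probs[k])

label, conf = classify(np.array([0.2, 0.1, 2.9, 0.4, 0.3]))
print(f"{label} (confidence {conf:.2f})")  # -> moderate (confidence 0.78)
```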

    Recent trends, technical concepts and components of computer-assisted orthopedic surgery systems: A comprehensive review

    Computer-assisted orthopedic surgery (CAOS) systems have become one of the most important and challenging classes of systems in clinical orthopedics, as they enable precise treatment of musculoskeletal diseases using modern clinical navigation systems and surgical tools. This paper provides a comprehensive review of recent trends in and possibilities of CAOS systems. There are three types of surgical planning systems: systems based on volumetric images (computed tomography (CT), magnetic resonance imaging (MRI), or ultrasound images), systems that use 2D or 3D fluoroscopic images, and systems that use kinetic information about the joints together with morphological information about the target bones. The review focuses on three fundamental aspects of CAOS systems: their essential components, the types of CAOS systems, and the mechanical tools they use. We also outline the possibilities for using ultrasound computer-assisted orthopedic surgery (UCAOS) systems as an alternative to conventionally used CAOS systems.
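    A core building block of the navigation systems this review surveys is aligning preoperative image data to the patient's anatomy. The sketch below shows textbook paired-point rigid registration (the Kabsch/SVD method); it is a generic illustration chosen by the editor, not a technique the review attributes to any specific CAOS product.

```python
# Minimal sketch of paired-point rigid registration (Kabsch/SVD): find the
# rotation R and translation t that best map image-space fiducials onto the
# same fiducials measured on the patient by a tracked pointer.
import numpy as np

def rigid_register(src: np.ndarray, dst: np.ndarray):
    """src, dst: (N, 3) arrays of corresponding points.
    Returns R, t minimizing sum ||R @ src_i + t - dst_i||^2."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t
```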

    Image-Fusion for Biopsy, Intervention, and Surgical Navigation in Urology


    Evaluating Human Performance for Image-Guided Surgical Tasks

    The following work focuses on the objective evaluation of human performance for two different interventional tasks: targeted prostate biopsy using a tracked biopsy device, and external ventricular drain placement using a mobile augmented reality device for visualization and guidance. In both tasks, a human performance methodology was applied that respects the trade-off between speed and accuracy for users conducting a series of targeting tasks with each device. This work outlines the development and application of performance evaluation methods for these devices, as well as details of the implementation of the mobile AR application. It was determined that the Fitts' Law methodology can be applied to the evaluation of tasks performed in each surgical scenario, and that it was sensitive enough to differentiate performance across a range spanning experienced and novice users. This methodology is valuable for the future development of training modules for these and other medical devices, and can provide insight into the underlying characteristics of the devices and how they can be optimized with respect to human performance.
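    Fitts' Law models the time to acquire a target as a function of an index of difficulty derived from target distance and size, which is what lets a single throughput number capture the speed/accuracy trade-off. The abstract does not say which formulation the authors used; the sketch below uses the common Shannon (ISO 9241-9 style) variant as an assumption.

```python
# Fitts' Law sketch, Shannon formulation: ID = log2(D/W + 1).
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Task difficulty in bits for a target of width `width` at `distance`."""
    return math.log2(distance / width + 1)

def throughput(distance: float, width: float, movement_time_s: float) -> float:
    """Bits per second; higher throughput = better speed/accuracy trade-off."""
    return index_of_difficulty(distance, width) / movement_time_s

# e.g. a 40 mm reach to a 5 mm target completed in 1.2 s:
print(f"{throughput(40, 5, 1.2):.2f} bits/s")  # ID = log2(9) ≈ 3.17 -> ~2.64
```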

    Proof of concept of a workflow methodology for the creation of basic canine head anatomy veterinary education tool using augmented reality

    Neuroanatomy can be challenging to both teach and learn within the undergraduate veterinary medicine and surgery curriculum. Traditional techniques have been used for many years, but there is now a move towards alternative digital and interactive 3D models to engage the learner. However, digital innovations of this kind have typically targeted the medical rather than the veterinary curriculum. We therefore aimed to create a simple workflow methodology demonstrating how straightforward it is to create a mobile augmented reality application of basic canine head anatomy. Using canine CT and MRI scans and widely available software programs, we demonstrate how to create an interactive model of head anatomy. This was deployed as an augmented reality application on a popular Android mobile device to demonstrate the user-friendly interface. Here we present the processes, challenges and resolutions involved in creating a highly accurate, data-based anatomical model that could potentially be used in the veterinary curriculum. This proof-of-concept study provides an excellent framework for the creation of augmented reality training products for veterinary education. The lack of similar resources within this field provides the ideal platform to extend this work into other areas of veterinary education and beyond.
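    A typical step in a scan-to-AR workflow like the one described is turning a segmented CT volume into a surface mesh that a mobile engine can render. The paper names no specific libraries, so the scikit-image and trimesh calls below, and the synthetic stand-in volume, are assumptions illustrating one plausible route.

```python
# One plausible step in a CT-to-AR pipeline: extract a surface mesh from a
# segmented volume with marching cubes and export it for a mobile AR engine.
import numpy as np
from skimage import measure
import trimesh

# Stand-in for a binary segmentation mask of a CT scan (real input would be
# the segmented canine head volume).
volume = np.zeros((64, 64, 64))
volume[16:48, 16:48, 16:48] = 1.0

verts, faces, normals, _ = measure.marching_cubes(volume, level=0.5)
mesh = trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals)
mesh.export("head_anatomy.obj")  # hypothetical filename; import into Unity etc.
```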

    Applying artificial intelligence to big data in hepatopancreatic and biliary surgery: a scoping review

    Aim: Artificial intelligence (AI) and its applications in healthcare are developing rapidly. The healthcare industry generates ever-increasing volumes of data that should be used to improve patient care. This review aims to examine the use of AI and its applications in hepatopancreatic and biliary (HPB) surgery, highlighting studies leveraging large datasets. Methods: A PRISMA-ScR-compliant scoping review of the Medline and Google Scholar databases was performed (5 August 2022). Studies focusing on the development and application of AI to HPB surgery were eligible for inclusion. We undertook a conceptual mapping exercise to identify key areas where AI is under active development for use in HPB surgery. We considered studies and concepts in the context of patient pathways: before surgery (including diagnostics), around the time of surgery (supporting interventions) and after surgery (including prognostication). Results: 98 studies were included. Most studies were performed in China or the USA (n = 45). Liver surgery was the most common area studied (n = 51). Research into AI in HPB surgery has increased rapidly in recent years, with almost two-thirds of studies published since 2019 (61/98). Of these, 11 focused on using "big data" to develop and apply AI models; nine came from the USA, and nearly all focused on the application of natural language processing. We identified several critical conceptual areas where AI is under active development, including improving preoperative optimization, image guidance and sensor-fusion-assisted surgery, surgical planning and simulation, natural language processing of clinical reports for deep phenotyping and prediction, and image-based machine learning. Conclusion: Applications of AI in HPB surgery primarily focus on image analysis and computer vision to address diagnostic and prognostic uncertainties. Virtual 3D and augmented reality models to support complex HPB interventions are also under active development and are likely to be used in surgical planning and education. In addition, natural language processing may be helpful in the annotation and phenotyping of disease, leading to new scientific insights.
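    To make the "natural language processing of clinical reports for deep phenotyping" concrete, the toy sketch below spots phenotype concepts in report text with regular expressions. Real clinical NLP systems use trained models; the phenotype names and patterns here are invented for illustration.

```python
# Toy rule-based phenotyping of free-text clinical reports. Pattern names and
# regexes are invented; production systems use trained clinical NLP models.
import re

PHENOTYPE_PATTERNS = {
    "cirrhosis": re.compile(r"\bcirrho(sis|tic)\b", re.I),
    "biliary_obstruction": re.compile(r"\b(biliary|bile duct) (obstruction|stricture)\b", re.I),
    "portal_hypertension": re.compile(r"\bportal hypertension\b", re.I),
}

def phenotype(report: str) -> set[str]:
    """Return every phenotype label whose pattern appears in the report."""
    return {name for name, pat in PHENOTYPE_PATTERNS.items() if pat.search(report)}

print(phenotype("MRI: cirrhotic liver with signs of portal hypertension."))
# -> {'cirrhosis', 'portal_hypertension'} (set order may vary)
```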

    Modular framework for a breast biopsy smart navigation system

    Master's dissertation in Informatics Engineering. Breast cancer is currently one of the most commonly diagnosed cancers and the fifth leading cause of cancer-related deaths. Its treatment has a higher survival rate when the disease is diagnosed in its early stages. The screening procedure uses medical imaging techniques, such as mammography or ultrasound, to discover possible lesions. When a physician finds a lesion that is likely to be malignant, a biopsy is performed to obtain a sample and determine its characteristics. Real-time ultrasound is currently the preferred medical imaging modality for this procedure. The breast biopsy procedure is highly reliant on the operator's skill and experience, owing to the difficulty of interpreting ultrasound images and correctly aiming the needle. Robotic solutions, automatic lesion segmentation in ultrasound imaging, and advanced visualization techniques such as augmented reality can potentially make this process simpler, safer, and faster. The OncoNavigator project, of which this dissertation is part, aims to improve the precision of current breast cancer interventions. To accomplish this objective, several medical training and robotic biopsy aids were developed. An augmented reality ultrasound training solution was created, and the device's tracking capabilities were validated by comparison with an electromagnetic tracking device. Another solution, for ultrasound-guided breast biopsy assisted by augmented reality, was also developed; it displays real-time ultrasound video, automatic lesion segmentation, and the biopsy needle trajectory in the user's field of view. This solution was validated by comparing its usability with the traditional procedure. A modular software framework was also developed, focusing on the integration of a collaborative medical robot with real-time ultrasound imaging and automatic lesion segmentation. Overall, the developed solutions offered good results: the augmented reality glasses' tracking proved as capable as the electromagnetic system, and the augmented-reality-assisted breast biopsy made the procedure more accurate and precise than the traditional approach.
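    The abstract describes a modular framework tying together an ultrasound stream, a lesion segmenter, and a collaborative robot. The sketch below shows one way such modules could be decoupled behind interfaces; every name and signature is invented for illustration, and the dissertation's actual architecture may differ.

```python
# Hedged sketch of a modular navigation pipeline: swappable modules behind
# Protocol interfaces, wired together by one loop. Names are hypothetical.
from dataclasses import dataclass
from typing import Protocol
import numpy as np

@dataclass
class Lesion:
    center_px: tuple[int, int]   # lesion centroid in image coordinates
    mask: np.ndarray             # binary segmentation mask

class UltrasoundSource(Protocol):
    def next_frame(self) -> np.ndarray: ...

class Segmenter(Protocol):
    def segment(self, frame: np.ndarray) -> list[Lesion]: ...

class RobotArm(Protocol):
    def aim_at(self, target_px: tuple[int, int]) -> None: ...

def navigation_loop(us: UltrasoundSource, seg: Segmenter, robot: RobotArm) -> None:
    """One pass of the pipeline: image -> lesions -> needle alignment."""
    frame = us.next_frame()
    lesions = seg.segment(frame)
    if lesions:
        robot.aim_at(lesions[0].center_px)  # real systems add safety checks
```

    Keeping each stage behind an interface is what makes the framework "modular": a different ultrasound probe, segmentation model, or robot can be swapped in without touching the loop.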