549 research outputs found

    Aerospace medicine and biology: A continuing bibliography with indexes (supplement 361)

    This bibliography lists 141 reports, articles, and other documents introduced into the NASA Scientific and Technical Information System during Mar. 1992. Subject coverage includes: aerospace medicine and physiology, life support systems and man/system technology, protective clothing, exobiology and extraterrestrial life, planetary biology, and flight crew behavior and performance.

    Advancing Medical Imaging with Language Models: A Journey from N-grams to ChatGPT

    In this paper, we aimed to provide a review and tutorial for researchers in the field of medical imaging using language models to improve their tasks at hand. We began by providing an overview of the history and concepts of language models, with a special focus on large language models. We then reviewed the current literature on how language models are being used to improve medical imaging, emphasizing different applications such as image captioning, report generation, report classification, finding extraction, visual question answering, interpretable diagnosis, and more for various modalities and organs. ChatGPT was specifically highlighted for researchers to explore further potential applications. We covered the potential benefits of accurate and efficient language models for medical imaging analysis, including improving clinical workflow efficiency, reducing diagnostic errors, and assisting healthcare professionals in providing timely and accurate diagnoses. Overall, our goal was to bridge the gap between language models and medical imaging and inspire new ideas and innovations in this exciting area of research. We hope that this review paper will serve as a useful resource for researchers in this field and encourage further exploration of the possibilities of language models in medical imaging.

    Multimodality in VR: A survey

    Virtual reality (VR) is rapidly growing, with the potential to change the way we create and consume content. In VR, users integrate the multimodal sensory information they receive to create a unified perception of the virtual world. In this survey, we review the body of work addressing multimodality in VR, and its role and benefits in user experience, together with different applications that leverage multimodality in many disciplines. These works thus encompass several fields of research, and demonstrate that multimodality plays a fundamental role in VR, enhancing the experience, improving overall performance, and yielding unprecedented abilities in skill and knowledge transfer.

    Feel the Noise: Mid-Air Ultrasound Haptics as a Novel Human-Vehicle Interaction Paradigm

    Focussed ultrasound can be used to create the sensation of touch in mid-air. Combined with gestures, this can provide haptic feedback to guide users, thereby overcoming the lack of agency associated with pure gestural interfaces and reducing the need for vision – it is therefore particularly apropos of the driving domain. In a counter-balanced 2×2 driving simulator study, a traditional in-vehicle touchscreen was compared with a virtual mid-air gestural interface, both with and without ultrasound haptics. Forty-eight experienced drivers (28 male, 20 female) undertook representative in-vehicle tasks – discrete target selections and continuous slider-bar manipulations – whilst driving. Results show that haptifying gestures with ultrasound was particularly effective in reducing visual demand (number of long glances and mean off-road glance time) and increasing performance (shortest interaction times, highest number of correct responses, and fewest ‘overshoots’) associated with continuous tasks. In contrast, for discrete target selections, the touchscreen enabled the highest accuracy and quickest responses, particularly when combined with haptic feedback to guide interactions, although this also increased visual demand. Subjectively, the gesture interfaces invited higher ratings of arousal compared to the more familiar touch-surface technology, and participants indicated the lowest levels of workload (highest performance, lowest frustration) associated with the gesture-haptics interface. In addition, gestures were preferred by participants for continuous tasks. The study shows practical utility and clear potential for the use of haptified gestures in the automotive domain.

    Transforming obstetric ultrasound into data science using eye tracking, voice recording, transducer motion and ultrasound video.

    Ultrasound is the primary modality for obstetric imaging and is highly sonographer dependent. A long training period, insufficient recruitment, and poor retention of sonographers are among the global challenges in the expansion of ultrasound use. For the past several decades, technical advancements in clinical obstetric ultrasound scanning have largely concerned improving image quality and processing speed. By contrast, sonographers have been acquiring ultrasound images in much the same fashion for several decades. The PULSE (Perception Ultrasound by Learning Sonographer Experience) project is an interdisciplinary multi-modal imaging study aiming to offer clinical sonography insights and transform the process of obstetric ultrasound acquisition and image analysis by applying deep learning to large-scale multi-modal clinical data. A key novelty of the study is that we record full-length ultrasound video with concurrent tracking of the sonographer's eyes, voice, and the transducer while performing routine obstetric scans on pregnant women. We provide a detailed description of the novel acquisition system and illustrate how our data can be used to describe clinical ultrasound. Being able to measure different sonographer actions or model tasks will lead to a better understanding of several topics, including how to effectively train new sonographers, monitor their learning progress, and enhance the scanning workflow of experts.