
    MRI radiomics-based decision support tool for a personalized classification of cervical disc degeneration: a two-center study

    Objectives: To develop and validate an MRI radiomics-based decision support tool for the automated grading of cervical disc degeneration. Methods: This retrospective study included 2,610 cervical disc samples from 435 patients at two hospitals. Cervical magnetic resonance imaging (MRI) of the patients confirmed cervical disc degeneration grades using the Pfirrmann grading system. The data were divided into a training set (1,830 samples from 305 patients) for construction of the machine learning model and an independent test set (780 samples from 130 patients) for its validation. We provided a fine-tuned MedSAM model for automated cervical disc segmentation and then extracted 924 radiomic features from each segmented disc in the T1 and T2 MRI modalities. All features were processed and selected using minimum redundancy maximum relevance (mRMR) and multiple machine learning algorithms. Radiomics models built from different machine learning algorithms and MRI sequences were constructed and compared. Finally, the combined radiomics model was constructed on the training set and validated on the test set. Radiomic feature mapping was provided for auxiliary diagnosis. Results: Of the 2,610 cervical disc samples, 794 (30.4%) were classified as low grade and 1,816 (69.6%) as high grade. The fine-tuned MedSAM model achieved good segmentation performance, with a mean Dice coefficient of 0.93. Higher-order texture features dominated the diagnostic task (80%). Among the machine learning models, random forest outperformed the other algorithms (p < 0.01), and the T2 MRI radiomics model showed better diagnostic performance than the T1 model (p < 0.05). The final combined radiomics model achieved an area under the receiver operating characteristic curve (AUC) of 0.95, an accuracy of 89.51%, a precision of 87.07%, a recall of 98.83%, and an F1 score of 0.93 on the test set, all better than those of the other models (p < 0.05). Conclusion: The radiomics-based decision support tool using the T1 and T2 MRI modalities can be used for cervical disc degeneration grading, facilitating individualized management.
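    The pipeline described above (mRMR feature selection followed by a random forest classifier) can be sketched as follows. This is an illustrative sketch, not the authors' code: the dataset is synthetic, the greedy mRMR implementation (mutual information for relevance, mean absolute correlation for redundancy) and all hyperparameters are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def mrmr_select(X, y, k):
    """Greedy minimum-redundancy maximum-relevance (mRMR) selection.

    Relevance: mutual information between a feature and the label.
    Redundancy: mean absolute Pearson correlation with features already chosen.
    """
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]
    corr = np.abs(np.corrcoef(X, rowvar=False))
    while len(selected) < k:
        remaining = [j for j in range(X.shape[1]) if j not in selected]
        scores = [relevance[j] - corr[j, selected].mean() for j in remaining]
        selected.append(remaining[int(np.argmax(scores))])
    return selected

# Synthetic stand-in for the radiomic feature matrix (binary low/high grade).
X, y = make_classification(n_samples=600, n_features=60, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

features = mrmr_select(X_tr, y_tr, k=15)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr[:, features], y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te[:, features])[:, 1])
print(f"test AUC: {auc:.2f}")
```

    In practice the 924 T1/T2 radiomic features would replace the synthetic matrix, and the AUC would be computed on the held-out patient-level test set.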

    Artificial intelligence for predictive biomarker discovery in immuno-oncology: a systematic review

    Background: The widespread use of immune checkpoint inhibitors (ICIs) has revolutionised the treatment of multiple cancer types. However, selecting patients who may benefit from ICIs remains challenging. Artificial intelligence (AI) approaches allow exploitation of high-dimensional oncological data in the research and development of precision immuno-oncology. Materials and methods: We conducted a systematic literature review of peer-reviewed original articles studying the prediction of ICI efficacy in cancer patients across five data modalities: genomics (encompassing genomics, transcriptomics, and epigenomics), radiomics, digital pathology (pathomics), real-world data, and multimodal data. Results: A total of 90 studies were included in this systematic review, 80% of them published in 2021-2022. Among them, 37 studies used genomic, 20 radiomic, 8 pathomic, 20 real-world, and 5 multimodal data. Standard machine learning (ML) methods were used in 72% of studies, deep learning (DL) methods in 22%, and both in 6%. The most frequently studied cancer type was non-small-cell lung cancer (36%), followed by melanoma (16%), while 25% were pan-cancer studies. No prospective study design incorporated AI-based methodologies from the outset; rather, all implemented AI as a post hoc analysis. Novel biomarkers for ICI in radiomics and pathomics were identified using AI approaches, and molecular biomarkers have expanded past genomics into transcriptomics and epigenomics. Finally, complex algorithms and new types of AI-based markers, such as meta-biomarkers, are emerging through the integration of multimodal/multi-omics data. Conclusion: AI-based methods have expanded the horizon of biomarker discovery, demonstrating the power of integrating multimodal data from existing datasets to discover new meta-biomarkers. While most of the included studies showed promise for AI-based prediction of benefit from immunotherapy, none provided high-level evidence for immediate practice change. A priori planned prospective trial designs are needed to cover all lifecycle steps of these software biomarkers, from development and validation to integration into clinical practice.

    Improving Cross-Lingual Transfer Learning for Event Detection

    The widespread adoption of applications powered by Artificial Intelligence (AI) backbones has unquestionably changed the way we interact with the world around us. Applications such as automated personal assistants, automatic question answering, and machine-based translation systems have become mainstays of modern culture thanks to the recent considerable advances in Natural Language Processing (NLP) research. Nonetheless, with over 7,000 spoken languages in the world, there still remain a considerable number of marginalized communities that are unable to benefit from these technological advancements, largely due to the language they speak. Cross-Lingual Learning (CLL) looks to address this issue by transferring the knowledge acquired from a popular, high-resource source language (e.g., English, Chinese, or Spanish) to a less favored, lower-resourced target language (e.g., Urdu or Swahili). This dissertation leverages the Event Detection (ED) sub-task of Information Extraction (IE) as a testbed and presents three novel approaches that improve cross-lingual transfer learning from distinct perspectives: (1) direct knowledge transfer, (2) hybrid knowledge transfer, and (3) few-shot learning.

    Deep learning-based multimodality classification of chronic mild traumatic brain injury using resting-state functional MRI and PET imaging

    Mild traumatic brain injury (mTBI) is a public health concern. The present study aimed to develop an automatic classifier to distinguish between patients with chronic mTBI (n = 83) and healthy controls (HCs) (n = 40). Resting-state functional MRI (rs-fMRI) and positron emission tomography (PET) imaging were acquired from the subjects. We proposed a novel deep-learning-based framework, including an autoencoder (AE) with rectified linear unit (ReLU) and sigmoid activation functions, to extract high-level latent features. Single- and multimodality algorithms integrating multiple rs-fMRI metrics and PET data were developed. We hypothesized that combining different imaging modalities provides complementary information and improves classification performance. Additionally, a novel data interpretation approach was utilized to identify the top-performing features learned by the AEs. Our method delivered classification accuracies in the range of 79–91.67% for single neuroimaging modalities; however, performance improved to 95.83% when employing the multimodality model. The models identified several brain regions located in the default mode network, sensorimotor network, visual cortex, cerebellum, and limbic system as the most discriminative features. We suggest that this approach could be extended to objective biomarkers for predicting mTBI in clinical settings.
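    The core idea (train an autoencoder to reconstruct the imaging features, then classify from its learned latent representation) can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' pipeline: the data are synthetic stand-ins for rs-fMRI/PET feature vectors, the autoencoder is approximated with an MLP regressor trained to reproduce its input, and the latent dimension and classifier are arbitrary choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for subject-level rs-fMRI/PET feature vectors.
X, y = make_classification(n_samples=400, n_features=50, n_informative=12,
                           random_state=1)
X = MinMaxScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

# Autoencoder: train an MLP to reconstruct its own input; the narrow ReLU
# hidden layer becomes the low-dimensional latent representation.
ae = MLPRegressor(hidden_layer_sizes=(8,), activation="relu",
                  max_iter=2000, random_state=1)
ae.fit(X_tr, X_tr)

def encode(model, X):
    # Forward pass through the first (encoder) layer only: ReLU(X W + b).
    return np.maximum(0, X @ model.coefs_[0] + model.intercepts_[0])

# Classify subjects from the 8-dimensional latent features.
clf = LogisticRegression().fit(encode(ae, X_tr), y_tr)
acc = clf.score(encode(ae, X_te), y_te)
print(f"latent-feature accuracy: {acc:.2f}")
```

    A multimodality variant would concatenate latent vectors from per-modality autoencoders before classification, which is what motivates the reported accuracy gain.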

    Assessment of right ventricular function—a state of the art

    Purpose of Review The right ventricle (RV) has a complex geometry and physiology distinct from those of the left ventricle. RV dysfunction and failure can result from volume- and/or pressure-loading conditions, as well as from myocardial and pericardial diseases. Recent Findings Echocardiography, magnetic resonance imaging, and right heart catheterisation can assess RV function using several qualitative and quantitative parameters. In pulmonary hypertension (PH) in particular, RV function can be impaired and is related to survival. Summary An accurate assessment of RV function is crucial for the early diagnosis and management of these patients. This review focuses on the different modalities and indices used for the evaluation of RV function, with an emphasis on PH.

    Segmentation of Pathology Images: A Deep Learning Strategy with Annotated Data

    Cancer has significantly threatened human life and health for many years. In the clinic, histopathology image segmentation is the gold standard for evaluating patient prognosis and treatment outcome. Generally, manually labelling tumour regions in hundreds of high-resolution histopathological images is time-consuming and expensive for pathologists. Recently, advancements in hardware and computer vision have allowed deep-learning-based methods to become the mainstream approach to segmenting tumours automatically, significantly reducing the workload of pathologists. However, most current methods rely on large-scale labelled histopathological images. Therefore, this research studies label-effective tumour segmentation methods using deep-learning paradigms to relieve the annotation limitations. Chapter 3 proposes an ensemble framework for fully-supervised tumour segmentation. Usually, the performance of an individually trained network is limited by significant morphological variances in histopathological images. We propose a fully-supervised ensemble fusion model that uses both shallow and deep U-Nets, trained with images of different resolutions and subsets of images, for robust predictions of tumour regions. Noise elimination is achieved with Convolutional Conditional Random Fields. Two open datasets are used to evaluate the proposed method: the ACDC@LungHP challenge at ISBI2019 and the DigestPath challenge at MICCAI2019. With a Dice coefficient of 79.7%, the proposed method took third place in ACDC@LungHP. In DigestPath 2019, the proposed method achieves a Dice coefficient of 77.3%. Well-annotated images are an indispensable part of training fully-supervised segmentation strategies. However, large-scale histopathology images are rarely finely annotated in clinical practice. It is common for labels to be of poor quality or for only a few images to be manually marked by experts. Consequently, fully-supervised methods cannot perform well in these cases.
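    The ensemble idea in Chapter 3 (fuse probability maps from several differently trained U-Nets, then threshold to a mask scored by the Dice coefficient) can be sketched in a few lines. This is an illustrative sketch only: random arrays stand in for the per-member segmentation outputs, and the averaging fusion and 0.5 threshold are assumptions, not the thesis's exact fusion rule.

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 64, 64

# Stand-ins for per-pixel tumour probabilities from four ensemble members
# (e.g. shallow/deep U-Nets trained at different resolutions or data subsets).
member_probs = [rng.uniform(size=(h, w)) for _ in range(4)]

fused = np.mean(member_probs, axis=0)    # average the probability maps
mask = (fused >= 0.5).astype(np.uint8)   # threshold to a binary tumour mask

def dice(pred, target):
    """Dice coefficient between two binary masks (the reported metric)."""
    inter = np.logical_and(pred, target).sum()
    return 2 * inter / (pred.sum() + target.sum())
```

    In the thesis a Convolutional Conditional Random Field would additionally smooth the fused map before thresholding.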
    Chapter 4 proposes self-supervised contrastive learning for tumour segmentation. A self-supervised cancer segmentation framework is proposed to reduce label dependency. An innovative contrastive learning scheme is developed to represent tumour features from unlabelled images. Unlike a standard U-Net, the backbone is a patch-based segmentation network. Additionally, data augmentation and contrastive losses are applied to improve the discriminability of tumour features. A convolutional Conditional Random Field is used to smooth predictions and eliminate noise. Three labelled and fourteen unlabelled images were collected from a private skin cancer dataset called BSS. Experimental results show that the proposed method achieves better tumour segmentation performance than other popular self-supervised methods. However, when evaluated on the same public datasets as Chapter 3, the proposed self-supervised method struggles with fine-grained segmentation around tumour boundaries compared with the supervised method we proposed. Chapter 5 proposes a sketch-based weakly-supervised tumour segmentation method. To segment tumour regions precisely from coarse annotations, a sketch-supervised method is proposed, comprising a dual CNN-Transformer network and a global normalised class activation map. The CNN-Transformer networks simultaneously model global and local tumour features. With the global normalised class activation map, a gradient-based tumour representation can be obtained from the dual network predictions. We invited experts to mark fine and coarse annotations in the private BSS and the public PAIP2019 datasets to facilitate reproducible performance comparisons. On the BSS dataset, the proposed method achieves an IoU of 76.686% and a Dice score of 86.6%, outperforming state-of-the-art methods. Additionally, the proposed method achieves a Dice gain of 8.372% over U-Net on the PAIP2019 dataset.
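    A contrastive scheme of the kind Chapter 4 describes typically pulls embeddings of two augmented views of the same patch together while pushing all other patches apart. The sketch below implements the standard SimCLR-style NT-Xent loss on random embeddings; it is a generic illustration of contrastive learning, not the thesis's specific loss, and all shapes and the temperature are assumptions.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Normalized temperature-scaled cross-entropy (SimCLR-style) loss.

    z1[i] and z2[i] are embeddings of two augmented views of patch i;
    each view's positive is its counterpart, the other 2N-2 views are negatives.
    """
    z = np.concatenate([z1, z2])                      # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalise rows
    sim = z @ z.T / temperature                       # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    n = len(z1)
    positives = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), positives].mean()

rng = np.random.default_rng(0)
z1 = rng.normal(size=(16, 32))              # view-1 patch embeddings
z2 = z1 + 0.1 * rng.normal(size=(16, 32))   # view-2: slightly perturbed copies
loss_aligned = nt_xent_loss(z1, z2)
loss_random = nt_xent_loss(z1, rng.normal(size=(16, 32)))
```

    Aligned view pairs yield a lower loss than unrelated pairs, which is exactly the signal that makes the learned patch features discriminative without labels.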
    The thesis presents three approaches to segmenting cancers from histology images: fully-supervised, self-supervised, and weakly-supervised methods. This research effectively segments tumour regions based on histopathological annotations and well-designed modules. Our studies comprehensively demonstrate label-effective automatic histopathological image segmentation. Experimental results show that our methods achieve state-of-the-art segmentation performance on private and public datasets. In the future, we plan to integrate more tumour feature representation technologies with other medical modalities and apply them to clinical research.

    Is attention all you need in medical image analysis? A review

    Medical imaging is a key component of clinical diagnosis, treatment planning, and clinical trial design, accounting for almost 90% of all healthcare data. Convolutional neural networks (CNNs) have achieved performance gains in medical image analysis (MIA) in recent years: they efficiently model local pixel interactions and can be trained on small-scale MIA data. The main disadvantage of typical CNN models is that they ignore global pixel relationships within images, which limits their ability to generalise to out-of-distribution data with different 'global' information. Recent progress in artificial intelligence gave rise to Transformers, which can learn global relationships from data; however, full Transformer models need to be trained on large-scale data and involve tremendous computational complexity. Attention and Transformer compartments (Transf/Attention), which preserve the ability to model global relationships, have been proposed as lighter alternatives to full Transformers. Recently, there has been an increasing trend to cross-pollinate complementary local-global properties from CNN and Transf/Attention architectures, leading to a new era of hybrid models. The past years have witnessed substantial growth in hybrid CNN-Transf/Attention models across diverse MIA problems. In this systematic review, we survey existing hybrid CNN-Transf/Attention models, review and unravel key architectural designs, analyse breakthroughs, and evaluate current and future opportunities as well as challenges. We also introduce a comprehensive analysis framework on generalisation opportunities for scientific and clinical impact, based on which new data-driven domain generalisation and adaptation methods can be stimulated.
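    The global-relationship modelling that distinguishes attention from convolution can be shown in a few lines. This is a generic single-head scaled dot-product self-attention sketch (the standard building block such reviews discuss), with random weights standing in for learned projections; every output position mixes information from all positions, unlike a convolution's local receptive field.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention over a patch sequence.

    Each output row is a softmax-weighted mix of ALL input positions,
    i.e. the global pixel relationships that plain convolutions ignore.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])                  # (n, n) logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v, weights

rng = np.random.default_rng(0)
n_patches, d = 16, 8  # e.g. 16 image patches with 8-dimensional embeddings
x = rng.normal(size=(n_patches, d))
w_q, w_k, w_v = (rng.normal(size=(d, d)) for _ in range(3))
out, attn = self_attention(x, w_q, w_k, w_v)
```

    Hybrid CNN-Transf/Attention models interleave blocks like this with convolutional stages, trading the quadratic cost of full attention against the locality bias of convolutions.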

    RSGPT: A Remote Sensing Vision Language Model and Benchmark

    The emergence of large language models, with GPT-4 as a prominent example, has significantly propelled the rapid advancement of artificial general intelligence and sparked the revolution of Artificial Intelligence 2.0. In the realm of remote sensing (RS), there is growing interest in developing large vision language models (VLMs) specifically tailored to data analysis in this domain. However, current research predominantly revolves around visual recognition tasks, and the field lacks comprehensive, large-scale, aligned image-text datasets suitable for training large VLMs, which poses significant challenges to effectively training such models for RS applications. In computer vision, recent research has demonstrated that fine-tuning large vision language models on small-scale, high-quality datasets can yield visual and language understanding performance comparable to that of state-of-the-art VLMs trained from scratch on massive amounts of data, such as GPT-4. Inspired by this idea, in this work we build a high-quality Remote Sensing Image Captioning dataset (RSICap) that facilitates the development of large VLMs in the RS field. Unlike previous RS datasets that employ model-generated captions or short descriptions, RSICap comprises 2,585 human-annotated captions with rich, high-quality information. The dataset offers detailed descriptions for each image, encompassing scene descriptions (e.g., residential area, airport, or farmland) as well as object information (e.g., color, shape, quantity, absolute position, etc.). To facilitate the evaluation of VLMs in the field of RS, we also provide a benchmark evaluation dataset called RSIEval, consisting of human-annotated captions and visual question-answer pairs, allowing for a comprehensive assessment of VLMs in the context of RS.

    Eating Behavior In-The-Wild and Its Relationship to Mental Well-Being

    The motivation for eating is beyond survival. Eating serves as a means for socializing, exploring cultures, and more. Computing researchers have developed various eating detection technologies that can leverage passive sensors available on smart devices to automatically infer when and, to some extent, what an individual is eating. However, despite their significance in the eating literature, crucial contextual information such as meal company, type of food, location of meals, the motivation for eating episodes, and the timing of meals is difficult to detect through passive means. More importantly, the applications of currently developed automated eating detection systems are limited. My dissertation addresses several of these challenges by combining the strengths of passive sensing technologies and Ecological Momentary Assessments (EMAs). EMAs are a widely adopted tool used across a variety of disciplines that can gather in-situ information about individual experiences. In my dissertation, I demonstrate the relationship between various eating contexts and the mental well-being of college students and information workers through naturalistic studies. The contributions of my dissertation are four-fold. First, I develop a real-time meal detection system that can detect meal-level episodes and trigger EMAs to gather contextual data about one's eating episode. Second, I deploy this system in a college student population to understand their eating behavior during day-to-day life and investigate the relationship of these eating behaviors with various mental well-being outcomes. Third, based on the limitations of passive sensing systems in detecting the short and sporadic chewing episodes present in snacking, I develop a snacking detection system and operationalize the definition of snacking in this thesis. Finally, I investigate the causal relationship between the stress levels remote information workers experience during their workdays and their lunchtime behavior.
    This dissertation situates the findings in an interdisciplinary context, including ubiquitous computing, psychology, and nutrition.