119 research outputs found

    Towards Reliable Colorectal Cancer Polyps Classification via Vision Based Tactile Sensing and Confidence-Calibrated Neural Networks

    In this study, toward addressing the over-confident outputs of existing artificial intelligence-based colorectal cancer (CRC) polyp classification techniques, we propose a confidence-calibrated residual neural network. Utilizing a novel vision-based tactile sensing (VS-TS) system and unique CRC polyp phantoms, we demonstrate that traditional metrics such as accuracy and precision are not sufficient to encapsulate model performance for handling a sensitive CRC polyp diagnosis. To this end, we develop a residual neural network classifier and address its over-confident outputs for CRC polyp classification via the post-processing method of temperature scaling. To evaluate the proposed method, we introduce noise and blur to the textural images obtained from the VS-TS and test the model's reliability for non-ideal inputs through reliability diagrams and other statistical metrics.
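Temperature scaling, as used in the abstract above, divides the logits by a scalar T > 1 before the softmax and is typically judged with the expected calibration error (ECE). The sketch below uses synthetic logits and labels (not the authors' data or code) to show the mechanics:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax: T > 1 softens over-confident logits.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def expected_calibration_error(probs, labels, n_bins=10):
    # ECE: weighted gap between mean confidence and accuracy per confidence bin.
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

# Over-confident logits for 4 samples, 2 classes; sample 2 is a confident mistake.
logits = np.array([[4.0, 0.0], [3.5, 0.5], [0.2, 3.8], [3.0, 1.0]])
labels = np.array([0, 1, 1, 0])

for T in (1.0, 2.0):
    print(T, expected_calibration_error(softmax(logits, T=T), labels))
```

With the soft labels spread out by T = 2, the confidence of the wrong prediction drops and the ECE improves on this toy set.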

    Calibrating the dice loss to handle neural network overconfidence for biomedical image segmentation

    The Dice similarity coefficient (DSC) is both a widely used metric and loss function for biomedical image segmentation due to its robustness to class imbalance. However, it is well known that the DSC loss is poorly calibrated, resulting in overconfident predictions that cannot be usefully interpreted in biomedical and clinical practice. Performance is often the only metric used to evaluate segmentations produced by deep neural networks, and calibration is often neglected. However, calibration is important for translation into biomedical and clinical practice, providing crucial contextual information to model predictions for interpretation by scientists and clinicians. In this study, we provide a simple yet effective extension of the DSC loss, named the DSC++ loss, that selectively modulates the penalty associated with overconfident, incorrect predictions. As a standalone loss function, the DSC++ loss achieves significantly improved calibration over the conventional DSC loss across six well-validated open-source biomedical imaging datasets, including both 2D binary and 3D multi-class segmentation tasks. Similarly, we observe significantly improved calibration when integrating the DSC++ loss into four DSC-based loss functions. Finally, we use softmax thresholding to illustrate that well calibrated outputs enable tailoring of recall-precision bias, which is an important post-processing technique to adapt the model predictions to suit the biomedical or clinical task. The DSC++ loss overcomes the major limitation of the DSC loss, providing a suitable loss function for training deep learning segmentation models for use in biomedical and clinical practice. Source code is available at https://github.com/mlyg/DicePlusPlus
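The idea of selectively penalising overconfident, incorrect predictions can be illustrated next to the standard soft Dice loss. The sketch below is an assumption-laden approximation: the focal-style gamma re-weighting of the false-positive and false-negative terms, and gamma = 2.0, are illustrative stand-ins, not the published DSC++ formulation (which is in the linked repository):

```python
import numpy as np

def soft_dice_loss(probs, target, eps=1e-7):
    # Standard soft Dice loss for a flattened binary mask.
    probs, target = np.asarray(probs, float), np.asarray(target, float)
    inter = (probs * target).sum()
    return 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)

def modulated_dice_loss(probs, target, gamma=2.0, eps=1e-7):
    # Illustrative focal-style variant in the spirit of DSC++: the false
    # positive and false negative terms are raised to gamma, so mild,
    # low-confidence errors are discounted relative to confident ones.
    # For gamma = 1 this reduces exactly to the standard soft Dice loss.
    probs, target = np.asarray(probs, float), np.asarray(target, float)
    inter = (probs * target).sum()
    fp = (((1.0 - target) * probs) ** gamma).sum()
    fn = ((target * (1.0 - probs)) ** gamma).sum()
    return 1.0 - (2.0 * inter + eps) / (2.0 * inter + fp + fn + eps)

target = np.array([1.0, 1.0, 0.0, 0.0])
perfect = target.copy()
confident_wrong = np.array([0.05, 0.05, 0.95, 0.95])
mild_wrong = np.array([0.4, 0.4, 0.6, 0.6])
```

On these toy masks the modulated loss still vanishes for a perfect prediction while penalising the confident mistake far more than the mild one.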

    Barriers and Pitfalls for Artificial Intelligence in Gastroenterology: Ethical and Regulatory issues

    Artificial intelligence (AI)-based technologies are developing rapidly, offering great promise for gastroenterology and particularly endoscopy. However, there are complex barriers and pitfalls that must be considered before widespread real-world clinical implementation can occur. This review highlights major ethical concerns related to data privacy and sharing that are essential for the development of AI models, through to practical clinical issues such as potential patient harm, accountability, bias in decisions, and impact on the workforce. Finally, current regulatory pathways are discussed, recognizing that these need to evolve to deal with unique new challenges, such as the adaptive and rapidly iterative nature of AI-based technologies, while striking a balance between ensuring patient safety and promoting innovation.

    μž„μƒμˆ κΈ° ν–₯상을 μœ„ν•œ λ”₯λŸ¬λ‹ 기법 연ꡬ: λŒ€μž₯λ‚΄μ‹œκ²½ 진단 및 λ‘œλ΄‡μˆ˜μˆ  술기 평가에 적용

    Doctoral dissertation, Seoul National University Graduate School, Interdisciplinary Program in Bioengineering, College of Engineering, August 2020. Advisor: Hee Chan Kim. This thesis presents deep learning-based methods for improving the performance of clinicians; the methods were applied to the following two clinical cases and the results were evaluated. In the first study, a deep learning-based polyp classification algorithm was developed to improve the clinical performance of endoscopists during colonoscopy diagnosis. Colonoscopy is the main method for diagnosing adenomatous polyps, which can develop into colorectal cancer, and hyperplastic polyps. The classification algorithm was developed using a convolutional neural network (CNN) trained on colorectal polyp images taken by narrow-band imaging colonoscopy. The proposed method is built around automatic machine learning (AutoML), which searches for the optimal CNN architecture for colorectal polyp image classification and trains the weights of that architecture. In addition, the gradient-weighted class activation mapping technique was used to overlay the probabilistic basis of the prediction on the polyp location to aid endoscopists visually. To verify the improvement in diagnostic performance, the efficacy of endoscopists of varying proficiency levels was compared with and without the aid of the proposed polyp classification algorithm. The results confirmed that, on average, diagnostic accuracy was significantly improved and diagnosis time was significantly shortened in all proficiency groups.
    In the second study, a surgical instrument tracking algorithm for robotic surgery video was developed, and a model for quantitatively evaluating a surgeon's skill based on the acquired motion information of the surgical instruments was proposed. The movement of surgical instruments is the main component of surgical skill evaluation. The focus of this study was therefore to develop an automatic surgical instrument tracking algorithm and to overcome the limitations of previous methods. An instance segmentation framework was developed to solve the instrument occlusion issue, and a tracking framework composed of a tracker and a re-identification algorithm was developed to maintain the identity of the surgical instruments being tracked in the video. In addition, algorithms for detecting the tip position of instruments and the arm-indicator were developed to acquire the movement of devices specific to robotic surgery video. The performance of the proposed method was evaluated by measuring the difference between the predicted tip position and the ground-truth position of the instruments using root mean square error, area under the curve, and Pearson's correlation analysis. Furthermore, motion metrics were calculated from the movement of the surgical instruments, and a machine learning-based robotic surgical skill evaluation model was developed from these metrics. When used to evaluate clinicians, the developed models produced results similar to the Objective Structured Assessment of Technical Skill (OSATS) and the Global Evaluative Assessment of Robotic Surgery (GEARS) evaluation methods. In this thesis, deep learning technology was applied to colorectal polyp images for polyp classification and to robotic surgery videos for surgical instrument tracking; the resulting improvement in clinical performance was evaluated and verified, and the proposed methods are expected to serve as alternatives to the diagnostic and assessment methods currently used in clinical practice.
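Kinematic metrics of the kind the thesis derives from tracked instrument-tip positions can be sketched as follows. The specific metric set (path length, mean speed, jerk-based smoothness) and the synthetic trajectories are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

def motion_metrics(xy, dt=1.0):
    # Common kinematic skill metrics from a 2-D instrument-tip trajectory.
    # xy: (N, 2) array of positions; dt: seconds between frames.
    xy = np.asarray(xy, dtype=float)
    steps = np.diff(xy, axis=0)            # frame-to-frame displacement
    seg = np.linalg.norm(steps, axis=1)    # per-step distance
    vel = steps / dt
    acc = np.diff(vel, axis=0) / dt
    jerk = np.diff(acc, axis=0) / dt       # third derivative: smoothness proxy
    return {
        "path_length": float(seg.sum()),
        "mean_speed": float(seg.mean() / dt),
        "mean_jerk": float(np.linalg.norm(jerk, axis=1).mean()) if len(jerk) else 0.0,
    }

# A straight, steady path vs. a jittery one between the same endpoints.
smooth = np.stack([np.linspace(0, 10, 11), np.zeros(11)], axis=1)
jittery = smooth + np.array([[0, (-1) ** i] for i in range(11)])
```

A skilled, economical movement shows a shorter path and near-zero jerk; the jittery trajectory scores worse on both, which is the signal a downstream skill model can learn from.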

    Cost-Effectiveness Analysis of Colorectal Cancer Screening Strategies Using Active Learning and Monte Carlo Simulation

    Colorectal cancer (CRC) is one of the deadliest types of cancer in the US due to its high incidence and mortality rates. Detection of CRC in the early stages through available screening tests increases the patient's survival chances. In this study, we investigate the cost-effectiveness of a wide variety of multi-modal CRC screening policies. More specifically, we develop a Monte Carlo simulation framework to model the CRC natural history and preventive interventions. Age-specific and size-specific progression rates of adenomatous polyps are estimated using an innovative active learning method: a decision tree model is developed to estimate size-specific and age-specific adenoma progression and regression rates. Compared to traditional methods, the proposed calibration process significantly expedites the search of the model parameter space. CRC age-specific incidence rates and CRC stage distribution are the two output measures used in the calibration process. Seventy-eight CRC screening policies are applied to a cohort of the U.S. male population using the simulation model and compared in terms of expected Quality-Adjusted Life Years (QALY) and costs. Eleven policies are identified as efficient-frontier policies, and nine of these are identified as cost-effective at the willingness-to-pay (WTP) threshold of $50,000: Fecal Occult Blood Test (FOBT) biennially with one-time colonoscopy at 60; FOBT biennially with one-time colonoscopy at 50; Fecal Immunochemical Test (FIT) biennially with two flexible sigmoidoscopies (FS) at 60 and 65; FIT biennially with one-time colonoscopy at 65; colonoscopy at 50, 60 and 70; FOBT biennially with two colonoscopies at 55 and 65; FOBT annually with two FS at 70 and 75; FOBT annually with FS at 50 and 55; and FIT biennially with FS every 5 years.
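The frontier-and-WTP selection described above can be sketched as simple dominance filtering followed by an incremental cost-effectiveness ratio (ICER) check. The policy names, costs, and QALY figures below are hypothetical, and extended dominance is omitted for brevity:

```python
def efficient_frontier(policies):
    # policies: list of (name, cost, qaly) tuples. Sort by cost and keep a
    # policy only if it yields strictly more QALYs than every cheaper kept
    # policy (simple dominance; extended dominance is not handled here).
    frontier = []
    for name, cost, qaly in sorted(policies, key=lambda p: (p[1], -p[2])):
        if not frontier or qaly > frontier[-1][2]:
            frontier.append((name, cost, qaly))
    return frontier

def cost_effective(frontier, wtp=50_000):
    # Keep frontier policies whose incremental cost-effectiveness ratio
    # (ICER) against the previously kept policy stays at or below the
    # willingness-to-pay (WTP) threshold in $/QALY.
    kept = [frontier[0]]
    for name, cost, qaly in frontier[1:]:
        _, base_cost, base_qaly = kept[-1]
        if (cost - base_cost) / (qaly - base_qaly) <= wtp:
            kept.append((name, cost, qaly))
    return [name for name, _, _ in kept]

# Hypothetical policies: (name, expected lifetime cost, expected QALYs gained).
policies = [
    ("no screening", 0.0, 0.0),
    ("FOBT biennial", 1_000.0, 0.05),
    ("FIT biennial", 1_500.0, 0.04),    # dominated: costlier, fewer QALYs
    ("colonoscopy x3", 3_000.0, 0.10),
    ("intensive mix", 40_000.0, 0.11),  # on the frontier but ICER > WTP
]
```

Here "FIT biennial" falls off the frontier by dominance, and "intensive mix" survives the frontier but fails the $50,000/QALY check.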

    SΒ²ME: Spatial-Spectral Mutual Teaching and Ensemble Learning for Scribble-supervised Polyp Segmentation

    Fully-supervised polyp segmentation has achieved significant progress over the years in advancing the early diagnosis of colorectal cancer. However, label-efficient solutions from weak supervision such as scribbles are rarely explored, yet they are highly meaningful and in demand in medical practice owing to the expense and scarcity of densely annotated polyp data. Besides, various deployment issues, including data shifts and corruption, put forward further requirements for model generalization and robustness. To address these concerns, we design a framework of Spatial-Spectral Dual-branch Mutual Teaching and Entropy-guided Pseudo Label Ensemble Learning (SΒ²ME). Concretely, for the first time in weakly-supervised medical image segmentation, we promote the dual-branch co-teaching framework by leveraging the intrinsic complementarity of features extracted from the spatial and spectral domains and encouraging cross-space consistency through collaborative optimization. Furthermore, to produce reliable mixed pseudo labels, which enhance the effectiveness of ensemble learning, we introduce a novel adaptive pixel-wise fusion technique based on the entropy guidance from the spatial and spectral branches. Our strategy efficiently mitigates the deleterious effects of uncertainty and noise present in pseudo labels and surpasses previous alternatives in terms of efficacy. Ultimately, we formulate a holistic optimization objective to learn from the hybrid supervision of scribbles and pseudo labels. Extensive experiments and evaluation on four public datasets demonstrate the superiority of our method regarding in-distribution accuracy, out-of-distribution generalization, and robustness, highlighting its promising clinical significance. Our code is available at https://github.com/lofrienger/S2ME. (MICCAI 2023 Early Acceptance)
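The entropy-guided pixel-wise fusion of spatial- and spectral-branch pseudo labels can be illustrated as follows. Weighting each branch by exp(-entropy) is an assumed stand-in for the paper's adaptive fusion rule, whose exact form is in the linked repository:

```python
import numpy as np

def pixel_entropy(p, eps=1e-12):
    # Shannon entropy per pixel of a (C, H, W) probability map.
    return -(p * np.log(p + eps)).sum(axis=0)

def entropy_guided_fusion(p_spatial, p_spectral):
    # Weight each branch per pixel by exp(-entropy): the more certain branch
    # dominates the mixed pseudo label at that pixel. This weighting is an
    # illustrative stand-in for the paper's adaptive fusion technique.
    w_a = np.exp(-pixel_entropy(p_spatial))
    w_b = np.exp(-pixel_entropy(p_spectral))
    return (w_a * p_spatial + w_b * p_spectral) / (w_a + w_b)

# One pixel, two classes: a confident spatial branch vs. an uncertain spectral one.
p_spatial = np.array([[[0.99]], [[0.01]]])
p_spectral = np.array([[[0.5]], [[0.5]]])
fused = entropy_guided_fusion(p_spatial, p_spectral)
```

The fused map remains a valid distribution per pixel and leans toward the low-entropy branch, which is the behaviour that makes the mixed pseudo labels more reliable for ensemble learning.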

    Trustworthy clinical AI solutions: a unified review of uncertainty quantification in deep learning models for medical image analysis

    The acceptance of Deep Learning (DL) models in the clinical field remains low relative to the number of high-performing solutions reported in the literature. In particular, end users are reluctant to rely on the raw predictions of DL models. Uncertainty quantification methods have been proposed in the literature as a potential response: by qualifying the decisions produced by the DL black box, they can increase the interpretability and acceptability of the result for the final user. In this review, we propose an overview of the existing methods to quantify the uncertainty associated with DL predictions. We focus on applications to medical image analysis, which present specific challenges due to the high dimensionality of images and their variable quality, as well as constraints associated with real-life clinical routine. We then discuss the evaluation protocols used to validate the relevance of uncertainty estimates. Finally, we highlight the open challenges of uncertainty quantification in the medical field.
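One widely used sampling-based recipe surveyed in reviews of this kind (MC dropout, deep ensembles) decomposes predictive entropy into an aleatoric part (expected entropy) and an epistemic part (mutual information). A minimal sketch on synthetic softmax samples:

```python
import numpy as np

def uncertainty_decomposition(samples, eps=1e-12):
    # samples: (T, C) softmax outputs for one input from T stochastic passes
    # (MC dropout) or T ensemble members. Predictive entropy splits into an
    # aleatoric part (expected entropy) and an epistemic part (mutual info).
    samples = np.asarray(samples, dtype=float)
    mean = samples.mean(axis=0)
    predictive = -(mean * np.log(mean + eps)).sum()
    aleatoric = -(samples * np.log(samples + eps)).sum(axis=1).mean()
    return predictive, aleatoric, predictive - aleatoric  # last term: epistemic

agree = [[0.9, 0.1]] * 4                 # members agree: epistemic ~ 0
disagree = [[0.9, 0.1], [0.1, 0.9]] * 2  # members conflict: epistemic high
```

High epistemic uncertainty flags inputs the model has not learned well, which is exactly the signal clinicians can use to decide when not to trust a prediction.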

    Examining lipid metabolism of colorectal adenomas and carcinomas using Rapid Evaporative Ionisation Mass Spectrometry (REIMS)

    Background: There is an unmet need for real-time intraoperative colorectal tissue recognition, which would promote personalised oncologic decision making. Rapid Evaporative Ionisation Mass Spectrometry (REIMS) analyses the composition of cellular lipids in the aerosol generated by electrosurgical instruments, providing a novel diagnostic platform and surgeon feedback. Thesis Hypothesis: Colorectal lipid metabolism and cellular lipid composition are associated with the phenotype of colorectal adenomas and carcinomas, and this association can be leveraged for tissue recognition in vivo. Methods: This thesis contains three work packages. First, a method for REIMS spectral quality control was developed based on a human dataset, and analysis of a porcine model assessed the spectral impact of technical and environmental factors. Second, an ex vivo spectral reference database was constructed from analysis of human colorectal tissues, assessing the ability of REIMS to recognise tissue. Finally, REIMS was translated into the operating theatre for a proof-of-principle application during transanal minimally invasive surgery (TAMIS). Results: Sensitivity analyses revealed seven minimum quality criteria for REIMS spectra to be included in all future statistical analyses, with quality also impacted by low diathermy power, coagulation mode, and tissue contamination. Based on tissue from 161 patients, REIMS could differentiate normal colorectal tissue, adenoma, and cancer with 91.1% accuracy, and disease from normal with 93.5% accuracy. REIMS could risk-stratify adenomas by predicting grade of dysplasia, but not histological features of poor prognosis in cancers. Sixty-one pertinent lipid metabolites were structurally identified. REIMS was coupled to TAMIS in seven patients; optimisation of the workflow successfully increased signal intensity, with tissue recognition showing high accuracy in vivo and identification of a cancer-involved margin.
    Discussion: This thesis demonstrates that REIMS can be optimised and applied for accurate real-time colorectal tissue recognition based on cellular lipid composition, and that this can be translated in vivo, with promising results during first-in-man mass spectrometry-coupled TAMIS.
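Spectral pre-processing of the kind described in the quality-control work package can be sketched as total-ion-current (TIC) normalisation plus a minimum-intensity gate. Both the single criterion and the 1e4 threshold below are illustrative assumptions; the thesis derives seven criteria that are not enumerated in this abstract:

```python
import numpy as np

def tic_normalise(intensities):
    # Normalise a spectrum to its total ion current (TIC) so that spectra
    # acquired at different absolute signal levels become comparable.
    intensities = np.asarray(intensities, dtype=float)
    tic = intensities.sum()
    return intensities / tic if tic > 0 else intensities

def passes_qc(intensities, min_tic=1e4):
    # Minimal quality gate: reject spectra whose raw TIC falls below a
    # threshold, as very weak signal yields unreliable lipid profiles.
    # The single check and the 1e4 threshold are assumptions for illustration.
    return float(np.sum(intensities)) >= min_tic

strong = np.array([4e3, 8e3, 2e3])  # raw TIC 1.4e4: passes the gate
weak = strong / 10.0                # raw TIC 1.4e3: rejected
```

After normalisation the two spectra have identical profiles, which is the point: classification should depend on lipid composition, not on how much aerosol reached the instrument.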
    • …
    corecore