11 research outputs found

    Weakly Supervised Medical Image Segmentation With Soft Labels and Noise Robust Loss

    Full text link
    Recent advances in deep learning algorithms have led to significant benefits for solving many medical image analysis problems. Training deep learning models commonly requires large datasets with expert-labeled annotations. However, acquiring expert labels is not only expensive but also subjective and error-prone, and inter-/intra-observer variability introduces noise into the labels. This is a particular problem when using deep learning models to segment medical images, owing to ambiguous anatomical boundaries. Image-based medical diagnosis tools built on deep learning models trained with incorrect segmentation labels can lead to false diagnoses and treatment suggestions. Multi-rater annotations may be better suited than single-rater annotations for training deep learning models on small training sets. The aim of this paper was to develop and evaluate a method for generating probabilistic labels from multi-rater annotations and anatomical knowledge of lesion features in MRI, together with a method for training segmentation models on these probabilistic labels using a normalized active-passive loss as a noise-tolerant loss function. The model was evaluated against binary ground truth on 17 knee MRI scans for clinical segmentation and detection of bone marrow lesions (BML). The proposed method improved precision by 14%, recall by 22%, and Dice score by 8% compared to a binary cross-entropy loss function. Overall, these results suggest that the proposed normalized active-passive loss with soft labels successfully mitigated the effects of noisy labels.
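    Normalized active-passive losses pair a bounded "active" term with a "passive" term to limit the influence of mislabeled pixels. A minimal per-pixel sketch of one common instance of this family (normalized cross-entropy plus reverse cross-entropy); the paper's exact formulation and weighting may differ:

```python
import numpy as np

def active_passive_loss(pred, soft_label, alpha=1.0, beta=1.0, log_floor=-4.0):
    """Per-pixel noise-tolerant loss in the active-passive family
    (here NCE + RCE); an illustrative sketch, not the paper's exact loss."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1.0)
    # Active term: normalized cross-entropy. Dividing the cross-entropy by
    # its sum over all classes bounds the loss, which is what gives the
    # robustness to label noise.
    nce = np.sum(soft_label * np.log(pred)) / np.sum(np.log(pred))
    # Passive term: reverse cross-entropy, with log of near-zero soft
    # labels clipped to a finite floor so the term stays well defined.
    log_label = np.maximum(np.log(np.clip(soft_label, eps, 1.0)), log_floor)
    rce = -np.sum(pred * log_label)
    return alpha * nce + beta * rce
```

    A prediction that agrees with the (soft) label incurs a smaller loss than one that contradicts it, while the bounded active term keeps any single noisy pixel from dominating the gradient.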

    Self-Supervised-RCNN for Medical Image Segmentation with Limited Data Annotation

    Full text link
    Many successful machine learning methods for medical image analysis use supervised learning, which often requires large expert-annotated datasets to achieve high accuracy. However, medical data annotation is time-consuming and expensive, especially for segmentation tasks. To address the problem of learning from limited labeled medical image data, this work proposes an alternative deep learning training strategy based on self-supervised pretraining on unlabeled MRI scans. The pretraining approach first randomly applies different distortions to random areas of unlabeled images and then predicts the type of distortion and the loss of information. To this end, an improved version of the Mask-RCNN architecture is adapted to localize the distortion and recover the original image pixels. The effectiveness of the proposed method for segmentation tasks under different pretraining and fine-tuning scenarios is evaluated on the Osteoarthritis Initiative dataset. This self-supervised pretraining improved the Dice score by 20% compared to training from scratch. The proposed self-supervised learning approach is simple, effective, and suitable for a wide range of medical image analysis tasks, including anomaly detection, segmentation, and classification.
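    The pretext task above can be sketched as follows: distort a random patch of an unlabeled image and keep the distortion type, its location, and the original pixels as free supervision targets. The distortion set and patch sizing here are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

DISTORTIONS = ("gaussian_noise", "local_average", "zero_mask")  # illustrative set

def make_pretext_sample(img, rng, patch_frac=4):
    """Distort a random patch of an unlabeled image. The distortion class,
    its bounding box, and the original pixels become the pretext targets
    (classify the distortion, localize it, recover the pixels)."""
    h, w = img.shape
    ph, pw = h // patch_frac, w // patch_frac
    y = int(rng.integers(0, h - ph + 1))
    x = int(rng.integers(0, w - pw + 1))
    kind = int(rng.integers(0, len(DISTORTIONS)))
    original_patch = img[y:y + ph, x:x + pw].copy()
    distorted = img.copy()
    if DISTORTIONS[kind] == "gaussian_noise":
        distorted[y:y + ph, x:x + pw] += rng.normal(0.0, 0.5, (ph, pw))
    elif DISTORTIONS[kind] == "local_average":
        distorted[y:y + ph, x:x + pw] = original_patch.mean()
    else:  # zero_mask: information is fully removed
        distorted[y:y + ph, x:x + pw] = 0.0
    return distorted, kind, (y, x, ph, pw), original_patch
```

    A detection-style network (Mask-RCNN in the paper) is then trained to predict `kind` and the box, and a reconstruction head to recover `original_patch`, with no manual labels required.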

    Real-time movement compensation for synchronous robotic HIFU surgery

    No full text
    Open surgical techniques are being replaced by minimally invasive or noninvasive techniques in areas such as oncology, mainly due to reductions in tissue trauma and recovery time. Radiation therapy (RT), the most commonly used noninvasive treatment, is known to cause tissue ionization, which manifests as side effects such as edema, damage to the epithelium, and fatigue; in rare cases it can itself cause cancer. High intensity focused ultrasound (HIFU) is devoid of ionizing effects, making it more suitable for organs such as the kidney that have low radiation tolerance. Although the therapeutic potential of HIFU has been proved, its clinical usage is still limited by inaccuracies in target tracking and control. HIFU surgery shares several issues common to noninvasive surgery, the most significant being movement of the target organ due to physiological processes such as respiration. Prevalent HIFU systems either ignore this movement or rely on breath suppression. In this research, a movement model that can be incorporated into a therapeutic system is proposed to achieve real-time movement compensation. Other aspects of the therapeutic system, such as the image registration algorithm, the movement control strategy, and the HIFU applicator design, were also addressed. The target application for this study was the treatment of kidney tumors such as renal cell carcinoma (RCC). Initial work deals with measurement and statistical analysis of kidney movement in a set of healthy volunteers. The movement patterns observed were complex and subject-specific, so generic movement models cannot be used. A prediction-correlation based model was therefore proposed, and a new empirical model was developed for kidney movement; this was used as the basis for nonlinear predictors such as the Unscented Kalman Filter (UKF), the Extended Kalman Filter (EKF), and the Adaptive Neuro-Fuzzy Inference System (ANFIS).
The movement model also comprises a correlation network that maps the positions of skin markers to the current position of the kidney. The correlation module comprises a new mapping function tuned using ANFIS. The prediction-correlation based approach has two advantages: first, it intuitively accounts for the subject-specific nature of the movement; second, it reduces the prediction horizon, thereby improving accuracy. This approach achieves an accuracy of more than 92% for movement prediction. The maximum absolute error observed was 2.4 mm.
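    The prediction side of such a scheme can be illustrated with a deliberately simplified stand-in: a constant-velocity linear Kalman filter producing one-step-ahead position predictions. The thesis uses nonlinear predictors (UKF, EKF, ANFIS) on an empirical respiration model; the linear version below only shows the predict-correct structure they share:

```python
import numpy as np

def kalman_predict_track(zs, dt=0.1, q=1e-4, r=1e-2):
    """One-step-ahead position prediction with a constant-velocity Kalman
    filter -- a linear stand-in for the thesis's UKF/EKF/ANFIS predictors."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])              # only position is observed
    Q = q * np.eye(2)                       # process noise
    R = np.array([[r]])                     # measurement noise
    x = np.zeros((2, 1))
    P = np.eye(2)
    preds = []
    for z in zs:
        # Predict one sample ahead; this value is what the therapy
        # controller would act on before the measurement arrives.
        x = F @ x
        P = F @ P @ F.T + Q
        preds.append(float(x[0, 0]))
        # Correct with the new measurement once it is available.
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
    return preds
```

    Predicting only one step ahead, as the prediction-correlation design does, keeps the horizon short, which is precisely why the thesis reports improved accuracy over longer-horizon prediction.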

    Respiration-induced movement correlation for synchronous noninvasive renal cancer surgery

    No full text
    Noninvasive surgery (NIS), such as high-intensity focused ultrasound (HIFU)-based ablation or radiosurgery, is used for treating tumors and cancers in various parts of the body. The soft tissue targets (usually organs) deform and move as a result of physiological processes such as respiration. Moreover, other deformations also occur during surgery, induced by changes in patient position, changes in physical properties caused by repeated exposures, and uncertainties resulting from cavitation. In this paper, we present a correlation-based movement prediction technique to address respiration-induced movement of the urological organs while targeting through extracorporeal trans-abdominal access. Among the organs, kidneys are the worst affected during respiratory cycles, with significant three-dimensional displacements on the order of 20 mm. Remote access to renal targets such as renal carcinomas and cysts during noninvasive surgery therefore requires tightly controlled real-time motion tracking and a quantitative compensation estimate to synchronize the energy source(s) for precise energy delivery to the intended regions. The correlation model finds a mapping between the movement patterns of external skin markers placed on the abdominal access window and the internal movement of the targeted kidney. The coarse position estimate is then fine-tuned using the Adaptive Neuro-Fuzzy Inference System (ANFIS), thereby achieving a nonlinear mapping. The technical issues involved in this tracking scheme are threefold: the model must map the movement pattern with sufficient accuracy; an image-based tracking scheme must provide the organ position within the allowable system latency; and the processing delay from modeling and tracking must fit within the achievable prediction horizon to accommodate the latency of the therapeutic delivery system.
The concept was tested on ultrasound image sequences collected from 20 healthy volunteers. The results indicate that the modeling technique can be practically integrated into an image-guided noninvasive robotic surgical system, with an indicative targeting accuracy of more than 94%. A comparative analysis showed the superiority of this technique over conventional linear mapping and model-free blind search techniques.
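    The correlation idea, external marker displacement mapped nonlinearly to internal organ position, can be sketched with ordinary least-squares polynomial fitting standing in for ANFIS. The calibration data and the marker-to-kidney relation below are synthetic assumptions for illustration only:

```python
import numpy as np

# Synthetic calibration sweep: skin-marker displacement s (mm) versus
# kidney displacement k (mm), with an assumed nonlinear coupling.
s = np.linspace(0.0, 5.0, 50)
k = 3.5 * s + 0.4 * s**2          # illustrative stand-in for the true coupling

# Fit the nonlinear correlation model. The paper tunes this mapping with
# ANFIS; a degree-2 least-squares polynomial plays that role here.
coeffs = np.polyfit(s, k, deg=2)
predict = np.poly1d(coeffs)

# At run time, a new marker reading yields an estimated kidney offset
# without any direct internal imaging of the organ.
k_est = predict(2.0)
```

    In the actual system this mapping is combined with the image-based tracker and the prediction module, so that marker readings bridge the latency between imaging updates.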

    AI aided workflow for hip dysplasia screening using ultrasound in primary care clinics

    No full text
    Developmental dysplasia of the hip (DDH) is a common cause of premature osteoarthritis. This osteoarthritis can be prevented if DDH is detected by ultrasound and treated in infancy, but universal DDH screening is generally not cost-effective due to the need for experts to perform the scans. The purpose of our study was to evaluate the feasibility of having non-expert primary care clinic staff perform DDH ultrasound using handheld ultrasound with artificial intelligence (AI) decision support. We performed an implementation study evaluating the FDA-cleared MEDO-Hip AI app, which interprets cine-sweep images obtained from a handheld Philips Lumify probe to detect DDH. Initial scans were performed by nurses or family physicians in 3 primary care clinics, trained by video, PowerPoint slides, and a brief in-person session. When the AI app recommended follow-up (FU), we first performed internal FU by a sonographer using the AI app; cases still considered abnormal by AI were referred to a pediatric orthopedic clinic for assessment. We performed 369 scans in 306 infants. Internal FU rates were initially 40% for nurses and 20% for physicians, declining steeply to 14% after ~60 cases/site: 4% technical failure, 8% normal at sonographer FU using AI, and 2% confirmed DDH. Of 6 infants referred to the pediatric orthopedic clinic, all were treated for DDH (100% specificity); 4 had no risk factors and may not otherwise have been identified. Real-time AI decision support and a simplified portable ultrasound protocol enabled lightly trained primary care clinic staff to perform hip dysplasia screening with FU and case detection rates similar to costly formal ultrasound screening, in which the scan is performed by a sonographer and interpreted by a radiologist or orthopedic surgeon. This highlights the potential utility of AI-supported portable ultrasound in primary care.

    2D/3D ultrasound diagnosis of pediatric distal radius fractures by human readers vs artificial intelligence

    No full text
    Wrist trauma is common in children and generally requires radiography to exclude fractures, subjecting children to radiation and long wait times in the emergency department. Ultrasound (US) has the potential to be a safer, faster diagnostic tool. This study aimed to determine how reliably US could detect distal radius fractures in children, to compare the accuracy of 2DUS with 3DUS, and to assess the utility of artificial intelligence for image interpretation. 127 children were scanned with 2DUS and 3DUS on the affected wrist. The US scans were then read by 7 blinded human readers and an AI model. With radiographs as the gold standard, expert human readers obtained mean sensitivities of 0.97 and 0.98 for 2DUS and 3DUS, respectively. The AI model's sensitivity was 0.91 and 1.00 for 2DUS and 3DUS, respectively. The study data suggest that 2DUS is comparable to 3DUS and that AI diagnosis is comparable to human experts.
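    The sensitivities reported above are computed from the confusion counts against the radiograph gold standard. For readers unfamiliar with the metrics, a minimal sketch with illustrative counts (not the study's actual confusion matrix):

```python
def sensitivity(tp, fn):
    # Fraction of radiograph-confirmed fractures the reader flagged.
    return tp / (tp + fn)

def specificity(tn, fp):
    # Fraction of fracture-free wrists the reader correctly cleared.
    return tn / (tn + fp)

# Illustrative counts chosen to reproduce the reported AI 2DUS figure of 0.91.
sens_2d = sensitivity(tp=91, fn=9)
```

    A sensitivity of 1.00, as reported for the AI model on 3DUS, means no radiograph-confirmed fracture was missed in that arm of the study.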