    Segmentation and Identification of Vertebrae in CT Scans Using CNN, k-Means Clustering and k-NN

    The accurate segmentation and identification of vertebrae provides the foundation for spine analysis, including the assessment of fractures, malformations and other visual insights. The large-scale vertebrae segmentation challenge (VerSe), organized as a competition at the Medical Image Computing and Computer Assisted Intervention (MICCAI) conference, is aimed at vertebrae segmentation and labeling. In this paper, we propose a framework that addresses the tasks of vertebrae segmentation and identification by exploiting both deep learning and classical machine learning methodologies. The proposed solution comprises two phases: a fully automated binary segmentation of the whole spine, which exploits a 3D convolutional neural network, and a semi-automated procedure that locates vertebrae centroids using traditional machine learning algorithms. Unlike other approaches, the proposed method has the added advantage of requiring no single-vertebra-level annotations for training. A dataset of 214 CT scans was extracted from the VerSe'20 challenge data for training, validating and testing the proposed approach. In addition, to evaluate the robustness of the segmentation and labeling algorithms, 12 CT scans from subjects affected by severe, moderate and mild scoliosis were collected from a local medical clinic. On the designated test set from the VerSe'20 data, the binary spine segmentation stage obtained a binary Dice coefficient of 89.17%, whilst the vertebrae identification stage reached an average multi-class Dice coefficient of 90.09%. To ensure the reproducibility of the algorithms developed here, the code has been made publicly available.
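
    As a rough illustration of the identification phase, the sketch below clusters the foreground voxels of a binary spine mask with k-means to estimate one centroid per vertebra. It is a minimal sketch under simplifying assumptions (the vertebra count is taken as known, and all names are illustrative), not the authors' released code.

    # Estimate vertebra centroids from a binary spine mask via k-means on
    # voxel coordinates (illustrative sketch, not the published implementation).
    import numpy as np
    from sklearn.cluster import KMeans

    def estimate_centroids(mask: np.ndarray, n_vertebrae: int) -> np.ndarray:
        """mask: 3D binary array (1 = spine voxel); returns an (n_vertebrae, 3)
        array of centroid coordinates sorted along the cranio-caudal (z) axis."""
        coords = np.argwhere(mask > 0)  # (N, 3) coordinates of spine voxels
        km = KMeans(n_clusters=n_vertebrae, n_init=10, random_state=0).fit(coords)
        centers = km.cluster_centers_
        return centers[np.argsort(centers[:, 0])]  # consistent top-down order

    # Toy usage: two blobs stacked along z stand in for two vertebrae.
    mask = np.zeros((40, 16, 16), dtype=np.uint8)
    mask[5:15, 4:12, 4:12] = 1
    mask[25:35, 4:12, 4:12] = 1
    print(estimate_centroids(mask, n_vertebrae=2))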

    Iron application improves yield, economic returns and grain-Fe concentration of mungbean.

    Malnutrition is among the biggest threats being faced globally, and Pakistan is among the countries with a high malnutrition rate. Pulses grown in Pakistan contain lower amounts of micronutrients, especially iron (Fe), in their grains compared to those of the developed world. Biofortification, a process of integrating nutrients into food crops, provides a sustainable and economical way of increasing mineral and micronutrient concentrations in staple crops. Mungbean fulfills the protein needs of a large portion of the Pakistani population; however, the low Fe concentration in its grains does not provide sufficient Fe. Therefore, the current study was conducted to infer the impact of different Fe levels and application methods on the yield, economic returns and grain-Fe concentration of mungbean. Mungbean was sown under four levels of Fe, i.e., 0, 5, 10 and 15 kg Fe ha⁻¹, applied by three methods: i) basal application (the whole dose at sowing), ii) side dressing (the whole dose at first irrigation) and iii) 50% as basal application + 50% as side dressing (regarded as split application). Iron levels and application methods significantly influenced the allometry, yield, economic returns and grain-Fe concentration of mungbean. Split application of 15 kg Fe ha⁻¹ produced the highest yield, economic returns and grain-Fe concentration compared with the other Fe levels and application methods. Moreover, split application of 15 kg Fe ha⁻¹ proved to be a quick method for improving grain-Fe concentration and bioavailability, which will ultimately help solve the Fe malnutrition problem of the mungbean-consuming population in Pakistan. In conclusion, split application of Fe at 15 kg ha⁻¹ appears to be a viable technique to enhance the yield, economic returns, grain-Fe concentration and bioavailability of mungbean.

    Shape-Based Breast Lesion Classification Using Digital Tomosynthesis Images: The Role of Explainable Artificial Intelligence

    Computer-aided diagnosis (CAD) systems can help radiologists in numerous medical tasks, including the classification and staging of various diseases. The 3D tomosynthesis imaging technique adds value to CAD systems for the diagnosis and classification of breast lesions. Several convolutional neural network (CNN) architectures have been proposed to classify lesion shapes into their respective classes using similar imaging methods. However, not only is the black-box nature of these CNN models questionable in the healthcare domain, but morphology-based cancer classification also concerns clinicians. As a result, this study proposes a mathematically and visually explainable deep-learning-driven multiclass shape-based classification framework for tomosynthesis breast lesion images. In this study, the authors exploit eight pretrained CNN architectures for the classification task on previously extracted region-of-interest images containing the lesions. Additionally, the study opens up the black-box nature of the deep learning models using two well-known perceptive explainable artificial intelligence (XAI) algorithms, Grad-CAM and LIME. Moreover, two mathematical-structure-based interpretability techniques, i.e., t-SNE and UMAP, are employed to investigate the pretrained models’ behavior towards multiclass feature clustering. The experimental results of the classification task validate the applicability of the proposed framework, yielding a mean area under the curve of 98.2%. The explainability study validates the applicability of all employed methods, mainly emphasizing the pros and cons of the Grad-CAM and LIME methods, which can provide useful insights towards explainable CAD systems.
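
    As a concrete illustration of the perceptive XAI step, the sketch below computes a Grad-CAM heatmap for a pretrained CNN in PyTorch. It is a minimal example on a random input with a stock ResNet-18; the paper's specific architectures, preprocessing and tomosynthesis data are not reproduced here.

    # Minimal Grad-CAM sketch on a stock pretrained CNN (illustrative only).
    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet18, ResNet18_Weights

    model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
    target_layer = model.layer4[-1]  # last convolutional block

    acts, grads = {}, {}
    target_layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
    target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

    x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed lesion ROI
    logits = model(x)
    logits[0, logits.argmax()].backward()  # gradient of the top-class score

    # Channel weights are global-average-pooled gradients; the heatmap is the
    # ReLU of the weighted sum of activations, upsampled to the input size.
    w = grads["v"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((w * acts["v"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
    print(cam.shape)  # torch.Size([1, 1, 224, 224]) heatmap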

    Inline Defective Laser Weld Identification by Processing Thermal Image Sequences with Machine and Deep Learning Techniques

    Non-destructive testing methods offer great benefits for detecting and classifying weld defects. Among these, infrared (IR) thermography stands out in the inspection, characterization and analysis of defects from camera image sequences, particularly with the recent advent of deep learning. However, in IR, defect classification becomes a cumbersome task because of exposure to an inconsistent and unbalanced heat source, which requires additional supervision. In light of this, the authors present a fully automated system capable of detecting defective welds, according to their electrical resistance properties, in inline mode. The welding process is captured by an IR camera that generates a video sequence. A set of features extracted from such videos feeds supervised machine learning and deep learning algorithms in order to build an industrial diagnostic framework for weld defect detection. The experimental study validates the aptitude of a customized convolutional neural network architecture to classify malfunctioning weld joints, with a mean accuracy of 99% and a median F1 score of 73% across five-fold cross-validation on a locally acquired real-world dataset. This outcome encourages the integration of thermographic quality-control frameworks in all applications where fast and accurate recognition and safety assurance are crucial industrial requirements across the production line.
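
    As a rough sketch of the feature-based branch of such a framework, the code below condenses a thermal frame sequence into a few hand-crafted statistics and trains a standard classifier on synthetic data. The feature choices and labels are hypothetical; they illustrate the sequence-to-features-to-classifier pipeline, not the paper's actual descriptors or network.

    # Hypothetical per-sequence features from thermal video + a standard
    # classifier (illustrative pipeline only; synthetic data throughout).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def sequence_features(frames: np.ndarray) -> np.ndarray:
        """frames: (T, H, W) thermal intensities for one weld."""
        peak = frames.max(axis=(1, 2))   # per-frame peak temperature
        mean = frames.mean(axis=(1, 2))  # per-frame average heat
        return np.array([
            peak.max(), peak.std(),      # overall peak level and stability
            mean.mean(), mean.std(),     # overall thermal budget
            np.diff(mean).std(),         # heating/cooling dynamics
        ])

    rng = np.random.default_rng(0)
    X = np.stack([sequence_features(rng.random((50, 32, 32))) for _ in range(40)])
    y = rng.integers(0, 2, size=40)  # synthetic labels: 0 = good, 1 = defective
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(clf.predict(X[:5]))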

    A Fusion Biopsy Framework for Prostate Cancer Based on Deformable Superellipses and nnU-Net

    In prostate cancer, fusion biopsy, which couples magnetic resonance imaging (MRI) with transrectal ultrasound (TRUS), provides the basis for targeted biopsy by allowing the comparison of information coming from both imaging modalities at the same time. Compared with the standard clinical procedure, it provides a less invasive option for patients and increases the likelihood of sampling cancerous tissue regions for the subsequent pathology analyses. As a prerequisite to image fusion, segmentation must be achieved in both the MRI and TRUS domains. The automatic contour delineation of the prostate gland from TRUS images is a challenging task due to several factors, including unclear boundaries, speckle noise and the variety of prostate anatomical shapes. Automatic methodologies, such as those based on deep learning, require a huge quantity of training data to achieve satisfactory results. In this paper, the authors propose a novel optimization formulation to find the best superellipse, a deformable model that can accurately represent the prostate shape. The advantage of the proposed approach is that it does not require extensive annotations and can be used independently of the specific transducer employed during prostate biopsies. Moreover, to show the clinical applicability of the method, this study also presents a module for the automatic segmentation of the prostate gland from MRI, exploiting the nnU-Net framework. Lastly, segmented contours from both imaging domains are fused with a customized registration algorithm in order to create a tool that can help the physician perform a targeted prostate biopsy by interacting with a graphical user interface.
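
    For intuition, a superellipse is the Lamé curve |x/a|^n + |y/b|^n = 1, whose squareness parameter n lets a single smooth model range from ellipse-like to rectangle-like outlines. The sketch below fits (a, b, n) to synthetic boundary points by least squares; the paper's full deformable model, which also handles pose, is not reproduced here.

    # Fit a superellipse |x/a|^n + |y/b|^n = 1 to boundary points by least
    # squares (illustrative sketch of the fitting idea on synthetic data).
    import numpy as np
    from scipy.optimize import least_squares

    def residuals(params, pts):
        a, b, n = params
        x, y = pts[:, 0], pts[:, 1]
        # Implicit-equation residual: zero when a point lies on the curve.
        return np.abs(x / a) ** n + np.abs(y / b) ** n - 1.0

    # Synthetic boundary sampled from a ground-truth superellipse (3, 2, 2.5).
    t = np.linspace(0, 2 * np.pi, 200)
    a0, b0, n0 = 3.0, 2.0, 2.5
    x = a0 * np.sign(np.cos(t)) * np.abs(np.cos(t)) ** (2 / n0)
    y = b0 * np.sign(np.sin(t)) * np.abs(np.sin(t)) ** (2 / n0)
    pts = np.c_[x, y] + 0.01 * np.random.default_rng(0).normal(size=(200, 2))

    fit = least_squares(residuals, x0=[1.0, 1.0, 2.0], args=(pts,),
                        bounds=([0.1, 0.1, 0.5], [10.0, 10.0, 10.0]))
    print(fit.x)  # recovered (a, b, n), close to (3.0, 2.0, 2.5)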