
    Artificial intelligence in musculoskeletal ultrasound imaging

    Ultrasonography (US) is noninvasive and offers real-time, low-cost, and portable imaging that facilitates the rapid and dynamic assessment of musculoskeletal components. Significant technological improvements have contributed to the increasing adoption of US for musculoskeletal assessments, as artificial intelligence (AI)-based computer-aided detection and computer-aided diagnosis are being utilized to improve the quality, efficiency, and cost of US imaging. This review provides an overview of classical machine learning techniques and modern deep learning approaches for musculoskeletal US, with a focus on the key categories of detection and diagnosis of musculoskeletal disorders, predictive analysis with classification and regression, and automated image segmentation. Moreover, we outline challenges and a range of opportunities for AI in musculoskeletal US practice.

    Generic Feature Learning for Wireless Capsule Endoscopy Analysis

    The interpretation and analysis of wireless capsule endoscopy (WCE) recordings is a complex task which requires sophisticated computer-aided decision (CAD) systems to help physicians with video screening and, finally, with the diagnosis. Most CAD systems used in capsule endoscopy share a common system design, but use very different image and video representations. As a result, each time a new clinical application of WCE appears, a new CAD system has to be designed from scratch, which makes the design of new CAD systems very time consuming. Therefore, in this paper we introduce a system for small intestine motility characterization, based on Deep Convolutional Neural Networks, which circumvents the laborious step of designing specific features for individual motility events. Experimental results show the superiority of the learned features over alternative classifiers constructed using state-of-the-art handcrafted features. In particular, the system reaches a mean classification accuracy of 96% for six intestinal motility events, outperforming the other classifiers by a large margin (a 14% relative performance increase).
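
    Since the approach is described here only at the level of a deep convolutional network replacing handcrafted descriptors, the sketch below is a generic illustration rather than the authors' architecture: a small PyTorch classifier over grayscale WCE frames with six motility classes. The layer sizes, the 128x128 input resolution, and the dummy batch are illustrative assumptions.

```python
# Minimal sketch, not the paper's architecture: a small CNN that maps a
# grayscale WCE frame to one of six motility classes, learning features
# directly from pixels instead of handcrafted descriptors.
import torch
import torch.nn as nn

class MotilityCNN(nn.Module):
    def __init__(self, num_classes: int = 6):
        super().__init__()
        # Three conv/pool stages shrink a 128x128 input to 64 feature maps of 16x16.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = MotilityCNN()
    frames = torch.randn(8, 1, 128, 128)  # dummy batch of grayscale WCE frames
    logits = model(frames)                # shape (8, 6): one score per motility class
    print(logits.argmax(dim=1))
```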

    An image processing decisional system for the Achilles tendon using ultrasound images

    The Achilles Tendon (AT) is described as the largest and strongest tendon in the human body. Like other structures in the human body, the AT is associated with medical problems that include Achilles rupture and Achilles tendonitis. AT rupture affects about 1 in 5,000 people worldwide and is seen in about 10 percent of patients involved in sports activities. Today, ultrasound imaging plays a crucial role in medical imaging technologies: it is portable, non-invasive, free of radiation risks, relatively inexpensive, and capable of producing real-time images. There is a lack of research into the early detection and diagnosis of AT abnormalities from ultrasound images, which motivated the researcher to build a complete system that can crop, denoise, enhance, extract important features from, and classify AT ultrasound images. The proposed application focuses on developing an automated system platform. Generally, systems for analysing ultrasound images involve four stages: pre-processing, segmentation, feature extraction and classification. To produce the best results for classifying the AT, the SRAD, CLAHE, GLCM, GLRLM and KPCA algorithms were used. This was followed by the use of different standard and ensemble classifiers, trained and tested on the dataset samples and reduced features, to categorize the AT images as normal or abnormal. Various classifiers were adopted in this research to improve the classification accuracy. To build the image decisional system, 57 AT ultrasound images were collected. These images were used in three different approaches in which the Region of Interest (ROI) position and size are located differently. To avoid misleading metrics caused by class imbalance, several evaluation metrics were adopted to compare the classifiers and to estimate the overall performance of the decisional system. A high accuracy of 83% was achieved during the classification process, and most of the ensemble classifiers performed better than the standard classifiers in all three ROI approaches. The research aim was achieved by building an image processing decisional system for AT ultrasound images that can distinguish between normal and abnormal AT ultrasound images. In this decisional system, AT images were improved and enhanced to achieve a high classification accuracy without any user intervention.
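
    As a rough illustration of the texture-feature and ensemble-classification stages described above, the sketch below extracts a small GLCM descriptor with scikit-image and trains a random forest on it. The SRAD/CLAHE preprocessing, GLRLM features and KPCA reduction are omitted, and the ROI size, GLCM properties, classifier settings and synthetic labels are illustrative assumptions, not the thesis configuration.

```python
# Minimal sketch of GLCM feature extraction plus an ensemble classifier,
# under assumed settings; the full pipeline (SRAD, CLAHE, GLRLM, KPCA) is omitted.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(roi: np.ndarray) -> np.ndarray:
    """Small GLCM descriptor (contrast, homogeneity, energy, correlation),
    averaged over four directions, for an 8-bit grayscale ROI."""
    glcm = graycomatrix(roi, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.array([graycoprops(glcm, p).mean() for p in props])

# Dummy data standing in for cropped AT ROIs (normal = 0, abnormal = 1).
rng = np.random.default_rng(0)
rois = [rng.integers(0, 256, size=(64, 64), dtype=np.uint8) for _ in range(40)]
labels = rng.integers(0, 2, size=40)

X = np.stack([glcm_features(r) for r in rois])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X[:5]))
```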

    Image texture analysis of transvaginal ultrasound in monitoring ovarian cancer

    Ovarian cancer has the highest mortality rate of all gynaecologic cancers and is the fifth most common cancer in UK women. It has been dubbed “the silent killer” because of its non-specific symptoms. Amongst the various imaging modalities, ultrasound is considered the main modality for ovarian cancer triage. As with other imaging modalities, the main issue is that the interpretation of the images is subjective and observer dependent. In order to overcome this problem, texture analysis was considered for this study. Advances in medical imaging, computer technology and image processing have collectively ramped up the interest of many researchers in texture analysis. While a number of successful uses of texture analysis have been reported, to my knowledge it had, until recently, yet to be applied to characterise an ovarian lesion from a B-mode image. Applying texture analysis in the medical field would not replace the conventional method of interpreting images but is simply intended to aid clinicians in making their diagnoses. Five categories of textural features were considered in this study: grey-level co-occurrence matrix (GLCM), run length matrix (RLM), gradient, auto-regressive (AR) and wavelet. Prior to image classification, the robustness of each textural feature, that is, how well it tolerates variation arising from the image acquisition and texture extraction process, was first evaluated. This includes random variation caused by the ultrasound system and the operator during image acquisition, as well as the influence of region of interest (ROI) size, ROI depth, scanner gain setting and the 'calliper line'. Evaluation of scanning reliability was carried out using a tissue-equivalent phantom as well as in a clinical environment. Additionally, the reliability of the ROI delineation procedure for clinical images was also evaluated; an image enhancement technique and a semi-automatic segmentation tool were employed in order to improve this procedure. The results indicated that two of the five textural feature categories, GLCM and wavelet, were robust, and these two were therefore used for image classification. To extract textural features from the clinical images, two ROI delineation approaches were introduced: (i) the textural features were extracted from the whole area of the tissue of interest, and (ii) the anechoic area within the normal and malignant tissues was excluded from feature extraction. The results revealed that the second approach outperformed the first: there is a significant difference in the GLCM and wavelet features between the three groups (normal tissue, cysts and malignant tissue). Receiver operating characteristic (ROC) curve analysis was carried out to determine the discriminatory ability of the textural features, which was found to be satisfactory. The principal conclusion was that GLCM and wavelet features can potentially be used as computer-aided diagnosis (CAD) tools to help clinicians in the diagnosis of ovarian cancer.
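
    The wavelet branch of the feature set and the ROC evaluation mentioned above can be sketched as follows, using PyWavelets for sub-band energies and scikit-learn for the discrimination score. The wavelet family ('db4'), decomposition level, energy statistic, logistic-regression scorer and synthetic ROIs are all illustrative assumptions; the study's exact wavelet features and classifier are not reproduced here.

```python
# Minimal sketch, under assumed settings: wavelet sub-band energy features
# from B-mode ROIs, scored with a ROC AUC.
import numpy as np
import pywt
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def wavelet_energy_features(roi: np.ndarray, wavelet: str = "db4", level: int = 2) -> np.ndarray:
    """Energy of each detail sub-band from a 2-D discrete wavelet decomposition."""
    coeffs = pywt.wavedec2(roi.astype(float), wavelet=wavelet, level=level)
    feats = []
    for detail in coeffs[1:]:        # skip the approximation band
        for band in detail:          # (horizontal, vertical, diagonal) details
            feats.append(np.mean(band ** 2))
    return np.array(feats)

# Dummy ROIs standing in for normal (0) and malignant (1) B-mode regions.
rng = np.random.default_rng(1)
rois = rng.random((60, 64, 64))
labels = rng.integers(0, 2, size=60)

X = np.stack([wavelet_energy_features(r) for r in rois])
scores = LogisticRegression(max_iter=1000).fit(X, labels).predict_proba(X)[:, 1]
print("AUC:", roc_auc_score(labels, scores))
```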

    Dual-modality fibre optic probe for simultaneous ablation and ultrasound imaging

    All-optical ultrasound (OpUS) is an emerging high-resolution imaging paradigm utilising optical fibres. This allows both therapeutic and imaging modalities to be integrated into devices with dimensions small enough for minimally invasive surgical applications. Here we report a dual-modality fibre optic probe that synchronously performs laser ablation and real-time all-optical ultrasound imaging for ablation monitoring. The device comprises three optical fibres: one each for the transmission and reception of ultrasound, and one for the delivery of laser light for ablation. The total device diameter is < 1 mm. Ablation monitoring was carried out on porcine liver and heart tissue ex vivo, with the ablation depth tracked using all-optical M-mode ultrasound imaging and the lesion boundary identified using a segmentation algorithm. Ablation depths of up to 2.1 mm were visualised, with good correspondence between the ultrasound depth measurements and visual inspection of the lesions using stereomicroscopy. This work demonstrates the potential for OpUS probes to guide minimally invasive ablation procedures in real time.
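
    As a minimal sketch of how ablation depth might be tracked from an all-optical M-mode image (depth samples by time), the example below picks, for each A-line, the deepest sample above a fixed fraction of that line's peak amplitude. The thresholding rule, the axial pixel size and the synthetic data are illustrative assumptions and do not reproduce the segmentation algorithm used in this work.

```python
# Minimal sketch, under assumed parameters: per-column boundary tracking in an
# M-mode image (rows = depth samples, columns = time).
import numpy as np

def track_ablation_depth(m_mode: np.ndarray, pixel_size_mm: float = 0.02,
                         threshold: float = 0.5) -> np.ndarray:
    """For each A-line (column), return the depth in mm of the deepest sample
    whose amplitude exceeds a fixed fraction of that column's maximum."""
    depths = np.zeros(m_mode.shape[1])
    for t in range(m_mode.shape[1]):
        column = m_mode[:, t]
        above = np.nonzero(column > threshold * column.max())[0]
        depths[t] = above[-1] * pixel_size_mm if above.size else 0.0
    return depths

# Dummy M-mode data: 512 depth samples x 200 time points with a slowly deepening echo.
rng = np.random.default_rng(2)
m_mode = rng.random((512, 200)) * 0.2
for t in range(200):
    m_mode[50 + t // 2, t] = 1.0   # synthetic lesion boundary moving deeper over time
print(track_ablation_depth(m_mode)[:5])
```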

    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important emerging technology that has been used intensively in the last few decades for disease diagnosis and monitoring as well as for the assessment of treatment effectiveness. Medical images provide a very large amount of valuable information, too much to be exploited by radiologists and physicians alone. Therefore, the design of computer-aided diagnostic (CAD) systems, which can be used as assistive tools for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for lung cancer patients; lung cancer remains the leading cause of cancer-related death in the USA, with approximately 224,210 new cases and 159,260 related deaths in 2014. The process begins with the detection of lung cancer through the diagnosis of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of these lung cancer nodules is complex: nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment-related morbidity and mortality. Finding ways to accurately detect, at an early stage, and hence prevent lung injury would therefore have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissues may be affected and suffer a decrease in functionality as a side effect of radiation therapy. This hypothesis has been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases can be used to estimate elasticity, ventilation and texture features that provide discriminatory descriptors for the early detection of radiation-induced lung injury. The proposed methodologies lead to novel indexes for distinguishing normal/healthy and injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed, comprising three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. The dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today’s clinical applications. Secondly, the methodologies, challenges and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for the potential existence of lung injuries stemming from the radiation therapy. After the segmentation of the VOI, a lung registration framework is introduced to perform the crucial step of ensuring co-alignment of the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heart beats, and differences in scanning parameters, so that the functionality features of the lung fields can be accurately extracted.
The developed registration framework also helps in the evaluation and gated control of the radiotherapy, through motion estimation analysis before and after the therapy dose. Finally, the detection of radiation-induced lung injury is introduced, which combines the previous two medical image processing and analysis steps with the feature estimation and classification step. This framework estimates and combines both texture and functionality features. The texture features are modeled using a novel 7th-order Markov Gibbs random field (MGRF) model that accurately models the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality features are calculated from the deformation fields obtained from the 4D-CT lung registration, which map lung voxels between successive CT scans in the respiratory cycle. These functionality features describe the ventilation (air flow rate) of the lung tissues using the Jacobian of the deformation field, and the tissues’ elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect the injured parts of the lung at an early stage and enable earlier intervention.
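
The functionality features above reduce to simple operations on the deformation field: the Jacobian determinant of the voxel-wise deformation as a ventilation proxy, and strain components derived from the displacement gradient. The sketch below computes both with NumPy on a synthetic displacement field; the grid size, voxel spacing and the small-deformation (symmetric-gradient) strain formulation are illustrative assumptions rather than the dissertation's exact implementation.

```python
# Minimal sketch, under assumed spacing and a synthetic field: ventilation proxy
# (Jacobian determinant) and strain tensor from a voxel-wise displacement field.
import numpy as np

def ventilation_and_strain(displacement: np.ndarray, spacing=(1.0, 1.0, 1.0)):
    """displacement: array of shape (3, X, Y, Z) giving the displacement vector at
    each voxel. Returns the Jacobian determinant of the deformation (identity plus
    displacement gradient) and the symmetric small-deformation strain tensor."""
    # grad[i, j] = d(u_i)/d(x_j), evaluated voxel-wise; shape (3, 3, X, Y, Z).
    grad = np.stack([np.stack(np.gradient(displacement[i], *spacing), axis=0)
                     for i in range(3)], axis=0)
    # Deformation gradient F = I + du/dx.
    identity = np.eye(3).reshape(3, 3, 1, 1, 1)
    F = identity + grad
    # Move the 3x3 axes last so np.linalg.det broadcasts over voxels.
    jac = np.linalg.det(np.moveaxis(F, (0, 1), (-2, -1)))
    # Infinitesimal strain: 0.5 * (grad + grad^T).
    strain = 0.5 * (grad + np.swapaxes(grad, 0, 1))
    return jac, strain

# Dummy displacement field on a 32^3 grid, standing in for one respiratory phase pair.
rng = np.random.default_rng(3)
u = rng.normal(scale=0.01, size=(3, 32, 32, 32))
jac, strain = ventilation_and_strain(u)
print(jac.mean(), strain.shape)  # Jacobian near 1 for a near-identity deformation; (3, 3, 32, 32, 32)
```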