61 research outputs found

    A Review on Skin Disease Classification and Detection Using Deep Learning Techniques

    Skin cancer ranks among the most dangerous cancers, and its most lethal form is melanoma. Melanoma is caused by genetic faults or mutations in skin cells arising from unrepaired Deoxyribonucleic Acid (DNA) damage. It is essential to detect skin cancer in its earliest phase, when it is most curable; left untreated, it typically spreads to other regions of the body. Owing to the disease's increasing frequency, high mortality rate, and the prohibitively high cost of medical treatment, early diagnosis of skin cancer signs is crucial. Because of how hazardous these disorders are, scholars have developed a number of early-detection techniques for melanoma. Lesion characteristics such as symmetry, colour, size, and shape are often utilised to detect skin cancer and to distinguish benign lesions from melanoma. This study provides an in-depth investigation of deep learning techniques for the early detection of melanoma, discusses traditional feature-extraction-based machine learning approaches for the segmentation and classification of skin lesions, and presents comparison-oriented research demonstrating the significance of various deep learning-based segmentation and classification approaches.
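    The hand-crafted lesion features mentioned above (symmetry, colour, size, shape) are simple enough to sketch. As a minimal, hypothetical illustration (the function name and the toy masks below are ours, not taken from the review), an asymmetry score for a binary lesion mask can be computed by comparing the mask with its left-right mirror image:

```python
def asymmetry_score(mask):
    """Fraction of lesion pixels that do not overlap the lesion's
    left-right mirror image: 0.0 = perfectly symmetric, 1.0 = no overlap."""
    flipped = [row[::-1] for row in mask]
    lesion = sum(v for row in mask for v in row)
    overlap = sum(a & b for ra, rb in zip(mask, flipped) for a, b in zip(ra, rb))
    return 1.0 - overlap / lesion

# A symmetric blob vs. a one-sided blob on a 4x4 grid (1 = lesion pixel).
symmetric = [[0, 1, 1, 0],
             [1, 1, 1, 1],
             [1, 1, 1, 1],
             [0, 1, 1, 0]]
skewed    = [[1, 1, 0, 0],
             [1, 1, 0, 0],
             [1, 0, 0, 0],
             [1, 0, 0, 0]]
```

    In a real pipeline this score would be one entry in a feature vector feeding a classifier; the deep learning approaches surveyed in the review learn such features implicitly instead.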

    Computer-Aided Assessment of Tuberculosis with Radiological Imaging: From rule-based methods to Deep Learning

    Tuberculosis (TB) is an infectious disease caused by Mycobacterium tuberculosis (Mtb.) that produces pulmonary damage and, owing to its airborne nature, spreads quickly; according to the World Health Organization (WHO), in 2021 it caused 1.2 million deaths and 9.9 million new cases. Traditionally, TB has been considered a binary disease (latent/active) because of the limited specificity of the traditional diagnostic tests. Such a simple model causes difficulties in the longitudinal assessment of pulmonary affectation needed for the development of novel drugs and for controlling the spread of the disease. Fortunately, X-Ray Computed Tomography (CT) images capture specific manifestations of TB that are undetectable using those tests. In conventional workflows, expert radiologists inspect the CT images; however, this procedure is unfeasible for the thousands of volume images, from both the different TB animal models and humans, required for a suitable (pre-)clinical trial. Automating the different image analysis processes is therefore a must for quantifying TB, and it is also advisable to measure the uncertainty associated with this process and to model causal relationships between the specific mechanisms that characterize each animal model and its level of damage. Thus, in this thesis, we introduce a set of novel methods based on state-of-the-art Artificial Intelligence (AI) and Computer Vision (CV). Initially, we present an algorithm for Pathological Lung Segmentation (PLS) employing an unsupervised rule-based model, traditionally considered a necessary step before biomarker extraction. This procedure allows robust segmentation in an Mtb. infection model (Dice Similarity Coefficient, DSC, 94% ± 4%; Hausdorff Distance, HD, 8.64 mm ± 7.36 mm) of damaged lungs with lesions attached to the parenchyma and affected by respiratory-movement artefacts. Next, a Gaussian Mixture Model fitted with an Expectation-Maximization (EM) algorithm is employed to automatically quantify the burden of Mtb. using biomarkers extracted from the segmented CT images; this approach achieves a strong correlation (R² ≈ 0.8) between our automatic method and manual extraction. Chapter 3 then introduces a model to automate the identification of TB lesions and the characterization of disease progression. To this aim, the method employs the Statistical Region Merging algorithm to detect lesions, which are subsequently characterized by texture features that feed a Random Forest (RF) estimator; the procedure yields a simple but powerful model able to classify abnormal tissue. The latest works base their methodology on Deep Learning (DL). Chapter 4 extends the classification of TB lesions: we introduce a computational model to infer the TB manifestations present in each lung lobe of CT scans by employing the associated radiologist reports as ground truth, instead of the classical manually delimited segmentation masks. The model adapts a three-dimensional architecture, V-Net, to a multitask classification context in which the loss function is weighted by homoscedastic uncertainty, and it employs Self-Normalizing Neural Networks (SNNs) for regularization. Our results are promising, with a Root Mean Square Error of 1.14 in the number of nodules and F1-scores above 0.85 for the most prevalent TB lesions (i.e., conglomerations, cavitations, consolidations, tree-in-bud) when considering the whole lung. In Chapter 5, we present a DL model capable of extracting disentangled information from images of different animal models, as well as information on the mechanisms that generate the CT volumes. The method provides the segmentation mask of axial slices from three animal models of different species with a single trained architecture; it also infers the level of TB damage and generates counterfactual images, offering an alternative that promotes generalization and explainable AI models. To sum up, the thesis presents a collection of valuable tools to automate the quantification of pathological lungs and extends the methodology to provide more explainable results, which are vital for drug-development purposes. Chapter 6 elaborates on these conclusions.
    International Mention in the doctoral degree. Doctoral Programme in Multimedia and Communications, Universidad Carlos III de Madrid and Universidad Rey Juan Carlos. Chair: María Jesús Ledesma Carbayo. Secretary: David Expósito Singh. Member: Clarisa Sánchez Gutiérre
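    The two segmentation metrics reported above, the Dice Similarity Coefficient and the Hausdorff Distance, can be sketched on small point sets. This pure-Python illustration (function names and toy masks are ours, not the thesis code) shows both definitions side by side:

```python
import math

def dice(a, b):
    """Dice Similarity Coefficient between two sets of voxel coordinates:
    twice the overlap divided by the total size of both sets."""
    return 2.0 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff Distance between two point sets: the largest
    distance from any point in one set to its nearest point in the other."""
    def directed(p, q):
        return max(min(math.dist(u, v) for v in q) for u in p)
    return max(directed(a, b), directed(b, a))

# Toy 2-D "masks": an automatic segmentation vs. a manual reference.
auto   = {(0, 0), (0, 1), (1, 0), (1, 1)}
manual = {(0, 0), (0, 1), (1, 0), (2, 0)}
```

    In the thesis these metrics are computed on 3-D voxel masks in millimetres; the definitions are identical.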

    The role of deep learning in structural and functional lung imaging

    Background: Structural and functional lung imaging are critical components of pulmonary patient care. Image analysis methods, such as image segmentation, applied to structural and functional lung images, have significant benefits for patients with lung pathologies, including the computation of clinical biomarkers. Traditionally, machine learning (ML) approaches, such as clustering, and computational modelling techniques, such as CT-ventilation imaging, have been used for segmentation and synthesis, respectively. Deep learning (DL) has shown promise in medical image analysis tasks, often outperforming alternative methods. Purpose: To address the hypothesis that DL can outperform conventional ML and classical image analysis methods for the segmentation and synthesis of structural and functional lung imaging via: i. development and comparison of 3D convolutional neural networks (CNNs) for the segmentation of ventilated lung using hyperpolarised (HP) gas MRI. ii. development of a generalisable, multi-centre CNN for segmentation of the lung cavity using 1H-MRI. iii. the proposal of a framework for estimating the lung cavity in the spatial domain of HP gas MRI. iv. development of a workflow to synthesise HP gas MRI from multi-inflation, non-contrast CT. v. the proposal of a framework for the synthesis of fully-volumetric HP gas MRI ventilation from a large, diverse dataset of non-contrast, multi-inflation 1H-MRI scans. Methods: i. A 3D CNN-based method for the segmentation of ventilated lung using HP gas MRI was developed, and CNN parameters, such as architecture, loss function and pre-processing, were optimised. ii. A 3D CNN trained on a multi-acquisition dataset and validated on data from external centres was compared with a 2D alternative for the segmentation of the lung cavity using 1H-MRI. iii. A dual-channel, multi-modal segmentation framework was compared to single-channel approaches for estimation of the lung cavity in the domain of HP gas MRI. iv. A hybrid data-driven and model-based approach for the synthesis of HP gas MRI ventilation from CT was compared to approaches utilising DL or computational modelling alone. v. A physics-constrained, multi-channel framework for the synthesis of fully-volumetric ventilation surrogates from 1H-MRI was validated using five-fold cross-validation and an external test dataset. Results: i. The 3D CNN, developed via parameterisation experiments, accurately segmented ventilation scans and outperformed conventional ML methods. ii. The 3D CNN produced more accurate segmentations than its 2D analogues for the segmentation of the lung cavity, exhibiting minimal variation in performance between centres, vendors and acquisitions. iii. Dual-channel, multi-modal approaches generated significant improvements over methods that use a single imaging modality for the estimation of the lung cavity. iv. The hybrid approach produced synthetic ventilation scans which correlated with HP gas MRI. v. The physics-constrained, 3D multi-channel synthesis framework outperformed approaches which did not integrate computational modelling, demonstrating generalisability to external data. Conclusion: DL approaches demonstrate the ability to segment and synthesise lung MRI across a range of modalities and pulmonary pathologies. These methods outperform computational modelling and classical ML approaches, reducing the time required to adequately edit segmentations and improving the modelling of synthetic ventilation, which may facilitate the clinical translation of DL in structural and functional lung imaging.
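    The five-fold cross-validation protocol used to validate the synthesis framework in (v) is standard; as a minimal sketch of the data-splitting step only (the function name and pure-Python implementation are illustrative, not the thesis code, and the model training it would wrap is omitted):

```python
import random

def k_fold_splits(n_items, k=5, seed=0):
    """Yield (train, validation) index lists for k-fold cross-validation:
    after shuffling, every item lands in exactly one validation fold."""
    idx = list(range(n_items))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, folds[i]

# Five disjoint validation folds over a toy cohort of 10 scans.
splits = list(k_fold_splits(10, k=5))
```

    An external test set, as used in the thesis, is held out entirely and never enters these folds.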

    Deep Learning in Medical Image Analysis

    The accelerating power of deep learning in diagnosing diseases will empower physicians and speed up decision making in clinical environments. Applications of modern medical instruments and the digitalization of medical care have generated enormous amounts of medical images in recent years. In this big data arena, new deep learning methods and computational models for efficient data processing, analysis, and modeling of the generated data are crucially important for clinical applications and for understanding the underlying biological processes. This book presents and highlights novel algorithms, architectures, techniques, and applications of deep learning for medical image analysis.

    WiFi-Based Human Activity Recognition Using Attention-Based BiLSTM

    Recently, significant efforts have been made to explore human activity recognition (HAR) techniques that use information gathered by existing indoor wireless infrastructures through WiFi signals, without requiring the monitored subject to carry a dedicated device. The key intuition is that different activities introduce different multi-paths in WiFi signals and generate different patterns in the time series of channel state information (CSI). In this paper, we propose and evaluate a full pipeline for a CSI-based human activity recognition framework covering 12 activities in three different spatial environments, using two deep learning models: ABiLSTM and CNN-ABiLSTM. Evaluation experiments demonstrate that the proposed models outperform state-of-the-art models and, with some caveats, can be applied to other environments with different configurations. The proposed ABiLSTM model achieves an overall accuracy of 94.03%, 91.96%, and 92.59% across the three target environments, while the proposed CNN-ABiLSTM model reaches an accuracy of 98.54%, 94.25%, and 95.09% across those same environments.
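    The attention mechanism in an ABiLSTM pools the BiLSTM's per-timestep hidden states into one context vector via softmax-normalised scores. A minimal sketch of that pooling step (the scoring here is a plain dot product with a learned vector; the names and toy values are illustrative, not the paper's implementation):

```python
import math

def attention_pool(hidden_states, weights):
    """Collapse a sequence of hidden-state vectors into one context vector
    using softmax-normalised scalar scores (dot product with `weights`)."""
    scores = [sum(h * w for h, w in zip(state, weights)) for state in hidden_states]
    m = max(scores)                              # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    alphas = [e / total for e in exps]           # attention weights, sum to 1
    dim = len(hidden_states[0])
    context = [sum(a * state[d] for a, state in zip(alphas, hidden_states))
               for d in range(dim)]
    return alphas, context

# Three 2-D hidden states; the scoring vector favours the second state.
states = [[1.0, 0.0], [0.0, 5.0], [1.0, 1.0]]
alphas, ctx = attention_pool(states, weights=[0.0, 1.0])
```

    In the full model the context vector would feed a dense softmax layer over the 12 activity classes, and `weights` would be learned by backpropagation.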

    Development of deep learning methods for head and neck cancer detection in hyperspectral imaging and digital pathology for surgical guidance

    Surgeons performing routine cancer resections rely on palpation and visual inspection, along with time-consuming microscopic tissue analysis, to ensure complete removal of cancer. Despite this, inadequate surgical cancer margins are reported for up to 10-20% of head and neck squamous cell carcinoma (SCC) operations. There exists a need for surgical guidance with optical imaging to ensure complete cancer resection in the operating room. The objective of this dissertation is to evaluate hyperspectral imaging (HSI) as a non-contact, label-free optical imaging modality that provides intraoperative diagnostic information. For comparison of different optical methods, autofluorescence, RGB composite images synthesized from HSI, and two fluorescent dyes were also acquired and investigated for head and neck cancer detection. A novel and comprehensive dataset was obtained of 585 excised tissue specimens from 204 patients undergoing routine head and neck cancer surgeries. The first aim was to use SCC tissue specimens to determine the potential of HSI for surgical guidance in the challenging task of head and neck SCC detection, under the hypothesis that HSI could reduce analysis time and provide quantitative cancer predictions. State-of-the-art deep learning algorithms were developed for SCC detection in 102 patients and compared to the other optical methods. HSI detected SCC with a median AUC score of 85%, and several anatomical locations demonstrated good SCC detection, such as the larynx, oropharynx, hypopharynx, and nasal cavity. To understand the ability of HSI to detect SCC, the most important spectral features were calculated and correlated with known cancer physiology signals, notably oxygenated and deoxygenated hemoglobin. The second aim was to evaluate HSI for tumor detection in thyroid and salivary glands; for comparison, RGB images were synthesized using the spectral response curves of the human eye. Using deep learning, HSI detected thyroid tumors with an 86% average AUC score, which outperformed fluorescent dyes and autofluorescence, while HSI-synthesized RGB imagery performed even better, with a 90% AUC score. The last aim was to develop deep learning algorithms for head and neck cancer detection in hundreds of digitized histology slides. Slides containing SCC or thyroid carcinoma can be distinguished from normal slides with 94% and 99% AUC scores, respectively, and SCC and thyroid carcinoma can be localized within whole-slide images with 92% and 95% AUC scores, respectively. In conclusion, the outcomes of this thesis work demonstrate that HSI and deep learning methods could aid surgeons and pathologists in detecting head and neck cancers.
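    The AUC scores quoted throughout can be read as the probability that a randomly chosen positive sample outscores a randomly chosen negative one (the rank, or Mann-Whitney, formulation). A minimal sketch with illustrative toy labels and scores (not the study's data):

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank formulation: the fraction of
    (positive, negative) pairs in which the positive sample scores higher,
    counting ties as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 3 cancer specimens, 2 normal specimens, model scores in [0, 1].
labels = [1, 1, 1, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.2]
```

    Here one of the six positive-negative pairs is ranked incorrectly (0.4 vs. 0.7), giving an AUC of 5/6.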

    Tracing back the source of contamination

    From the time a contaminant is detected in an observation well, the question of where and when the contaminant was introduced into the aquifer needs an answer. Many techniques have been proposed to answer this question, but virtually all of them assume that the aquifer and its dynamics are perfectly known. This work discusses a new approach for the simultaneous identification of the contaminant source location and the spatial variability of hydraulic conductivity in an aquifer. The approach has been validated on synthetic and laboratory experiments and is in the process of being validated on a real aquifer.
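    To make the inverse problem concrete, here is a deliberately simplified 1-D sketch: a grid search for the release location of an instantaneous point source under a known advection-dispersion model. It is illustrative only; the velocity, dispersion and mass are assumed known here, whereas the work above explicitly treats the aquifer properties as additional unknowns to be identified jointly:

```python
import math

def conc(x, t, x0, v=1.0, D=0.5, M=1.0):
    """Analytic 1-D advection-dispersion solution for an instantaneous
    point release of mass M at location x0 at time 0 (velocity v,
    dispersion coefficient D), evaluated at position x and time t > 0."""
    return (M / math.sqrt(4 * math.pi * D * t)
            * math.exp(-(x - x0 - v * t) ** 2 / (4 * D * t)))

def locate_source(observations, candidates):
    """Grid search: return the candidate release location x0 whose
    predicted concentrations best fit the observed (x, t, c) triples
    in the least-squares sense."""
    def misfit(x0):
        return sum((conc(x, t, x0) - c) ** 2 for x, t, c in observations)
    return min(candidates, key=misfit)

# Synthetic, noise-free observations from a true source at x0 = 2.0,
# sampled at three downstream wells and two times.
true_x0 = 2.0
obs = [(x, t, conc(x, t, true_x0)) for x in (4.0, 6.0, 8.0) for t in (2.0, 4.0)]
candidates = [i * 0.5 for i in range(11)]   # 0.0, 0.5, ..., 5.0
best = locate_source(obs, candidates)
```

    Real formulations replace the grid search with stochastic inversion so that conductivity fields and release times can be estified alongside the location, with measurement noise accounted for.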

    HARDI-ZOOMit protocol improves specificity to microstructural changes in presymptomatic myelopathy

    Diffusion magnetic resonance imaging (dMRI) has proved promising in patients with non-myelopathic degenerative cervical cord compression (NMDCCC), i.e., without clinically manifested myelopathy. The aim of this study is to present a fast multi-shell HARDI-ZOOMit dMRI protocol and to validate its usability for detecting microstructural myelopathy in NMDCCC patients. In 7 young healthy volunteers, 13 age-comparable healthy controls, 18 patients with mild NMDCCC and 15 patients with severe NMDCCC, the protocol provided a higher signal-to-noise ratio, enhanced visualization of white/gray matter structures in microstructural maps, improved dMRI metric reproducibility, preserved sensitivity (SE = 87.88%) and increased specificity (SP = 92.31%) of control-patient group differences when compared to the DTI-RESOLVE protocol (SE = 87.88%, SP = 76.92%). Of the 56 tested microstructural parameters, HARDI-ZOOMit yielded significant patient-control differences in 19 parameters, whereas in DTI-RESOLVE data, differences were observed in 10 parameters, with mostly lower robustness. A novel marker, the white-gray matter diffusivity gradient, demonstrated the highest group separation. The HARDI-ZOOMit protocol detected a larger number of crossing fibers (5-15% of voxels) with physiologically plausible orientations than the DTI-RESOLVE protocol (0-8% of voxels); crossings were detected in the areas of the dorsal horns and the anterior white commissure. The HARDI-ZOOMit protocol proved to be a sensitive and practical tool for clinical quantitative spinal cord imaging.
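    The sensitivity and specificity figures above follow the standard confusion-matrix definitions; a minimal sketch with illustrative counts (not the study's data):

```python
def sensitivity_specificity(labels, predictions):
    """Sensitivity (SE) and specificity (SP) from paired binary
    ground-truth labels and predictions (1 = patient, 0 = control).
    SE = TP / (TP + FN); SP = TN / (TN + FP)."""
    tp = sum(1 for l, p in zip(labels, predictions) if l == 1 and p == 1)
    fn = sum(1 for l, p in zip(labels, predictions) if l == 1 and p == 0)
    tn = sum(1 for l, p in zip(labels, predictions) if l == 0 and p == 0)
    fp = sum(1 for l, p in zip(labels, predictions) if l == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Toy cohort: 8 patients (6 correctly flagged), 5 controls (4 correctly cleared).
labels      = [1] * 8 + [0] * 5
predictions = [1] * 6 + [0] * 2 + [0] * 4 + [1] * 1
se, sp = sensitivity_specificity(labels, predictions)
```

    Note the trade-off the abstract highlights: both protocols share the same sensitivity, so the specificity gain (92.31% vs. 76.92%) is what distinguishes HARDI-ZOOMit.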

    Imaging Sensors and Applications

    In past decades, various sensor technologies have been used in all areas of our lives, improving our quality of life. In particular, imaging sensors have been widely applied in the development of imaging approaches such as optical imaging, ultrasound imaging, X-ray imaging, and nuclear imaging, and have contributed to achieving high sensitivity, miniaturization, and real-time imaging. These advanced image-sensing technologies play an important role not only in the medical field but also in the industrial field. This Special Issue covers broad topics on imaging sensors and applications. Its scope extends to novel imaging sensors and diverse imaging systems, including hardware and software advancements. Additionally, biomedical and nondestructive sensing applications are welcome.