10 research outputs found

    A novel segmentation framework for uveal melanoma in magnetic resonance imaging based on class activation maps

    An automatic and accurate eye tumor segmentation from Magnetic Resonance Imaging (MRI) could contribute greatly to the diagnosis and treatment planning of intra-ocular cancer. For instance, characterizing uveal melanoma (UM) tumors would allow 3D information to be integrated into radiotherapy planning and would also support further radiomics studies. In this work, we tackle two major challenges of UM segmentation: 1) the high heterogeneity of tumors with respect to location, size and appearance, and 2) the difficulty of obtaining ground-truth delineations from medical experts for training. We propose a segmentation pipeline combining two Convolutional Neural Networks (CNNs). First, we take the class activation maps (CAMs) produced by a ResNet classification model and combine them with a dense Conditional Random Field (CRF) and prior information about the sclera and lens from an Active Shape Model (ASM) to automatically extract the tumor location in all MRIs. These intermediate results are then fed into a 2D U-Net CNN with four encoder and decoder layers to produce the tumor segmentation. A clinical data set of 1.5T T1-w and T2-w images of 28 healthy eyes and 24 UM patients is used for validation. We show experimentally, on two different MRI sequences, that our weakly supervised 2D U-Net approach outperforms previous state-of-the-art methods for tumor segmentation and achieves accuracy equivalent to training with manual labels. These results are promising for further large-scale analysis and for introducing 3D ocular tumor information into therapy planning.
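    As a rough illustration of the CAM step described in this abstract, the sketch below extracts a class activation map from a torchvision ResNet-18 classifier; the actual network, preprocessing, and the downstream CRF/ASM refinement used by the authors are not reproduced, and the tumor class index is an assumption.

```python
# Minimal CAM sketch for a ResNet classifier (assumptions: torchvision ResNet-18,
# class index 1 = "tumor"); not the authors' pipeline.
import torch
import torch.nn.functional as F
from torchvision import models

def tumor_cam(image: torch.Tensor, model: models.ResNet, class_idx: int = 1) -> torch.Tensor:
    """Return a coarse localization map for `class_idx` from a ResNet classifier.

    image: (1, 3, H, W) tensor, already normalized for the classifier.
    """
    model.eval()
    with torch.no_grad():
        # Run the convolutional trunk up to the last residual stage.
        x = model.conv1(image)
        x = model.maxpool(model.relu(model.bn1(x)))
        x = model.layer3(model.layer2(model.layer1(x)))
        feats = model.layer4(x)                        # (1, C, h, w) feature maps
        # CAM: weight each feature map by the classifier weight of the target class.
        w = model.fc.weight[class_idx]                 # (C,)
        cam = torch.einsum("c,bchw->bhw", w, feats)    # (1, h, w)
        cam = F.relu(cam)
        cam = F.interpolate(cam[None], size=image.shape[-2:],
                            mode="bilinear", align_corners=False)[0]
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam  # values in [0, 1]; threshold to obtain a rough tumor seed
```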

    U-net and its variants for medical image segmentation: A review of theory and applications

    U-Net is an encoder-decoder convolutional architecture developed primarily for image segmentation tasks. Its utility within the medical imaging community has resulted in extensive adoption of U-Net as the primary tool for segmentation tasks in medical imaging. The success of U-Net is evident in its widespread use across nearly all major imaging modalities, from CT scans and MRI to X-rays and microscopy. Furthermore, while U-Net is largely a segmentation tool, it has also been used in other applications. Given that U-Net's potential is still growing, this narrative literature review examines the numerous developments and breakthroughs in the U-Net architecture and provides observations on recent trends. We also discuss the many innovations that have advanced deep learning and how these tools facilitate U-Net. In addition, we review the different imaging modalities and application areas that have been enhanced by U-Net.
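    For readers unfamiliar with the architecture the review surveys, the following is a minimal PyTorch sketch of a U-Net-style encoder-decoder with skip connections; the depth, channel counts, and segmentation head are illustrative choices rather than the configuration of any specific paper.

```python
# Tiny U-Net-style network: two encoder stages, a bottleneck, and two decoder
# stages with skip connections; sizes are illustrative only.
import torch
import torch.nn as nn

def double_conv(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch: int = 1, n_classes: int = 2):
        super().__init__()
        self.enc1, self.enc2 = double_conv(in_ch, 32), double_conv(32, 64)
        self.bottleneck = double_conv(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = double_conv(128, 64)           # 64 (skip) + 64 (upsampled)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)            # 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                          # skip connection 1
        e2 = self.enc2(self.pool(e1))              # skip connection 2
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                       # per-pixel class logits

# logits = TinyUNet()(torch.randn(1, 1, 128, 128))   # -> (1, 2, 128, 128)
```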

    An automatic framework to create patient-specific eye models from 3D magnetic resonance images for treatment selection in patients with uveal melanoma

    Purpose: The optimal treatment strategy for uveal melanoma (UM) relies on many factors, the most important being tumor size and location. Building on recent developments in high-resolution 3D ocular magnetic resonance imaging (MRI), we developed an automatic image-processing framework to create patient-specific eye models and to subsequently determine the full 3D tumor shape and size automatically.
    Methods and Materials: From 15 patients with UM, 3D inversion-recovery gradient-echo (T1-weighted) and 3D fat-suppressed spin-echo (T2-weighted) images were acquired with a 7T MRI scanner. First, the sclera and cornea were segmented from the T2-weighted image by mesh fitting. The T1- and T2-weighted images were then coregistered. From the registered T1-weighted image, the lens, vitreous body, retinal detachment, and tumor were segmented. Fuzzy C-means clustering was used to differentiate the tumor from retinal detachments. The tumor model was verified and (if needed) edited by an ophthalmic MRI specialist. Subsequently, the prominence and largest basal diameter of the tumor were measured automatically from the verified contours. These results were compared with manual assessments on the original images and with ultrasound measurements to quantify the errors in manual analysis.
    Results: The framework successfully created an eye model fully automatically in 12 cases. In these cases, a Dice similarity coefficient (mean surface distance) of 97.7% ± 0.84% (0.17 ± 0.11 mm) was achieved for the sclera, 96.8% ± 1.05% (0.20 ± 0.06 mm) for the vitreous body, 91.6% ± 4.83% (0.15 ± 0.06 mm) for the lens, and 86.0% ± 7.4% (0.35 ± 0.27 mm) for the tumor. The manual assessments deviated, on average, by 0.39 ± 0.31 mm in prominence and 1.7 ± 1.22 mm in basal diameter from the automatic measurements.
    Conclusions: The described framework combined information from T1- and T2-weighted images to accurately determine tumor boundaries in 3D. The proposed process may have a direct effect on clinical workflow, as it enables an accurate 3D assessment of tumor dimensions, which directly influences therapy selection.
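    The two agreement metrics quoted above, Dice similarity coefficient and mean surface distance, can be computed on binary masks as in the generic NumPy/SciPy sketch below; this is a standard implementation, not the authors' evaluation code, and the voxel spacing argument is a placeholder.

```python
# Generic Dice similarity coefficient and symmetric mean surface distance on
# binary 3D masks; spacing is given in mm per voxel along each axis.
import numpy as np
from scipy import ndimage

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def mean_surface_distance(a: np.ndarray, b: np.ndarray,
                          spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric mean distance (in the units of `spacing`) between mask surfaces."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ ndimage.binary_erosion(a)            # boundary voxels of a
    surf_b = b ^ ndimage.binary_erosion(b)
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    return float(np.concatenate([dist_to_b[surf_a], dist_to_a[surf_b]]).mean())
```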

    Multi-modal and multi-dimensional biomedical image data analysis using deep learning

    There is a growing need for computational methods and tools for automated, objective, and quantitative analysis of biomedical signal and image data to facilitate disease and treatment monitoring, early diagnosis, and scientific discovery. Recent advances in artificial intelligence and machine learning, particularly in deep learning, have revolutionized computer vision and image analysis in many application areas. While deep learning has been very successful on non-biomedical signal, image, and video data, high-stakes biomedical applications present unique challenges, such as varied image modalities, limited training data, and the need for explainability and interpretability, that must be addressed. In this dissertation, we developed novel, explainable, attention-based deep learning frameworks for objective, automated, and quantitative analysis of biomedical signal, image, and video data. The proposed solutions involve multi-scale signal analysis for oral diadochokinesis studies; an ensemble of deep learning cascades using global soft attention mechanisms for segmentation of meningeal vascular networks in confocal microscopy; spatial attention and spatio-temporal data fusion for detection of rare and short-term video events in laryngeal endoscopy videos; and a novel discrete Fourier transform driven class activation map for explainable AI and weakly supervised object localization and segmentation for detailed vocal fold motion analysis using laryngeal endoscopy videos. Experiments on the proposed methods showed robust and promising results toward automated, objective, and quantitative analysis of biomedical data, which is of great value for early diagnosis and effective monitoring of disease progression and treatment.
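    As a generic illustration of the soft attention mechanisms mentioned in this dissertation summary, the snippet below shows a simple spatial attention gate in PyTorch; the actual cascaded ensembles and the DFT-driven class activation map are not reproduced here.

```python
# Generic spatial soft-attention gate: learns a per-pixel mask in [0, 1] and
# reweights the incoming feature maps with it. Illustrative only.
import torch
import torch.nn as nn

class SpatialAttentionGate(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Sequential(
            nn.Conv2d(channels, channels // 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 1, 1), nn.Sigmoid(),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        attn = self.score(feats)          # (B, 1, H, W) soft attention mask
        return feats * attn               # attended features, same shape as input

# x = torch.randn(2, 64, 32, 32); y = SpatialAttentionGate(64)(x)  # -> (2, 64, 32, 32)
```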

    Unveiling the frontiers of deep learning: innovations shaping diverse domains

    Deep learning (DL) enables the development of computer models that are capable of learning, visualizing, optimizing, refining, and predicting data. In recent years, DL has been applied in a range of fields, including audio-visual data processing, agriculture, transportation prediction, natural language processing, biomedicine, disaster management, bioinformatics, drug design, genomics, face recognition, and ecology. To assess the current state of deep learning, it is necessary to investigate its latest developments and applications in these disciplines. However, the literature lacks a survey of deep learning applications across all potential sectors. This paper therefore extensively investigates the potential applications of deep learning across all major fields of study, along with the associated benefits and challenges. As evidenced in the literature, DL is accurate in prediction and analysis, which makes it a powerful computational tool, and it can learn and optimize its own representations without hand-crafted features; at the same time, it requires massive amounts of data for effective analysis and processing. To handle the challenge of compiling huge amounts of medical, scientific, healthcare, and environmental data for use in deep learning, gated architectures such as LSTMs and GRUs can be utilized. For multimodal learning, the network needs shared neurons that serve all tasks along with specialized neurons for particular tasks.
    Comment: 64 pages, 3 figures, 3 tables
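    To make the mention of gated architectures concrete, here is a minimal GRU-based sequence classifier in PyTorch; the feature size, hidden width, and classification head are illustrative and unrelated to any specific application in the review.

```python
# Minimal GRU sequence classifier: reads a (batch, time, features) tensor and
# classifies the whole sequence from the final hidden state. Illustrative sizes.
import torch
import torch.nn as nn

class GRUClassifier(nn.Module):
    def __init__(self, n_features: int = 16, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, h_n = self.gru(x)              # h_n: (1, batch, hidden) final state
        return self.head(h_n[-1])         # (batch, n_classes) logits

# logits = GRUClassifier()(torch.randn(8, 100, 16))   # -> (8, 2)
```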

    Biomedical Image Processing and Classification

    Biomedical image processing is an interdisciplinary field involving a variety of disciplines, e.g., electronics, computer science, physics, mathematics, physiology, and medicine. Several imaging techniques have been developed, providing many approaches to the study of the human body. Biomedical image processing is finding an increasing number of important applications, for example in the study of the internal structure or function of an organ and in the diagnosis or treatment of a disease. When combined with classification methods, it can support the development of computer-aided diagnosis (CAD) systems, which could help medical doctors refine their clinical picture.

    Ocular Structures Segmentation from Multi-sequences MRI Using 3D Unet with Fully Connected CRFs

    The use of 3D Magnetic Resonance Imaging (MRI) has attracted growing attention for the diagnosis and treatment planning of intraocular cancers. Precise segmentation of such tumors is highly important for characterizing tumors and their progression and for defining a treatment plan, so automatic and effective segmentation of tumors and healthy eye anatomy would be of great value. The major challenge, however, lies in the disease variability encountered across different populations, often imaged under different acquisition conditions, and in the high heterogeneity of tumors in location, size and appearance. In this work, we consider retinoblastoma, the most common eye cancer in children. To provide automated segmentation of the relevant structures, a multi-sequence MRI dataset of 72 subjects is introduced, collected across different clinical sites with different magnetic field strengths (3T and 1.5T) and comprising healthy and pathological subjects (children and adults). Using this data, we present a framework to segment both healthy and pathological eye structures. In particular, we use a 3D U-Net CNN with four encoder and decoder layers to produce conditional probabilities for the different eye structures. These are further refined using a Conditional Random Field with Gaussian kernels to maximize label agreement between similar voxels in the multi-sequence MRIs. We show experimentally that our approach achieves state-of-the-art performance for several relevant eye structures and that these results are promising for use in clinical practice.
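    A CRF refinement step of this kind can be approximated with the pydensecrf library, as in the sketch below; the library choice, kernel width, compatibility weight, and number of inference iterations are assumptions rather than the authors' settings.

```python
# Refining 3D CNN softmax probabilities with a fully connected CRF using only
# Gaussian (coordinate-based) pairwise potentials. Parameters are illustrative.
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax, create_pairwise_gaussian

def crf_refine(probs: np.ndarray, sxyz: float = 3.0, iters: int = 5) -> np.ndarray:
    """probs: (n_labels, D, H, W) CNN softmax output; returns (D, H, W) labels."""
    n_labels, *shape = probs.shape
    d = dcrf.DenseCRF(int(np.prod(shape)), n_labels)
    # Unary potentials are the negative log-probabilities of the CNN.
    d.setUnaryEnergy(unary_from_softmax(probs.reshape(n_labels, -1)))
    # Gaussian pairwise potential over voxel coordinates encourages label
    # agreement between nearby voxels.
    feats = create_pairwise_gaussian(sdims=(sxyz,) * 3, shape=shape)
    d.addPairwiseEnergy(feats, compat=3)
    q = np.array(d.inference(iters))                 # (n_labels, n_voxels)
    return q.argmax(axis=0).reshape(shape)
```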

    WOFEX 2021 : 19th annual workshop, Ostrava, 1st September 2021 : proceedings of papers

    The workshop WOFEX 2021 (PhD workshop of the Faculty of Electrical Engineering and Computer Science) was held on September 1st, 2021 at the VSB – Technical University of Ostrava. The workshop offers an opportunity for students to meet and share their research experiences, to discover commonalities in research and studentship, and to foster a collaborative environment for joint problem solving. PhD students are encouraged to attend in order to ensure a broad, unconfined discussion. In that view, this workshop is intended for students and researchers of the faculty, offering opportunities to meet new colleagues.

    Ultrasensitive detection of Toxocara canis excretory-secretory antigens by a nanobody electrochemical magnetosensor assay.

    Human toxocariasis (HT) is a zoonotic disease caused by the migration of the larval stage of the roundworm Toxocara canis in the human host. Despite being the most cosmopolitan helminthiasis worldwide, its diagnosis is elusive. Currently, the detection of specific IgG immunoglobulins against the Toxocara excretory-secretory antigens (TES), combined with clinical and epidemiological criteria, is the only strategy to diagnose HT. Cross-reactivity with other parasites and the inability to distinguish between past and active infections are the main limitations of this approach. Here, we present a sensitive and specific novel strategy to detect and quantify TES, aiming to identify active cases of HT. High specificity is achieved by making use of nanobodies (Nbs), recombinant single variable domain antibodies obtained from camelids that, owing to their small molecular size (15 kDa), can recognize hidden epitopes not accessible to conventional antibodies. High sensitivity is attained by the design of an electrochemical magnetosensor with an amperometric readout, with all components of the assay mixed in a single step. Through this strategy, a 10-fold higher sensitivity than a conventional sandwich ELISA was achieved. The assay reached a limit of detection of 2 and 15 pg/ml in PBST (0.05% Tween 20) or in serum spiked with TES, respectively. These limits of detection are sufficient to detect clinically relevant toxocaral infections. Furthermore, our nanobodies showed no cross-reactivity with antigens from Ascaris lumbricoides or Ascaris suum. This is, to our knowledge, the most sensitive method to detect and quantify TES so far, and it has great potential to significantly improve the diagnosis of HT. Moreover, the characteristics of our electrochemical assay are promising for the development of point-of-care diagnostic systems using nanobodies as a versatile and innovative alternative to antibodies. The next step will be the validation of the assay in clinical and epidemiological contexts.
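    As a side note on the reported limits of detection, the common 3-sigma convention for estimating an LOD from blank replicates and a linear calibration slope can be written as below; the numbers are placeholders and this is not necessarily the calculation used in the study.

```python
# Standard 3-sigma limit-of-detection estimate from blank replicate signals and
# a linear calibration slope; values below are invented placeholders.
import numpy as np

def limit_of_detection(blank_signals: np.ndarray, slope: float) -> float:
    """LOD (in the concentration units of the calibration) = 3 * SD(blank) / slope."""
    return 3.0 * np.std(blank_signals, ddof=1) / slope

# Hypothetical amperometric currents (nA) for blanks and a slope in nA per (pg/ml):
# lod = limit_of_detection(np.array([1.2, 1.4, 1.1, 1.3, 1.2]), slope=0.18)
```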