Semi Automatic Segmentation of a Rat Brain Atlas
A common approach to segment an MRI dataset is to use a standard atlas to identify different regions of interest. Existing 2D atlases, prepared by freehand tracings of templates, are seldom complete for 3D volume segmentation. Although many of these atlases are prepared in graphics packages like Adobe Illustrator® (AI), which present the geometrical entities based on their mathematical description, the drawings are not numerically robust. This work presents an automatic conversion of graphical atlases suitable for further use, such as the creation of a segmented 3D numerical atlas. The system begins with DXF (Drawing Exchange Format) files of individual atlas drawings. The drawing entities are mostly in cubic spline format. Each segment of the spline is reduced to polylines, which reduces the complexity of the data. The system merges overlapping nodes and polylines to make the database of the drawing numerically integrated, i.e. each location within the drawing is referred to by only one point, each line is uniquely defined by only two nodes, etc. Numerous integrity diagnostics are performed to eliminate duplicate or overlapping lines, extraneous markers, open-ended loops, etc. Numerically intact closed loops are formed using atlas labels as seed points. These loops specify the boundary and tissue type for each area. The final results preserve the original atlas with its 1272 different neuroanatomical regions, which are complete, non-overlapping, contiguous sub-areas whose boundaries are composed of unique polylines.
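The node-merging step described above can be sketched in a few lines. The following is a hypothetical illustration, not the paper's code: the function name, the tolerance value, and the assumption that polylines arrive as lists of (x, y) points after spline flattening are all mine.

```python
def merge_nodes(polylines, tol=1e-6):
    """Snap points closer than `tol` to a single canonical node.

    polylines: list of polylines, each a list of (x, y) tuples.
    Returns (nodes, indexed_polylines), where each polyline becomes a list
    of indices into `nodes`, so every location in the drawing is referred
    to by exactly one point.
    """
    nodes = []   # canonical node coordinates
    grid = {}    # quantised coordinate -> node index

    def canonical(pt):
        # Quantise to a tol-sized grid; points falling in the same cell
        # merge. (An approximation: near-coincident points straddling a
        # cell boundary would need a neighbour search to merge.)
        key = (round(pt[0] / tol), round(pt[1] / tol))
        if key not in grid:
            grid[key] = len(nodes)
            nodes.append(pt)
        return grid[key]

    indexed = [[canonical(p) for p in line] for line in polylines]
    return nodes, indexed
```

With shared node indices in place, duplicate or overlapping segments reduce to duplicate index pairs, which is what makes the integrity diagnostics the abstract mentions straightforward to run.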
Comparative Analysis of Segment Anything Model and U-Net for Breast Tumor Detection in Ultrasound and Mammography Images
In this study, the main objective is to develop an algorithm capable of
identifying and delineating tumor regions in breast ultrasound (BUS) and
mammographic images. The technique employs two advanced deep learning
architectures, namely U-Net and pretrained SAM, for tumor segmentation. The
U-Net model is specifically designed for medical image segmentation and
leverages its deep convolutional neural network framework to extract meaningful
features from input images. On the other hand, the pretrained SAM architecture
incorporates a mechanism to capture spatial dependencies and generate
segmentation results. Evaluation is conducted on a diverse dataset containing
annotated tumor regions in BUS and mammographic images, covering both benign
and malignant tumors. This dataset enables a comprehensive assessment of the
algorithm's performance across different tumor types. Results demonstrate that
the U-Net model outperforms the pretrained SAM architecture in accurately
identifying and segmenting tumor regions in both BUS and mammographic images.
The U-Net exhibits superior performance in challenging cases involving
irregular shapes, indistinct boundaries, and high tumor heterogeneity. In
contrast, the pretrained SAM architecture exhibits limitations in accurately
identifying tumor areas, particularly for malignant tumors and objects with
weak boundaries or complex shapes. These findings highlight the importance of
selecting appropriate deep learning architectures tailored for medical image
segmentation. The U-Net model showcases its potential as a robust and accurate
tool for tumor detection, while the pretrained SAM architecture suggests the
need for further improvements to enhance segmentation performance.
Automatic dental caries detection in bitewing radiographs.
Doctoral Degree. University of KwaZulu-Natal, Durban. Dental caries is one of the most prevalent chronic diseases around the globe. Distinguishing carious lesions has been a challenging task. Conventional computer-aided
diagnosis and detection methods in the past have relied heavily on visual inspection
of teeth. These are only effective on large and clearly visible caries on affected teeth.
Conventional methods have been limited in performance due to the complex visual
characteristics of dental caries images, which include hidden or inaccessible lesions.
Early detection of dental caries is an important determinant for treatment and benefits
greatly from the introduction of new tools such as dental radiography. A method for
the segmentation of teeth in bitewing X-rays is presented in this thesis, as well as a
method for the detection of dental caries on X-ray images using a supervised model.
The diagnostic method proposed uses an assessment protocol that is evaluated according to a set of identifiers obtained from a learning model. The proposed technique
automatically detects hidden and inaccessible dental caries lesions in bitewing radiographs. The approach employed data augmentation to increase the number of images
in the data set to a total of 11,114 dental images. Image pre-processing
on the data set was performed with Gaussian blur filters. Image segmentation was
handled through thresholding and erosion and dilation morphology, while image boundary detection was achieved through the active contours method. Furthermore, a deep
learning network built with the Keras sequential model extracts features from
the images through blob detection. Finally, a convexity threshold value of 0.9 is introduced to aid in classifying caries as either present or not present. The relative
efficacy of the supervised model in diagnosing dental caries when compared to current
systems is indicated by the results detailed in this thesis. The proposed model achieved
a 97% correct diagnosis rate, which proved quite competitive with existing models. The author's publications are listed on page 4 of this thesis.
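As a rough illustration of the pipeline steps named above (Gaussian blur, thresholding, an erosion/dilation opening, and a convexity score compared against the 0.9 threshold), here is a minimal sketch. The function, its parameters, and the toy image are assumptions for illustration, not the thesis implementation:

```python
import numpy as np
from scipy import ndimage
from scipy.spatial import ConvexHull

def segment_and_score(image, thresh=0.5, sigma=1.0):
    """Toy pipeline: blur -> threshold -> morphological opening -> convexity."""
    blurred = ndimage.gaussian_filter(image.astype(float), sigma=sigma)
    mask = blurred > thresh                  # global threshold
    mask = ndimage.binary_erosion(mask)      # opening: remove speckle...
    mask = ndimage.binary_dilation(mask)     # ...then restore the boundary
    pts = np.argwhere(mask)
    if len(pts) < 3:                         # too few pixels for a hull
        return mask, 0.0
    hull = ConvexHull(pts)                   # for 2-D points, .volume is area
    convexity = mask.sum() / hull.volume     # region area vs. hull area
    return mask, convexity

# Per the abstract, a convexity above 0.9 would flag caries as "present".
```

Note that the convexity score assumes a non-degenerate (non-collinear) region; a compact, roughly convex blob scores near or above 1, while a ragged or concave region scores lower.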
Computational processing and analysis of ear images
Master's thesis. Biomedical Engineering. Faculdade de Engenharia, Universidade do Porto. 201
Deep learning in food category recognition
Integrating artificial intelligence with food category recognition has been a field of interest for research for the
past few decades. It is potentially one of the next steps in revolutionizing human interaction with food. The
modern advent of big data and the development of data-oriented fields like deep learning have provided advancements
in food category recognition. With increasing computational power and ever-larger food datasets,
the approach’s full potential has yet to be realized. This survey provides an overview of methods that can be applied
to various food category recognition tasks, including detecting type, ingredients, quality, and quantity. We
survey the core components for constructing a machine learning system for food category recognition, including
datasets, data augmentation, hand-crafted feature extraction, and machine learning algorithms. We place a
particular focus on the field of deep learning, including the utilization of convolutional neural networks, transfer
learning, and semi-supervised learning. We provide an overview of relevant studies to promote further developments
in food category recognition for research and industrial applications. Funding: MRC (MC_PC_17171); Royal Society (RP202G0230); BHF (AA/18/3/34220); Hope Foundation for Cancer Research (RM60G0680); GCRF (P202PF11); Sino-UK Industrial Fund (RP202G0289); LIAS (P202ED10); Data Science Enhancement Fund (P202RE237); Fight for Sight (24NN201); Sino-UK Education Fund (OP202006); BBSRC (RM32G0178B8)
Evaluation of automated organ segmentation for total-body PET-CT
Medical images substantially facilitate the rapid and accurate diagnosis and treatment of patients, and radiologists' visual assessment of these images is central to their interpretation. Segmenting images for diagnostic purposes is a crucial step in the medical imaging process. The purpose of medical image segmentation is to locate and isolate ‘Regions of Interest’ (ROI) within a medical image. Several medical uses rely on this procedure, including diagnosis, patient management, and medical research. Medical image segmentation has applications beyond diagnosis and treatment planning: quantitative information can be extracted from medical images by segmentation and employed in the development of new diagnostic and treatment procedures. In addition, image segmentation is a critical step in several image-processing tasks, including image fusion and registration. Image registration is used to construct a single, high-resolution, high-contrast image of an object or organ from several images. A more complete picture of the patient's anatomy can be obtained through image fusion, which entails integrating numerous images from different modalities such as computed tomography (CT) and magnetic resonance imaging (MRI). Once images are obtained using imaging technologies, they go through post-processing procedures before being analyzed. One of the primary and essential steps in post-processing is image segmentation, which involves dividing the images into parts and utilizing only the relevant sections for analysis. This project explores various imaging technologies and tools that can be utilized for image segmentation. Many open-source imaging tools are available for segmenting medical images across various applications. The objective of this study is to use the Jaccard index to evaluate the degree of similarity between the segmentations produced by various medical image visualization and analysis programs.
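The Jaccard index used in the evaluation above is a simple intersection-over-union of two segmentation masks. A minimal sketch (the function name and the empty-mask convention are my assumptions):

```python
import numpy as np

def jaccard_index(mask_a, mask_b):
    """Intersection over union of two boolean segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return np.logical_and(a, b).sum() / union
```

A score of 1.0 means the two tools produced identical segmentations; 0.0 means no overlap at all.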
Automating the multimodal analysis of musculoskeletal imaging in the presence of hip implants
In patients treated with hip arthroplasty, the muscular condition and presence of inflammatory reactions are assessed using magnetic resonance imaging (MRI). As MRI lacks contrast for bony structures, computed tomography (CT) is preferred for clinical evaluation of bone tissue and orthopaedic surgical planning. Combining the complementary information of MRI and CT could improve current clinical practice for diagnosis, monitoring and treatment planning. In particular, the different contrast of these modalities could help better quantify the presence of fatty infiltration to characterise muscular condition after hip replacement. In this thesis, I developed automated processing tools for the joint analysis of CT and MR images of patients with hip implants. In order to combine the multimodal information, a novel nonlinear registration algorithm was introduced, which imposes rigidity constraints on bony structures to ensure realistic deformation. I implemented and thoroughly validated a fully automated framework for the multimodal segmentation of healthy and pathological musculoskeletal structures, as well as implants. This framework combines the proposed registration algorithm with tailored image quality enhancement techniques and a multi-atlas-based segmentation approach, providing robustness against the large population anatomical variability and the presence of noise and artefacts in the images. The automation of muscle segmentation enabled the derivation of a measure of fatty infiltration, the Intramuscular Fat Fraction, useful to characterise the presence of muscle atrophy. The proposed imaging biomarker was shown to strongly correlate with the atrophy radiological score currently used in clinical practice. Finally, a preliminary work on multimodal metal artefact reduction, using an unsupervised deep learning strategy, showed promise for improving the postprocessing of CT and MR images heavily corrupted by metal artefact. 
This work represents a step forward towards the automation of image analysis in hip arthroplasty, supporting and quantitatively informing the decision-making process about patient management.
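A measure like the Intramuscular Fat Fraction described above can be sketched as the proportion of fat-labelled voxels inside the segmented muscle; this is a hedged illustration under my own assumptions (binary muscle and fat masks), and the exact definition in the thesis may differ.

```python
import numpy as np

def intramuscular_fat_fraction(muscle_mask, fat_mask):
    """Fraction of muscle-mask voxels that are also labelled as fat.

    Both inputs are boolean arrays of the same shape (assumed here;
    the thesis' precise formulation is not reproduced).
    """
    muscle = np.asarray(muscle_mask, dtype=bool)
    fat = np.asarray(fat_mask, dtype=bool)
    n_muscle = muscle.sum()
    if n_muscle == 0:
        return 0.0  # no muscle voxels: fraction undefined, report 0
    return np.logical_and(muscle, fat).sum() / n_muscle
```

Such a voxel-count ratio is what makes the biomarker comparable across patients and correlatable with a radiological atrophy score.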