172 research outputs found
Deep learning and localized features fusion for medical image classification
Local image features play an important role in many classification tasks because they are largely robust to translation and rotation. They are commonly used in medical image analysis, where it is important to produce accurate diagnostic or decision-aid results as quickly as possible.
This dissertation tackles these problems, first by developing a localized feature-based classification system for medical images that uses these features to classify the entire image, and second by reducing the computational complexity of feature analysis so that it is viable as a diagnostic aid in practical clinical settings.
For local feature development, a new approach combining the emerging deep learning paradigm with handcrafted features is developed to classify cervical tissue histology images into different cervical intraepithelial neoplasia (CIN) classes. Combining deep learning with handcrafted features improved exact-class classification accuracy from 72.29% with the benchmark feature-based classification method to 80.72%, a gain of 8.43 percentage points --Abstract, page iv
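The feature-level fusion described above can be sketched as follows. This is a minimal illustration, not the dissertation's actual model: the descriptors here (pooled intensity statistics standing in for a CNN embedding, a histogram standing in for handcrafted texture features) are hypothetical placeholders.

```python
import numpy as np

def handcrafted_features(patch):
    """Toy handcrafted descriptor: an 8-bin intensity histogram.
    Stands in for texture/color descriptors; not the dissertation's features."""
    hist, _ = np.histogram(patch, bins=8, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def deep_features(patch):
    """Placeholder for a learned CNN embedding; here, simple pooled statistics."""
    return np.array([patch.mean(), patch.std(), patch.max(), patch.min()])

def fused_feature(patch):
    # Feature-level fusion: concatenate both descriptors into one vector,
    # which is then fed to any downstream classifier.
    return np.concatenate([deep_features(patch), handcrafted_features(patch)])

rng = np.random.default_rng(0)
patch = rng.random((32, 32))
vec = fused_feature(patch)
print(vec.shape)  # (12,) — 4 pooled statistics + 8 histogram bins
```

The fused vector simply concatenates the two representations; the classifier downstream sees one combined feature space.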
An automated method for analyzing multispectral colposcopic images for a television-based cervical cancer diagnostic system
An automated method is proposed for analyzing fluorescence images obtained under excitation radiation at wavelengths of 360 and 390 nm. The method can detect the state of cervical tissue, distinguishing normal tissue, chronic nonspecific inflammation (CNI), and cervical intraepithelial neoplasia (CIN), and can build a differential pathology map. For the CIN/CNI boundary, a sensitivity of 87% and a specificity of 71% were achieved. The method includes specific preprocessing of the original images: registering images taken under different lighting conditions and extracting the region of interest. Distinctive features of the method are the use of a combination of features computed over images of different types, and a classification decision rule based on data mining techniques.
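The reported sensitivity and specificity for the CIN/CNI boundary follow the standard confusion-matrix definitions, sketched below. The counts are hypothetical, chosen only to reproduce the reported percentages; they are not the study's data.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Illustrative counts only: 100 CIN cases, 100 CNI cases.
sens, spec = sensitivity_specificity(tp=87, fn=13, tn=71, fp=29)
print(round(sens, 2), round(spec, 2))  # 0.87 0.71
```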
Deep learning for digitized histology image analysis
"Cervical cancer is the fourth most frequent cancer affecting women worldwide. Assessment of cervical intraepithelial neoplasia (CIN) through histopathology remains the standard for definitive determination of cancer. The examination of tissue samples under a microscope requires considerable time and effort from expert pathologists. There is a need to design an automated tool to assist pathologists with digitized histology slide analysis. Cervical pre-cancer is generally determined by examining the CIN, which is the growth of atypical cells from the basement membrane (bottom) to the top of the epithelium. It has four grades: Normal, CIN1, CIN2, and CIN3. In this research, different facets of an automated digitized histology epithelium assessment pipeline have been explored to mimic the pathologist's diagnostic approach. The entire pipeline from slide to epithelium CIN grade has been designed and developed using deep learning models and imaging techniques to analyze the whole slide image (WSI). The process is as follows: 1) identification of epithelium by filtering the regions extracted from a low-resolution image with a binary classifier network; 2) epithelium segmentation; 3) deep regression for pixel-wise segmentation of epithelium by patch-based image analysis; 4) attention-based CIN classification with localized sequential feature modeling. Deep learning-based nuclei detection by superpixels was performed as an extension of our research. Results from this research indicate improved performance of CIN assessment over state-of-the-art methods for nuclei segmentation, epithelium segmentation, and CIN classification, as well as the development of a prototype WSI-level tool" --Abstract, page iv
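The final fusion step of a pipeline like this, turning many patch-level CIN predictions into one whole-epithelium grade, can be sketched with simple majority voting. This is a stand-in for the attention-based fusion the abstract describes, and `grade_epithelium` is an illustrative name, not the author's code.

```python
from collections import Counter

# CIN grades ordered from least to most severe.
GRADES = ["Normal", "CIN1", "CIN2", "CIN3"]

def grade_epithelium(patch_predictions):
    """Fuse per-patch CIN predictions into one whole-epithelium grade by
    majority vote; ties are broken toward the more severe grade, a
    conservative choice for a screening aid."""
    counts = Counter(patch_predictions)
    best = max(counts.items(), key=lambda kv: (kv[1], GRADES.index(kv[0])))
    return best[0]

print(grade_epithelium(["CIN1", "CIN2", "CIN2", "Normal", "CIN2"]))  # CIN2
```

An attention model would instead learn per-patch weights rather than counting each patch equally, but the slide-to-grade flow is the same.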
Toward Large Scale Semantic Image Understanding and Retrieval
Semantic image retrieval is a multifaceted, highly complex problem. Not only does it require advanced image processing and computer vision techniques, but it also requires knowledge beyond what can be inferred from the image content alone. In contrast, traditional image retrieval systems are based on keyword searches over filenames or metadata tags, e.g., Google image search, Flickr search, etc. These conventional systems do not analyze the image content, and their keywords are not guaranteed to represent the image. Thus, there is a significant need for a semantic image retrieval system that can analyze and retrieve images based on the content and relationships that exist in the real world. In this thesis, I present a framework that moves towards advancing semantic image retrieval in large-scale datasets. At a conceptual level, semantic image retrieval requires the following steps: viewing an image, understanding the content of the image, indexing the important aspects of the image, connecting the image concepts to the real world, and finally retrieving images based upon the indexed concepts or related concepts. My proposed framework addresses each of these components toward my ultimate goal of improving image retrieval. The first task is the essential one of understanding the content of an image. Unfortunately, the only data typically available to a computer algorithm analyzing an image is the low-level pixel data. To achieve human-level comprehension, a machine must overcome the semantic gap: the disparity that exists between the image data and human understanding. Translating this low-level information into a high-level representation is an extremely difficult problem that requires more than the pixel information alone. I describe my solution to this problem through the use of an online knowledge acquisition and storage system.
This system utilizes the extensible, visual, and interactive properties of Scalable Vector Graphics (SVG) combined with online crowdsourcing tools to collect high-level knowledge about visual content. I further describe the utilization of knowledge and semantic data for image understanding. Specifically, I seek to incorporate into various algorithms knowledge that cannot be inferred from the image pixels alone. This information comes from related images or structured data (in the form of hierarchies and ontologies) and improves the performance of object detection and image segmentation tasks. These understanding tasks are crucial intermediate steps toward retrieval and semantic understanding. However, typical object detection and segmentation tasks require an abundance of training data for machine learning algorithms. This prior training information tells the algorithm what patterns and visual features to look for when processing an image. In contrast, my algorithm utilizes related semantic images to extract the visual properties of an object and to decrease the search space of my detection algorithm. Furthermore, I demonstrate the use of related images in the image segmentation process. Again, without the use of prior training data, I present a method for foreground object segmentation that finds the shared area existing in a set of images. I demonstrate the effectiveness of my method on structured image datasets that have defined relationships between classes, i.e., parent-child or sibling classes. Finally, I introduce my framework for semantic image retrieval. I enhance the proposed knowledge acquisition and image understanding techniques with semantic knowledge through linked data and web semantic languages. This is an essential step in semantic image retrieval.
For example, a car class identified by an image processing algorithm not enhanced by external knowledge carries no indication that a car is a type of vehicle, highly related to a truck and less related to other transportation methods such as a train. Yet a query for modes of human transportation should return all of the mentioned classes. Thus, I demonstrate how to integrate information from both image processing algorithms and semantic knowledge bases to perform interesting queries that would otherwise be impossible. The key component of this system is a novel property reasoner that is able to translate low-level image features into semantically relevant object properties. I use a combination of XML-based languages such as SVG, RDF, and OWL in order to link to existing ontologies available on the web. My experiments demonstrate an efficient data collection framework and a novel utilization of semantic data for image analysis and retrieval on datasets of people and landmarks collected from sources such as IMDB and Flickr. Ultimately, my thesis presents improvements to the state of the art in visual knowledge representation/acquisition and in computer vision algorithms such as detection and segmentation, toward the goal of enhanced semantic image retrieval.
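The transportation example above can be sketched as a transitive query over a tiny is-a hierarchy. A dictionary here stands in for the RDF/OWL ontologies the thesis actually links to; the concept names are taken from the example, but the structure is a toy.

```python
# Toy is-a hierarchy standing in for linked RDF/OWL ontologies.
IS_A = {
    "car": "vehicle",
    "truck": "vehicle",
    "train": "vehicle",
    "vehicle": "mode of transportation",
}

def ancestors(concept):
    """Walk the is-a chain transitively upward."""
    out = []
    while concept in IS_A:
        concept = IS_A[concept]
        out.append(concept)
    return out

def query(target):
    """Return every concept that is (transitively) a kind of `target`."""
    return sorted(c for c in IS_A if target in ancestors(c))

print(query("mode of transportation"))  # ['car', 'train', 'truck', 'vehicle']
```

A detector that only emits the label "car" cannot answer this query; the hierarchy is what connects the detected class to the broader concept being asked about.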
Data fusion techniques for biomedical informatics and clinical decision support
Data fusion can be used to combine multiple data sources or modalities to facilitate enhanced visualization, analysis, detection, estimation, or classification. It can be applied at the raw-data, feature, and decision levels. Data fusion applications have been developed in areas such as statistics, computer vision, and other branches of machine learning, and it has been employed in a variety of realistic scenarios such as medical diagnosis, clinical decision support, and structural health monitoring. This dissertation investigates and develops methods to perform data fusion for cervical intraepithelial neoplasia (CIN) assessment and for a clinical decision support system. The general framework for these applications includes image processing followed by feature development and classification of the detected region of interest (ROI). Image processing methods such as k-means clustering based on color information, dilation, erosion, and centroid-locating methods were used for ROI detection. The features extracted include texture, color, nuclei-based, and triangle features. Analysis and classification were performed using feature- and decision-level data fusion techniques such as support vector machines, statistical methods such as logistic regression and linear discriminant analysis, and voting algorithms --Abstract, page iv
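The color-based k-means ROI detection mentioned above can be sketched as follows. This is a minimal illustration on synthetic RGB pixels, not the dissertation's implementation; the evenly spaced initialization is a simplification of proper seeding schemes such as k-means++.

```python
import numpy as np

def kmeans_colors(pixels, k=2, iters=20):
    """Minimal k-means on RGB pixels (N x 3) for color-based ROI detection.
    Returns per-pixel cluster labels and the cluster centroids."""
    # Simplified init: pick k evenly spaced pixels as starting centroids.
    idx = np.linspace(0, len(pixels) - 1, k).astype(int)
    centroids = pixels[idx].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest centroid, then update centroids.
        d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean(axis=0)
    return labels, centroids

# Synthetic image: 50 dark "background" pixels, then 50 bright "ROI" pixels.
rng = np.random.default_rng(1)
pixels = np.vstack([rng.normal(0.2, 0.02, size=(50, 3)),
                    rng.normal(0.8, 0.02, size=(50, 3))])
labels, centroids = kmeans_colors(pixels, k=2)
print(len(set(labels[:50])), len(set(labels[50:])))  # 1 1
```

In the actual pipeline the cluster corresponding to the tissue color of interest would then be cleaned with dilation and erosion before locating centroids.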