172 research outputs found

    Intelligent Screening Systems for Cervical Cancer


    Deep learning and localized features fusion for medical image classification

    Local image features play an important role in many classification tasks because translation and rotation do not severely degrade the classification process, and they have been widely used for medical image analysis. In medical applications, it is important to obtain accurate diagnosis or diagnostic-aid results as quickly as possible. This dissertation tackles these problems, first by developing a localized feature-based classification system for medical images that uses local features to classify the entire image, and second by reducing the computational complexity of feature analysis to make it viable as a diagnostic aid in practical clinical settings. For local feature development, a new approach combining the rising deep learning paradigm with handcrafted features is developed to classify cervical tissue histology images into cervical intraepithelial neoplasia classes. Combining deep learning with handcrafted features improved accuracy by 8.4 percentage points, achieving 80.72% exact-class classification accuracy compared to 72.29% for the benchmark feature-based classification method --Abstract, page iv
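The fusion of deep and handcrafted features described above can be sketched as feature-level concatenation followed by a simple classifier. This is a minimal illustration with synthetic data, not the dissertation's actual pipeline: the feature dimensions, the z-score normalization, and the nearest-centroid classifier are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: in the dissertation these would be CNN activations
# and handcrafted local descriptors extracted from histology image patches.
deep_features = rng.normal(size=(6, 128))     # 6 images, 128-D deep features
handcrafted   = rng.normal(size=(6, 32))      # 6 images, 32-D handcrafted features
labels        = np.array([0, 0, 1, 1, 2, 2])  # CIN-style class labels

# Feature-level fusion: z-score each modality so neither dominates by scale,
# then concatenate into one fused feature vector per image.
def zscore(x):
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

fused = np.hstack([zscore(deep_features), zscore(handcrafted)])  # shape (6, 160)

# A minimal nearest-centroid classifier over the fused representation.
centroids = np.stack([fused[labels == c].mean(axis=0) for c in np.unique(labels)])

def predict(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

train_acc = np.mean([predict(f) == y for f, y in zip(fused, labels)])
```

In practice the classifier would be trained and evaluated on held-out images; the point here is only that fusion happens at the feature level, before classification.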

    An automated method for analyzing multispectral colposcopic images for a television-based cervical cancer diagnostic system

    An automated method is proposed for analyzing fluorescence images obtained under excitation radiation at wavelengths of 360 and 390 nm. The method detects the state of cervical tissue (normal, chronic nonspecific inflammation (CNI), and cervical intraepithelial neoplasia (CIN)) and builds a differential pathology map. For the CIN/CNI boundary, a sensitivity of 87% and a specificity of 71% were achieved. The method includes specific preprocessing of the original images: combining images taken under different lighting conditions and highlighting the region of interest. Distinctive features of the method are the use of a combination of features calculated from images of different types, and a classification decision rule based on data mining techniques.
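The combination of co-registered images acquired under different excitation wavelengths into per-pixel features, followed by a decision rule, can be sketched as below. Everything here is an illustrative assumption: the images are synthetic, the intensity-ratio feature and the thresholds are toy choices, not the paper's actual features or decision rule.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for co-registered fluorescence images acquired at
# 360 nm and 390 nm excitation (values in arbitrary intensity units).
img_360 = rng.uniform(0.1, 1.0, size=(64, 64))
img_390 = rng.uniform(0.1, 1.0, size=(64, 64))
roi = np.zeros((64, 64), dtype=bool)
roi[16:48, 16:48] = True                       # illustrative region of interest

# Combine the two modalities into one per-pixel fused feature (intensity
# ratio), restricted to the region of interest.
ratio = np.where(roi, img_360 / img_390, np.nan)

# Toy decision rule mapping the fused feature to tissue classes
# (0 = normal, 1 = CNI, 2 = CIN); the thresholds are illustrative only.
pathology_map = np.select([ratio < 0.8, ratio < 1.2], [0, 1], default=2)
pathology_map = np.where(roi, pathology_map, -1)   # -1 marks pixels outside the ROI
```

The result is a per-pixel class map over the region of interest, which is the general shape of the "differential pathology map" the abstract describes.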

    Deep learning for digitized histology image analysis

    β€œCervical cancer is the fourth most frequent cancer affecting women worldwide. Assessment of cervical intraepithelial neoplasia (CIN) through histopathology remains the standard for definitive determination of cancer. The examination of tissue samples under a microscope requires considerable time and effort from expert pathologists, so there is a need for an automated tool to assist pathologists with digitized histology slide analysis. Cervical pre-cancer is generally determined by grading CIN, the growth of atypical cells from the basement membrane (bottom) to the top of the epithelium, into four grades: Normal, CIN1, CIN2, and CIN3. In this research, different facets of an automated digitized histology epithelium assessment pipeline have been explored to mimic the pathologist's diagnostic approach. The entire pipeline from slide to epithelium CIN grade has been designed and developed using deep learning models and imaging techniques to analyze the whole slide image (WSI). The process is as follows: 1) identification of epithelium by filtering the regions extracted from a low-resolution image with a binary classifier network; 2) epithelium segmentation; 3) deep regression for pixel-wise segmentation of epithelium by patch-based image analysis; 4) attention-based CIN classification with localized sequential feature modeling. Deep learning-based nuclei detection by superpixels was performed as an extension of this research. Results indicate improved performance of CIN assessment over state-of-the-art methods for nuclei segmentation, epithelium segmentation, and CIN classification, as well as the development of a prototype WSI-level tool”--Abstract, page iv
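The four-stage slide-to-grade pipeline enumerated above can be sketched as a chain of functions over a whole slide image. The stub implementations below are placeholders (a threshold instead of a segmentation network, mask area instead of an attention classifier); only the overall region-filter, segment, classify structure reflects the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stubs for the pipeline stages; in the dissertation each stage
# is a trained deep network operating on the whole slide image (WSI).
def find_epithelium_regions(wsi_lowres):
    # Stage 1: a binary classifier keeps regions likely to contain epithelium
    # (here: split the low-resolution image and filter by mean intensity).
    return [r for r in np.array_split(wsi_lowres, 4) if r.mean() > 0.4]

def segment_epithelium(region):
    # Stages 2-3: patch-based pixel-wise segmentation (toy: a threshold).
    return region > region.mean()

def classify_cin(mask):
    # Stage 4: CIN grade from the segmented epithelium (toy: by mask area).
    grades = ["Normal", "CIN1", "CIN2", "CIN3"]
    return grades[min(3, int(mask.mean() * 4))]

wsi_lowres = rng.uniform(size=(64, 64))        # stand-in low-resolution WSI
grades = [classify_cin(segment_epithelium(r))
          for r in find_epithelium_regions(wsi_lowres)]
```

Each epithelium region receives its own grade, mirroring how the pipeline produces per-epithelium CIN assessments before any WSI-level summary.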

    Toward Large Scale Semantic Image Understanding and Retrieval

    Semantic image retrieval is a multifaceted, highly complex problem. Not only does the solution to this problem require advanced image processing and computer vision techniques, but it also requires knowledge beyond what can be inferred from the image content alone. In contrast, traditional image retrieval systems are based upon keyword searches on filenames or metadata tags, e.g., Google image search, Flickr search, etc. These conventional systems do not analyze the image content, and their keywords are not guaranteed to represent the image. Thus, there is a significant need for a semantic image retrieval system that can analyze and retrieve images based upon the content and relationships that exist in the real world.

    In this thesis, I present a framework that moves towards advancing semantic image retrieval in large scale datasets. At a conceptual level, semantic image retrieval requires the following steps: viewing an image, understanding the content of the image, indexing the important aspects of the image, connecting the image concepts to the real world, and finally retrieving the images based upon the indexed concepts or related concepts. My proposed framework addresses each of these components toward my ultimate goal of improving image retrieval. The first task is the essential task of understanding the content of an image. Unfortunately, the only data typically used by a computer algorithm when analyzing images is the low-level pixel data. To achieve human-level comprehension, a machine must overcome the semantic gap, the disparity that exists between the image data and human understanding. This translation of low-level information into a high-level representation is an extremely difficult problem that requires more than the image pixel information. I describe my solution to this problem through the use of an online knowledge acquisition and storage system. This system utilizes the extensible, visual, and interactable properties of Scalable Vector Graphics (SVG) combined with online crowdsourcing tools to collect high-level knowledge about visual content.

    I further describe the utilization of knowledge and semantic data for image understanding. Specifically, I seek to incorporate in various algorithms knowledge that cannot be inferred from the image pixels alone. This information comes from related images or structured data (in the form of hierarchies and ontologies) to improve the performance of object detection and image segmentation tasks. These understanding tasks are crucial intermediate steps towards retrieval and semantic understanding. However, typical object detection and segmentation tasks require an abundance of training data for machine learning algorithms. This prior training information tells the algorithm what patterns and visual features to look for when processing an image. In contrast, my algorithm utilizes related semantic images to extract the visual properties of an object and also to decrease the search space of my detection algorithm. Furthermore, I demonstrate the use of related images in the image segmentation process. Again, without the use of prior training data, I present a method for foreground object segmentation by finding the shared area that exists in a set of images. I demonstrate the effectiveness of my method on structured image datasets that have defined relationships between classes, i.e., parent-child or sibling classes.

    Finally, I introduce my framework for semantic image retrieval. I enhance the proposed knowledge acquisition and image understanding techniques with semantic knowledge through linked data and web semantic languages. This is an essential step in semantic image retrieval. For example, a car class produced by an image processing algorithm not enhanced by external knowledge carries no notion that a car is a type of vehicle, which is highly related to a truck and less related to other modes of transportation such as a train. However, a query for modes of human transportation should return all of the mentioned classes. Thus, I demonstrate how to integrate information from both image processing algorithms and semantic knowledge bases to perform interesting queries that would otherwise be impossible. The key component of this system is a novel property reasoner that is able to translate low-level image features into semantically relevant object properties. I use a combination of XML-based languages such as SVG, RDF, and OWL in order to link to existing ontologies available on the web. My experiments demonstrate an efficient data collection framework and novel utilization of semantic data for image analysis and retrieval on datasets of people and landmarks collected from sources such as IMDB and Flickr. Ultimately, my thesis presents improvements to the state of the art in visual knowledge representation/acquisition and computer vision algorithms such as detection and segmentation toward the goal of enhanced semantic image retrieval
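The car/vehicle/transportation example above amounts to query expansion over a subclass hierarchy. A minimal sketch follows, using plain dictionaries as a stand-in for the RDF/OWL ontologies the thesis links to; the concept names, image filenames, and tags are all illustrative.

```python
# A minimal in-memory "ontology" of subclass relations, standing in for the
# RDF/OWL knowledge bases the thesis links to.
subclass_of = {
    "car": "vehicle",
    "truck": "vehicle",
    "train": "vehicle",
    "vehicle": "transportation",
}

# Images tagged by an (assumed) image-processing classifier.
image_tags = {
    "img1.jpg": {"car"},
    "img2.jpg": {"truck"},
    "img3.jpg": {"dog"},
}

def ancestors(concept):
    """All superclasses of a concept, following subclass links upward."""
    out = set()
    while concept in subclass_of:
        concept = subclass_of[concept]
        out.add(concept)
    return out

def semantic_query(concept):
    """Return images whose tags match the concept directly or via a superclass."""
    return sorted(
        img for img, tags in image_tags.items()
        if any(t == concept or concept in ancestors(t) for t in tags)
    )

# A query for "transportation" retrieves the car and truck images even though
# no image is tagged with that word directly.
results = semantic_query("transportation")   # → ['img1.jpg', 'img2.jpg']
```

A keyword-only system would return nothing for "transportation" here; the subclass links are what make the query answerable.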

    Data fusion techniques for biomedical informatics and clinical decision support

    Data fusion can be used to combine multiple data sources or modalities to facilitate enhanced visualization, analysis, detection, estimation, or classification. Data fusion can be applied at the raw-data, feature-based, and decision-based levels. Data fusion applications have been developed in areas such as statistics, computer vision, and other branches of machine learning, and it has been employed in a variety of realistic scenarios such as medical diagnosis, clinical decision support, and structural health monitoring. This dissertation investigates and develops methods for data fusion for cervical intraepithelial neoplasia (CIN) grading and for a clinical decision support system. The general framework for these applications includes image processing followed by feature development and classification of the detected region of interest (ROI). Image processing methods such as k-means clustering based on color information, dilation, erosion, and centroid-locating methods were used for ROI detection. The features extracted include texture, color, nuclei-based, and triangle features. Analysis and classification were performed using feature- and decision-level data fusion techniques, including support vector machines, statistical methods such as logistic regression and linear discriminant analysis, and voting algorithms --Abstract, page iv
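Decision-level fusion by voting, one of the techniques named above, can be sketched compactly: several classifiers each predict a label per sample, and the fused decision is the majority label. The per-classifier predictions below are made-up placeholders, not results from the dissertation.

```python
import numpy as np

# Hypothetical per-sample predictions from three classifiers (e.g., an SVM,
# logistic regression, and LDA) on the same detected regions of interest.
preds = np.array([
    [0, 1, 1, 2, 2],   # classifier 1
    [0, 1, 2, 2, 2],   # classifier 2
    [1, 1, 1, 2, 0],   # classifier 3
])

def majority_vote(predictions):
    """Decision-level fusion: per sample, take the most frequent class label."""
    fused = []
    for column in predictions.T:               # one column per sample
        values, counts = np.unique(column, return_counts=True)
        fused.append(int(values[np.argmax(counts)]))
    return fused

fused = majority_vote(preds)   # → [0, 1, 1, 2, 2]
```

Feature-level fusion, by contrast, would concatenate the feature vectors before any classifier runs; voting operates only on the final decisions.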