59 research outputs found

    Computer-Aided Diagnosis of Mammographic Masses Using Scalable Image Retrieval

    Get PDF
    Most existing systems, however, fall short of scalability in the retrieval stage, and their diagnostic accuracy is therefore limited. To overcome this drawback, we propose a scalable method for the retrieval and diagnosis of mammographic masses. Specifically, for a query mammographic region of interest (ROI), scale-invariant feature transform (SIFT) features are extracted and searched in a vocabulary tree, which stores the quantized features of previously diagnosed mammographic ROIs. Furthermore, to fully exploit the discriminative power of the SIFT features, contextual information in the vocabulary tree is used to refine the weights of the tree nodes.
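The vocabulary-tree retrieval idea can be sketched as follows. For simplicity this sketch flattens the tree into a single codebook level and weights visual words by inverse document frequency; the function names and the NumPy-only implementation are illustrative assumptions, not the authors' code.

```python
import numpy as np

def build_tfidf_index(db_word_lists, vocab_size):
    """Build TF-IDF weighted bag-of-visual-words vectors for database ROIs.
    Each entry in db_word_lists is the list of visual-word ids of one ROI."""
    n_docs = len(db_word_lists)
    tf = np.zeros((n_docs, vocab_size))
    for i, words in enumerate(db_word_lists):
        for w in words:
            tf[i, w] += 1
    df = np.count_nonzero(tf > 0, axis=0)        # how many ROIs contain each word
    idf = np.log(n_docs / np.maximum(df, 1))     # rare words weigh more
    index = tf * idf
    norms = np.linalg.norm(index, axis=1, keepdims=True)
    return index / np.maximum(norms, 1e-12), idf

def query_scores(query_words, index, idf):
    """Cosine similarity between a query ROI and every database ROI."""
    q = np.zeros(index.shape[1])
    for w in query_words:
        q[w] += 1
    q = q * idf
    q /= max(np.linalg.norm(q), 1e-12)
    return index @ q
```

In the paper's setting the word ids would come from quantizing SIFT descriptors down the tree, and the node weights would additionally be refined with contextual information; here only the baseline TF-IDF scoring is shown.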

    Pixel N-grams for Mammographic Image Classification

    Get PDF
    X-ray screening for breast cancer is an important public health initiative in the management of a leading cause of death for women. However, screening is expensive if mammograms must be manually assessed by radiologists, and manual screening is subject to perception and interpretation errors. Computer-aided detection/diagnosis (CAD) systems can help radiologists, as computer algorithms are good at performing image analysis consistently and repetitively. However, image features that enhance CAD classification accuracy are necessary for CAD systems to be deployed. Many CAD systems have been developed, but their specificity and sensitivity are not high, in part because of the challenges inherent in identifying effective features to extract from raw images. Existing feature extraction techniques can be grouped under three main approaches: statistical, spectral and structural. Statistical and spectral techniques provide global image features but often fail to distinguish between local pattern variations within an image. The structural approach, on the other hand, has given rise to the Bag-of-Visual-Words (BoVW) model, which captures local variations in an image but typically does not consider spatial relationships between the visual “words”. Moreover, statistical features and features based on BoVW models are computationally very expensive. Similarly, structural feature computation methods other than BoVW are also computationally expensive and strongly dependent upon algorithms that can segment an image to localize a region of interest likely to contain the tumour. Thus, classification algorithms using structural features require high-resource computers. For a radiologist to classify lesions on low-resource computers such as iPads, tablets and mobile phones in a remote location, it is necessary to develop computationally inexpensive classification algorithms.
Therefore, the overarching aim of this research is to discover a feature extraction/image representation model which can be used to classify mammographic lesions with high accuracy, sensitivity and specificity along with low computational cost. For this purpose, a novel feature extraction technique called ‘Pixel N-grams’ is proposed. The Pixel N-grams approach is inspired by the character N-gram concept in text categorization: N consecutive pixel intensities are considered in a particular direction, and the image is then represented by a histogram of occurrences of the Pixel N-grams within it. Shape and texture of mammographic lesions play an important role in determining the malignancy of the lesion, and it was hypothesized that Pixel N-grams would be able to distinguish between various textures and shapes. Experiments carried out on benchmark texture databases and a binary basic-shapes database demonstrated that the hypothesis was correct. Moreover, Pixel N-grams were able to distinguish between various shapes irrespective of the size and location of the shape in an image. The efficacy of the Pixel N-gram technique was tested on a mammographic database of primary digital mammograms sourced from a radiological facility in Australia (LakeImaging Pty Ltd) and on secondary digital mammograms (the benchmark miniMIAS database). A senior radiologist from LakeImaging provided de-identified high-resolution mammogram images with annotated regions of interest (used as ground truth), along with valuable radiological diagnostic knowledge. Two classification tasks were performed on these two datasets: normal/abnormal classification, useful for automated screening, and circumscribed/spiculated/normal classification, useful for automated diagnosis of breast cancer. The classification results on both mammography datasets using Pixel N-grams were promising.
Classification performance (F-score, sensitivity and specificity) using the Pixel N-gram technique was observed to be significantly better than existing techniques such as the intensity histogram and co-occurrence matrix based features, and comparable with BoVW features. Further, Pixel N-gram features are computationally less complex than both co-occurrence matrix based features and BoVW features, paving the way for mammogram classification on low-resource computers. Although the Pixel N-gram technique was designed for mammographic classification, it could be applied to other image classification applications such as diabetic retinopathy, histopathological image classification, lung tumour detection using CT images, brain tumour detection using MRI images, wound image classification and tooth decay classification using dentistry x-ray images. Further, texture and shape classification is also useful for classifying real-world images outside the medical domain, so the Pixel N-gram technique could be extended to applications such as classification of satellite imagery and other object detection tasks.
Doctor of Philosophy
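As the abstract describes, a Pixel N-gram treats N consecutive pixel intensities along a direction as one token and represents the image by a histogram of token occurrences. A minimal sketch, assuming 8-bit images, a horizontal direction only, and a coarse intensity quantisation step; the function name and parameters are illustrative, not the thesis implementation:

```python
import numpy as np

def pixel_ngram_histogram(image, n=2, levels=4):
    """Histogram of horizontal pixel n-grams (illustrative sketch).
    Intensities are first quantised to `levels` grey levels; every run of
    n consecutive pixels in a row then becomes one 'n-gram', and its
    levels**n possible values index the histogram."""
    img = np.asarray(image, dtype=float)
    q = np.minimum((img / 256.0 * levels).astype(int), levels - 1)
    hist = np.zeros(levels ** n, dtype=int)
    for row in q:
        for j in range(len(row) - n + 1):
            code = 0
            for v in row[j:j + n]:   # encode the n-gram as a base-`levels` number
                code = code * levels + int(v)
            hist[code] += 1
    return hist
```

The resulting fixed-length histogram can be fed directly to a lightweight classifier, which is what makes the representation attractive for low-resource devices.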

    Deep Lesion Graphs in the Wild: Relationship Learning and Organization of Significant Radiology Image Findings in a Diverse Large-scale Lesion Database

    Full text link
    Radiologists in their daily work routinely find and annotate significant abnormalities on a large number of radiology images. Such abnormalities, or lesions, have been collected over the years and stored in hospitals' picture archiving and communication systems. However, they are basically unsorted and lack semantic annotations such as type and location. In this paper, we aim to organize and explore them by learning a deep feature representation for each lesion. A large-scale and comprehensive dataset, DeepLesion, is introduced for this task. DeepLesion contains bounding boxes and size measurements of over 32K lesions. To model their similarity relationships, we leverage multiple sources of supervision, including lesion types, self-supervised location coordinates and sizes. These require little manual annotation effort but describe useful attributes of the lesions. A triplet network is then utilized to learn lesion embeddings, with a sequential sampling strategy to depict their hierarchical similarity structure. Experiments show promising qualitative and quantitative results on lesion retrieval, clustering and classification. The learned embeddings can further be employed to build a lesion graph for various clinically useful applications. We propose algorithms for intra-patient lesion matching and missing annotation mining. Experimental results validate their effectiveness.
Comment: Accepted by CVPR 2018. DeepLesion URL added
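The triplet-network objective mentioned above can be illustrated with the standard margin-based triplet loss on embedding vectors. This is the generic formulation, not the paper's network; the sequential sampling of negatives at increasing dissimilarity levels (type, location, size) is only noted in a comment.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Margin-based triplet loss on embedding vectors: push the anchor at
    least `margin` closer (in squared distance) to the positive (a similar
    lesion) than to the negative (a dissimilar one).
    In the paper's sequential sampling, negatives would be drawn at
    successive levels of the similarity hierarchy."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)
```

A loss of zero means the embedding already separates the pair by the margin; positive values produce gradients that reshape the embedding space during training.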

    Breast cancer diagnosis: a survey of pre-processing, segmentation, feature extraction and classification

    Get PDF
    Machine learning methods have been of interest in medicine for many years and have achieved successful results in various fields of medical science. This paper examines the effects of using machine learning algorithms in the diagnosis and classification of breast cancer from mammography imaging data. Cancer diagnosis is the identification of images as cancerous or non-cancerous, and it involves image preprocessing, feature extraction, classification and performance analysis. This article studies 93 references published in previous years in the field of image processing and tries to find an effective way to diagnose and classify breast cancer. Based on the results of this research, it can be concluded that most of today's successful methods focus on deep learning. Finding a new method therefore requires an overview of existing deep learning methods in order to make comparisons and case studies.

    Visual character N-grams for classification and retrieval of radiological images

    Get PDF
    Diagnostic radiology struggles to maintain high interpretation accuracy, and retrieval of past similar cases would help inexperienced radiologists in the interpretation process. The character n-gram model has been effective in text retrieval for languages such as Chinese, where there are no clear word boundaries. We propose a visual character n-gram model for representing images for classification and retrieval. Regions of interest in mammographic images are represented with character n-gram features, which are then used as input to a back-propagation neural network that classifies the regions into normal and abnormal categories. Experiments on the miniMIAS database show that character n-gram features are useful in classifying regions into normal and abnormal categories. Promising classification accuracies (83.33%) are observed for fatty background tissue, warranting further investigation. We argue that classifying regions of interest would reduce the number of comparisons necessary for finding similar images in the database and hence would reduce the time required to retrieve past similar cases.
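The closing argument, that classifying ROIs first shrinks the search space for retrieval, can be sketched as a class-filtered nearest-neighbour search. The function name and the toy feature vectors are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def retrieve_within_class(query_vec, query_class, db_vecs, db_classes, k=3):
    """Search only database ROIs sharing the query's predicted class
    (e.g. normal vs abnormal), cutting the number of distance computations
    versus a full-database scan. Returns indices of the k nearest ROIs."""
    idx = np.flatnonzero(db_classes == query_class)       # candidate subset
    dists = np.linalg.norm(db_vecs[idx] - query_vec, axis=1)
    order = idx[np.argsort(dists)]
    return order[:k]
```

If the two classes are roughly balanced, the filter halves the comparisons per query; with finer-grained classes the saving grows accordingly.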

    A Survey on Deep Learning in Medical Image Analysis

    Full text link
    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 2017

    A Bottom-Up Review of Image Analysis Methods for Suspicious Region Detection in Mammograms.

    Get PDF
    Breast cancer is one of the most common causes of death among women worldwide, and early detection plays a critical role in increasing the survival rate. Various imaging modalities, such as mammography, breast MRI, ultrasound and thermography, are used to detect breast cancer. Though mammography has seen considerable success in biomedical imaging, detecting suspicious areas remains a challenge: examination is manual, masses vary in shape, size and other morphological features, and mammography accuracy changes with the density of the breast. Furthermore, analysing many mammograms per day can be a tedious task for radiologists and practitioners. One of the main objectives of biomedical imaging is to provide radiologists and practitioners with tools that help them identify all suspicious regions in a given image. Computer-aided mass detection in mammograms can serve as a second-opinion tool that helps radiologists avoid oversight errors. The scientific community has made much progress on this topic, and several approaches have been proposed along the way. Following a bottom-up narrative, this paper surveys scientific methodologies and techniques to detect suspicious regions in mammograms, spanning from methods based on low-level image features to the most recent novelties in AI-based approaches. Both theoretical and practical grounds are provided across the paper's sections to highlight the pros and cons of different methodologies. The paper's main scope is to let readers embark on a journey through a fully comprehensive description of techniques, strategies and datasets on the topic.

    Mammography

    Get PDF
    In this volume, the topics span a variety of content: the basics of mammography systems, optimization of screening mammography with reference to evidence-based research, new technologies of image acquisition and their surrounding systems, and case reports with up-to-date multimodality images of breast cancer. Mammography has lagged in the transition to digital imaging systems because of the high resolution necessary for diagnosis. In the past ten years, however, technical improvements have resolved these difficulties and enabled new diagnostic systems. We hope that readers will learn the essentials of mammography and look forward to the new technologies. We express our sincere gratitude and appreciation to all the co-authors who have contributed their work to this volume.

    Enhancing Breast Cancer Prediction Using Unlabeled Data

    Get PDF
    This thesis presents a deep learning (DL) approach for the automatic classification of invasive ductal carcinoma (IDC) tissue regions in whole-slide images (WSI) of breast cancer (BC) using unlabeled data. DL methods work similarly to the human brain, across several levels of interpretation, and have been shown to outperform traditional approaches on the most complex problems, such as image classification and object detection. However, DL requires a large set of labeled data, which is difficult to obtain, especially in the medical field, as neither hospitals nor patients are willing to reveal such sensitive information. Moreover, machine learning (ML) systems are achieving better performance at the cost of becoming increasingly complex; as a result they become less interpretable, which causes distrust among users. Model interpretability is a way to enhance trust in a system. 
It is a highly desirable property, especially with the pervasive adoption of ML-based models in critical domains such as the medical field. Medical diagnoses cannot be followed blindly, as doing so may result in harm to the patient. IDC is one of the most common and aggressive subtypes of breast cancer, accounting for nearly 80% of all cases. Assessment of the disease is a very time-consuming and challenging task for pathologists, as it involves scanning large swaths of benign regions to identify areas of malignancy. Meanwhile, accurate delineation of IDC in WSI is crucial for grading cancer aggressiveness. In this study, a semi-supervised learning (SSL) scheme is developed using a deep convolutional neural network (CNN) for IDC diagnosis. The proposed framework first augments a small set of labeled data with synthetic medical images generated by a generative adversarial network (GAN). Features are then extracted with a network pre-trained on a larger dataset, and a data-labeling algorithm labels a much broader set of unlabeled data. After feeding the newly labeled set into the proposed CNN model, acceptable performance is achieved: an AUC of 0.86 and an F-measure of 0.77. Moreover, the proposed interpretability techniques produce explanations for the medical predictions and build trust in the presented CNN. The study demonstrates that a better understanding of the CNN's decisions can be enabled by visualizing the areas most important for a particular prediction and by finding the elements behind the network's IDC and non-IDC decisions.
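The data-labeling step in the semi-supervised pipeline above can be illustrated with a simple nearest-centroid pseudo-labeler in feature space. This is a hedged stand-in: the thesis's actual labeling algorithm is not specified here, and the function name and toy features are illustrative.

```python
import numpy as np

def pseudo_label(labeled_feats, labels, unlabeled_feats):
    """Assign each unlabeled feature vector the class of its nearest
    labeled-class centroid (one simple stand-in for the labeling step that
    expands the training set before the CNN is trained)."""
    classes = np.unique(labels)
    centroids = np.stack([labeled_feats[labels == c].mean(axis=0)
                          for c in classes])
    # Pairwise distances: unlabeled samples (rows) vs class centroids (cols).
    d = np.linalg.norm(unlabeled_feats[:, None, :] - centroids[None, :, :],
                       axis=2)
    return classes[np.argmin(d, axis=1)]
```

In the thesis's setting the features would come from a pre-trained network applied to GAN-augmented data; the pseudo-labeled set is then fed to the CNN classifier.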
