
    Visual character N-grams for classification and retrieval of radiological images

    Diagnostic radiology struggles to maintain high interpretation accuracy. Retrieval of past similar cases would help the inexperienced radiologist in the interpretation process. The character n-gram model has been effective for text retrieval in languages such as Chinese, where there are no clear word boundaries. We propose a visual character n-gram model for representing images for classification and retrieval purposes. Regions of interest in mammographic images are represented with character n-gram features, which are then used as input to a back-propagation neural network that classifies regions into normal and abnormal categories. Experiments on the miniMIAS database show that character n-gram features are useful for this classification, with promising accuracy (83.33%) observed for fatty background tissue, warranting further investigation. We argue that classifying regions of interest would reduce the number of comparisons needed to find similar images in the database, and hence the time required to retrieve past similar cases.
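    The abstract does not give implementation details; below is a minimal sketch of how such visual character n-gram features might be computed from a region of interest and fed to a back-propagation network. The quantization of intensities into a symbol "alphabet", the n-gram length, and the network size are illustrative assumptions, not the authors' settings.

```python
# Illustrative sketch only: quantize ROI pixels into a small symbol "alphabet",
# count character n-grams along rows, and train a back-propagation classifier.
# Alphabet size, n, and the network shape are assumptions, not the paper's settings.
import numpy as np
from itertools import product
from sklearn.neural_network import MLPClassifier  # trained with back-propagation

def char_ngram_features(roi, n=2, levels=8):
    """Normalized histogram of n-grams of quantized pixel intensities (row-wise)."""
    symbols = np.clip((roi.astype(float) / 256.0 * levels).astype(int), 0, levels - 1)
    index = {g: i for i, g in enumerate(product(range(levels), repeat=n))}
    hist = np.zeros(len(index))
    for row in symbols:
        for i in range(len(row) - n + 1):
            hist[index[tuple(row[i:i + n])]] += 1
    return hist / max(hist.sum(), 1)

# Hypothetical usage with pre-extracted ROIs and normal/abnormal labels.
rois = [np.random.randint(0, 256, (64, 64)) for _ in range(20)]
labels = np.random.randint(0, 2, 20)               # 0 = normal, 1 = abnormal
X = np.array([char_ngram_features(r) for r in rois])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000).fit(X, labels)
```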

    Desarrollo de un framework para la recuperación de imágenes a partir del ingreso de dibujos a mano alzada (Development of a framework for image retrieval from freehand sketch input)

    Today, people constantly demand information that helps them carry out all kinds of tasks. Web search engines emerged to meet this need and, for a time, allowed information to be obtained effectively. However, with the growing volume of multimedia content such as images and videos, these search engines saw their usefulness severely limited, since they cannot describe in words the abstract objects present in such images. Content-based image retrieval systems emerged as an alternative, and their use extends even to more complex tasks such as information retrieval in videos. These visual information retrieval systems span two well-known areas: image similarity and freehand sketches. Search by image similarity gives a closer approximation to the images the user wants, but it requires the user to already have an example image, and searching for an image when one is already available makes little sense. Freehand drawing, on the other hand, is an innate means of representing knowledge, used since ancient times and employed by people from an early age. So why not use freehand sketches as an input to an image search engine? It is reasonable to expect that many of the problems of traditional search engines would be solved by using strokes as queries; however, developing this alternative brings new and interesting challenges.

    This final-year project develops a framework for image retrieval that takes freehand sketches as input. To that end, an algorithm is created that prioritizes effective results by using techniques from artificial intelligence, computer vision, and image indexing systems. This document is divided into 7 chapters: the first presents the context of the thesis project, its objectives, its results, and the tools used to obtain them; the second defines the basic and technical concepts needed to follow the development of the framework; the third surveys the most important work to date in the field of image retrieval; the fourth describes in detail how the representation of images was devised following the bag-of-features methodology; the fifth focuses on the design and implementation of the process for comparing and retrieving images from the user's freehand strokes; the sixth analyzes and validates the results obtained; finally, the seventh presents the conclusions and recommendations drawn throughout the thesis project.
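    The abstract describes representing images with a bag-of-features model and retrieving them from freehand strokes; a minimal sketch of that general pipeline is shown below, assuming ORB descriptors, a k-means visual vocabulary, and cosine-similarity ranking. These component choices and the file names are illustrative assumptions, not the thesis design.

```python
# Minimal bag-of-features retrieval sketch (illustrative assumptions only):
# ORB descriptors, a k-means visual vocabulary, and cosine-similarity ranking.
import cv2
import numpy as np
from sklearn.cluster import KMeans

orb = cv2.ORB_create()

def descriptors(img_gray):
    _, desc = orb.detectAndCompute(img_gray, None)
    return desc if desc is not None else np.empty((0, 32), np.uint8)

def bof_histogram(img_gray, kmeans):
    """Normalized histogram of visual-word occurrences for an image or sketch."""
    desc = descriptors(img_gray)
    hist = np.zeros(kmeans.n_clusters)
    if len(desc):
        for w in kmeans.predict(desc.astype(np.float32)):
            hist[w] += 1
    return hist / max(hist.sum(), 1)

# Build the visual vocabulary from a hypothetical image collection.
images = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in ["a.png", "b.png"]]
all_desc = np.vstack([descriptors(im) for im in images]).astype(np.float32)
kmeans = KMeans(n_clusters=50, n_init=10).fit(all_desc)
db = [bof_histogram(im, kmeans) for im in images]

def retrieve(sketch_gray, top_k=5):
    """Rank database images by cosine similarity to the query sketch."""
    q = bof_histogram(sketch_gray, kmeans)
    sims = [float(q @ h) / (np.linalg.norm(q) * np.linalg.norm(h) + 1e-9) for h in db]
    return sorted(range(len(db)), key=lambda i: -sims[i])[:top_k]
```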

    Pixel N-grams for Mammographic Image Classification

    X-ray screening for breast cancer is an important public health initiative in the management of a leading cause of death for women. However, screening is expensive if mammograms must be assessed manually by radiologists, and manual screening is subject to perception and interpretation errors. Computer-aided detection/diagnosis (CAD) systems can help radiologists because computer algorithms perform image analysis consistently and repetitively. However, for CAD systems to be deployed, image features that enhance classification accuracy must be identified. Many CAD systems have been developed, but their specificity and sensitivity are not high, in part because of the challenge of identifying effective features to extract from raw images. Existing feature extraction techniques can be grouped under three main approaches: statistical, spectral and structural. Statistical and spectral techniques provide global image features but often fail to distinguish between local pattern variations within an image. The structural approach, on the other hand, has given rise to the Bag-of-Visual-Words (BoVW) model, which captures local variations in an image but typically does not consider spatial relationships between the visual “words”. Moreover, statistical features and features based on BoVW models are computationally very expensive, and structural feature computation methods other than BoVW are also computationally expensive and strongly dependent upon algorithms that can segment an image to localize a region of interest likely to contain the tumour. Classification algorithms using structural features therefore require high-resource computers. For a radiologist to classify lesions on low-resource computers such as iPads, tablets and mobile phones in a remote location, computationally inexpensive classification algorithms are needed.

    Therefore, the overarching aim of this research is to discover a feature extraction/image representation model that can be used to classify mammographic lesions with high accuracy, sensitivity and specificity at low computational cost. For this purpose, a novel feature extraction technique called ‘Pixel N-grams’ is proposed, inspired by the character N-gram concept in text categorization. Here, sequences of N consecutive pixel intensities are considered in a particular direction, and the image is then represented by a histogram of the occurrences of these Pixel N-grams. The shape and texture of mammographic lesions play an important role in determining malignancy, and it was hypothesized that Pixel N-grams would be able to distinguish between various textures and shapes. Experiments carried out on benchmark texture databases and a database of basic binary shapes confirmed this hypothesis; moreover, Pixel N-grams were able to distinguish between shapes irrespective of the size and location of the shape within an image. The efficacy of the Pixel N-gram technique was tested on a database of primary digital mammograms sourced from a radiological facility in Australia (LakeImaging Pty Ltd) and on secondary digital mammograms (the benchmark miniMIAS database). A senior radiologist from LakeImaging provided real-time, de-identified, high-resolution mammogram images with annotated regions of interest (used as ground truth), along with valuable radiological diagnostic knowledge.
    Two types of classification were performed on these two datasets: normal/abnormal classification, useful for automated screening, and circumscribed/spiculated/normal classification, useful for automated diagnosis of breast cancer. The classification results on both mammography datasets using Pixel N-grams were promising. Classification performance (F-score, sensitivity and specificity) using the Pixel N-gram technique was significantly better than existing techniques such as the intensity histogram and co-occurrence-matrix-based features, and comparable with BoVW features. Further, Pixel N-gram features were found to be computationally less complex than both co-occurrence-matrix-based features and BoVW features, paving the way for mammogram classification on low-resource computers. Although the Pixel N-gram technique was designed for mammographic classification, it could be applied to other image classification applications such as diabetic retinopathy, histopathological image classification, lung tumour detection using CT images, brain tumour detection using MRI images, wound image classification, and tooth decay classification using dentistry x-ray images. Further, texture and shape classification is also useful for classifying real-world images outside the medical domain, so the Pixel N-gram technique could be extended to applications such as classification of satellite imagery and other object detection tasks.
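    As a rough illustration of the Pixel N-gram representation described above, the following sketch counts sequences of N consecutive quantized pixel intensities along one scanning direction and builds a histogram of their occurrences. The quantization levels, the value of N, and the direction handling are assumptions made for illustration, not the thesis implementation.

```python
# Illustrative Pixel N-gram sketch: count sequences of N consecutive (quantized)
# pixel intensities along one direction and build a histogram of occurrences.
# Quantization levels, N, and the direction set are assumed, not the thesis values.
import numpy as np
from collections import Counter

def pixel_ngram_histogram(image, n=3, levels=16, direction="horizontal"):
    q = np.clip((image.astype(float) / 256.0 * levels).astype(int), 0, levels - 1)
    if direction == "vertical":          # scan columns instead of rows
        q = q.T
    counts = Counter()
    for row in q:
        for i in range(len(row) - n + 1):
            counts[tuple(row[i:i + n])] += 1
    # Dense histogram over all possible n-grams of the quantized alphabet.
    hist = np.zeros(levels ** n)
    for gram, c in counts.items():
        idx = 0
        for s in gram:                   # treat the n-gram as a base-`levels` number
            idx = idx * levels + int(s)
        hist[idx] = c
    total = hist.sum()
    return hist / total if total else hist

# Hypothetical usage on a single mammographic region of interest.
roi = np.random.randint(0, 256, (128, 128))
features = pixel_ngram_histogram(roi, n=3, levels=16)
```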