    Bag-of-Colors for Biomedical Document Image Classification

    The number of biomedical publications has increased markedly in the last 30 years. Clinicians and medical researchers regularly have unmet information needs, but finding publications relevant to a clinical situation requires more time than is usually available. The techniques described in this article classify images from the biomedical open access literature into categories, which can potentially reduce search time. Only the visual information of the images is used, based on a benchmark database created for the ImageCLEF 2011 image classification and image retrieval task. We evaluate in particular the importance of color in addition to the frequently used texture and grey level features. Results show that bags-of-colors in combination with the Scale Invariant Feature Transform (SIFT) provide an image representation that improves classification quality. Accuracy improved from 69.75% for the best system in ImageCLEF 2011 using only visual information to 72.5% for the system described in this paper. The results highlight the importance of color for the classification of biomedical images.
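
    A minimal sketch of such a combined representation, assuming OpenCV >= 4.4 (for cv2.SIFT_create) and scikit-learn; the palette and vocabulary sizes below are illustrative choices, not the paper's values:

        # Bag-of-colors plus SIFT bag-of-visual-words, both as normalized
        # histograms that are concatenated into one image descriptor.
        import cv2
        import numpy as np
        from sklearn.cluster import KMeans

        def bag_of_colors(image_bgr, palette):
            """Histogram of nearest-palette-color assignments over all pixels."""
            pixels = image_bgr.reshape(-1, 3).astype(np.float32)
            labels = palette.predict(pixels)
            hist = np.bincount(labels, minlength=palette.n_clusters).astype(np.float32)
            return hist / max(hist.sum(), 1.0)

        def bag_of_sift(image_bgr, vocabulary):
            """Histogram of SIFT descriptors quantized against a visual vocabulary."""
            gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
            _, desc = cv2.SIFT_create().detectAndCompute(gray, None)
            if desc is None:
                return np.zeros(vocabulary.n_clusters, dtype=np.float32)
            words = vocabulary.predict(desc.astype(np.float32))
            hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(np.float32)
            return hist / max(hist.sum(), 1.0)

        # The palette and vocabulary are fit on training data, e.g.:
        #   palette = KMeans(n_clusters=64).fit(training_pixels)
        #   vocabulary = KMeans(n_clusters=500).fit(training_sift_descriptors)
        # An image is then represented as
        #   np.concatenate([bag_of_colors(img, palette), bag_of_sift(img, vocabulary)])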

    Overview of the ImageCLEF 2015 medical classification task

    This article describes the ImageCLEF 2015 Medical Classification task. The task contains several subtasks that all use a dataset of figures from the biomedical open access literature (PubMed Central). Compound figures, which are frequent in the literature, are particularly targeted. For more detailed information analysis and retrieval it is important to extract targeted information from the compound figures. The proposed tasks include compound figure detection (separating compound figures from other figures), multi-label classification (defining all subtypes present), figure separation (finding the boundaries of the subfigures) and modality classification (detecting the figure type of each subfigure). The tasks are described along with the participation of international research groups. The results of the participants are then described and analysed to identify promising techniques.

    Unsupervised learning for concept detection in medical images: a comparative analysis

    As digital medical imaging becomes more prevalent and archives increase in size, representation learning presents an interesting opportunity for enhanced medical decision support systems. On the other hand, medical imaging data is often scarce and short on annotations. In this paper, we present an assessment of unsupervised feature learning approaches for images in the biomedical literature, which can be applied to automatic biomedical concept detection. Six unsupervised representation learning methods were built, including traditional bags of visual words, autoencoders, and generative adversarial networks. Each model was trained, and its feature space evaluated, using images from the ImageCLEF 2017 concept detection task. We conclude that it is possible to obtain more powerful representations with modern deep learning approaches than with previously popular computer vision methods. Although generative adversarial networks can provide good results, they are harder to train successfully on highly varied data sets. Semi-supervised learning, as well as the use of these methods in medical information retrieval problems, are the next steps to be strongly considered.
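
    A minimal sketch of one of the unsupervised baselines mentioned above, a convolutional autoencoder whose bottleneck activations serve as the learned representation; PyTorch is assumed, and the input size (1x64x64) and layer widths are illustrative:

        # Convolutional autoencoder: the encoder output ("code") is the
        # feature vector later evaluated on the concept detection task.
        import torch
        import torch.nn as nn

        class ConvAutoencoder(nn.Module):
            def __init__(self, code_dim=128):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
                    nn.Flatten(),
                    nn.Linear(32 * 16 * 16, code_dim),                     # bottleneck
                )
                self.decoder = nn.Sequential(
                    nn.Linear(code_dim, 32 * 16 * 16), nn.ReLU(),
                    nn.Unflatten(1, (32, 16, 16)),
                    nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
                )

            def forward(self, x):
                code = self.encoder(x)
                return self.decoder(code), code

        # Training minimizes reconstruction error (e.g. nn.MSELoss()) between
        # input and reconstruction; the codes are then used as image features.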

    Computer Vision Algorithms For An Automated Harvester

    Image classification and segmentation are the two most important parts of the 3D vision system of a harvesting robot. Regarding the first part, the vision system aids in the real-time identification of contaminated areas of the farm based on the damage identified using the robot's camera. To solve the identification problem, a fast and non-destructive method, the Support Vector Machine (SVM), is applied to improve the recognition accuracy and efficiency of the robot. Initially, a median filter is applied to remove the inherent noise in the color image. SIFT features of the image are then extracted and computed to form a vector, which is then quantized into visual words. Finally, a histogram of the frequency of each element in the visual vocabulary is created and fed into an SVM classifier, which categorizes the mushrooms as either class one or class two. Our preliminary results for image classification were promising, and the experiments carried out on the data set highlight fast computation time and a high rate of accuracy, reaching over 90% with this method, which can be employed in real-life scenarios. As pertains to image segmentation, on the other hand, the vision system aids in real-time identification of mushrooms, but a stiff challenge is encountered in robot vision as the irregularly spaced mushrooms of uneven sizes often occlude each other due to the nature of mushroom growth in the growing environment. We address the issue of mushroom segmentation with a multi-step process: the images are first segmented in HSV color space to locate the area of interest, and then both the image gradient information from the area of interest and Hough transform methods are used to locate the center position and perimeter of each individual mushroom in the XY plane. Afterwards, the depth map information given by the Microsoft Kinect is employed to estimate the Z-depth of each individual mushroom, which is then used to measure the distance between the robot end effector and the center coordinate of each individual mushroom. We tested this algorithm under various environmental conditions, and our segmentation results indicate that this method provides sufficient computational speed and accuracy.
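
    A minimal sketch of the segmentation steps described above, assuming OpenCV and a Kinect depth map registered to the color image; the HSV bounds and Hough parameters are placeholders rather than the thesis's tuned values:

        # HSV thresholding -> Hough circles -> depth lookup for each mushroom.
        import cv2
        import numpy as np

        def locate_mushrooms(image_bgr, depth_map):
            # 1. Segment the area of interest in HSV space (bounds illustrative).
            hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, (0, 0, 150), (180, 60, 255))  # pale caps
            roi = cv2.bitwise_and(image_bgr, image_bgr, mask=mask)

            # 2. Use gradient information via the Hough transform to find the
            #    center position and perimeter of each cap in the XY plane.
            gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
            gray = cv2.medianBlur(gray, 5)
            circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2,
                                       minDist=30, param1=100, param2=30,
                                       minRadius=10, maxRadius=80)
            if circles is None:
                return []

            # 3. Read the depth map at each center to estimate the Z-depth,
            #    used for the end-effector-to-mushroom distance.
            detections = []
            for x, y, r in np.round(circles[0]).astype(int):
                z = float(depth_map[y, x])
                detections.append((x, y, z, r))
            return detections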

    Deep learning for quantitative motion tracking based on optical coherence tomography

    Optical coherence tomography (OCT) is a cross-sectional imaging modality based on low coherence light interferometry. OCT has been widely used in diagnostic ophthalmology and has found applications in other biomedical fields such as cancer detection and surgical guidance. In the Laboratory of Biophotonics Imaging and Sensing at New Jersey Institute of Technology, we developed a unique needle OCT imager based on a single fiber probe for breast cancer imaging. The needle OCT imager, with a sub-millimeter diameter, can be inserted into tissue for minimally invasive in situ breast imaging. OCT imaging provides spatial resolution similar to histology and has the potential to become a device for virtual biopsy enabling fast and accurate breast cancer diagnosis, because abnormal and normal breast tissue have different characteristics in OCT images. The morphological features of an OCT image are related to the microscopic structure of the tissue, and the speckle pattern is related to the cellular/subcellular optical properties of the tissue. In addition, the depth attenuation of the OCT signal depends on the scattering and absorption properties of the tissue. However, these image features occur at different spatial scales, and it is challenging for a human observer to recognize them effectively for tissue classification. In particular, our needle OCT imager, given its simplicity and small form factor, does not have a mechanical scanner for beam steering and relies on manual scanning to generate 2D images. The nonconstant translation speed of the probe in manual scanning inevitably introduces distortion artifacts in OCT imaging, which further complicates the tissue characterization task. OCT images of tissue samples provide comprehensive information about the morphology of normal and unhealthy tissue. Image analysis of tissue morphology can help cancer researchers develop a better understanding of cancer biology. Classifying tissue images and recovering distorted OCT images are two common tasks in tissue image analysis. In this master's thesis project, a novel deep learning approach is investigated to extract the beam scanning speed from different samples. Furthermore, a novel technique is investigated and tested to recover distorted OCT images. The long-term goal of this study is to achieve robust tissue classification for breast cancer diagnosis based on a simple single fiber OCT instrument. The deep learning network utilized in this study is based on a Convolutional Neural Network (CNN) and a Naïve Bayes classifier. For image retrieval, we used algorithms that extract, represent and match common features between images. The CNN achieved an accuracy of 97% in tissue type and scanning speed classification, while the image retrieval algorithms recovered images of very high quality compared to the reference images.
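
    A minimal sketch of a patch-level classifier in the spirit of the CNN described above, assuming PyTorch; the patch size (1x64x64), layer widths and class count are illustrative, not the thesis's architecture:

        # Patch-level CNN for tissue type / scanning speed classification.
        import torch
        import torch.nn as nn

        class OCTPatchClassifier(nn.Module):
            def __init__(self, num_classes=5):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 -> 32
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 32 -> 16
                )
                self.classifier = nn.Linear(32 * 16 * 16, num_classes)

            def forward(self, x):  # x: (batch, 1, 64, 64) B-scan patches
                return self.classifier(torch.flatten(self.features(x), 1))

        # Per-patch class probabilities can then be combined over a whole scan,
        # e.g. by summing log-probabilities across patches in a naive Bayes
        # fashion, matching the CNN + Naive Bayes pipeline described above.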

    Pixel N-grams for Mammographic Image Classification

    X-ray screening for breast cancer is an important public health initiative in the management of a leading cause of death for women. However, screening is expensive if mammograms must be assessed manually by radiologists. Moreover, manual screening is subject to perception and interpretation errors. Computer aided detection/diagnosis (CAD) systems can help radiologists, as computer algorithms are good at performing image analysis consistently and repetitively. However, image features that enhance CAD classification accuracy are necessary for CAD systems to be deployed. Many CAD systems have been developed, but their specificity and sensitivity are not high, in part because of the challenges inherent in identifying effective features to be extracted from raw images. Existing feature extraction techniques can be grouped under three main approaches: statistical, spectral and structural. Statistical and spectral techniques provide global image features but often fail to distinguish between local pattern variations within an image. The structural approach, on the other hand, has given rise to the Bag-of-Visual-Words (BoVW) model, which captures local variations in an image but typically does not consider spatial relationships between the visual "words". Moreover, statistical features and features based on BoVW models are computationally very expensive. Similarly, structural feature computation methods other than BoVW are also computationally expensive and depend strongly on algorithms that can segment an image to localize a region of interest likely to contain the tumour. Thus, classification algorithms using structural features require high-resource computers. For a radiologist to classify lesions on low-resource computers such as iPads, tablets and mobile phones in a remote location, it is necessary to develop computationally inexpensive classification algorithms. Therefore, the overarching aim of this research is to discover a feature extraction/image representation model which can be used to classify mammographic lesions with high accuracy, sensitivity and specificity along with low computational cost. For this purpose a novel feature extraction technique called 'Pixel N-grams' is proposed. The Pixel N-grams approach is inspired by the character N-gram concept in text categorization. Here, N consecutive pixel intensities are considered in a particular direction. The image is then represented by a histogram of occurrences of the Pixel N-grams in the image. Shape and texture of mammographic lesions play an important role in determining the malignancy of the lesion. It was hypothesized that Pixel N-grams would be able to distinguish between various textures and shapes. Experiments carried out on benchmark texture databases and a database of basic binary shapes demonstrated that the hypothesis was correct. Moreover, Pixel N-grams were able to distinguish between various shapes irrespective of the size and location of the shape in an image. The efficacy of the Pixel N-gram technique was tested on a mammographic database of primary digital mammograms sourced from a radiological facility in Australia (LakeImaging Pty Ltd) and on secondary digital mammograms (the benchmark miniMIAS database). A senior radiologist from LakeImaging provided real-time de-identified high resolution mammogram images with annotated regions of interest (used as ground truth), along with valuable radiological diagnostic knowledge.
    Two types of classification were performed on these two datasets: normal/abnormal classification, useful for automated screening, and circumscribed/spiculated/normal classification, useful for automated diagnosis of breast cancer. The classification results on both mammography datasets using Pixel N-grams were promising. Classification performance (F-score, sensitivity and specificity) using the Pixel N-gram technique was observed to be significantly better than existing techniques such as intensity histograms and co-occurrence matrix based features, and comparable with BoVW features. Further, Pixel N-gram features are computationally less complex than both co-occurrence matrix based features and BoVW features, paving the way for mammogram classification on low-resource computers. Although the Pixel N-gram technique was designed for mammographic classification, it could be applied to other image classification applications such as diabetic retinopathy, histopathological image classification, lung tumour detection using CT images, brain tumour detection using MRI images, wound image classification and tooth decay classification using dentistry x-ray images. Further, texture and shape classification is also useful for the classification of real-world images outside the medical domain. Therefore, the Pixel N-gram technique could be extended to applications such as classification of satellite imagery and other object detection tasks.
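
    A minimal sketch of the Pixel N-gram representation, assuming intensities are first quantized to a small number of levels (the exact quantization used in the thesis is not specified here); every run of N consecutive quantized intensities along a direction is counted into a histogram:

        # Pixel N-grams: histogram over all possible N-grams of quantized
        # pixel intensities taken along one direction of the image.
        import numpy as np

        def pixel_ngrams(image, n=3, levels=8, direction="horizontal"):
            # Quantize 8-bit intensities to `levels` bins.
            q = (image.astype(np.int64) * levels) // 256
            if direction == "vertical":
                q = q.T

            # Encode each N-gram as one integer in base `levels` and count it.
            hist = np.zeros(levels ** n, dtype=np.float64)
            for row in q:
                for i in range(len(row) - n + 1):
                    code = 0
                    for v in row[i:i + n]:
                        code = code * levels + int(v)
                    hist[code] += 1
            return hist / max(hist.sum(), 1.0)

        # A grayscale mammogram patch is then represented by this normalized
        # histogram (optionally concatenated over several directions) and fed
        # to a conventional classifier.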

    Towards the improvement of textual anatomy image classification using image local features


    Machine Learning Methods for Medical and Biological Image Computing

    Medical and biological imaging technologies provide valuable visualization of the structure and function of an organ, from the level of individual molecules to the whole object. The brain is the most complex organ in the body, and it increasingly attracts intense research attention with the rapid development of medical and biological imaging technologies. The massive amount of high-dimensional brain imaging data being generated makes the design of computational methods for efficient analysis of those images highly desirable. Current computational methods using hand-crafted features do not scale with the increasing number of brain images, hindering the pace of scientific discovery in neuroscience. In this thesis, I propose computational methods using high-level features for automated analysis of brain images at different levels. At the brain function level, I develop a deep learning based framework for completing and integrating multi-modality neuroimaging data, which increases the diagnostic accuracy for Alzheimer's disease. At the cellular level, I propose to use three-dimensional convolutional neural networks (CNNs) for segmenting volumetric neuronal images, which improves the performance of digital reconstruction of neuron structures. I design a novel CNN architecture such that model training and test-image prediction can be implemented in an end-to-end manner. At the molecular level, I build a voxel CNN classifier to capture discriminative features of the input along three spatial dimensions, which facilitates the identification of secondary structures of proteins from electron microscopy images. In order to classify genes specifically expressed in different brain cell types, I propose to use invariant image feature descriptors to capture local gene expression information from cellular-resolution in situ hybridization images. I build image-level representations by applying regularized learning and vector quantization to the generated image descriptors. The computational methods developed in this dissertation are evaluated using images from medical and biological experiments in comparison with baseline methods. Experimental results demonstrate that the developed representations, formulations, and algorithms are effective and efficient in learning from brain imaging data.
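
    A minimal sketch of the cellular-level idea, a 3D CNN that maps a volume to per-voxel segmentation logits in a single end-to-end pass; PyTorch is assumed, and the channel counts are illustrative rather than the thesis's architecture:

        # Voxel-wise 3D segmentation network: input and output share the same
        # spatial size, so whole-volume prediction runs in one forward pass.
        import torch
        import torch.nn as nn

        class Voxel3DSegmenter(nn.Module):
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv3d(16, 1, 1),  # per-voxel foreground logit
                )

            def forward(self, volume):  # volume: (batch, 1, D, H, W)
                return self.net(volume)

        # Trained with a voxel-wise loss (e.g. nn.BCEWithLogitsLoss()), the
        # model maps an input volume directly to a segmentation map, the
        # end-to-end property highlighted above.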

    An Approach Of Features Extraction And Heatmaps Generation Based Upon Cnns And 3D Object Models

    The rapid advancements in artificial intelligence have enabled recent progress in self-driving vehicles. However, the dependence on 3D object models and their annotations, collected and owned by individual companies, has become a major problem for the development of new algorithms. This thesis proposes an approach that directly uses graphics models created from open-source datasets as the virtual representation of real-world objects. The approach uses machine learning techniques to extract 3D feature points and to create annotations from graphics models for the recognition of dynamic objects, such as cars, and for the verification of stationary and variable objects, such as buildings and trees. Moreover, it generates heat maps for the elimination of stationary/variable objects in real-time images before working on the recognition of dynamic objects. The proposed approach helps to bridge the gap between the virtual and physical worlds and to facilitate the development of new algorithms for self-driving vehicles.
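
    A minimal sketch of the heat-map generation step, assuming the 3D model points of stationary/variable objects have already been projected to pixel coordinates elsewhere; the Gaussian width and image size are illustrative choices:

        # Splat a 2D Gaussian at each projected object location; high-heat
        # regions mark stationary/variable objects to suppress before the
        # dynamic-object recognizer runs on the live image.
        import numpy as np

        def heatmap_from_points(points_xy, height, width, sigma=12.0):
            ys, xs = np.mgrid[0:height, 0:width]
            heat = np.zeros((height, width), dtype=np.float32)
            for (px, py) in points_xy:
                heat += np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2 * sigma ** 2))
            return np.clip(heat, 0.0, 1.0)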