
    Automatic Tuberculosis Detection Using Chest X-ray Analysis With Position Enhanced Structural Information

    For tuberculosis (TB) detection, besides more expensive diagnostic solutions such as culture or sputum smear analysis, one can consider the automatic analysis of the chest X-ray (CXR). This could mimic the lung region reading performed by the radiologist and provide an inexpensive solution for analyzing and diagnosing pulmonary abnormalities such as TB, a disease which often co-occurs with HIV. Such software-based pulmonary screening can be a reliable and affordable alternative for rural populations in different parts of the world, such as the Indian subcontinent or Africa. The fully automatic system we propose processes the incoming CXR image by applying image processing techniques to detect the region of interest (ROI), followed by a computationally cheap feature extraction involving edge detection using the Laplacian of Gaussian, which we enrich by counting the local distribution of intensities. The choice to "zoom in" on the ROI and look for abnormalities locally is motivated by the fact that some pulmonary abnormalities are localized in specific regions of the lungs. The classifiers then decide on the normal or abnormal nature of each lung X-ray. Our goal is to find a single, simple feature descriptor, instead of the combinations of several descriptors proposed and promoted in recent years' literature, which can properly and simply describe the different pathological alterations in the lungs. Our experiments report results on two publicly available benchmark data collections (https://ceb.nlm.nih.gov/repos/chestImages.php), namely the Shenzhen and the Montgomery collections. For performance evaluation, measures such as the area under the curve (AUC) and accuracy (ACC) were considered, achieving AUC = 0.81 (ACC = 83.33%) and AUC = 0.96 (ACC = 96.35%) for the Montgomery and Shenzhen collections, respectively. Several comparisons with other state-of-the-art systems reported recently in the field are also provided.
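    The feature extraction described in the abstract (Laplacian-of-Gaussian edge detection enriched with local intensity counts over sub-regions) might be sketched roughly as follows; the block grid, sigma, and bin count are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np
from scipy import ndimage

def log_block_features(img, n_blocks=4, bins=8):
    """Sketch of a cheap descriptor: per-block LoG edge energy plus a
    local intensity histogram (hypothetical parameters, not the paper's)."""
    edges = ndimage.gaussian_laplace(img.astype(float), sigma=2.0)
    h, w = img.shape
    feats = []
    for i in range(n_blocks):
        for j in range(n_blocks):
            rs, re = i * h // n_blocks, (i + 1) * h // n_blocks
            cs, ce = j * w // n_blocks, (j + 1) * w // n_blocks
            block, eblock = img[rs:re, cs:ce], edges[rs:re, cs:ce]
            # Local distribution of intensities in this sub-region
            hist, _ = np.histogram(block, bins=bins, range=(0, 255), density=True)
            feats.append(np.concatenate([[np.abs(eblock).mean()], hist]))
    return np.concatenate(feats)

img = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in CXR ROI
vec = log_block_features(img)
print(vec.shape)  # 4*4 blocks * (1 edge stat + 8 bins) = 144 values
```

    The resulting fixed-length vector would then feed an ordinary binary classifier for the normal/abnormal decision.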

    Classifiers combination for recognition score improvement

    Contract report. In this report we describe two widely used combination methods, stacked generalization and the behavior knowledge space (BKS), used to improve the recognition scores obtained with the multi-layer perceptron and the support vector machine, respectively. By combining these two classifiers we achieved good results, considering the drawbacks that some classes were underrepresented in the database and, at the same time, the quality of the images was not sufficiently good. By introducing regrouping operations on the classes and rejection criteria on the final decision rules, we reached satisfactory recognition scores. We found that the stacked generalization method gives better results than the BKS method, which can be explained by the complexity of the final decision rules used in the two methods. As future work, we suggest introducing new classifiers to improve the combination process in the BKS method and refining the normalization process for the real images by adding more local information to the transformation process.
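    Stacked generalization over an MLP and an SVM, as described above, can be sketched with scikit-learn; the digits dataset and all hyperparameters here are illustrative placeholders, not the report's setup:

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Level-0 learners: an MLP and an SVM; the level-1 (meta) learner is
# trained on their cross-validated outputs, i.e. stacked generalization.
stack = StackingClassifier(
    estimators=[("mlp", MLPClassifier(max_iter=300, random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000))
stack.fit(Xtr, ytr)
acc = stack.score(Xte, yte)
print(round(acc, 3))
```

    A BKS combiner would instead index a lookup table by the tuple of level-0 decisions, which is why its final decision rule is coarser than the trained meta-classifier above.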

    Training and evaluation of the models for isolated character recognition

    Contract report. In this report we describe a special multi-layer perceptron which was trained on the image database in order to classify the test database. We used the same database to train an SVM for each class, and we then applied a normalization process so that the different results given by the SVMs on the test database could be compared, as they are not n-class classifiers. As the results obtained were not sufficiently satisfactory for our industrial partner, we proposed a combination scheme to make the decisions given by the different classifiers interact and to obtain a higher recognition score.
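    The per-class SVMs and the normalization they require might look like the following sketch, where a softmax over the decision margins stands in for whatever normalization the report actually used:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# One binary SVM per class; the raw margins of independently trained SVMs
# are not directly comparable, so normalize them jointly before the arg-max.
classes = np.unique(ytr)
svms = [LinearSVC(dual=False).fit(Xtr, ytr == c) for c in classes]
scores = np.column_stack([s.decision_function(Xte) for s in svms])
scores -= scores.max(axis=1, keepdims=True)            # numerical stability
probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
acc = (classes[probs.argmax(axis=1)] == yte).mean()
print(round(acc, 3))
```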

    Characterization and normalization of the image database

    Contract report. This report describes the characterization of the image database coming from our industrial partner, which contains characters in different sizes and orientations and, at the same time, written in different font types. To normalize the images we used the Goshtasby transform, which is a size- and rotation-invariant transform that works with shape matrices. In order to capture more local information from the images in the shapes, we proposed a modification of the transformation.
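    A minimal sketch of a shape-matrix description in the spirit of the Goshtasby transform follows; the polar grid resolution and the sampling scheme are our own simplifications, not the report's implementation:

```python
import numpy as np

def shape_matrix(mask, n_radii=8, n_angles=16):
    """Sample a binary shape on a polar grid centred at its centroid.
    Radii are normalized by the maximum radius (size invariance); rotation
    invariance comes from comparing matrices under circular shifts of the
    angle axis. Simplified sketch, not the exact Goshtasby formulation."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    rmax = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2).max()
    M = np.zeros((n_radii, n_angles), dtype=int)
    for i in range(n_radii):
        for j in range(n_angles):
            r = rmax * (i + 1) / n_radii
            a = 2 * np.pi * j / n_angles
            yi = int(round(cy + r * np.sin(a)))
            xi = int(round(cx + r * np.cos(a)))
            if 0 <= yi < mask.shape[0] and 0 <= xi < mask.shape[1]:
                M[i, j] = mask[yi, xi]
    return M

# A filled disc: every sample at the innermost radius falls inside the shape.
yy, xx = np.mgrid[:64, :64]
disc = ((yy - 32) ** 2 + (xx - 32) ** 2 <= 20 ** 2).astype(int)
M = shape_matrix(disc)
print(M.shape)
```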

    Structural Information Implant in a Context Based Segmentation-Free HMM Handwritten Word Recognition System for Latin and Bangla Script

    In this paper, an improvement of a 2D stochastic-model-based handwritten entity recognition system is described. To model the handwriting, considered as a two-dimensional signal, a context-based, segmentation-free Hidden Markov Model (HMM) recognition system was used. The baseline approach combines a Markov Random Field (MRF) and an HMM into a so-called Non-Symmetric Half-Plane Hidden Markov Model (NSHP-HMM). To improve the results of this baseline system, which operates on low-level pixel information alone, an extension of the NSHP-HMM is proposed. The mechanism extends the observations of the NSHP-HMM by implanting structural information into the system. At present, the accuracy of the system on the SRTP (Service de Recherche Technique de la Poste) French postal check database is 87.52%, while for handwritten Bangla city names it is 86.80%. The gain from using this structural information on the SRTP dataset is 1.57%.

    Looking at faces in the wild

    Recent advances in face detection (FD) and recognition (FR) technology may give the impression that the problem of face matching is essentially solved, e.g., via deep learning models using thousands of samples per face for training and validation on the available benchmark datasets. The human vision system seems to handle the face localization and matching problem differently from modern FR systems, since humans detect faces instantly even in the most cluttered environments and often require a single view of a face to reliably distinguish it from all others. This prompted us to take a biologically inspired look at building a cognitive architecture that uses artificial neural nets at the face detection stage and adopts a single image per person (SIPP) approach for face image matching.

    Deep Learning of 2-D Images Representing n-D Data in General Line Coordinates

    While knowledge discovery and n-D data visualization procedures are often efficient, loss of information, occlusion, and clutter continue to be a challenge. General Line Coordinates (GLC) is a fairly new technique for dealing with such artifacts. GLC-Linear, one of the GLC methods, allows n-D numerical data to be transformed losslessly into a visual representation as polylines. The method proposed in this paper uses these 2-D visual representations as input to Convolutional Neural Network (CNN) classifiers. The classification accuracies obtained are close to those obtained by other machine learning algorithms. The main benefit of the method is the possibility of using the lossless visualization of n-dimensional data for interpretation and explanation of the discovered relationships, in addition to classical classification using statistical learning strategies.
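    The core step of rendering an n-D point as a 2-D polyline image that a CNN could consume might be sketched as follows; this is a simplified stand-in for GLC-Linear, and the axis placement and image size are arbitrary choices:

```python
import numpy as np

def polyline_image(x, size=32):
    """Render an n-D point as a polyline raster, one vertex per dimension.
    Simplified illustration, not the paper's exact GLC-Linear mapping."""
    img = np.zeros((size, size), dtype=np.uint8)
    n = len(x)
    xmin, xmax = min(x), max(x)
    # One vertex per dimension: column encodes the axis index, row the value.
    pts = [(int(i * (size - 1) / (n - 1)),
            int((v - xmin) / (xmax - xmin + 1e-9) * (size - 1)))
           for i, v in enumerate(x)]
    # Rasterize each segment by dense linear interpolation.
    for (c0, r0), (c1, r1) in zip(pts, pts[1:]):
        steps = max(abs(c1 - c0), abs(r1 - r0), 1)
        for t in np.linspace(0.0, 1.0, steps + 1):
            img[int(round(r0 + t * (r1 - r0))),
                int(round(c0 + t * (c1 - c0)))] = 255
    return img

img = polyline_image([0.1, 0.9, 0.4, 0.7, 0.2])
print(img.shape, img.max())
```

    Stacking such images per sample yields an image dataset on which any standard CNN classifier can be trained.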

    Cultural Conversations Spring 2019

    Cultural storytelling event sponsored by the Ellensburg Public Library, Brooks Library, and the Office of International Studies and Programs. Held in the Hal Holmes Community Center. The speaker for this event is CWU Professor Szilard Vajda.

    Towards Feature Learning for HMM-based Offline Handwriting Recognition

    Statistical modelling techniques for automatic reading systems rely substantially on the availability of compact and meaningful feature representations. State-of-the-art feature extraction for offline handwriting recognition is usually based on heuristic approaches that describe either basic geometric properties or statistical distributions of raw pixel values. Although these work well on average, fundamental insights into the nature of handwriting are still desired. In this paper we present a novel approach for the automatic extraction of appearance-based representations of offline handwriting data. Within the framework of deep belief networks -- Restricted Boltzmann Machines -- a two-stage method for feature learning and optimization is developed. On two standard corpora of Arabic and Roman handwriting data, it is demonstrated across script boundaries that automatically learned features achieve recognition results comparable to state-of-the-art handcrafted features. Given these promising results, the potential of feature learning for future reading systems is discussed.
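    A two-stage scheme of this kind (unsupervised RBM feature learning followed by a supervised classifier) can be sketched with scikit-learn; the digits data and the hyperparameters below are illustrative, not the paper's Arabic/Roman handwriting setup:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixels to [0, 1] for the RBM's Bernoulli units
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Stage 1: unsupervised appearance-based feature learning with an RBM.
# Stage 2: a supervised classifier trained on the learned features.
model = Pipeline([("rbm", BernoulliRBM(n_components=64, learning_rate=0.05,
                                       n_iter=20, random_state=0)),
                  ("clf", LogisticRegression(max_iter=1000))])
model.fit(Xtr, ytr)
acc = model.score(Xte, yte)
print(round(acc, 3))
```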

    A Fast Learning Strategy Using Pattern Selection for Feedforward Neural Networks

    Intelligent pattern selection is an active learning strategy in which the classifier selects the most informative patterns during training. This paper investigates such a strategy, where the informativeness of a pattern is measured as the approximation error produced by the classifier. The algorithm builds the training corpus starting from a small, randomly chosen initial dataset, and new patterns are added to the learning corpus based on their error sensitivity. The training dataset expansion is based on the selection of the most erroneous patterns. Our experimental results on the MNIST digit dataset show that only 3.26% of the training data are sufficient for training without decreasing the performance (98.36%) of the resulting neural classifier.
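    The selection loop described above might be sketched as follows; the digits dataset, the batch size of 50, and the five rounds are hypothetical choices, not the paper's MNIST protocol:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)
idx = rng.permutation(len(X))
train, pool = list(idx[:50]), list(idx[50:])

# Start from a small random set and repeatedly move the pool patterns on
# which the current network is most wrong (largest approximation error)
# into the training set.
for _ in range(5):
    clf = MLPClassifier(max_iter=300, random_state=0).fit(X[train], y[train])
    proba = clf.predict_proba(X[pool])
    col = {c: i for i, c in enumerate(clf.classes_)}
    err = np.array([1.0 - proba[i, col[y[p]]] if y[p] in col else 1.0
                    for i, p in enumerate(pool)])
    worst = set(np.argsort(err)[-50:].tolist())
    train += [p for i, p in enumerate(pool) if i in worst]
    pool = [p for i, p in enumerate(pool) if i not in worst]

clf = MLPClassifier(max_iter=300, random_state=0).fit(X[train], y[train])
acc = clf.score(X[pool], y[pool])
print(len(train), round(acc, 3))
```

    The network ends up trained on a small fraction of the data, with the remaining pool serving as a held-out check that performance has not degraded.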