
    Automatic goal allocation for a planetary rover with DSmT

    In this chapter, we propose an approach for assigning an interest level to the goals of a planetary rover. Assigning an interest level to goals allows the rover to autonomously transform and reallocate them. The interest level is defined by fusing payload and navigation data. The fusion yields an 'interest map' that quantifies the level of interest of each area around the rover. In this way the planner can choose the most interesting scientific objectives to analyse, with limited human intervention, and reallocate its goals autonomously. The Dezert-Smarandache Theory (DSmT) of Plausible and Paradoxical Reasoning was used for information fusion: this theory allows dealing with vague and conflicting data. In particular, it allows us to directly model the behaviour of the scientists who have to evaluate the relevance of a particular set of goals. This chapter shows an application of the proposed approach to the generation of a reliable interest map.
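The combination step at the heart of this approach can be illustrated with a minimal sketch of the free DSm rule on a two-hypothesis frame. The source names ('payload', 'nav') and mass values below are illustrative assumptions, not the chapter's actual numbers; each focal element is a frozenset of atoms read as a conjunction, so conflicting evidence accumulates on the paradoxical joint hypothesis instead of being renormalized away, which is the key difference from Dempster's rule.

```python
def dsm_combine(m1, m2):
    """Free DSm rule: the mass product of two focal elements lands on their
    conjunction (set union of atoms); conflict is kept, not redistributed."""
    combined = {}
    for a, wa in m1.items():
        for b, wb in m2.items():
            c = a | b  # conjunction of hypotheses = union of atom sets
            combined[c] = combined.get(c, 0.0) + wa * wb
    return combined

# Illustrative masses from two sources rating one map cell.
payload = {frozenset({'interesting'}): 0.7, frozenset({'dull'}): 0.3}
nav = {frozenset({'interesting'}): 0.4, frozenset({'dull'}): 0.6}
fused = dsm_combine(payload, nav)
# Mass on the paradoxical element {'interesting','dull'} measures conflict.
```

Total mass stays 1 without any normalization constant, which is what lets the planner read the paradoxical mass directly as a measure of disagreement between the payload and navigation sources.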

    Improved detection and characterization of obscured central gland tumors of the prostate: texture analysis of non-contrast and contrast-enhanced MR images for differentiation of benign prostate hyperplasia (BPH) nodules and cancer

    OBJECTIVE: The purpose of this study was to assess the value of texture analysis (TA) for prostate cancer (PCa) detection on T2-weighted images (T2WI) and dynamic contrast-enhanced (DCE) images by differentiating PCa from benign prostate hyperplasia (BPH). MATERIALS & METHODS: This study used 10 retrospective MRI data sets acquired from men with confirmed PCa. The prostate region of interest (ROI) was delineated by an expert on the MRI data sets using an automated prostate capsule segmentation scheme. A statistical significance test was used as the feature selection scheme for optimal differentiation of PCa from BPH on MR images. In pre-processing, the T2WI underwent bias correction and all image intensities were standardized to a representative template; the DCE images underwent bias correction and were registered to time point 1 for each patient. Following pre-processing, texture features were extracted from the ROI and analyzed. The extracted texture features were: intensity mean and standard deviation, Sobel (edge detection), Haralick features, and Gabor features. RESULTS: In T2WI, statistically significant differences were observed in Haralick features. In DCE images, statistically significant differences were observed in mean intensity, Sobel, Gabor, and Haralick features. CONCLUSION: BPH is better differentiated in DCE images than in T2WI. The statistically significant features may be combined to build a BPH vs. cancer detection system in the future.
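The Haralick features named above are derived from a gray-level co-occurrence matrix (GLCM). A minimal NumPy sketch of one such feature (contrast) follows, assuming a single horizontal pixel offset and a small 4-level quantized image; the study's actual offsets, quantization, and full feature set are not specified here.

```python
import numpy as np

def glcm(img, levels):
    """Normalized co-occurrence counts for horizontal neighbour pairs."""
    m = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[i, j] += 1.0
    return m / m.sum()

def haralick_contrast(p):
    """Haralick contrast: sum over (i - j)^2 * p[i, j]."""
    idx = np.arange(p.shape[0])
    return float(((idx[:, None] - idx[None, :]) ** 2 * p).sum())

# Illustrative 4-level image: four uniform blocks, so contrast is low.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
contrast = haralick_contrast(glcm(img, levels=4))
```

In practice a library routine (e.g. scikit-image's `graycomatrix`/`graycoprops`) would be used over several offsets and angles; the point here is only that each Haralick feature is a scalar statistic of the normalized GLCM.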

    Feature Extraction of Chest X-ray Images and Analysis using PCA and kPCA

    Tuberculosis (TB) is an infectious disease caused by a mycobacterium; it can be diagnosed by its various symptoms, such as fever and cough, or by an expert physician reading the patient's chest X-ray. A chest X-ray image contains many features which cannot be used directly by a computer system for analyzing the disease. These features must be understood and extracted so that they can be processed into a form suitable for input to a computer system for disease analysis. This paper presents feature extraction of chest X-ray images for use as input to any data mining algorithm for TB analysis. Texture- and shape-based features are extracted from the X-ray image using image processing concepts, and the extracted features are analyzed using principal component analysis (PCA) and kernel principal component analysis (kPCA). Filter and wrapper feature selection methods using a linear regression model were applied to both techniques. Comparing the two, the accuracy of PCA with the wrapper approach is 96.07%, against 62.50% for kPCA; PCA thus performs better than kPCA, with good accuracy.
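PCA itself reduces to a singular value decomposition of the centred data matrix. A minimal sketch follows, assuming feature vectors stacked as rows; the toy matrix below is illustrative and is not the paper's X-ray feature data.

```python
import numpy as np

def pca(X, k):
    """Project rows of X onto the top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:k].T              # coordinates in the top-k components
    variances = S ** 2 / (len(X) - 1)   # variance explained per component
    return scores, variances

# Illustrative data: almost all variance lies along the first axis.
X = np.array([[2.0, 0.0], [-2.0, 0.0], [0.0, 0.5], [0.0, -0.5]])
scores, var = pca(X, k=1)
```

kPCA differs in that the eigendecomposition is applied to a kernel (similarity) matrix of the samples rather than to the covariance of the raw features, which allows nonlinear components; the paper's finding is that this extra flexibility did not pay off on these features.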

    A Statistical Modeling Approach to Computer-Aided Quantification of Dental Biofilm

    Biofilm is a formation of microbial material on tooth substrata. Several methods to quantify dental biofilm coverage have recently been reported in the literature, but at best they provide a semi-automated approach to quantification, with significant input from a human grader that carries the grader's bias as to what is foreground, background, biofilm, and tooth. Additionally, human assessment indices limit the resolution of the quantification scale; most commercial scales use five levels of quantification for biofilm coverage (0%, 25%, 50%, 75%, and 100%). On the other hand, current state-of-the-art techniques in automatic plaque quantification fail to make their way into practical applications owing to their inability to incorporate human input to handle misclassifications. This paper proposes a new interactive method for biofilm quantification in quantitative light-induced fluorescence (QLF) images of canine teeth that is independent of the perceptual bias of the grader. The method partitions a QLF image into segments of uniform texture and intensity called superpixels; every superpixel is statistically modeled as a realization of a single 2D Gaussian Markov random field (GMRF) whose parameters are estimated; the superpixel is then assigned to one of three classes (background, biofilm, tooth substratum) based on a training data set. The quantification results show a high degree of consistency and precision. At the same time, the proposed method gives pathologists full control to post-process the automatic quantification by flipping misclassified superpixels to a different state (background, tooth, biofilm) with a single click, providing greater usability than simply marking the boundaries of biofilm and tooth as done by current state-of-the-art methods. (Comment: 10 pages, 7 figures; Journal of Biomedical and Health Informatics, 2014.)
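The per-superpixel assignment step described above can be sketched as a maximum-likelihood decision over per-class models. This simplified version uses a diagonal Gaussian per class rather than the paper's full GMRF parameter estimation, and the feature values and class names below are illustrative.

```python
import numpy as np

def fit_models(feats, labels):
    """Per-class mean and variance of superpixel feature vectors."""
    return {c: (feats[labels == c].mean(axis=0),
                feats[labels == c].var(axis=0) + 1e-6)  # variance floor
            for c in np.unique(labels)}

def classify(x, models):
    """Assign x to the class maximizing a diagonal-Gaussian log-likelihood."""
    def loglik(mu, var):
        return float(-0.5 * np.sum(np.log(2 * np.pi * var)
                                   + (x - mu) ** 2 / var))
    return max(models, key=lambda c: loglik(*models[c]))

# Illustrative 1-D feature (mean intensity) per labeled superpixel.
feats = np.array([[0.10], [0.15], [0.90], [0.85], [0.50], [0.55]])
labels = np.array(['background', 'background', 'tooth', 'tooth',
                   'biofilm', 'biofilm'])
models = fit_models(feats, labels)
pred_mid = classify(np.array([0.52]), models)
pred_dark = classify(np.array([0.12]), models)
```

The interactive correction the paper describes then amounts to overwriting `pred` for a clicked superpixel, leaving all other assignments untouched.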

    Diabetic Macular Edema Characterization and Visualization Using Optical Coherence Tomography Images

    Diabetic Retinopathy and Diabetic Macular Edema (DME) represent one of the main causes of blindness in developed countries. They are characterized by fluid deposits in the retinal layers, causing progressive vision loss over time. The clinical literature defines three DME types according to the texture and disposition of the fluid accumulations: Cystoid Macular Edema (CME), Diffuse Retinal Thickening (DRT) and Serous Retinal Detachment (SRD). Detecting each one is essential, as the expert will decide on the adequate treatment of the pathology depending on their presence. In this work, we propose a robust detection and visualization methodology based on the analysis of independent image regions. We study a complete and heterogeneous library of 375 texture and intensity features on a dataset of 356 labeled images from two of the capture devices most used in the clinical domain: a CIRRUS™ HD-OCT 500 (Carl Zeiss Meditec) and a modular HRA+OCT SPECTRALIS® (Heidelberg Engineering, Inc.), the latter contributing 179 OCT images. We extracted 33,810 samples for each type of DME for the feature analysis and incremental training of four different classifier paradigms. This way, we achieved an 84.04% average accuracy for CME, 78.44% average accuracy for DRT and 95.40% average accuracy for SRD. These models are used to generate an intuitive visualization of the fluid regions. We use an image sampling and voting strategy, resulting in a system capable of detecting and characterizing the three types of DME, presenting them in an intuitive and repeatable way.
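The image sampling and voting strategy can be sketched as a majority vote over the labels predicted for the overlapping windows covering a region; the label values below are illustrative, not the paper's actual predictions.

```python
from collections import Counter

def majority_vote(window_labels):
    """Final label for a region: the most common label among the
    sampled windows that cover it."""
    return Counter(window_labels).most_common(1)[0][0]

# Illustrative: five overlapping windows voted on one retinal region.
label = majority_vote(['CME', 'CME', 'DRT', 'CME', 'SRD'])
```

Because each pixel is covered by many windows, isolated misclassifications are outvoted, which is what makes the resulting fluid-region map repeatable.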

    Self-organizing maps for texture classification


    Automating the construction of scene classifiers for content-based video retrieval

    This paper introduces a real-time automatic scene classifier for content-based video retrieval. In our envisioned approach, end users such as documentalists, not image processing experts, build classifiers interactively by simply indicating positive examples of a scene. Classification consists of a two-stage procedure. First, small image fragments called patches are classified. Second, frequency vectors of these patch classifications are fed into a second classifier for global scene classification (e.g., city, portraits, or countryside). The first-stage classifiers can be seen as a set of highly specialized, learned feature detectors, an alternative to letting an image processing expert determine features a priori. We present results for experiments on a variety of patch and image classes. The scene classifier has been used successfully within television archives and for Internet porn filtering.
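The hand-off between the two stages can be sketched as building a normalized frequency vector of stage-1 patch labels, which becomes the input feature vector of the global scene classifier; the label counts below are illustrative.

```python
import numpy as np

def patch_histogram(patch_labels, n_classes):
    """Normalized frequency vector of patch-class indices: the stage-2
    scene classifier sees only this fixed-length summary of the image."""
    h = np.bincount(np.asarray(patch_labels), minlength=n_classes)
    return h.astype(float) / h.sum()

# Illustrative: four patches classified into 4 hypothetical patch classes
# (say 0 = sky, 1 = building, 2 = vegetation, 3 = skin).
vec = patch_histogram([0, 0, 1, 2], n_classes=4)
```

The fixed length of this vector is what makes the second stage independent of image size and patch count, so the same scene classifier works across frames of different resolutions.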

    K-Space at TRECVid 2007

    In this paper we describe the K-Space participation in TRECVid 2007. K-Space participated in two tasks: high-level feature extraction and interactive search. We present our approaches for each of these activities and provide a brief analysis of our results. Our high-level feature submission utilized multi-modal low-level features, including visual, audio and temporal elements. Specific concept detectors (such as face detectors) developed by K-Space partners were also used. We experimented with different machine learning approaches, including logistic regression and support vector machines (SVMs). Finally, we experimented with both early and late fusion for feature combination. This year we also participated in interactive search, submitting 6 runs. We developed two interfaces which both utilized the same retrieval functionality. Our objective was to measure the effect of context, which was supported to different degrees in each interface, on user performance. The first of the two systems was a 'shot'-based interface, where the results of a query were presented as a ranked list of shots. The second interface was 'broadcast'-based, where results were presented as a ranked list of broadcasts. Both systems made use of the outputs of our high-level feature submission as well as low-level visual features.
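The distinction between early and late fusion can be sketched in a few lines: early fusion concatenates modality features before a single classifier, while late fusion combines the scores of per-modality classifiers. The feature values and the weight below are illustrative assumptions, not the submission's tuned parameters.

```python
import numpy as np

def early_fusion(visual, audio):
    """Early fusion: one joint feature vector, one classifier downstream."""
    return np.concatenate([visual, audio])

def late_fusion(visual_score, audio_score, w=0.5):
    """Late fusion: weighted average of independent classifier scores."""
    return w * visual_score + (1.0 - w) * audio_score

# Illustrative modality features and per-modality concept scores.
feat = early_fusion(np.array([0.2, 0.8]), np.array([0.5]))
score = late_fusion(0.9, 0.3, w=0.75)
```

Early fusion lets the classifier model cross-modal correlations, at the cost of a higher-dimensional input; late fusion keeps the modalities independent and only needs the fusion weights to be tuned.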