
    Continuous Local Histogram Descriptor For Diagnosis of Bronchiolitis Obliterans

    Texture features are an important analysis tool in computer-aided diagnosis systems for disease diagnosis. However, texture features alone cannot provide an overall description of a disease. In this paper, we propose the Continuous Local Histogram (CLH) descriptor to diagnose Bronchiolitis Obliterans (BO) lung disease in chest computed tomography images. CLH is based on the continuous combination of histograms of a local texture feature, a local shape feature, and a brightness feature. Because CLH extracts more information, it has high discriminating power and is able to distinguish BO lung disease from normal lung regions effectively. The experimental results in classifying BO versus normal lung regions show that CLH achieves an average sensitivity of 98.15%, whereas Local Binary Patterns (LBP) and the Gray Level Run Length Matrix (GLRLM) achieve average sensitivities of 73% and 75.8%, respectively. In the receiver operating characteristic analysis, CLH achieves an area under the curve (AUC) of 0.9, whereas LBP and GLRLM achieve AUCs of 0.78 and 0.86.
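
    A loose sketch of the general idea of concatenating texture, shape, and brightness histograms into a single region descriptor is given below. It is not the authors' CLH formulation; the LBP settings, the gradient-orientation shape cue, and the bin counts are illustrative assumptions.

        import numpy as np
        from skimage.feature import local_binary_pattern

        def combined_histogram_descriptor(region, lbp_points=8, lbp_radius=1, bins=16):
            """Concatenated texture / shape / brightness histograms for one 8-bit ROI (illustrative)."""
            # Texture: histogram of uniform LBP codes (values 0 .. lbp_points + 1).
            lbp = local_binary_pattern(region, lbp_points, lbp_radius, method="uniform")
            h_texture, _ = np.histogram(lbp, bins=lbp_points + 2,
                                        range=(0, lbp_points + 2), density=True)
            # Shape: histogram of gradient orientations, a simple edge/shape cue.
            gy, gx = np.gradient(region.astype(np.float64))
            h_shape, _ = np.histogram(np.arctan2(gy, gx), bins=bins,
                                      range=(-np.pi, np.pi), density=True)
            # Brightness: histogram of raw grey levels.
            h_bright, _ = np.histogram(region, bins=bins, range=(0, 256), density=True)
            return np.concatenate([h_texture, h_shape, h_bright])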

    Detection of man-made structures in aerial imagery using quasi-supervised learning and texture features

    Thesis (Master) -- Izmir Institute of Technology, Electronics and Communication Engineering, Izmir, 2010. Includes bibliographical references (leaves: 59-61). Text in English; Abstract: Turkish and English. x, 61 leaves. In this thesis, the quasi-supervised statistical learning algorithm has been applied to texture recognition analysis. The main objective of the proposed method is to detect man-made objects or changes to the terrain resulting from human habitation. From this point of view, gaining information about human presence in a region of interest using aerial imagery is of vital importance. This task is addressed with a machine learning paradigm in a quasi-supervised setting. Eighteen aerial images of different sizes were used in all computations and analyses. The available data were divided into a reference control set, consisting of normalcy-condition samples with no human presence, and a mixed test set consisting of images of inhabited and cultivated terrain. Grey level co-occurrence matrices were then computed for each image block, and Haralick features were extracted and organized into a texture vector. Quasi-supervised learning was then applied to the collection of texture vectors to identify the image blocks in the test data set that show human presence. In the performance evaluation, the detected abnormal areas were compared with manually labeled data to determine the corresponding receiver operating characteristic curve. The results show that the quasi-supervised learning algorithm is able to identify indicators of human presence in a region, such as houses, roads, and objects that are unlikely to be observed in areas free from human habitation.
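
    As a rough illustration of the block-level texture pipeline described above, the sketch below computes a grey level co-occurrence matrix for an image block and derives a few Haralick-style statistics with scikit-image. The block size, grey-level quantization, distances, and angles are assumptions, not values taken from the thesis.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def block_texture_vector(block, levels=32):
            """Haralick-style texture vector for one 8-bit image block (illustrative)."""
            # Quantize the block to fewer grey levels to keep the GLCM small.
            quantized = (block.astype(np.float64) / 256.0 * levels).astype(np.uint8)
            glcm = graycomatrix(quantized,
                                distances=[1, 2],                       # assumed pixel offsets
                                angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                                levels=levels, symmetric=True, normed=True)
            props = ["contrast", "correlation", "energy", "homogeneity"]
            # Average each property over distances and angles into one vector entry.
            return np.array([graycoprops(glcm, p).mean() for p in props])

        # Example: split an aerial image into 64x64 blocks and collect texture vectors.
        image = (np.random.rand(256, 256) * 255).astype(np.uint8)       # placeholder image
        vectors = [block_texture_vector(image[r:r + 64, c:c + 64])
                   for r in range(0, 256, 64) for c in range(0, 256, 64)]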

    Perceptual image analysis

    The problem considered in this paper is one of extracting perceptually relevant information from groups of objects based on their descriptions. Object descriptions are qualitatively represented by feature-value vectors containing probe function values, computed in a manner similar to feature extraction in pattern classification theory. The work presented here is a generalisation of a solution to extracting perceptual information from images using near set theory, which provides a framework for measuring the perceptual nearness of objects. Further, near set theory is used to define a perception-based approach to image analysis that is inspired by traditional mathematical morphology, and an application of this methodology is given by way of segmentation evaluation. The contribution of this article is the introduction of a new method of unsupervised segmentation evaluation that is based on human perception rather than on properties of ideal segmentations, as is normally the case.
    https://www.inderscience.com/info/inarticle.php?artid=3309
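
    A highly simplified sketch of the probe-function idea follows; it is not the paper's near set formalism. Each image region is described by two assumed probe functions, grey value and local edge strength, and the resulting feature-value histograms are compared as a crude nearness score.

        import numpy as np
        from scipy import ndimage

        def probe_values(gray):
            """Per-pixel probe function values: normalized grey level and edge magnitude."""
            g = gray.astype(np.float64) / 255.0
            edges = np.hypot(ndimage.sobel(g, axis=0), ndimage.sobel(g, axis=1))
            edges = edges / (edges.max() + 1e-9)
            return np.stack([g, edges], axis=-1)          # shape (H, W, 2)

        def nearness(values_a, values_b, bins=16):
            """Average histogram intersection of the probe-function values of two regions."""
            score = 0.0
            n_probes = values_a.shape[-1]
            for k in range(n_probes):
                ha, _ = np.histogram(values_a[..., k], bins=bins, range=(0, 1))
                hb, _ = np.histogram(values_b[..., k], bins=bins, range=(0, 1))
                score += np.minimum(ha / ha.sum(), hb / hb.sum()).sum()
            return score / n_probes                       # 1.0 = identical distributions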

    Brain Tumor Analysis and Classification of Brain MR Images

    A Fast Texture Feature Extraction Method for Region-based Image Segmentation

    Region-based image segmentation is a popular approach to segmenting generic images. In these methods, an image is partitioned into connected regions by grouping neighboring pixels with similar features, and adjacent regions are then merged according to the similarities between the features in these regions. To achieve fine-grained segmentation at the pixel level, we must be able to define features on a per-pixel basis. This is straightforward for color information, but not for texture; typically, texture feature extraction is very computationally intensive for individual pixels. In this paper, we propose a novel fast texture feature extraction method that takes advantage of the similarities between neighboring pixels. The experiments demonstrate that our method can greatly increase the extraction speed while keeping the distortion within a reasonable range.
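
    The abstract does not spell out the speed-up mechanism, so the sketch below shows one generic way of exploiting the overlap between neighboring pixels' windows: per-pixel grey-level histograms maintained by incremental sliding-window updates. The window radius and bin count are assumptions, and this is not necessarily the paper's method.

        import numpy as np

        def per_pixel_histograms(gray, radius=4, bins=8):
            """Per-pixel grey-level histograms over a (2*radius+1)^2 window (illustrative).

            When the window slides one column to the right, only one column of pixels
            leaves and one enters, so the histogram is updated incrementally instead of
            being rebuilt from scratch. Border pixels are left as zero histograms.
            """
            q = (gray.astype(np.int64) * bins) // 256            # quantized grey levels
            h, w = q.shape
            out = np.zeros((h, w, bins), dtype=np.int32)
            for r in range(radius, h - radius):
                # Build the full histogram once for the first window in the row.
                hist = np.bincount(q[r - radius:r + radius + 1,
                                     0:2 * radius + 1].ravel(), minlength=bins)
                out[r, radius] = hist
                for c in range(radius + 1, w - radius):
                    leaving = q[r - radius:r + radius + 1, c - radius - 1]
                    entering = q[r - radius:r + radius + 1, c + radius]
                    hist = hist - np.bincount(leaving, minlength=bins) \
                                + np.bincount(entering, minlength=bins)
                    out[r, c] = hist
            return out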

    Artificial Intelligence Based Classification for Urban Surface Water Modelling

    Estimations and predictions of surface water runoff can provide very useful insights regarding flood risks in urban areas. To automatically predict the flow behaviour of rainfall-runoff water in real-world satellite images, it is important to precisely identify permeable and impermeable areas. This identification helps to calculate the amount of surface water by taking into account the amount of water absorbed in permeable areas and what remains on impermeable areas. In this research, a surface water model has been established to predict the flow behaviour of rainfall-runoff water. The study employs a combination of image processing, artificial intelligence and machine learning techniques for automatic segmentation and classification of permeable and impermeable areas in satellite images. These techniques investigate image classification approaches for classifying three land-use categories (roofs, roads, and pervious areas) commonly found in satellite images of the earth’s surface. Three different classification scenarios are investigated in order to select the best classification model. The first scenario involves pixel-by-pixel classification of images using Classification Tree and Random Forest classifiers, in two different settings of sequential and parallel execution of the algorithms. In the second scenario, the image is divided into objects using the Superpixels (SLIC) segmentation method, and three kinds of feature sets are extracted from the segmented objects. The performance of eight different supervised machine learning classifiers is probed using 5-fold cross-validation for multiple SLIC values, and detailed performance comparisons lead to conclusions about classification into the different classes under object-based and pixel-based classification schemes. Pareto analysis and knee point selection are used to choose the SLIC value and the more suitable of these two classification types. Furthermore, a new diversity- and weighted-sum-based ensemble classification model, called ParetoEnsemble, is proposed in this classification scenario. Weights are applied to selected component classifiers of an ensemble to create a strong classifier, where classification is based on multiple votes from the candidate classifiers of the ensemble, as opposed to individual classifiers, where classification is based on a single vote from only one classifier. Classification results on unbalanced and balanced data are also evaluated to determine the most suitable mode for the satellite image classifications in this study. Convolutional neural networks based on semantic segmentation are also employed in the classification phase, as a third scenario, to evaluate the strength of the deep learning model SegNet in the classification of satellite imagery. The best results from the three classification scenarios are compared, and the best classification method among the three is used in the next phase of water modelling with the InfoWorks ICM software, to explore the potential of the modelling process for a partially automated surface water network. Using these parameter settings, with a specified amount of simulated rain falling onto the imaged area, the amount of surface water flow is estimated, giving predictions about runoff situations in urban areas, since runoff in such situations can be high enough to pose a dangerous flood risk.
    The area of Feock, in Cornwall, is used as the simulation area of study in this research, where some promising results have been derived regarding the classification and modelling of runoff. The estimated correlation coefficient between classification accuracy and runoff accuracy provides useful insight into the dependence of runoff performance on classification performance. The trained system was also tested on images of previously unseen areas, demonstrating reasonable performance considering the training and classification limitations and conditions. Furthermore, for these unseen-area images, reasonable estimations of surface water runoff were derived. An analysis of classification and runoff estimations on unbalanced and balanced data, for multiple parameter configurations, aids the selection of classification and modelling parameter values to be used in future predictions on unknown data. This research is founded on the incorporation of satellite imaging into water modelling, using selected images for the analysis and assessment of results. The system can be further improved, and runoff predictions of higher precision can be achieved, by adding more high-resolution images to the classifiers' training. The added variety in the trained model can lead to even better classification of unknown images, which could eventually provide better modelling and better insights into surface water modelling. Moreover, the modelling phase can be extended in future research to deal with real-time parameters by calibrating the model after the classification phase, in order to observe the impact of classification on the actual calibration.
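
    A rough sketch of the second (object-based) scenario described above is shown below, using SLIC superpixels from scikit-image, simple per-superpixel colour statistics as one possible feature set, and a random forest evaluated with 5-fold cross-validation. The segment count, features, labels, and classifier settings are illustrative assumptions, not the study's configuration.

        import numpy as np
        from skimage.segmentation import slic
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        def superpixel_features(image, n_segments=400):
            """Mean and std of each colour channel per SLIC superpixel (illustrative)."""
            segments = slic(image, n_segments=n_segments, compactness=10, start_label=0)
            feats = []
            for label in np.unique(segments):
                pixels = image[segments == label]
                feats.append(np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)]))
            return segments, np.array(feats)

        # image: an RGB satellite tile; labels: one land-use class per superpixel
        # (0 = roof, 1 = road, 2 = pervious), e.g. derived from manual annotation.
        image = np.random.rand(256, 256, 3)                      # placeholder tile
        segments, X = superpixel_features(image)
        y = np.random.randint(0, 3, size=len(X))                 # placeholder labels

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        scores = cross_val_score(clf, X, y, cv=5)                # 5-fold cross-validation
        print("mean CV accuracy:", scores.mean())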

    An automatic system for classification of breast cancer lesions in ultrasound images

    Breast cancer is the most common of all cancers and the second most deadly cancer in women in developed countries. Mammography and ultrasound imaging are the standard techniques used in cancer screening. Mammography is widely used as the primary tool for cancer screening; however, it is an invasive technique due to the radiation used. Ultrasound appears to be good at picking up many cancers missed by mammography. In addition, ultrasound is non-invasive, as no radiation is used, as well as portable and versatile. However, ultrasound images usually have poor quality because of multiplicative speckle noise that results in artifacts. Because of this noise, segmentation of suspected areas in ultrasound images is a challenging task that remains an open problem despite many years of research. In this research, a new method for automatic detection of suspected breast cancer lesions using ultrasound is proposed. In this fully automated method, new de-noising and segmentation techniques are introduced, and a high-accuracy classifier using a combination of morphological and textural features is employed. We use a combination of fuzzy logic and compounding to de-noise ultrasound images and reduce shadows. We introduce a new method to identify seed points and then use a region growing method to perform segmentation. For preliminary classification we use three classifiers (ANN, AdaBoost, FSVM), and we then use majority voting to obtain the final result. We demonstrate that our automated system performs better than other state-of-the-art systems: on our database of ultrasound images from 80 patients we reached an accuracy of 98.75%, versus 88.75% for the ABUS method and 92.50% for the Hybrid Filtering method. Future work would involve a larger dataset of ultrasound images, and we will extend our system to handle colour ultrasound images. We will also study the impact of a larger number of texture and morphological features, as well as a weighting scheme, on the performance of our classifier. We will also develop an automated method to identify the "wall thickness" of a mass in breast ultrasound images; presently the wall thickness is extracted manually with the help of a physician.
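
    A minimal sketch of the majority-voting step described above follows, using scikit-learn stand-ins for the three classifiers (an MLP for the ANN, AdaBoost, and a plain SVM in place of the fuzzy SVM) on placeholder morphological and texture feature vectors. The feature data, model settings, and the SVM substitution are assumptions, not the authors' implementation.

        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier, VotingClassifier
        from sklearn.neural_network import MLPClassifier
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split

        # Placeholder feature matrix: one row of morphological + texture features per
        # lesion, with a benign (0) / malignant (1) label.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 12))
        y = rng.integers(0, 2, size=200)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        ensemble = VotingClassifier(
            estimators=[
                ("ann", MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
                ("ada", AdaBoostClassifier(random_state=0)),
                ("svm", SVC(kernel="rbf", random_state=0)),   # plain SVM standing in for FSVM
            ],
            voting="hard",                                    # majority vote of the three
        )
        ensemble.fit(X_train, y_train)
        print("test accuracy:", ensemble.score(X_test, y_test))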