2,666 research outputs found

    Texture spectrum coupled with entropy and homogeneity image features for myocardium muscle characterization

    People in middle and later age often suffer heart muscle damage caused by coronary artery disease and the associated myocardial infarction, while in young people the genetic forms of cardiomyopathy (heart muscle disease) are the most prominent cause of myocardial disease. Accurate, early information about the myocardial tissue structure is therefore key to tracking the progress of several myocardial diseases. The present work proposes a new method for myocardium muscle texture classification based on entropy, homogeneity, and the texture unit-based texture spectrum approach. Entropy and homogeneity are computed in moving windows of size 3x3 and 5x5 to enhance the texture features and to create the premise for differentiating the myocardium structures. Texture is then statistically analyzed using the texture spectrum approach, and classification is performed with a fuzzy c-means descriptive classifier whose noise sensitivity is overcome by using these image features. The proposed method is tested on a dataset of 80 echocardiographic ultrasound images in both short-axis and long-axis (apical two-chamber view) representations, for normal and infarct pathologies. The results establish that the entropy-based features provide superior clustering results compared to homogeneity.
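
    As an illustration only (not the authors' code), the sketch below computes the two windowed features the abstract describes: local entropy and a simple homogeneity surrogate in 3x3 or 5x5 windows. The use of scikit-image, the homogeneity formula 1/(1 + local variance), and the stacking of two window sizes are assumptions made for the sketch.

```python
# Minimal sketch, assuming scikit-image; not the paper's implementation.
import numpy as np
from scipy import ndimage
from skimage.filters.rank import entropy
from skimage.morphology import square
from skimage.util import img_as_ubyte

def texture_features(img, window=3):
    """Per-pixel local entropy and an illustrative homogeneity measure."""
    ent = entropy(img_as_ubyte(img), square(window))      # local Shannon entropy
    mean = ndimage.uniform_filter(img.astype(float), window)
    sq_mean = ndimage.uniform_filter(img.astype(float) ** 2, window)
    var = np.clip(sq_mean - mean ** 2, 0, None)           # local variance
    hom = 1.0 / (1.0 + var)                                # assumed homogeneity proxy
    return ent, hom

# img = ...  # 2-D grayscale echocardiographic frame scaled to [0, 1]
# feats = np.dstack([f for w in (3, 5) for f in texture_features(img, w)])
# X = feats.reshape(-1, feats.shape[-1])   # one feature vector per pixel,
#                                          # ready for a fuzzy c-means step
#                                          # (see the fuzzy c-means sketch below)
```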

    Informational Paradigm, management of uncertainty and theoretical formalisms in the clustering framework: A review

    Fifty years have gone by since the publication of the first paper on clustering based on fuzzy set theory. In 1965, L.A. Zadeh published “Fuzzy Sets” [335]. After only one year, the first effects of this seminal paper began to emerge with the pioneering paper on clustering by Bellman, Kalaba, and Zadeh [33], in which they proposed a prototype clustering algorithm based on fuzzy set theory.

    CT liver tumor segmentation hybrid approach using neutrosophic sets, fast fuzzy c-means and adaptive watershed algorithm

    Liver tumor segmentation from computed tomography (CT) images is a critical and challenging task due to the fuzziness of the liver pixel range, neighboring organs with the same intensity, high noise, and the large variance of tumors. The segmentation process is necessary for the detection, identification, and measurement of objects in CT images. We perform an extensive review of the CT liver segmentation literature.

    Taming Wild High Dimensional Text Data with a Fuzzy Lash

    The bag of words (BOW) represents a corpus as a matrix whose elements are word frequencies. However, each row in the matrix is a very high-dimensional sparse vector. Dimension reduction (DR) is a popular way to address the sparsity and high-dimensionality issues. Among the different strategies for developing DR methods, Unsupervised Feature Transformation (UFT) is a popular strategy that maps all words onto a new basis to represent the BOW. The recent growth of text data and its challenges imply that the DR area still needs new perspectives. Although a wide range of methods based on the UFT strategy has been developed, the fuzzy approach has not been considered for DR based on this strategy. This research investigates the application of fuzzy clustering as a DR method based on the UFT strategy to collapse the BOW matrix and provide a lower-dimensional representation of documents instead of the words in a corpus. The quantitative evaluation shows that fuzzy clustering produces performance and features superior to Principal Components Analysis (PCA) and Singular Value Decomposition (SVD), two popular DR methods based on the UFT strategy.
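
    As a sketch of the idea (not the paper's implementation), the code below runs a plain NumPy fuzzy c-means over the term dimension of a BOW matrix and uses the resulting membership matrix as a new, lower-dimensional basis for documents. The fuzzifier m = 2, the Euclidean distance, the toy matrix, and the choice of 20 clusters are all assumptions.

```python
# Minimal sketch: fuzzy c-means in plain NumPy, used as a DR step for BOW.
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Cluster the rows of X into c fuzzy clusters; return (centers, memberships)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)                 # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                         # avoid division by zero
        U_new = d ** (-2.0 / (m - 1.0))               # standard FCM membership update
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# bow: (n_docs, n_terms) count matrix, e.g. from a CountVectorizer; toy stand-in here.
bow = np.random.default_rng(1).poisson(0.3, size=(100, 2000)).astype(float)
_, term_u = fuzzy_cmeans(bow.T, c=20)                 # fuzzy memberships of the terms
docs_reduced = bow @ term_u                           # (n_docs, 20) document representation
```

    Each of the 20 columns of docs_reduced aggregates a fuzzy group of terms, playing the role that principal components or singular vectors play in PCA- or SVD-based DR.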

    TEXTURAL ANALYSIS AND STATISTICAL INVESTIGATION OF PATTERNS IN SYNTHETIC APERTURE SONAR IMAGES

    Textural analysis and statistical investigation of patterns in synthetic aperture sonar (SAS) images is useful for oceanographic purposes such as biological habitat mapping or bottom type identification for offshore construction. Seafloor classification also has many tactical benefits for the U.S. Navy in terms of mine identification and undersea warfare. Common methods of texture analysis rely on statistical moments of image intensity or, more generally, the probability density function of the scene. One of the most common techniques uses Haralick's Grey Level Co-occurrence Matrix (GLCM) to calculate image features used in the applications listed above. Although widely used, seafloor classification and segmentation are difficult using Haralick features, which are typically calculated at a single scale. Improvements based on the understanding that patterns are multiscale were compared with this single-scale baseline, with the goal of improving seafloor classification. SAS data were provided by the Norwegian Defence Research Establishment for this work and were labeled into six distinct seafloor classes, with 757 total examples. We analyze the feature importance determined by neighborhood component analysis as a function of scale and direction to determine which spatial scales and azimuthal directions are most informative for good classification performance.
    Office of Naval Research, Arlington, VA, 22217
    Lieutenant, United States Navy
    Approved for public release. Distribution is unlimited.
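
    As a sketch of the multiscale idea (not the thesis' pipeline), the code below computes GLCM properties at several pixel offsets (standing in for spatial scales) and angles (azimuthal directions), assuming a recent scikit-image; the particular distances, angles, and properties are illustrative choices.

```python
# Minimal sketch of multi-scale, multi-direction GLCM (Haralick-style) features.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def multiscale_glcm_features(patch,
                             distances=(1, 2, 4, 8, 16),          # spatial scales (pixels)
                             angles=(0, np.pi/4, np.pi/2, 3*np.pi/4),
                             props=('contrast', 'homogeneity', 'energy', 'correlation')):
    """Return a feature vector of GLCM properties over scales and directions."""
    # Quantize the patch to 8-bit gray levels, as the co-occurrence matrix expects.
    patch = np.uint8(255 * (patch - patch.min()) / (np.ptp(patch) + 1e-12))
    glcm = graycomatrix(patch, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    # graycoprops returns an array of shape (len(distances), len(angles)) per property;
    # flattening keeps one value per (scale, direction) pair.
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

# patch = ...  # one labeled SAS image patch (2-D array)
# features = multiscale_glcm_features(patch)
# These per-patch vectors can then be ranked by a feature-selection step such as
# neighborhood component analysis and fed to a classifier.
```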

    The Impact of Different Image Thresholding based Mammogram Image Segmentation - A Review

    Images are sampled and discretized numerical functions. The goal of digital image processing is to enhance the quality of pictorial data and to facilitate automatic machine interpretation. A digital imaging system should have fundamental components for image acquisition, special hardware for supporting image applications, and a large amount of memory for storage and input/output devices. Image segmentation is a widely researched field, especially in many medical applications, and still poses various challenges for researchers. Segmentation is a critical task for identifying regions suspicious of tumor in digital mammograms. Each image has different types of edges and different levels of boundaries. In image processing, the most commonly used technique for extracting objects from an image is "thresholding". Thresholding is a popular tool for image segmentation because of its simplicity, especially in fields where real-time processing is required.
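
    As a concrete instance of the simplest scheme such a review covers, the sketch below implements global Otsu thresholding directly in NumPy so the between-class-variance criterion is visible; the image loading step and the use of the threshold as a tumor-candidate mask are assumptions.

```python
# Minimal sketch of global (Otsu) thresholding in plain NumPy.
import numpy as np

def otsu_threshold(image, n_bins=256):
    """Return the gray level that maximizes the between-class variance."""
    hist, bin_edges = np.histogram(image.ravel(), bins=n_bins)
    p = hist.astype(float) / hist.sum()                  # gray-level probabilities
    centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    w0 = np.cumsum(p)                                    # background class weight
    w1 = 1.0 - w0                                        # foreground class weight
    cum_mean = np.cumsum(p * centers)
    mu0 = cum_mean / np.where(w0 > 0, w0, 1)             # background mean up to each level
    mu1 = (cum_mean[-1] - cum_mean) / np.where(w1 > 0, w1, 1)
    sigma_b = w0 * w1 * (mu0 - mu1) ** 2                 # between-class variance
    return centers[np.argmax(sigma_b)]

# mammogram = ...  # 2-D grayscale array, e.g. loaded with an image I/O library
# mask = mammogram > otsu_threshold(mammogram)           # candidate bright regions
```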

    Temporal-spatial recognizer for multi-label data

    Pattern recognition is an important artificial intelligence task with practical applications in many fields such as medicine and species distribution. Such applications involve overlapping data points, which appear in multi-label datasets. Hence, there is a need for a recognition algorithm that can separate the overlapping data points in order to recognize the correct pattern. Existing recognition methods suffer from sensitivity to noise and overlapping points, as they cannot recognize a pattern when there is a shift in the position of the data points. Furthermore, these methods do not incorporate temporal information in the recognition process, which leads to low-quality data clustering. In this study, an improved pattern recognition method based on Hierarchical Temporal Memory (HTM) is proposed to handle the overlapping data points of multi-label datasets. The imHTM (improved HTM) method improves two of its components: feature extraction and data clustering. The first improvement is realized as the TS-Layer Neocognitron algorithm, which solves the position-shift problem in the feature extraction phase. The data clustering step, in turn, has two improvements, TFCM and cFCM (TFCM with a Chebyshev distance metric), which allow the overlapping data points that occur in patterns to be separated correctly into the relevant clusters by temporal clustering. Experiments on five datasets were conducted to compare the proposed method (imHTM) against statistical, template, and structural pattern recognition methods. The results showed a recognition accuracy of 99% compared with template matching methods (feature-based and area-based approaches), statistical methods (Principal Component Analysis, Linear Discriminant Analysis, Support Vector Machines, and neural networks), and a structural method (the original HTM). The findings indicate that the improved HTM can give optimum pattern recognition accuracy, especially on multi-label datasets.
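
    As an illustrative fragment only (not the thesis' cFCM), the snippet below shows the Chebyshev (L-infinity) distance that the cFCM variant is described as using; it could replace the Euclidean distance line in the fuzzy c-means sketch given earlier.

```python
# Minimal sketch: Chebyshev (L-infinity) distances between samples and cluster centers.
import numpy as np

def chebyshev_distances(X, centers):
    """Pairwise Chebyshev distance between the rows of X and the cluster centers."""
    return np.abs(X[:, None, :] - centers[None, :, :]).max(axis=2)

# Substituting chebyshev_distances(X, centers) for the np.linalg.norm(...) line in the
# earlier fuzzy_cmeans sketch yields a Chebyshev-metric fuzzy c-means.
```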