
    Informational Paradigm, management of uncertainty and theoretical formalisms in the clustering framework: A review

    Fifty years have gone by since the publication of the first paper on clustering based on fuzzy set theory. In 1965, L.A. Zadeh published "Fuzzy Sets" [335]. After only one year, the first effects of this seminal paper began to emerge, with the pioneering paper on clustering by Bellman, Kalaba, and Zadeh [33], in which they proposed a prototype clustering algorithm based on fuzzy set theory.

    An Improved Fuzzy c-Means Clustering Algorithm Based on Shadowed Sets and PSO

    To organize a wide variety of data sets automatically and obtain accurate classification, this paper presents a modified fuzzy c-means algorithm (SP-FCM) based on particle swarm optimization (PSO) and shadowed sets to perform feature clustering. SP-FCM introduces the global search property of PSO to address the premature convergence of conventional fuzzy clustering, and utilizes the vagueness balance property of shadowed sets to handle overlap among clusters and to model uncertainty in class boundaries. The method uses the Xie-Beni index as a cluster validity measure and automatically finds the optimal cluster number within a specified range, producing partitions with compact and well-separated clusters. Experiments show that the proposed approach significantly improves the clustering effect.
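
    The abstract gives no implementation details, but the Xie-Beni validity index it relies on is standard. As a minimal sketch (array shapes and the fuzzifier value m are assumptions of this sketch, not taken from the paper), it can be computed from a fuzzy partition as follows:

    ```python
    import numpy as np

    def xie_beni(X, centers, U, m=2.0):
        """Xie-Beni index: fuzzy within-cluster scatter divided by n times the
        squared separation of the two closest centers (lower is better).

        X: (n, d) data, centers: (c, d) prototypes, U: (c, n) memberships."""
        n = X.shape[0]
        # squared distances of every point to every center, shape (c, n)
        d2 = ((X[None, :, :] - centers[:, None, :]) ** 2).sum(axis=2)
        compactness = np.sum((U ** m) * d2)  # fuzzy within-cluster scatter
        # minimum squared separation between distinct centers
        sep = ((centers[None, :, :] - centers[:, None, :]) ** 2).sum(axis=2)
        np.fill_diagonal(sep, np.inf)  # ignore self-distances
        return compactness / (n * sep.min())
    ```

    A procedure like SP-FCM would then keep the cluster count whose partition minimises this index over the candidate range.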

    Fuzzy Techniques for Decision Making 2018

    Zadeh's fuzzy set theory accommodates the imprecision of data and evaluations by imputing the degrees to which each object belongs to a set. Its success fostered theories that codify the subjectivity, uncertainty, imprecision, or roughness of evaluations. Their rationale is to produce new flexible methodologies that model a variety of concrete decision problems more realistically. This Special Issue gathers contributions addressing novel tools, techniques, and methodologies for decision making (both individual and group, single- or multi-criteria) in the context of these theories. It contains 38 research articles that contribute to a variety of setups combining fuzziness, hesitancy, roughness, covering sets, and linguistic approaches, ranging from fundamental or technical to applied.
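
    None of the 38 contributed methods is spelled out in this summary; purely as background for the "degrees of membership" idea it rests on, a standard triangular membership function (parameter names are illustrative) looks like this:

    ```python
    def triangular(x, a, b, c):
        """Membership degree of x in a fuzzy set that rises from a,
        peaks at b, and falls back to zero at c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    print(triangular(7, 5, 10, 15))  # 0.4, e.g. "7 is warm" to degree 0.4
    ```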

    EXPLOITING HIGHER ORDER UNCERTAINTY IN IMAGE ANALYSIS

    Soft computing is a group of methodologies that work synergistically to provide flexible information-processing capability for handling real-life ambiguous situations. Its aim is to exploit the tolerance for imprecision, uncertainty, approximate reasoning, and partial truth in order to achieve tractability, robustness, and low-cost solutions. Soft computing methodologies (involving fuzzy sets, neural networks, genetic algorithms, and rough sets) have been successfully employed in various image processing tasks, including image segmentation, enhancement, and classification, both individually and in combination with other soft computing techniques. This success is motivated by the fact that soft computing techniques provide powerful tools to describe the uncertainty naturally embedded in images, which can be exploited in various image processing tasks. The main contribution of this thesis is to present tools for handling uncertainty by means of a rough-fuzzy framework for exploiting feature-level uncertainty. The first contribution is the definition of a general framework based on the hybridization of rough and fuzzy sets, along with a new operator called the RF-product, as an effective solution to some problems in image analysis. The second and third contributions are devoted to proving the effectiveness of the proposed framework, by presenting a compression method based on vector quantization, together with its compression capabilities, and an HSV color image segmentation technique.
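
    The RF-product operator is not defined in this abstract, so it is not reproduced here. As background for the rough half of the hybridization, a minimal sketch of classical rough-set lower and upper approximations over a crisp partition (all names hypothetical):

    ```python
    def rough_approximations(target, blocks):
        """Lower/upper rough-set approximations of a target set with respect
        to an indiscernibility partition given as a list of blocks (sets)."""
        lower = set().union(*(b for b in blocks if b <= target))  # blocks inside target
        upper = set().union(*(b for b in blocks if b & target))   # blocks touching target
        return lower, upper

    blocks = [{1, 2}, {3, 4}, {5}]
    print(rough_approximations({1, 2, 3}, blocks))  # ({1, 2}, {1, 2, 3, 4})
    ```

    The gap between the two approximations is the boundary region, which is where fuzzy membership degrees are typically brought in by rough-fuzzy hybrids.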

    Combining rough and fuzzy sets for feature selection


    Hematological image analysis for acute lymphoblastic leukemia detection and classification

    Microscopic analysis of peripheral blood smears is a critical step in the detection of leukemia. However, this type of light microscopic assessment is time consuming, inherently subjective, and governed by hematopathologists' clinical acumen and experience. To circumvent such problems, an efficient computer-aided methodology for quantitative analysis of peripheral blood samples needs to be developed. In this thesis, efforts are therefore made to devise methodologies for automated detection and subclassification of Acute Lymphoblastic Leukemia (ALL) using image processing and machine learning methods. The choice of an appropriate segmentation scheme plays a vital role in the automated disease recognition process. Accordingly, novel schemes are proposed to segment normal mature lymphocyte and malignant lymphoblast images into their constituent morphological regions. To make the proposed schemes viable from a practical and real-time standpoint, the segmentation problem is addressed in both supervised and unsupervised frameworks. The proposed methods are based on neural networks, feature space clustering, and Markov random field modeling, where segmentation is formulated as a pixel classification, pixel clustering, and pixel labeling problem, respectively. A comprehensive validation analysis is presented to evaluate the performance of the four proposed lymphocyte image segmentation schemes against manual segmentation results provided by a panel of hematopathologists. It is observed that the morphological components of normal and malignant lymphocytes differ significantly. To automatically recognize lymphoblasts and detect ALL in peripheral blood samples, an efficient methodology is proposed. Morphological, textural, and color features are extracted from the segmented nucleus and cytoplasm regions of the lymphocyte images. An ensemble of classifiers, denoted EOC3 and comprising three classifiers, shows the highest classification accuracy of 94.73% in comparison to its individual members. Subclassification of ALL according to the French-American-British (FAB) and World Health Organization (WHO) criteria is essential for prognosis and treatment planning. Accordingly, two independent methodologies are proposed for automated classification of malignant lymphocyte (lymphoblast) images based on morphology and phenotype. These methods include lymphoblast image segmentation, nucleus and cytoplasm feature extraction, and efficient classification.
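
    The abstract does not name the three members of EOC3 or their combination rule. A generic three-classifier majority-voting ensemble in scikit-learn, with stand-in member models (the actual EOC3 components are an assumption here), would look roughly like this:

    ```python
    from sklearn.ensemble import VotingClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    # Stand-in members; the thesis's actual EOC3 classifiers are not listed here.
    ensemble = VotingClassifier(
        estimators=[
            ("nb", GaussianNB()),
            ("knn", KNeighborsClassifier(n_neighbors=5)),
            ("svm", SVC()),
        ],
        voting="hard",  # majority vote over the three members
    )
    # ensemble.fit(features, labels)      # features: morphological/textural/color vectors
    # predictions = ensemble.predict(test_features)
    ```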

    Foetal echocardiographic segmentation

    Congenital heart disease affects just under one percent of all live births [1]. Those defects that manifest as changes to the cardiac chamber volumes are the motivation for the research presented in this thesis. Blood volume measurements in vivo require delineation of the cardiac chambers, and manual tracing of foetal cardiac chambers is very time consuming and operator dependent. This thesis presents a multi-region level set snake deformable model, applied in both 2D and 3D, which can automatically adapt to some extent to ultrasound noise such as attenuation, speckle, and partial occlusion artefacts. The algorithm presented is named Mumford Shah Sarti Collision Detection (MSSCD). The level set methods presented in this thesis have an optional shape prior term for constraining the segmentation by a template registered to the image in the presence of shadowing and heavy noise. When applied to real data in the absence of the template, the MSSCD algorithm is initialised from seed primitives placed at the centre of each cardiac chamber, and the voxel statistics inside each chamber are determined before evolution. The MSSCD stops at open boundaries between two chambers as the two approaching level set fronts meet. This is significant when determining volumes for all cardiac compartments, since cardiac indices assume that each chamber is treated in isolation. Comparison of the segmentation results from the implemented snakes, including a previous level set method from the foetal cardiac literature, shows that in both 2D and 3D, on both real and synthetic data, the MSSCD formulation is better suited to these types of data. All the algorithms tested in this thesis are within 2 mm error of manually traced segmentations of the foetal cardiac datasets, which corresponds to less than 10% of the length of a foetal heart. In addition to comparison with manual tracings, all the amorphous deformable model segmentations in this thesis are validated using a physical phantom. The volume estimate of the phantom by the MSSCD segmentation is within 13% of the physically determined volume.
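
    The MSSCD formulation itself is not given in the abstract. As a rough illustration of the two-phase piecewise-constant Mumford-Shah (Chan-Vese) level set evolution that such methods build on, one explicit update step might look like this; the step size, smoothing weight, and the Laplacian used in place of true curvature are simplifying assumptions of this sketch:

    ```python
    import numpy as np

    def chan_vese_step(phi, img, mu=0.2, dt=0.5):
        """One explicit update of a two-phase piecewise-constant level set.
        phi: level set function (front at phi == 0), img: grey-level image."""
        inside = phi > 0
        c1 = img[inside].mean() if inside.any() else 0.0      # mean inside the front
        c2 = img[~inside].mean() if (~inside).any() else 0.0  # mean outside
        # Data term: push each pixel toward the region whose mean fits it better.
        force = (img - c2) ** 2 - (img - c1) ** 2
        # Crude smoothing via the 4-neighbour Laplacian, standing in for curvature.
        lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
               + np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi)
        return phi + dt * (force + mu * lap)
    ```

    In a multi-region collision-detection scheme, one such function is evolved per seeded chamber, with evolution halted where two fronts meet.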

    IMPROVING UNDERSTANDABILITY AND UNCERTAINTY MODELING OF DATA USING FUZZY LOGIC SYSTEMS

    The need for automation, optimality, and efficiency has made modern control and monitoring systems extremely complex and data abundant. However, the complexity of these systems and the abundance of raw data have reduced the understandability and interpretability of the data, which results in reduced state awareness of the system. Furthermore, different levels of uncertainty introduced by sensors and actuators make interpreting and accurately manipulating systems difficult. Classical mathematical methods lack the capability to capture human knowledge and increase understandability while modeling such uncertainty. Fuzzy Logic has been shown to alleviate both of these problems by introducing logic based on vague, human-understandable terms. The use of linguistic terms and simple consequential rules increases the understandability of system behavior as well as of data, while the use of vague terms and the modeling of data from non-discrete prototypes enable the modeling of uncertainty. However, due to recent trends, primary research in fuzzy logic has diverged from the basic concept of understandability. Furthermore, the high computational cost of robust uncertainty modeling has restricted the use of such fuzzy systems in real-world applications. Thus, the goal of this dissertation is to present algorithms and techniques that improve understandability and uncertainty modeling using Fuzzy Logic Systems. To achieve this goal, this dissertation presents the following major contributions: 1) a novel methodology for generating Fuzzy Membership Functions based on understandability, 2) Linguistic Summarization of data using if-then type consequential rules, and 3) novel Shadowed Type-2 Fuzzy Logic Systems for uncertainty modeling. Finally, these techniques are applied to real-world systems and data to exemplify their relevance and usage.
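
    The dissertation's shadowed type-2 constructions are not detailed in this abstract. As background, the classical shadowed-set induction of Pedrycz, which such systems build on, can be sketched as follows; the grid search over thresholds is an assumption of this sketch:

    ```python
    import numpy as np

    def shadowed_partition(mu, alpha):
        """Three-way quantisation of fuzzy memberships: elevate to 1, reduce
        to 0, or leave in the shadow (marked 0.5) between the thresholds."""
        out = np.full_like(mu, 0.5)
        out[mu >= 1.0 - alpha] = 1.0
        out[mu <= alpha] = 0.0
        return out

    def best_alpha(mu, grid=np.linspace(0.01, 0.49, 99)):
        """Pick alpha so that the elevated and reduced membership mass roughly
        balances the size of the shadow region (Pedrycz's criterion)."""
        def imbalance(a):
            elevated = np.sum(1.0 - mu[mu >= 1.0 - a])
            reduced = np.sum(mu[mu <= a])
            shadow = np.sum((mu > a) & (mu < 1.0 - a))
            return abs(elevated + reduced - shadow)
        return min(grid, key=imbalance)
    ```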

    Ground elevation models and land cover classifiers for decimetre resolution urban monitoring

    This thesis presents methods that enable the generation of quantitative environmental indicators for remotely monitoring urban regions. Its contributions are a new morphological and surface-fitting hybrid algorithm for the generation of ground elevation models, a vegetation classifier, and significant research into Canonical Variate Analysis with Rational Polynomials (a feature extraction method that normalises the topographic illumination effect). These methods were tested on a 9600 square kilometre, 20 cm resolution dataset covering Perth.
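
    The thesis's hybrid algorithm is not specified in this summary. A bare-bones morphological ground filter of the kind such hybrids typically start from might look like the sketch below; the window size and height threshold are illustrative values, not taken from the thesis:

    ```python
    from scipy.ndimage import grey_opening

    def simple_ground_filter(dsm, window=15, height_thresh=0.5):
        """Crude morphological ground extraction from a digital surface model:
        a grey-scale opening suppresses features narrower than the window,
        and pixels close to the opened surface are treated as ground."""
        opened = grey_opening(dsm, size=(window, window))
        ground_mask = (dsm - opened) < height_thresh  # small residual => ground
        return opened, ground_mask
    ```

    A surface-fitting stage would then interpolate a smooth terrain model through the pixels retained by the mask.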