
    Automatic histogram threshold using fuzzy measures

    In this paper, an automatic histogram threshold approach based on a fuzziness measure is presented. This work is an improvement of an existing method. By using fuzzy logic concepts, the approach avoids the problems involved in finding the minimum of a criterion function. Similarity between gray levels is the key to finding an optimal threshold. Two initial regions of gray levels, located at the boundaries of the histogram, are defined. Then, using an index of fuzziness, a similarity process is started to find the threshold point. A significant contrast between objects and background is assumed, and histogram equalization is applied beforehand to low-contrast images. No prior knowledge of the image is required.
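    The similarity-driven search described above can be illustrated with a minimal sketch (not the authors' exact procedure): two seed regions grow inward from the histogram boundaries, and at each step the candidate gray level more similar to its region's weighted mean is absorbed; the point where the regions meet is taken as the threshold. The seed width and the linear similarity measure are assumptions made for this illustration.

```python
import numpy as np

def region_mean(hist, levels):
    """Histogram-weighted mean gray level of a region."""
    w = hist[levels]
    return np.average(levels, weights=w) if w.sum() > 0 else float(np.mean(levels))

def fuzzy_similarity_threshold(hist, seed=8):
    """Grow two gray-level regions inward from the histogram boundaries,
    always extending the region that is more similar to its next candidate
    level; the meeting point of the two regions is the threshold."""
    levels = np.arange(len(hist))
    span = len(hist) - 1
    l, r = seed - 1, len(hist) - seed       # inner edges of the seed regions
    while l + 1 < r:
        lo_mean = region_mean(hist, levels[:l + 1])
        hi_mean = region_mean(hist, levels[r:])
        # similarity of a gray level to a region: 1 - normalized distance
        sim_lo = 1.0 - abs((l + 1) - lo_mean) / span
        sim_hi = 1.0 - abs((r - 1) - hi_mean) / span
        if sim_lo >= sim_hi:
            l += 1                           # low region absorbs level l + 1
        else:
            r -= 1                           # high region absorbs level r - 1
    return l                                 # last gray level of the low region
```

    On a bimodal histogram with a clear valley, the two regions meet near the valley; as the abstract notes, a significant object/background contrast is assumed for this to work.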

    Fuzzy geometry, entropy, and image information

    Presented here are various uncertainty measures arising from grayness ambiguity and spatial ambiguity in an image, and their possible applications as image information measures. Definitions are given of an image in the light of fuzzy set theory, and of information measures and tools relevant for processing and analysis, e.g., fuzzy geometrical properties, correlation, bound functions, and entropy measures. Also given is a formulation of algorithms, along with management of the uncertainties, for segmentation, object extraction, and edge detection. The output obtained is both fuzzy and nonfuzzy. Ambiguity in the evaluation and assessment of membership functions is also described.
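    Two standard grayness-ambiguity measures of the kind surveyed here, the logarithmic fuzzy entropy (De Luca-Termini form) and the linear index of fuzziness, can be sketched directly from their definitions; the small clipping constant is an implementation assumption to keep the logarithm finite.

```python
import numpy as np

def fuzzy_entropy(mu):
    """Logarithmic fuzzy entropy (De Luca-Termini form), normalized to [0, 1]:
    0 for a crisp set (all memberships 0 or 1), 1 when every membership is 0.5."""
    mu = np.clip(np.asarray(mu, dtype=float), 1e-12, 1.0 - 1e-12)
    shannon = -(mu * np.log(mu) + (1.0 - mu) * np.log(1.0 - mu))
    return float(shannon.mean() / np.log(2.0))

def index_of_fuzziness(mu):
    """Linear index of fuzziness: normalized distance between the fuzzy set
    and its nearest crisp set."""
    mu = np.asarray(mu, dtype=float)
    return float(2.0 * np.minimum(mu, 1.0 - mu).mean())
```

    Both measures peak when the membership plane of an image is maximally ambiguous (every membership 0.5) and vanish for a crisp segmentation, which is what makes them usable as image information measures.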

    The commodity chain of the household: from survey design to policy and practice

    Data collection, analysis, and policy formulation all require a social unit to be defined, generally called the household. Multidisciplinary evidence shows that households as defined by survey practitioners often bear little resemblance to lived socio-economic units. This study examines how a shared language, the 'household', can generate misunderstandings, because different groups with distinctive understandings of the term are often unaware that others may be using 'household' differently. Results from four interlinked and iterative methods are presented: a review of household survey documentation (1950s-present); ethnographic ground-truthing fieldwork; in-depth key informant interviews; and modelling. Results show that whereas data collectors have a clear idea of what a 'household' is, data users are often unaware of the nuances of the constraints imposed by data collection. This has implications for policy planning and practice. What interviewees consider when they think of their household can differ systematically from data collectors' definitions.

    Fuzzy clustering with volume prototypes and adaptive cluster merging

    Two extensions to objective-function-based fuzzy clustering are proposed. First, the (point) prototypes are extended to hypervolumes, whose size can be fixed or determined automatically from the data being clustered. It is shown that clustering with hypervolume prototypes can be formulated as the minimization of an objective function. Second, a heuristic cluster-merging step is introduced, in which the similarity among the clusters is assessed during optimization. Starting with an overestimate of the number of clusters in the data, similar clusters are merged in order to obtain a suitable partitioning. An adaptive threshold for merging is proposed. The extensions are applied to the Gustafson–Kessel and fuzzy c-means algorithms, and the resulting extended algorithm is given. The properties of the new algorithm are illustrated by various examples.
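    The merging idea can be sketched with plain fuzzy c-means and a fixed distance threshold standing in for the paper's volume prototypes and adaptive threshold (both simplifications are assumptions of this sketch): start with too many clusters, then combine pairs whose prototypes lie closer than the threshold.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means with point prototypes; returns centers V (c, d)
    and the fuzzy partition matrix U (c, n)."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                       # columns sum to 1
    for _ in range(iters):
        W = U ** m
        V = (W @ X) / W.sum(axis=1, keepdims=True)            # update centers
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))                    # update memberships
        U /= U.sum(axis=0)
    return V, U

def merge_similar(V, U, tol):
    """Heuristic merge step: repeatedly combine the first cluster pair whose
    prototypes are closer than tol, summing their membership rows."""
    V, U = V.copy(), U.copy()
    merged = True
    while merged and len(V) > 1:
        merged = False
        for i in range(len(V)):
            for j in range(i + 1, len(V)):
                if np.linalg.norm(V[i] - V[j]) < tol:
                    U[i] += U[j]                      # absorb memberships
                    V[i] = (V[i] + V[j]) / 2.0        # midpoint prototype
                    V = np.delete(V, j, axis=0)
                    U = np.delete(U, j, axis=0)
                    merged = True
                    break
            if merged:
                break
    return V, U
```

    In the paper the threshold is adapted during optimization rather than fixed, and the prototypes carry a volume; this sketch only shows the overall start-large-then-merge structure.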

    FSL-BM: Fuzzy Supervised Learning with Binary Meta-Feature for Classification

    This paper introduces a novel real-time Fuzzy Supervised Learning with Binary Meta-Feature (FSL-BM) algorithm for big data classification tasks. The study of real-time algorithms addresses several major concerns, namely accuracy, memory consumption, time complexity, and the ability to relax assumptions. Attaining a fast computational model that provides both fuzzy logic and supervised learning is one of the main challenges in machine learning. In this paper, we present the FSL-BM algorithm as an efficient solution for supervised learning with fuzzy logic processing, using a binary meta-feature representation together with the Hamming distance and a hash function to relax assumptions. While many studies over the last decade have focused on reducing time complexity and increasing accuracy, the novel contribution of the proposed solution is the integration of the Hamming distance, a hash function, binary meta-features, and binary classification into a real-time supervised method. The hash table (HT) component gives fast access to existing indices, and therefore the generation of new indices in constant time complexity, which supersedes existing fuzzy supervised algorithms with better or comparable results. To summarize, the main contribution of this technique for real-time fuzzy supervised learning is to represent the hypothesis through binary input as a meta-feature space and to create the fuzzy supervised hash table used to train and validate the model. Comment: FICC201
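    As a loose illustration of the hash-table idea (not the authors' FSL-BM; the class name and the nearest-neighbor fallback rule are invented for this sketch), binary meta-feature patterns can serve directly as hash keys for O(1) lookup, with the Hamming distance acting as a fuzzy fallback for unseen patterns:

```python
from collections import defaultdict

def hamming(a, b):
    """Hamming distance between two equal-width bit patterns stored as ints."""
    return bin(a ^ b).count("1")

class HashedBinaryClassifier:
    """Toy fuzzy classifier over binary meta-features: known patterns are
    resolved by an O(1) hash lookup; unseen patterns fall back to the
    nearest stored pattern by Hamming distance, and the returned label
    distribution plays the role of a fuzzy membership."""

    def __init__(self, n_bits):
        self.n_bits = n_bits                 # width of the binary patterns
        self.table = defaultdict(lambda: defaultdict(int))  # pattern -> label counts

    def fit(self, patterns, labels):
        for p, y in zip(patterns, labels):
            self.table[p][y] += 1

    def predict_fuzzy(self, pattern):
        if pattern not in self.table:        # fuzzy fallback for unseen input
            pattern = min(self.table, key=lambda q: hamming(pattern, q))
        counts = self.table[pattern]
        total = sum(counts.values())
        return {label: c / total for label, c in counts.items()}
```

    An exact, frequently seen pattern yields a confident, near-crisp answer, while an unseen pattern inherits the label distribution of its nearest stored neighbor.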

    Fuzzy Supernova Templates II: Parameter Estimation

    Wide field surveys will soon be discovering Type Ia supernovae (SNe) at rates of several thousand per year. Spectroscopic follow-up can only scratch the surface for such enormous samples, so these extensive data sets will only be useful to the extent that they can be characterized by the survey photometry alone. In a companion paper (Rodney and Tonry, 2009) we introduced the SOFT method for analyzing SNe using direct comparison to template light curves, and demonstrated its application for photometric SN classification. In this work we extend the SOFT method to derive estimates of redshift and luminosity distance for Type Ia SNe, using light curves from the SDSS and SNLS surveys as a validation set. Redshifts determined by SOFT using light curves alone are consistent with spectroscopic redshifts, showing a root-mean-square scatter in the residuals of RMS_z=0.051. SOFT can also derive simultaneous redshift and distance estimates, yielding results that are consistent with the currently favored Lambda-CDM cosmological model. When SOFT is given spectroscopic information for SN classification and redshift priors, the RMS scatter in Hubble diagram residuals is 0.18 mags for the SDSS data and 0.28 mags for the SNLS objects. Without access to any spectroscopic information, and even without any redshift priors from host galaxy photometry, SOFT can still measure reliable redshifts and distances, with an increase in the Hubble residuals to 0.37 mags for the combined SDSS and SNLS data set. Using Monte Carlo simulations we predict that SOFT will be able to improve constraints on time-variable dark energy models by a factor of 2-3 with each new generation of large-scale SN surveys. Comment: 20 pages, 7 figures, accepted to ApJ; paper 1 is arXiv:0910.370

    Self-growing neural network architecture using crisp and fuzzy entropy

    The paper briefly describes the self-growing neural network algorithm CID2, which makes decision trees equivalent to hidden layers of a neural network. The algorithm generates a feedforward architecture using crisp and fuzzy entropy measures. Results are shown and discussed for a real-life recognition problem, distinguishing defects in a glass ribbon, and for a benchmark problem, differentiating two spirals.