
    Combining Neuro-Fuzzy Classifiers for Improved Generalisation and Reliability

    In this paper a combination of neuro-fuzzy classifiers for improved classification performance and reliability is considered. A general fuzzy min-max (GFMM) classifier with an agglomerative learning algorithm is used as the main building block. An alternative approach to combining individual classifier decisions, involving combination at the classifier model level, is proposed. The complexity and transparency of the resulting classifier are comparable with those of classifiers generated during a single cross-validation procedure, while the improved classification performance and reduced variance are comparable to those of an ensemble of classifiers with combined (averaged/voted) decisions. We also illustrate how combining at the model level can be used to speed up the training of GFMM classifiers for large data sets.
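    The contrast between decision-level and model-level combination can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: hyperboxes are reduced to min/max corner points with a class label, the membership function is a simple ramp stand-in for the GFMM one, and the model-level combination is plain pooling of hyperboxes from independently trained models (the paper additionally controls the complexity of the merged model).

    ```python
    import numpy as np

    class Hyperbox:
        """Axis-aligned hyperbox fuzzy set: min point v, max point w, class label."""
        def __init__(self, v, w, label):
            self.v, self.w, self.label = np.asarray(v, float), np.asarray(w, float), label

    def membership(x, box, gamma=4.0):
        """Ramp-style membership: 1 inside the box, decaying with distance outside.
        (Simplified stand-in for the GFMM membership function.)"""
        below = np.maximum(0.0, box.v - x)   # violation of the lower bound
        above = np.maximum(0.0, x - box.w)   # violation of the upper bound
        return float(np.min(1.0 - np.minimum(1.0, gamma * (below + above))))

    def classify(x, boxes):
        """Assign x to the class of the hyperbox with the highest membership."""
        return max(boxes, key=lambda b: membership(x, b)).label

    def combine_decisions(x, models):
        """Decision-level combination: each model votes, the majority wins."""
        votes = [classify(x, m) for m in models]
        return max(set(votes), key=votes.count)

    def combine_models(models):
        """Model-level combination: pool the hyperboxes of all models into one classifier."""
        return [box for m in models for box in m]
    ```

    The pooled model produced by combine_models is queried exactly like a single GFMM classifier, which is what keeps its complexity and transparency close to those of an individually trained model.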

    Heegaard Floer homology and integer surgeries on links

    Let L be a link in an integral homology three-sphere. We give a description of the Heegaard Floer homology of integral surgeries on L in terms of some data associated to L, which we call a complete system of hyperboxes for L. Roughly, a complete system of hyperboxes consists of chain complexes for (some versions of) the link Floer homology of L and all its sublinks, together with several chain maps between these complexes. Further, we introduce a way of presenting closed four-manifolds with b_2^+ > 1 by four-colored framed links in the three-sphere. Given a link presentation of this kind for a four-manifold X, we then describe the Ozsvath-Szabo mixed invariants of X in terms of a complete system of hyperboxes for the link. Finally, we explain how a grid diagram produces a particular complete system of hyperboxes for the corresponding link.

    General fuzzy min-max neural network for clustering and classification

    This paper describes a general fuzzy min-max (GFMM) neural network which is a generalization and extension of the fuzzy min-max clustering and classification algorithms of Simpson (1992, 1993). The GFMM method combines supervised and unsupervised learning in a single training algorithm. The fusion of clustering and classification results in an algorithm that can be used for pure clustering, pure classification, or hybrid clustering-classification. It exhibits the property of finding decision boundaries between classes while clustering patterns that cannot be said to belong to any of the existing classes. As in the original algorithms, hyperbox fuzzy sets are used as the representation of clusters and classes. Learning is usually completed in a few passes and consists of placing and adjusting the hyperboxes in the pattern space; this is an expansion-contraction process. The classification results can be crisp or fuzzy. New data can be included without the need for retraining. While retaining all the interesting features of the original algorithms, a number of modifications to their definition have been made in order to accommodate fuzzy input patterns in the form of lower and upper bounds, combine supervised and unsupervised learning, and improve the effectiveness of operations. A detailed account of the GFMM neural network, its comparison with Simpson's fuzzy min-max neural networks, a set of examples, and an application to leakage detection and identification in water distribution systems are given.
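    The expansion part of the expansion-contraction learning step can be made concrete with a small sketch. This is a simplified illustration under assumed conventions (unit-hypercube inputs, a maximum hyperbox size theta, a crisp per-dimension expansion test), not the paper's exact procedure; the GFMM membership function and the overlap-contraction rules are more elaborate than what is shown here.

    ```python
    import numpy as np

    THETA = 0.3   # assumed maximum hyperbox size (per-dimension bound)

    def can_expand(v, w, xl, xu, theta=THETA):
        """Expansion test: the enlarged hyperbox must not exceed theta in any dimension.
        v, w are the hyperbox min/max points; [xl, xu] is the (possibly fuzzy) input."""
        new_v, new_w = np.minimum(v, xl), np.maximum(w, xu)
        return np.all(new_w - new_v <= theta), new_v, new_w

    def train_step(boxes, xl, xu, label, theta=THETA):
        """One on-line GFMM-style step: expand a same-class hyperbox if allowed,
        otherwise create a new hyperbox for the input pattern."""
        for box in boxes:
            if box["label"] == label:
                ok, nv, nw = can_expand(box["v"], box["w"], xl, xu, theta)
                if ok:
                    box["v"], box["w"] = nv, nw
                    return boxes            # (overlap contraction omitted in this sketch)
        boxes.append({"v": np.array(xl, float), "w": np.array(xu, float), "label": label})
        return boxes

    # Usage: train on crisp points (xl == xu) in the unit square.
    boxes = []
    for x, y in [([0.1, 0.1], 0), ([0.15, 0.2], 0), ([0.8, 0.9], 1)]:
        boxes = train_step(boxes, np.array(x), np.array(x), y)
    ```

    Because each training pattern either grows an existing hyperbox or creates a new one, new data can be incorporated without retraining from scratch, which is the adaptation property the abstract refers to.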

    Learning Hybrid Neuro-Fuzzy Classifier Models From Data: To Combine or Not to Combine?

    To combine or not to combine? Though not a question of the same gravity as Shakespeare's "to be or not to be", it is examined in this paper in the context of a hybrid neuro-fuzzy pattern classifier design process. A general fuzzy min-max neural network with its basic learning procedure is used within six different algorithm-independent learning schemes. Various versions of cross-validation, resampling techniques, and data editing approaches, leading to the generation of either a single classifier or a multiple classifier system, are scrutinised and compared. The classification performance on unseen data, commonly used as a criterion for comparing competing designs, is augmented by four further criteria attempting to capture additional characteristics of classifier generation schemes. These include: the ability to estimate the true classification error rate, classifier transparency, the computational complexity of the learning scheme, and the potential for adaptation to changing environments and to new classes of data. One of the main questions examined is whether and when to use a single classifier or a combination of a number of component classifiers within a multiple classifier system.
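    The single-classifier versus multiple-classifier choice studied in the paper can be outlined with a generic cross-validation skeleton. The snippet below is a hedged illustration only: it uses scikit-learn's KFold and a decision tree as a stand-in for the GFMM base learner, and shows the two end points of the design space (keep the best fold model, or keep all fold models and vote); the paper's six learning schemes are not reproduced.

    ```python
    import numpy as np
    from sklearn.model_selection import KFold
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    def cv_designs(X, y, n_splits=5, seed=0):
        """Train one model per CV fold; return (best single model, list of all fold models)."""
        models, scores = [], []
        for train_idx, val_idx in KFold(n_splits, shuffle=True, random_state=seed).split(X):
            clf = DecisionTreeClassifier(random_state=seed).fit(X[train_idx], y[train_idx])
            models.append(clf)
            scores.append(accuracy_score(y[val_idx], clf.predict(X[val_idx])))
        best_single = models[int(np.argmax(scores))]   # design 1: keep the best fold model
        return best_single, models                      # design 2: keep them all

    def vote(models, X):
        """Majority vote of the fold models (the multiple-classifier-system design).
        Assumes non-negative integer class labels."""
        preds = np.stack([m.predict(X) for m in models])
        return np.array([np.bincount(col).argmax() for col in preds.T])
    ```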

    A global optimization approach to solve multi-aircraft routing problems

    "This chapter appears in Computational Models, Software Engineering and Advanced Technologies in Air Transportation edited by Dr. Li Weigang and Dr. Alexandre G. de Barros. Chap.12 pp.237-259. Copyright 2009. Posted by permission of the publisher."This paper describes the formulation and solution of a multi-aircraft routing problem which is posed as a global optimization calculation. The paper extends previous work (involving a single aircraft using two dimensions) which established that the algorithm DIRECT is a suitable solution technique. The present work considers a number of ways of dealing with multiple routes using different problem decompositions. A further enhancement is the introduction of altitude to the problems so that full three-dimensional routes can be produced. Illustrative numerical results are presented involving up to three aircraft and including examples which feature routes over real-life terrain data

    Constructive spherical codes on layers of flat tori

    A new class of spherical codes is constructed by selecting a finite subset of flat tori from a foliation of the unit sphere S^{2L-1} of R^{2L} and designing a structured codebook on each torus layer. The resulting spherical code can be the image of a lattice restricted to a specific hyperbox in R^L in each layer. Group structure and homogeneity, useful for efficient storage and decoding, are inherited from the underlying lattice codebook. A systematic method for constructing such codes is presented and, as an example, the Leech lattice is used to construct a spherical code in R^{48}. Upper and lower bounds on the performance, the asymptotic packing density, and a method for decoding are derived.
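    The layer construction can be illustrated with the standard flat-torus map: a point u in a hyperbox in R^L is sent to a point of the torus T_c inside S^{2L-1}, where the unit-norm vector c with positive entries selects the layer. The sketch below uses a scaled cubic lattice as a stand-in for the structured lattice codebooks of the paper (the Leech-lattice example is not reproduced), so the step size and layer vector are assumptions for illustration.

    ```python
    import numpy as np
    from itertools import product

    def torus_map(u, c):
        """Map u in the hyperbox [0, 2*pi*c_1) x ... x [0, 2*pi*c_L) onto the flat torus
        T_c = {(c_1 e^{i t_1}, ..., c_L e^{i t_L})} inside the unit sphere S^{2L-1}."""
        t = np.asarray(u) / c                        # angle on each circle factor
        return np.column_stack([c * np.cos(t), c * np.sin(t)]).ravel()

    def layer_codebook(c, step):
        """Codebook on one torus layer: a cubic lattice (stand-in for a structured
        lattice codebook) restricted to the hyperbox, then mapped onto the torus."""
        grids = [np.arange(0.0, 2 * np.pi * ci, step) for ci in c]
        return np.array([torus_map(u, c) for u in product(*grids)])

    # Example: one layer of S^3 (L = 2); every codeword lies on the unit sphere.
    c = np.array([0.6, 0.8])                         # ||c|| = 1 selects the layer
    code = layer_codebook(c, step=0.5)
    assert np.allclose(np.linalg.norm(code, axis=1), 1.0)
    ```

    Since the map is an isometry up to the flat metric of the torus, distances between codewords within a layer are controlled by distances in the underlying lattice, which is what makes the lattice's group structure directly usable for storage and decoding.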

    Heegaard Floer homology of surgeries on two-bridge links

    We give an O(p^2) time algorithm to compute the generalized Heegaard Floer complexes A^-_{s_1,s_2}(L) for an oriented two-bridge link L = b(p,q) by using nice diagrams. Using the link surgery formula of Manolescu-Ozsváth, we also show that HF^- and the d-invariants of all integer surgeries on two-bridge links are determined by the A^-_{s_1,s_2}(L)'s. We obtain a polynomial time algorithm to compute HF^- of all the surgeries on two-bridge links, with Z/2Z coefficients. In addition, we calculate some examples explicitly: HF^- and the d-invariants of all integer surgeries on a family of hyperbolic two-bridge links including the Whitehead link.