219 research outputs found

    Fuzzy ARTMAP: A Neural Network Architecture for Incremental Supervised Learning of Analog Multidimensional Maps

    Full text link
    A new neural network architecture is introduced for incremental supervised learning of recognition categories and multidimensional maps in response to arbitrary sequences of analog or binary input vectors. The architecture, called Fuzzy ARTMAP, achieves a synthesis of fuzzy logic and Adaptive Resonance Theory (ART) neural networks by exploiting a close formal similarity between the computations of fuzzy subsethood and ART category choice, resonance, and learning. Fuzzy ARTMAP also realizes a new Minimax Learning Rule that conjointly minimizes predictive error and maximizes code compression, or generalization. This is achieved by a match tracking process that increases the ART vigilance parameter by the minimum amount needed to correct a predictive error. As a result, the system automatically learns a minimal number of recognition categories, or "hidden units", to meet accuracy criteria. Category proliferation is prevented by normalizing input vectors at a preprocessing stage. A normalization procedure called complement coding leads to a symmetric theory in which the MIN operator (∧) and the MAX operator (∨) of fuzzy logic play complementary roles. Complement coding uses on-cells and off-cells to represent the input pattern, and preserves individual feature amplitudes while normalizing the total on-cell/off-cell vector. Learning is stable because all adaptive weights can only decrease in time. Decreasing weights correspond to increasing sizes of category "boxes". Smaller vigilance values lead to larger category boxes. Improved prediction is achieved by training the system several times using different orderings of the input set. This voting strategy can also be used to assign probability estimates to competing predictions given small, noisy, or incomplete training sets. Four classes of simulations illustrate Fuzzy ARTMAP performance as compared to benchmark back-propagation and genetic algorithm systems. These simulations include (i) finding points inside vs. outside a circle; (ii) learning to tell two spirals apart; (iii) incremental approximation of a piecewise continuous function; and (iv) a letter recognition database. The Fuzzy ARTMAP system is also compared to Salzberg's NGE system and to Simpson's FMMC system. British Petroleum (89-A-1204); Defense Advanced Research Projects Agency (90-0083); National Science Foundation (IRI 90-00530); Office of Naval Research (N00014-91-J-4100); Air Force Office of Scientific Research (90-0175)
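
    As a concrete illustration of the complement-coding and category-choice computations described above, the following minimal Python sketch implements the standard fuzzy ART choice and match functions; the variable names and the toy input are illustrative, not taken from the paper.

```python
import numpy as np

def complement_code(a):
    """Complement coding: represent an input a in [0, 1]^M as (a, 1 - a).

    The resulting 2M-dimensional vector always sums to M, which is the
    normalization that prevents category proliferation."""
    a = np.asarray(a, dtype=float)
    return np.concatenate([a, 1.0 - a])

def category_choice(I, weights, alpha=0.001):
    """Fuzzy choice function T_j = |I ^ w_j| / (alpha + |w_j|), with ^ the
    component-wise minimum and |.| the L1 norm; returns the winning index."""
    T = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in weights]
    return int(np.argmax(T))

def match(I, w):
    """Match function |I ^ w| / |I|, compared against the vigilance rho."""
    return np.minimum(I, w).sum() / I.sum()

# Toy example: one input, one committed category, one vigilance test.
I = complement_code([0.2, 0.7])
w = complement_code([0.3, 0.6])   # illustrative weight of a committed category
rho = 0.8
j = category_choice(I, [w])
print(j, match(I, w) >= rho)      # 0 True -> resonance, category 0 may learn
```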

    Fuzzy ART

    Full text link
    Adaptive Resonance Theory (ART) models are real-time neural networks for category learning, pattern recognition, and prediction. Unsupervised fuzzy ART and supervised fuzzy ARTMAP synthesize fuzzy logic and ART networks by exploiting the formal similarity between the computations of fuzzy subsethood and the dynamics of ART category choice, search, and learning. Fuzzy ART self-organizes stable recognition categories in response to arbitrary sequences of analog or binary input patterns. It generalizes the binary ART 1 model, replacing the set-theoretic intersection (∩) with the fuzzy intersection (∧), or component-wise minimum. A normalization procedure called complement coding leads to a symmetric theory in which the fuzzy intersection and the fuzzy union (∨), or component-wise maximum, play complementary roles. Complement coding preserves individual feature amplitudes while normalizing the input vector, and prevents a potential category proliferation problem. Adaptive weights start equal to one and can only decrease in time. A geometric interpretation of fuzzy ART represents each category as a box that increases in size as weights decrease. A matching criterion controls search, determining how close an input and a learned representation must be for a category to accept the input as a new exemplar. A vigilance parameter (ρ) sets the matching criterion and determines how finely or coarsely an ART system will partition inputs. High vigilance creates fine categories, represented by small boxes. Learning stops when boxes cover the input space. With fast learning, fixed vigilance, and an arbitrary input set, learning stabilizes after just one presentation of each input. A fast-commit slow-recode option allows rapid learning of rare events yet buffers memories against recoding by noisy inputs. Fuzzy ARTMAP unites two fuzzy ART networks to solve supervised learning and prediction problems. A Minimax Learning Rule controls ARTMAP category structure, conjointly minimizing predictive error and maximizing code compression. Low vigilance maximizes compression but may therefore cause very different inputs to make the same prediction. When this coarse grouping strategy causes a predictive error, an internal match tracking control process increases vigilance just enough to correct the error. ARTMAP automatically constructs a minimal number of recognition categories, or "hidden units," to meet accuracy criteria. An ARTMAP voting strategy improves prediction by training the system several times using different orderings of the input set. Voting assigns confidence estimates to competing predictions given small, noisy, or incomplete training sets. ARPA benchmark simulations illustrate fuzzy ARTMAP dynamics. The chapter also compares fuzzy ARTMAP to Salzberg's Nested Generalized Exemplar (NGE) and to Simpson's Fuzzy Min-Max Classifier (FMMC), and concludes with a summary of ART and ARTMAP applications. Advanced Research Projects Agency (ONR N00014-92-J-4015); National Science Foundation (IRI-90-00530); Office of Naval Research (N00014-91-J-4100)
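
    The match-tracking control described in the abstract can be illustrated with a simplified sketch of one supervised presentation. This is a hypothetical rendering, not the full ARTMAP architecture; the helper names, the fast-learning update, and the small epsilon increment are assumptions made for the example.

```python
import numpy as np

def fuzzy_and(x, y):
    return np.minimum(x, y)

def match_value(I, w):
    return fuzzy_and(I, w).sum() / I.sum()

def artmap_present(I, label, categories, rho_baseline=0.0,
                   alpha=0.001, epsilon=1e-5):
    """One supervised presentation with match tracking (simplified).

    categories: list of (weight_vector, predicted_label) pairs.
    On a predictive error, vigilance is raised to the current match value
    plus epsilon, disqualifying the offending category and forcing search."""
    rho = rho_baseline
    remaining = list(range(len(categories)))
    while remaining:
        # Choice: most active category among those not yet disqualified.
        j = max(remaining,
                key=lambda k: fuzzy_and(I, categories[k][0]).sum()
                              / (alpha + categories[k][0].sum()))
        w, pred = categories[j]
        m = match_value(I, w)
        if m < rho:                      # fails vigilance: reset, try next
            remaining.remove(j)
            continue
        if pred == label:                # resonance and correct prediction
            categories[j] = (fuzzy_and(I, w), pred)   # fast learning
            return j
        rho = m + epsilon                # match tracking after an error
        remaining.remove(j)
    # No committed category works: commit a new one for this label.
    categories.append((I.copy(), label))
    return len(categories) - 1
```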

    Fuzzy ART: Fast Stable Learning and Categorization of Analog Patterns by an Adaptive Resonance System

    Full text link
    A Fuzzy ART model capable of rapid stable learning of recognition categories in response to arbitrary sequences of analog or binary input patterns is described. Fuzzy ART incorporates computations from fuzzy set theory into the ART 1 neural network, which learns to categorize only binary input patterns. The generalization to learning both analog and binary input patterns is achieved by replacing appearances of the intersection operator (∩) in ART 1 by the MIN operator (∧) of fuzzy set theory. The MIN operator reduces to the intersection operator in the binary case. Category proliferation is prevented by normalizing input vectors at a preprocessing stage. A normalization procedure called complement coding leads to a symmetric theory in which the MIN operator (∧) and the MAX operator (∨) of fuzzy set theory play complementary roles. Complement coding uses on-cells and off-cells to represent the input pattern, and preserves individual feature amplitudes while normalizing the total on-cell/off-cell vector. Learning is stable because all adaptive weights can only decrease in time. Decreasing weights correspond to increasing sizes of category "boxes". Smaller vigilance values lead to larger category boxes. Learning stops when the input space is covered by boxes. With fast learning and a finite input set of arbitrary size and composition, learning stabilizes after just one presentation of each input pattern. A fast-commit slow-recode option combines fast learning with a forgetting rule that buffers system memory against noise. Using this option, rare events can be rapidly learned, yet previously learned memories are not rapidly erased in response to statistically unreliable input fluctuations. British Petroleum (89-A-1204); Defense Advanced Research Projects Agency (90-0083); National Science Foundation (IRI-90-00530); Air Force Office of Scientific Research (90-0175)
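
    The geometric interpretation mentioned above, weights that only decrease while category boxes only grow, can be made concrete with a short sketch. The values are toy inputs and the fast-learning rule w ← I ∧ w is the only update shown.

```python
import numpy as np

def complement_code(a):
    a = np.asarray(a, dtype=float)
    return np.concatenate([a, 1.0 - a])

def box_corners(w):
    """With complement coding, w = (u, 1 - v) encodes the box [u, v]:
    u is the lower corner, v the upper corner. Weights can only decrease,
    so u can only move down and v up -- the box only grows."""
    M = w.size // 2
    u = w[:M]
    v = 1.0 - w[M:]
    return u, v

# Fast learning: w_new = I ^ w_old (component-wise minimum).
w = complement_code([0.4, 0.4])            # category committed on (0.4, 0.4)
for point in [[0.6, 0.5], [0.3, 0.7]]:
    w = np.minimum(complement_code(point), w)

u, v = box_corners(w)
print(u, v)   # box expanded to cover all presented points: [0.3 0.4] [0.6 0.7]
```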

    Prediction of Istanbul Securities Exchange composite index

    Get PDF
    Ankara: The Department of Management and the Graduate School of Business Administration of Bilkent University, 1993. Thesis (Master's) -- Bilkent University, 1993. Includes bibliographical references (leaves 63-66). This study presents software, developed using Nested Generalized Exemplars (NGE), for predicting the Istanbul Securities Exchange Composite Index. Information reflected in the past values of frequently used monetary variables is used to predict stock returns. Daily returns of the composite index are predicted using: the Central Bank effective selling price of the US Dollar and the Deutsche Mark, the Istanbul Tahtakale closing selling price of the Turkish Republic gold coin and one ounce of gold, the 3-month average deposit rate of commercial banks (İş Bank, Akbank, Yapı Kredi Bank, and Ziraat Bank), and 3-month Government bond interest rates. Data prior to the dates on which the predictions are made are used to learn the forecasting power of the variables on the composite index and to generate the appropriate rules. The results reveal that the information reflected in the past prices of the variables has significant effects on the ISE composite index. Timur, Murat. M.S.
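
    A minimal sketch of the day-ahead setup described above, pairing lagged values of the monetary variables with the next day's index return so that only data prior to the prediction date is used for learning; the series and names here are hypothetical placeholders, not the thesis data.

```python
import numpy as np

# Hypothetical daily series; the names only echo the variables listed above.
usd, gold, deposit_rate, index_return = np.random.default_rng(1).random((4, 100))

def make_training_rows(lag=1):
    """Each row pairs yesterday's monetary variables with today's index return,
    so only data prior to the prediction date enters the learner."""
    X = np.column_stack([usd[:-lag], gold[:-lag], deposit_rate[:-lag]])
    y = index_return[lag:]
    return X, y

X, y = make_training_rows()
print(X.shape, y.shape)    # (99, 3) (99,)
```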

    Concept of a Robust & Training-free Probabilistic System for Real-time Intention Analysis in Teams

    Get PDF
    This thesis addresses the analysis of team intentions in smart environments (SE). Its fundamental claim is that the development and integration of explicit models of user tasks can make an important contribution to the development of mobile and ubiquitous software systems. The work collects descriptions of human behavior in both group situations and problem-solving situations. It examines how SE projects model the activities of a user, and provides a team-intention model for deriving and selecting planned team activities from the observation of multiple users through noisy and heterogeneous sensors. To this end, an approach based on hierarchical dynamic Bayesian networks is chosen.
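
    The inference idea, estimating the ongoing team activity from noisy observations of several users, can be illustrated with a minimal recursive Bayes filter over a discrete activity set. The activities, transition matrix, and sensor model below are toy assumptions, not the hierarchical dynamic Bayesian network developed in the thesis.

```python
import numpy as np

# Toy example: two hypothetical team activities and a noisy binary sensor.
activities = ["meeting", "presentation"]
transition = np.array([[0.9, 0.1],        # P(next activity | current activity)
                       [0.2, 0.8]])
sensor = np.array([[0.7, 0.3],            # P(observation | activity)
                   [0.2, 0.8]])

def filter_step(belief, observation):
    """One recursive Bayes filter update: predict with the transition
    model, then weight by the observation likelihood and renormalize."""
    predicted = transition.T @ belief
    posterior = predicted * sensor[:, observation]
    return posterior / posterior.sum()

belief = np.array([0.5, 0.5])
for obs in [1, 1, 0, 1]:                  # stream of noisy sensor readings
    belief = filter_step(belief, obs)
print(dict(zip(activities, belief.round(3))))
```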

    Modelos híbridos de aprendizaje basados en instancias y reglas para Clasificación Monotónica (Hybrid instance- and rule-based learning models for monotonic classification)

    Get PDF
    In supervised prediction problems, the response attribute depends on certain explanatory attributes. Some real problems require the response attribute to represent ordinal values that should increase with some of the explanatory attributes. These are called classification problems with monotonicity constraints. In this thesis, we have reviewed the monotonic classifiers proposed in the literature and we have formalized the nested generalized exemplar learning theory to tackle monotonic classification. Two algorithms were proposed: a first, greedy one, which requires monotonic data, and an evolutionary-based algorithm, which is able to address imperfect data with monotonic violations present among the instances. Both improve the accuracy, the non-monotonicity index of the predictions, and the simplicity of the models over the state of the art. Tesis Univ. Jaén. Departamento de Informática
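
    A minimal sketch of nested generalized exemplar classification as it is commonly formulated: exemplars are axis-parallel hyperrectangles, and a query takes the class of the nearest (or enclosing) rectangle. The names, toy rectangles, and distance rule are illustrative; the greedy and evolutionary learners proposed in the thesis are not reproduced here.

```python
import numpy as np

class Hyperrectangle:
    def __init__(self, lower, upper, label):
        self.lower = np.asarray(lower, dtype=float)
        self.upper = np.asarray(upper, dtype=float)
        self.label = label

    def distance(self, x):
        """Zero inside the rectangle; otherwise the Euclidean distance to the
        rectangle's surface along each out-of-range attribute."""
        x = np.asarray(x, dtype=float)
        below = np.maximum(self.lower - x, 0.0)
        above = np.maximum(x - self.upper, 0.0)
        return np.linalg.norm(below + above)

def nge_classify(x, rectangles):
    """Nearest-rectangle rule: enclosing rectangles have distance zero."""
    return min(rectangles, key=lambda r: r.distance(x)).label

rects = [
    Hyperrectangle([0.0, 0.0], [0.5, 0.5], label=0),   # low inputs -> low class
    Hyperrectangle([0.4, 0.4], [1.0, 1.0], label=1),   # high inputs -> high class
]
print(nge_classify([0.2, 0.3], rects))   # 0
print(nge_classify([0.9, 0.8], rects))   # 1
```

    In the monotonic setting, one would additionally constrain the rectangles so that their class labels respect the ordering along the monotone attributes.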

    Example-Based Urban Modeling

    Get PDF
    The manual modeling of virtual cities or suburban regions is an extremely time-consuming task that requires expert knowledge from different fields. Existing modeling toolsets have a steep learning curve and may require special training to work with productively. Existing automatic methods rely on rule sets and grammars to generate urban structures; however, their expressiveness is limited by the rule sets. Expert skills are necessary to write rule sets successfully and, in many cases, new rule sets need to be defined for every new building style or street-network style. To enable non-expert users to construct urban structures for individual experiments, this work proposes a portfolio of novel example-based synthesis algorithms and applications for the controlled generation of virtual urban environments. The term example-based denotes here that new virtual urban environments are created by computer programs that reuse existing digitized real-world data as templates. The data necessary to realize the envisioned task, i.e., street networks, topography, building footprint layouts, or even 3D building models, is already publicly available via online services. To enable the reuse of existing urban datasets, novel algorithms need to be developed that encapsulate expert knowledge and thus allow the controlled generation of virtual urban structures from sparse user input. The focus of this work is the automatic generation of three fundamental structures that are common in urban environments: road networks, city blocks, and individual buildings. In order to achieve this goal, the thesis proposes a portfolio of algorithms that are briefly summarized next. In a theoretical chapter, we propose a general optimization technique that allows formulating example-based synthesis as a general resource-constrained k-shortest path (RCKSP) problem. From an abstract problem specification and a database of exemplars carrying resource attributes, we construct an intermediate graph and employ a path-search optimization technique. This allows determining either the best or the k-best solutions. The resulting algorithm has a reduced complexity for the single-constraint case when compared to other graph search-based techniques. For the generation of road networks, two different techniques are proposed. The first algorithm synthesizes a novel road network from user input, i.e., a desired arterial street skeleton, a topography map, and a collection of hierarchical fragments extracted from real-world road networks. The algorithm recursively constructs a novel road network reusing these fragments. Candidate fragments are inserted into the current state of the road network, while shape differences are compensated by warping. The second algorithm synthesizes road networks using generative adversarial networks (GANs), a recently introduced deep learning technique. A pre- and postprocessing pipeline allows using GANs for the generation of road networks. An in-depth evaluation shows that GANs faithfully learn the road structure present in the example network and that graph measures such as area, aspect ratio, and compactness are maintained within the virtual road networks. To fill empty city blocks in road networks we propose two novel techniques. The first algorithm re-uses real-world city blocks and synthesizes building footprint layouts into empty city blocks by retrieving viable candidate blocks from a database. We evaluate the algorithm and synthesize a multitude of city block layouts reusing real-world building footprint arrangements from European and US cities. In addition, we increase the realism of the synthesized layouts by performing example-based placement of 3D building models. This technique is evaluated by placing buildings onto challenging footprint layouts using different example building databases. The second algorithm computes a city block layout resembling the style of a real-world city block. The original footprint layout is deformed to construct a guidance map, i.e., the original layout is transferred to a target city block using warping. This guidance map and the original footprints are used by an optimization technique that computes a novel footprint layout along the city block edges. We perform a detailed evaluation and show that using the guidance map allows transfer of the original layout, locally as well as globally, even when the source and target shapes drastically differ. To synthesize individual buildings, we use the general optimization technique described first and formulate the building generation process as a resource-constrained optimization problem. From an input database of annotated building parts, an abstract description of the building shape, and the specification of resource constraints such as length, area, or the number of architectural elements, a novel building is synthesized. We evaluate the technique by synthesizing a multitude of challenging buildings fulfilling several global and local resource constraints. Finally, we show how this technique can even be used to synthesize buildings having the shape of city blocks and might also be used to fill empty city blocks in virtual street networks. All algorithms presented in this work were developed to work with a small amount of user input. In most cases, simple sketches and the definition of constraints are enough to produce plausible results. Manual work is necessary to set up the building part databases and to download example data from mapping services available on the Internet.
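
    The resource-constrained shortest-path formulation can be illustrated with a minimal single-constraint sketch: a label-correcting search that expands partial paths in cost order and prunes any path whose accumulated resource exceeds a budget. The graph, costs, and budget below are made-up toy values, not data or code from the thesis.

```python
import heapq

def resource_constrained_shortest_path(graph, source, target, budget):
    """Cheapest path by cost subject to a total resource budget.

    graph: {node: [(neighbor, cost, resource), ...]}
    Labels (cost, resource, node, path) are expanded in cost order;
    labels that exceed the budget or are dominated get pruned."""
    heap = [(0.0, 0.0, source, [source])]
    best = {}                               # node -> smallest resource settled so far
    while heap:
        cost, used, node, path = heapq.heappop(heap)
        if node == target:
            return cost, used, path
        if best.get(node, float("inf")) <= used:
            continue                        # dominated: cheaper label used less resource
        best[node] = used
        for nxt, c, r in graph.get(node, []):
            if used + r <= budget:
                heapq.heappush(heap, (cost + c, used + r, nxt, path + [nxt]))
    return None

# Toy graph: each edge is (neighbor, cost, resource), e.g. parts with lengths.
graph = {
    "start": [("a", 1.0, 5.0), ("b", 2.0, 1.0)],
    "a": [("end", 1.0, 5.0)],
    "b": [("end", 2.0, 1.0)],
}
print(resource_constrained_shortest_path(graph, "start", "end", budget=4.0))
# The cheaper path via "a" needs 10 units of resource, so the search returns
# (4.0, 2.0, ['start', 'b', 'end']).
```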

    Investigating Randomised Sphere Covers in Supervised Learning

    Get PDF
    In this thesis, we thoroughly investigate a simple Instance Based Learning (IBL) classifier known as a sphere cover. We propose a simple Randomised Sphere Cover Classifier (αRSC) and use several datasets to evaluate its classification performance. In addition, we analyse the generalisation error of the proposed classifier using bias/variance decomposition. A sphere cover classifier may be described in terms of the compression scheme, which identifies data compression as the reason for high generalisation performance. We investigate the compression capacity of αRSC using a sample compression bound. The compression scheme prompted us to search for new compressibility methods for αRSC; to this end, we used a Gaussian kernel to investigate further data compression.
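
    A minimal sketch of the sphere-cover idea investigated in the thesis, under simplifying assumptions: sphere centres are chosen at random from uncovered training instances, each radius is grown until just before it would include an instance of another class, and a query takes the label of the sphere whose boundary it is closest to. This is an illustrative reconstruction, not the exact αRSC algorithm.

```python
import numpy as np

def build_sphere_cover(X, y, rng=np.random.default_rng(0)):
    """Greedy randomized sphere cover: each sphere is (centre, radius, label),
    with the radius set just below the distance to the nearest opposite-class point."""
    X, y = np.asarray(X, float), np.asarray(y)
    covered = np.zeros(len(X), dtype=bool)
    spheres = []
    while not covered.all():
        i = rng.choice(np.flatnonzero(~covered))        # random uncovered centre
        dists = np.linalg.norm(X - X[i], axis=1)
        enemy = dists[y != y[i]]
        radius = enemy.min() * 0.999 if enemy.size else dists.max()
        spheres.append((X[i], radius, y[i]))
        covered |= (dists <= radius) & (y == y[i])
    return spheres

def classify(x, spheres):
    """Assign the label of the sphere whose boundary is closest to x."""
    x = np.asarray(x, float)
    gaps = [np.linalg.norm(x - c) - r for c, r, _ in spheres]
    return spheres[int(np.argmin(gaps))][2]

X = [[0.1, 0.1], [0.2, 0.3], [0.8, 0.9], [0.9, 0.7]]
y = [0, 0, 1, 1]
spheres = build_sphere_cover(X, y)
print(classify([0.15, 0.2], spheres), classify([0.85, 0.8], spheres))  # 0 1
```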