
    Examples of Artificial Perceptions in Optical Character Recognition and Iris Recognition

    This paper assumes the hypothesis that human learning is perception-based and, consequently, that the learning process and the perceptions should not be represented and investigated independently, or modeled in different simulation spaces. In order to preserve the analogy between artificial and human learning, the former is assumed here to be based on artificial perception. Hence, instead of choosing to apply or develop a Computational Theory of (human) Perceptions, we choose to mirror the human perceptions in a numeric (computational) space as artificial perceptions and to analyze the interdependence between artificial learning and artificial perception in the same numeric space, using one of the simplest tools of Artificial Intelligence and Soft Computing, namely perceptrons. As practical applications, we work through two examples: Optical Character Recognition and Iris Recognition. In both cases a simple Turing test shows that the artificial perceptions of the difference between two characters and between two irides are fuzzy, whereas the corresponding human perceptions are, in fact, crisp.
    Comment: 5th Int. Conf. on Soft Computing and Applications (Szeged, HU), 22-24 Aug 201
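    A minimal sketch may make the perceptron tool concrete. The toy 4-pixel "character" vectors, labels, learning rate and training loop below are illustrative assumptions, not the paper's actual OCR or iris features; the point is only that the raw score of a trained perceptron is graded, while the thresholded decision is crisp.

        import numpy as np

        # Classic perceptron: a linear threshold unit trained with the
        # perceptron learning rule on hypothetical binary feature vectors.
        def train_perceptron(X, y, epochs=20, lr=0.1):
            """Learn weights w and bias b so that step(w.x + b) matches labels y in {0, 1}."""
            w = np.zeros(X.shape[1])
            b = 0.0
            for _ in range(epochs):
                for xi, yi in zip(X, y):
                    pred = 1.0 if np.dot(w, xi) + b > 0 else 0.0
                    w += lr * (yi - pred) * xi   # perceptron update rule
                    b += lr * (yi - pred)
            return w, b

        # Hypothetical 4-pixel "images" of two character classes (linearly separable toy data).
        X = np.array([[1, 0, 1, 0], [1, 1, 1, 0], [0, 1, 0, 1], [0, 0, 1, 1]], dtype=float)
        y = np.array([0, 0, 1, 1], dtype=float)

        w, b = train_perceptron(X, y)
        scores = X @ w + b
        print("raw scores (graded):", scores)
        print("thresholded decisions (crisp):", (scores > 0).astype(int))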

    The posterity of Zadeh's 50-year-old paper: A retrospective in 101 Easy Pieces – and a Few More

    This article was commissioned by the 22nd IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) to celebrate the 50th anniversary of Lotfi Zadeh's seminal 1965 paper on fuzzy sets. In addition to Lotfi's original paper, this note itemizes 100 citations of books and papers deemed "important (significant, seminal, etc.)" by 20 of the 21 living IEEE CIS Fuzzy Systems pioneers. Each of the 20 contributors supplied 5 citations, and Lotfi's paper makes the overall list a tidy 101, as in "Fuzzy Sets 101". This note is not a survey in any real sense of the word, but the contributors did offer short remarks to indicate the reason for inclusion (e.g., historical, topical, seminal) of each citation. Citation statistics are easy to find and notoriously erroneous, so we refrain from reporting them - almost. The exception is that, according to Google Scholar on April 9, 2015, Lotfi's 1965 paper has been cited 55,479 times.

    An investigation into adaptive power reduction techniques for neural hardware

    In light of the growing applicability of Artificial Neural Networks (ANNs) in the signal processing field [1] and the present thrust of the semiconductor industry towards low-power SoCs for mobile devices [2], the power consumption of ANN hardware has become a very important implementation issue. Adaptability is a powerful and useful feature of neural networks. All current approaches to low-power ANN hardware are 'non-adaptive' with respect to the power consumption of the network (i.e. power reduction is not an objective of the adaptation/learning process). In the research work presented in this thesis, investigations into possible adaptive power reduction techniques have been carried out, which attempt to exploit the adaptability of neural networks in order to reduce their power consumption. Three separate approaches to such adaptive power reduction are proposed: adaptation of size, adaptation of network weights, and adaptation of calculation precision. Initial case studies exhibit promising results with significant power reduction.
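    The abstract does not spell out the adaptation mechanisms, but a small sketch can illustrate one plausible reading of "adaptation of calculation precision": quantize trained weights to progressively narrower fixed-point grids and keep the coarsest precision that preserves accuracy, since narrower arithmetic generally dissipates less power in hardware. The quantize helper, the bit-widths and the single linear-threshold stand-in for a full ANN below are assumptions for illustration only, not the thesis's method.

        import numpy as np

        def quantize(weights, n_bits):
            """Uniformly quantize weights onto a signed n-bit fixed-point grid."""
            levels = 2 ** (n_bits - 1) - 1          # e.g. 127 positive levels for 8 bits
            step = np.max(np.abs(weights)) / levels
            return np.round(weights / step) * step

        def accuracy(weights, X, y):
            """Accuracy of a single linear threshold unit (stand-in for a full ANN)."""
            preds = (X @ weights > 0).astype(int)
            return np.mean(preds == y)

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 16))
        true_w = rng.normal(size=16)                 # plays the role of trained weights
        y = (X @ true_w > 0).astype(int)

        # "Adapt" precision: start wide, then narrow the bit-width while accuracy holds.
        for bits in (16, 8, 6, 4, 3, 2):
            print(f"{bits:2d}-bit weights -> accuracy {accuracy(quantize(true_w, bits), X, y):.2f}")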

    Nature of the learning algorithms for feedforward neural networks

    The neural network (NN) model, comprised of relatively simple computing elements operating in parallel, offers an attractive and versatile framework for exploring a variety of learning structures and processes for intelligent systems. Due to the amount of research developed in the area, many types of networks have been defined. The one of interest here is the multi-layer perceptron, as it is one of the simplest and is considered a powerful representation tool whose complete potential has not been adequately exploited and whose limitations have yet to be specified in a formal and coherent framework. This dissertation addresses the theory of generalisation performance and architecture selection for the multi-layer perceptron; a subsidiary aim is to compare and integrate this model with existing data analysis techniques and to exploit its potential by combining it with certain constructs from computational geometry, creating a reliable, coherent network design process which conforms to the characteristics of a generative learning algorithm, i.e. one that includes mechanisms for manipulating the connections and/or units that comprise the architecture, in addition to the procedure for updating the weights of the connections. This means that it is unnecessary to provide an initial network as input to the complete training process. After discussing in general terms the motivation for this study, the multi-layer perceptron model is introduced and reviewed, along with the relevant supervised training algorithm, i.e. backpropagation. More particularly, it is argued that a network developed employing this model can in general be trained and designed much better by extracting more information about the domains of interest through the application of certain geometric constructs in a preprocessing stage, specifically by generating the Voronoi Diagram and Delaunay Triangulation [Okabe et al. 92] of the set of points comprising the training set. Once a final architecture which performs appropriately on it has been obtained, Principal Component Analysis [Jolliffe 86] is applied to the outputs produced by the units in the network's hidden layer to eliminate the redundant dimensions of this space.
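    A rough sketch can convey the flavour of this pipeline, using scipy.spatial.Delaunay for the geometric preprocessing, scikit-learn's MLPClassifier for the backpropagation-trained multi-layer perceptron, and PCA on the hidden-layer activations. The toy data, the network size and the use of an off-the-shelf trainer are assumptions made here; the dissertation's actual generative design procedure is considerably more involved.

        import numpy as np
        from scipy.spatial import Delaunay
        from sklearn.neural_network import MLPClassifier
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(42)
        X = rng.normal(size=(300, 2))
        y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)   # toy non-linear problem

        # (1) Geometric preprocessing: triangulate the training points.
        tri = Delaunay(X)
        print("Delaunay simplices:", len(tri.simplices))

        # (2) Train a multi-layer perceptron with backpropagation.
        mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
        mlp.fit(X, y)

        # (3) PCA on the hidden-layer activations, relu(X W1 + b1), to gauge redundancy.
        hidden = np.maximum(0.0, X @ mlp.coefs_[0] + mlp.intercepts_[0])
        explained = np.cumsum(PCA().fit(hidden).explained_variance_ratio_)
        print("cumulative variance explained by hidden components:", np.round(explained, 3))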

    Représentation du niveau de croyance en lien avec la notion d'aléa

    In this technical note, we develop a belief representation that generalizes the vanilla possibility theory, extending it to a bi-dimensional representation where randomness is taken into account. This way, we revisit and extend the usual quantitative implementation of belief level. We state the basic requirements and discuss the difference with the vanilla possibility theory before investigating the compatibility with Boolean and Kleene's three-valued logic calculus. We then relate this representation to probability theory, based on the assumption that necessity and possibility provide bounds for probability values, and finally discuss how to define a relevant projection of real values onto an admissible representation. This provides the basic ingredients to implement such an extension of belief representation in existing mechanisms that use the basic representation. The operational part of this development is available as open-source code.
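    The note's operational code is released separately; as a stand-alone illustration of the underlying reading, the sketch below represents a belief level as a (necessity, possibility) pair that brackets an unknown probability, N(A) <= P(A) <= Pi(A), and collapses to Kleene's three truth values at the extremes. The Belief class, the min-based conjunction and the example values are assumptions made here, not the note's actual design.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Belief:
            """A belief level as bounds 0 <= necessity <= possibility <= 1 on a probability."""
            necessity: float
            possibility: float

            def __post_init__(self):
                if not (0.0 <= self.necessity <= self.possibility <= 1.0):
                    raise ValueError("require 0 <= necessity <= possibility <= 1")

            def admits(self, p: float) -> bool:
                """Is probability p compatible with these bounds?"""
                return self.necessity <= p <= self.possibility

            def kleene(self) -> str:
                """Extreme cases map to Kleene's truth values; everything else reads as unknown."""
                if self.necessity == self.possibility == 1.0:
                    return "true"
                if self.necessity == self.possibility == 0.0:
                    return "false"
                return "unknown"

        def conj(a: Belief, b: Belief) -> Belief:
            """Min-based conjunction (a common possibilistic convention, assumed here)."""
            return Belief(min(a.necessity, b.necessity), min(a.possibility, b.possibility))

        print(Belief(0.2, 0.9).admits(0.5))              # True: 0.5 lies within the bounds
        print(conj(Belief(0, 1), Belief(1, 1)).kleene()) # unknown AND true -> unknown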

    An Introduction to Machine Learning -2/E


    Fuzzy Mathematics

    This book provides a timely overview of topics in fuzzy mathematics. It lays the foundation for further research and applications in a broad range of areas. It contains breakthrough analysis of how results from the many variations and extensions of fuzzy set theory can be obtained from known results of traditional fuzzy set theory. The book contains not only theoretical results but also a wide range of applications in areas such as decision analysis, optimal allocation in possibilistic and mixed models, pattern classification, credibility measures, algorithms for modeling uncertain data, and numerical methods for solving fuzzy linear systems. The book offers an excellent reference for advanced undergraduate and graduate students in applied and theoretical fuzzy mathematics. Researchers and referees in fuzzy set theory will find the book to be of great value.
    • 

    corecore