    A Fuzzy Petri Nets Model for Computing With Words

    Motivated by Zadeh's paradigm of computing with words rather than numbers, several formal models of computing with words have recently been proposed. These models are based on automata and thus are not well suited for concurrent computing. In this paper, we combine the well-known model of concurrent computing, Petri nets, with fuzzy set theory and thereby establish a concurrency model of computing with words--fuzzy Petri nets for computing with words (FPNCWs). The new feature of such fuzzy Petri nets is that the labels of transitions are special words modeled by fuzzy sets. By employing the methodology of fuzzy reasoning, we give a faithful extension of an FPNCW, which makes it possible to compute with more words. The language expressiveness of the two formal models of computing with words, fuzzy automata for computing with words and FPNCWs, is compared as well. A few small examples are provided to illustrate the theoretical development. Comment: 14 pages, double column, 8 figures.
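
    The construction below is only a rough illustrative sketch of the idea described in this abstract, not the paper's formal definition: a Petri-net-like structure whose transition labels are fuzzy subsets of an alphabet ("words"), where a transition fires to the degree that an input word overlaps its label, measured here by a sup-min possibility. All names (FuzzyWord, FPNSketch, fire) and the specific matching measure are assumptions made for illustration.

```python
# Illustrative sketch only: transitions labelled by fuzzy words, firing degree
# given by the sup-min overlap between the input word and the label.

class FuzzyWord:
    """A fuzzy subset of a finite alphabet, given by membership degrees."""
    def __init__(self, membership):            # e.g. {"small": 1.0, "medium": 0.3}
        self.mu = dict(membership)

    def possibility(self, other):
        """sup_x min(self(x), other(x)): degree to which two words overlap."""
        keys = set(self.mu) | set(other.mu)
        return max((min(self.mu.get(x, 0.0), other.mu.get(x, 0.0)) for x in keys),
                   default=0.0)

class FPNSketch:
    """Places hold tokens; each transition is labelled by a FuzzyWord."""
    def __init__(self):
        self.marking = {}                      # place -> token count
        self.transitions = []                  # (label, input places, output places)

    def add_transition(self, label, inputs, outputs):
        self.transitions.append((label, inputs, outputs))

    def fire(self, word):
        """Fire every enabled transition whose label overlaps the input word;
        return the firing degrees."""
        degrees = {}
        for label, inputs, outputs in self.transitions:
            if all(self.marking.get(p, 0) > 0 for p in inputs):
                d = label.possibility(word)
                if d > 0.0:
                    for p in inputs:
                        self.marking[p] -= 1
                    for p in outputs:
                        self.marking[p] = self.marking.get(p, 0) + 1
                    degrees[(tuple(inputs), tuple(outputs))] = d
        return degrees

net = FPNSketch()
net.marking = {"p0": 1}
net.add_transition(FuzzyWord({"small": 1.0, "medium": 0.3}), ["p0"], ["p1"])
print(net.fire(FuzzyWord({"small": 0.8})))     # fires p0 -> p1 with degree 0.8
```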

    Retraction and Generalized Extension of Computing with Words

    Fuzzy automata, whose input alphabet is a set of numbers or symbols, are a formal model of computing with values. Motivated by Zadeh's paradigm of computing with words rather than numbers, Ying proposed a class of fuzzy automata whose input alphabet consists of all fuzzy subsets of a set of symbols, as a formal model of computing with all words. In this paper, we introduce a somewhat more general formal model of computing with (some special) words. The new features of the model are that the input alphabet comprises only some (not necessarily all) fuzzy subsets of a set of symbols and that the fuzzy transition function can be specified arbitrarily. By employing the methodology of fuzzy control, we establish a retraction principle from computing with words to computing with values for handling crisp inputs, and a generalized extension principle from computing with words to computing with all words for handling fuzzy inputs. These principles show that computing with values and computing with all words can both be implemented by computing with words. Some algebraic properties of retractions and generalized extensions are addressed as well. Comment: 13 pages, double column; 3 figures; to be published in the IEEE Transactions on Fuzzy Systems.
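
    As a rough illustration of the two principles described here, the sketch below lifts a fuzzy transition function from crisp symbols to fuzzy-subset inputs using the standard sup-min (Zadeh) extension, and retracts a crisp symbol to its singleton fuzzy set. The function names and the sup-min choice are assumptions; the paper's retraction and generalized extension principles are stated more generally.

```python
# Sketch: a fuzzy transition function delta(q, a, q') in [0, 1] is extended to a
# fuzzy-subset input ("word") by delta_hat(q, q') = sup_a min(word(a), delta(q, a, q')).

def extend_to_word(delta, states, word):
    """delta: dict mapping (q, symbol, q') to a degree in [0, 1];
    word: dict mapping symbols to membership degrees (a fuzzy subset of the alphabet).
    Returns the fuzzy relation on states induced by the word."""
    return {
        (q, q2): max((min(mu, delta.get((q, a, q2), 0.0)) for a, mu in word.items()),
                     default=0.0)
        for q in states for q2 in states
    }

def retract(symbol):
    """Retraction of a crisp symbol: the singleton fuzzy subset {symbol / 1}."""
    return {symbol: 1.0}

states = {"q0", "q1"}
delta = {("q0", "a", "q1"): 0.9, ("q0", "b", "q1"): 0.2}
print(extend_to_word(delta, states, retract("a")))          # crisp input: degree 0.9 on (q0, q1)
print(extend_to_word(delta, states, {"a": 0.5, "b": 1.0}))  # fuzzy word: degree 0.5 on (q0, q1)
```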

    Computing with Granular Words

    Computational linguistics is a sub-field of artificial intelligence; it is an interdisciplinary field dealing with statistical and/or rule-based modeling of natural language from a computational perspective. Traditionally, fuzzy logic is used to deal with fuzziness among single linguistic terms in documents. However, linguistic terms may also involve other types of uncertainty. For instance, when different users search for ‘cheap hotel’ in a search engine, they may need distinct pieces of relevant hidden information, such as shopping, transportation, or weather. Therefore, this research work focuses on studying granular words and developing new algorithms to process them so that uncertainty can be handled globally. To describe granular words precisely, a new structure called the Granular Information Hyper Tree (GIHT) is constructed. Furthermore, several techniques are developed to apply computing with granular words to spam filtering and query recommendation. Based on simulation results, the GIHT-Bayesian algorithm achieves a more accurate spam-filtering rate than the conventional Naive Bayesian and SVM methods, and computing with granular words also produces better recommendation results, as judged by user assessments, when applied to a search engine.

    A probabilistic model of computing with words

    Computing in the traditional sense involves inputs that are strings of numbers and symbols rather than words, where a word here means a probability distribution over the input alphabet and differs from the words of classical formal languages and automata theory. In this paper our goal is to deal with probabilistic finite automata (PFAs), probabilistic Turing machines (PTMs), and probabilistic context-free grammars (PCFGs) by inputting strings of words (probability distributions). Specifically, (i) we verify that PFAs computing strings of words can be implemented by means of calculating strings of symbols (Theorem 1); (ii) we elaborate on PTMs with input strings of words, and in particular demonstrate, by describing Example 2, that PTMs computing strings of words may not be directly performed through only computing strings of symbols, i.e., Theorem 1 may not hold for PTMs; (iii) we study PCFGs, and thus probabilistic regular grammars (PRGs), with input strings of words, and prove that Theorem 1 does hold for PCFGs and PRGs (Theorem 2); a characterization of PRGs in terms of PFAs, and the equivalence between PCFGs and their Chomsky and Greibach normal forms, in the sense that the inputs are strings of words, are also presented. Finally, the main results obtained are summarized, and a number of related issues for further study are raised.
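
    The following toy sketch illustrates the core idea behind item (i): a "word" is a probability distribution over the alphabet, and running a PFA on a string of words amounts to using, at each step, the word-weighted average of the per-symbol stochastic matrices, which by linearity equals the expectation over all underlying symbol strings. The matrices, alphabet, and function name below are made up for illustration and are not the paper's notation.

```python
# Toy sketch: a PFA over {a, b} with two states, encoded by one stochastic
# matrix per symbol; a "word" is a probability distribution over the symbols.

import numpy as np

M = {
    "a": np.array([[0.7, 0.3],
                   [0.0, 1.0]]),
    "b": np.array([[0.4, 0.6],
                   [1.0, 0.0]]),
}
init = np.array([1.0, 0.0])      # initial distribution over states
final = np.array([0.0, 1.0])     # indicator vector of accepting states

def accept_prob(word_string):
    """word_string: list of dicts mapping symbols to probabilities (words)."""
    v = init
    for w in word_string:
        Mw = sum(p * M[s] for s, p in w.items())   # word-weighted average matrix
        v = v @ Mw
    return float(v @ final)

# First word: certainly 'a'; second word: 'a' or 'b' with equal probability.
print(accept_prob([{"a": 1.0}, {"a": 0.5, "b": 0.5}]))   # 0.465
```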

    Optical signal processing with a network of semiconductor optical amplifiers in the context of photonic reservoir computing

    Photonic reservoir computing is a hardware implementation of the concept of reservoir computing, which comes from the field of machine learning and artificial neural networks. The concept is useful for solving many kinds of classification and recognition problems; examples are time-series prediction and speech and image recognition. Reservoir computing often competes with the state of the art, and dedicated photonic hardware would offer advantages in speed and power consumption. We show that a network of coupled semiconductor optical amplifiers can be used as a reservoir by applying it to a benchmark isolated-word recognition task. The results are comparable to those of existing software implementations, and fabrication tolerances can actually improve robustness.
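
    For readers unfamiliar with reservoir computing, the block below is a minimal software sketch of the concept the photonic network implements in hardware: a fixed, randomly connected recurrent "reservoir" expands the input signal, and only a linear readout is trained (here by ridge regression on a toy next-sample prediction task). The sizes, scaling, and task are arbitrary illustrative choices, not the paper's experimental setup.

```python
# Minimal echo-state-style reservoir sketch: random fixed recurrent network,
# trained linear readout.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))    # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))      # fixed recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))       # scale spectral radius below 1

def run_reservoir(u_seq):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next sample of a sine wave.
u = np.sin(np.linspace(0, 20 * np.pi, 1000))
X = run_reservoir(u[:-1])
y = u[1:]
# Ridge-regression readout: the only part that is trained.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
print("train MSE:", float(np.mean((X @ W_out - y) ** 2)))
```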

    Representative Names of Computing Degree Programs Worldwide

    Through the auspices of ACM and with support from the IEEE Computer Society, a task group charged with preparing the IT2017 report conducted an online international survey of computing faculty members about their undergraduate degree programs in computing. The purpose of this survey was to clarify the breadth of, and disparities in, the nomenclature used by diverse communities in the computing field, where a word or phrase can mean different things in different computing communities. This paper examines the English-language words and phrases used to name the computing programs of almost six hundred survey respondents, and the countries in which those names are used. The more than eight hundred program names analysed in this paper reveal six names that together account for more than half of the total. The paper goes on to consider possible correspondences between reported program names and the five areas of computing identified by the ACM. Names such as computer science and information technology appear to dominate, but with different meanings, while the names of other computing disciplines show clear geographic preferences. Convergence towards a very small number of highly representative program names in computing education worldwide might be deceptive. The paper calls for further examination and international collaboration to align program names with program curriculum content.