Examples of Artificial Perceptions in Optical Character Recognition and Iris Recognition
This paper starts from the hypothesis that human learning is perception based, and consequently that the learning process and perceptions should not be represented and investigated independently or modeled in different simulation spaces. To preserve the analogy between artificial and human learning, the former is assumed here to be based on artificial perception. Hence, instead of choosing to apply or develop a Computational Theory of (human) Perceptions, we choose to mirror human perceptions in a numeric (computational) space as artificial perceptions, and to analyze the interdependence between artificial learning and artificial perception in the same numeric space, using one of the simplest tools of Artificial Intelligence and Soft Computing, namely the perceptron. As practical applications, we work through two examples: Optical Character Recognition and Iris Recognition. In both cases a simple Turing test shows that artificial perceptions of the difference between two characters and between two irides are fuzzy, whereas the corresponding human perceptions are, in fact, crisp.

Comment: 5th Int. Conf. on Soft Computing and Applications (Szeged, HU), 22-24 Aug 201
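The fuzzy-versus-crisp contrast the abstract describes can be sketched in a few lines. The following is a hypothetical illustration, not the paper's actual model: a perceptron with a sigmoid output gives a graded ("fuzzy") artificial perception of the difference between two toy character bitmaps, and thresholding that value yields a crisp verdict. The glyphs, weights, and bias are invented toy values.

```python
# Toy perceptron: fuzzy perception of character difference vs. crisp decision.
import math

def perception(a, b, weights, bias):
    """Graded (fuzzy) perception of the difference between two bitmaps."""
    diff = [abs(x - y) for x, y in zip(a, b)]           # pixel-wise difference
    z = sum(w * d for w, d in zip(weights, diff)) + bias
    return 1.0 / (1.0 + math.exp(-z))                   # sigmoid: value in (0, 1)

def crisp(p, threshold=0.5):
    """Crisp human-style judgment: are the characters different?"""
    return p >= threshold

# 3x3 bitmaps of two toy glyphs ("I" and "T")
glyph_I = [0, 1, 0,  0, 1, 0,  0, 1, 0]
glyph_T = [1, 1, 1,  0, 1, 0,  0, 1, 0]
w = [1.0] * 9
p = perception(glyph_I, glyph_T, w, bias=-1.0)
print(round(p, 3), crisp(p))   # → 0.731 True
```

The perceptron never outputs exactly 0 or 1, so its perception of "different" is always a matter of degree; only the threshold makes it crisp.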
A Fuzzy Petri Nets Model for Computing With Words
Motivated by Zadeh's paradigm of computing with words rather than numbers,
several formal models of computing with words have recently been proposed.
These models are based on automata and thus are not well-suited for concurrent
computing. In this paper, we incorporate the well-known model of concurrent
computing, Petri nets, together with fuzzy set theory and thereby establish a
concurrency model of computing with words--fuzzy Petri nets for computing with
words (FPNCWs). The new feature of such fuzzy Petri nets is that the labels of
transitions are some special words modeled by fuzzy sets. By employing the
methodology of fuzzy reasoning, we give a faithful extension of an FPNCW which
makes it possible for computing with more words. The language expressiveness of
the two formal models of computing with words, fuzzy automata for computing
with words and FPNCWs, is compared as well. A few small examples are provided
to illustrate the theoretical development.

Comment: double columns, 14 pages, 8 figures
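A minimal sketch of the kind of structure the abstract describes may help. This is a hypothetical illustration, not the FPNCW formalism of the paper: places carry truth degrees in [0, 1], each transition's fuzzy "word" label is reduced here to a single certainty factor, and firing combines input places with min (conjunction) before scaling by that certainty.

```python
# Minimal fuzzy Petri net sketch: truth degrees propagate through transitions.
class FuzzyPetriNet:
    def __init__(self):
        self.marking = {}        # place -> truth degree in [0, 1]
        self.transitions = []    # (input places, output place, certainty)

    def add_place(self, name, degree=0.0):
        self.marking[name] = degree

    def add_transition(self, inputs, output, certainty):
        self.transitions.append((inputs, output, certainty))

    def fire_all(self):
        for inputs, output, certainty in self.transitions:
            degree = min(self.marking[p] for p in inputs) * certainty
            # keep the strongest support derived so far for the output place
            self.marking[output] = max(self.marking[output], degree)

net = FuzzyPetriNet()
net.add_place("fever", 0.8)
net.add_place("cough", 0.6)
net.add_place("flu")
net.add_transition(["fever", "cough"], "flu", certainty=0.9)
net.fire_all()
print(net.marking["flu"])   # min(0.8, 0.6) * 0.9 ≈ 0.54
```

Because transitions fire independently over the marking, several of them can be enabled at once, which is the concurrency that automaton-based models of computing with words lack.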
Ontology and medical terminology: Why description logics are not enough
Ontology is currently perceived as the solution of first resort for all problems related to biomedical terminology, and the use of description logics is seen as a minimal requirement on adequate ontology-based systems. Contrary to common conceptions, however, description logics alone are not able to prevent incorrect representations; this is because they do not come with a theory indicating what is computed by using them, just as classical arithmetic does not tell us anything about the entities that are added or subtracted. In this paper we shall show that ontology is indeed an essential part of any solution to the problems of medical terminology – but only if it is understood in the right sort of way. Ontological engineering, we shall argue, should in every case go hand in hand with a sound ontological theory.
A unified theory of granularity, vagueness and approximation
We propose a view of vagueness as a semantic property of names and predicates. All entities are crisp, on this semantic view, but there are, for each vague name, multiple portions of reality that are equally good candidates for being its referent, and, for each vague predicate, multiple classes of objects that are equally good candidates for being its extension. We provide a new formulation of these ideas in terms of a theory of granular partitions. We show that this theory provides a general framework within which we can understand the relation between vague terms and concepts and the corresponding crisp portions of reality. We also sketch how it might be possible to formulate within this framework a theory of vagueness which dispenses with the notion of truth-value gaps and other artifacts of more familiar approaches. Central to our approach is the idea that judgments about reality involve in every case (1) a separation of reality into foreground and background of attention and (2) the feature of granularity. On this basis we attempt to show that even vague judgments made in naturally occurring contexts are not marked by truth-value indeterminacy. We distinguish, in addition to crisp granular partitions, also vague partitions and reference partitions, and we explain the role of the latter in the context of judgments that involve vagueness. We conclude by showing how reference partitions provide an effective means by which judging subjects are able to temper the vagueness of their judgments by means of approximations.
Attentive Convolution: Equipping CNNs with RNN-style Attention Mechanisms
In NLP, convolutional neural networks (CNNs) have benefited less than
recurrent neural networks (RNNs) from attention mechanisms. We hypothesize that
this is because the attention in CNNs has been mainly implemented as attentive
pooling (i.e., it is applied to pooling) rather than as attentive convolution
(i.e., it is integrated into convolution). Convolution is the differentiator of
CNNs in that it can powerfully model the higher-level representation of a word
by taking into account its local fixed-size context in the input text t^x. In
this work, we propose an attentive convolution network, ATTCONV. It extends the
context scope of the convolution operation, deriving higher-level features for
a word not only from local context, but also information extracted from
nonlocal context by the attention mechanism commonly used in RNNs. This
nonlocal context can come (i) from parts of the input text t^x that are distant
or (ii) from extra (i.e., external) contexts t^y. Experiments on sentence
modeling with zero-context (sentiment analysis), single-context (textual
entailment) and multiple-context (claim verification) demonstrate the
effectiveness of ATTCONV in sentence representation learning with the
incorporation of context. In particular, attentive convolution outperforms
attentive pooling and is a strong competitor to popular attentive RNNs.

Comment: Camera-ready for TACL. 16 pages
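The core idea, attending over a nonlocal context before convolving rather than after pooling, can be sketched numerically. This is a toy illustration under stated assumptions, not the ATTCONV architecture: dimensions, the dot-product attention scoring, and the way the attended vector enters the filter are all simplifications.

```python
# Toy attentive convolution: each word's filter input combines its local
# window over t^x with an attention-weighted summary of an extra context t^y.
import numpy as np

rng = np.random.default_rng(0)
d, n_x, n_y = 4, 6, 5                  # hidden size, lengths of t^x and t^y
t_x = rng.normal(size=(n_x, d))        # input text representations
t_y = rng.normal(size=(n_y, d))        # extra (nonlocal) context
W_local = rng.normal(size=(3 * d,))    # filter over a width-3 local window
W_ctx = rng.normal(size=(d,))          # filter part for the attended context

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

out = []
for i in range(1, n_x - 1):
    window = t_x[i - 1:i + 2].ravel()   # local fixed-size context
    scores = t_y @ t_x[i]               # dot-product attention scores over t^y
    ctx = softmax(scores) @ t_y         # attended nonlocal context vector
    out.append(np.tanh(window @ W_local + ctx @ W_ctx))
print(len(out))   # → 4 (one feature per interior position)
```

The contrast with attentive pooling is that the attended vector here participates in producing each position's feature, rather than only in aggregating features after the convolution has already been computed.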
Decision Analysis Linguistic Framework
Every day, human beings face situations in which they must choose among different alternatives by means of reasoning and mental processes. Many of these decision problems arise in uncertain environments involving vague, imprecise and subjective information that is usually modeled as linguistic information, due to the use of natural language and its relation to the mental reasoning processes of the experts when expressing their judgments. In a decision process, multiple criteria may be evaluated by multiple experts with different degrees of knowledge. Such a process can be modeled by using Multi-granular Linguistic Information (MGLI) and Computing with Words (CW) processes to solve the related decision problems. Different methodologies and approaches have been proposed to accomplish this in an accurate and interpretable way. In this paper we propose a Decision Analysis Framework to manage this kind of problem by using the Extended Linguistic Hierarchy (ELH), the 2-tuple linguistic representation model and its computational method. The developed framework deals with complex problems in a simple way and yields easy-to-interpret, reasonable results.

Sociedad Argentina de Informática e Investigación Operativa
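The 2-tuple representation the abstract relies on can be sketched briefly. In the usual formulation of the model, a numeric value beta in [0, g] on a scale of g+1 labels is mapped to the pair (s_i, alpha), where i = round(beta) and alpha = beta - i is the symbolic translation in [-0.5, 0.5). The label names below are illustrative, not from the paper.

```python
# Minimal 2-tuple linguistic representation sketch (illustrative label set).
labels = ["none", "low", "medium", "high", "perfect"]   # g = 4

def to_two_tuple(beta):
    """Map beta in [0, g] to (label s_i, symbolic translation alpha)."""
    i = int(round(beta))
    return labels[i], beta - i

def to_beta(label, alpha):
    """Inverse mapping: recover the numeric value from a 2-tuple."""
    return labels.index(label) + alpha

print(to_two_tuple(2.7))   # ('high', alpha ≈ -0.3)
```

The round trip through `to_beta` is what makes computing with words on this representation lossless: aggregation is done on the beta values, and the result is translated back into a label plus a small correction.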
Ontological theory for ontological engineering: Biomedical systems information integration
Software application ontologies have the potential to become the keystone in state-of-the-art information management techniques. It is expected that these ontologies will support the sort of reasoning power required to navigate large and complex terminologies correctly and efficiently. Yet, there is one problem in particular that continues to stand in our way. As these terminological structures increase in size and complexity, and the drive to integrate them inevitably swells, it is clear that the level of consistency required for such navigation will become correspondingly difficult to maintain. While descriptive semantic representations are certainly a necessary component to any adequate ontology-based system, so long as ontology engineers rely solely on semantic information, without a sound ontological theory informing their modeling decisions, this goal will surely remain out of reach. In this paper we describe how Language and Computing nv (L&C), along with The Institute for Formal Ontology and Medical Information Sciences (IFOMIS), are working towards developing and implementing just such a theory, combining the open
software architecture of L&C’s LinkSuite™ with the philosophical rigor of IFOMIS’s Basic Formal Ontology. In this way we aim to move beyond the more or less simple controlled vocabularies that have dominated the industry to date.