Modeling of Phenomena and Dynamic Logic of Phenomena
Modeling of complex phenomena such as the mind presents tremendous
computational complexity challenges. Modeling field theory (MFT) addresses
these challenges in a non-traditional way. The main idea behind MFT is to match
levels of uncertainty of the model (also, problem or theory) with levels of
uncertainty of the evaluation criterion used to identify that model. When a
model becomes more certain, the evaluation criterion is adjusted
dynamically to match that change in the model. This process, called the
Dynamic Logic of Phenomena (DLP) for model construction, mimics processes
of the mind and of natural evolution. This paper provides a formal description of
DLP by specifying its syntax, semantics, and reasoning system. We also outline
links between DLP and other logical approaches. The computational complexity issues
that motivate this work are presented using an example of polynomial models.
Understanding Predication in Conceptual Spaces
We argue that a cognitive semantics has to take into account the possibly
partial information that a cognitive agent has of the world. After discussing
Gärdenfors's view of objects in conceptual spaces, we offer a number of viable
treatments of partiality of information and we formalize them by means of alternative
predicative logics. Our analysis shows that understanding the nature of simple
predicative sentences is crucial for a cognitive semantics.
Aggregated fuzzy answer set programming
Fuzzy answer set programming (FASP) is an extension of answer set programming (ASP) based on fuzzy logic. It makes it possible to encode continuous optimization problems in the same concise manner in which ASP models combinatorial problems. As a result of its inherent continuity, rules in FASP may be satisfied or violated to certain degrees. Rather than insisting that all rules be fully satisfied, we may only require that they be satisfied partially, to the best extent possible. However, most approaches that feature partial rule satisfaction limit themselves to attaching predefined weights to rules, which is not sufficiently flexible for many real-life applications. In this paper, we develop an alternative based on aggregator functions that specify which rules, or combinations of rules, are most important to satisfy. We extend previous work by allowing aggregator expressions to define partially ordered preferences and by the use of a fixpoint semantics.
Contributions to artificial intelligence: the IIIA perspective
Artificial intelligence is a relatively new scientific and technological field which studies the nature of intelligence by using computers to produce intelligent behaviour. Initially, the main goal was a purely scientific one, understanding human intelligence, and this remains the aim of cognitive scientists. Unfortunately, such an ambitious and fascinating goal is not only far from being achieved but has yet to be satisfactorily approached. Fortunately, however, artificial intelligence also has an engineering goal: building systems that are useful to people even if the intelligence of such systems has no relation whatsoever with human intelligence, and therefore being able to build them does not necessarily provide any insight into the nature of human intelligence. This engineering goal has become the predominant one among artificial intelligence researchers and has produced impressive results, ranging from knowledge-based systems to autonomous robots, that have been applied to many different domains. Furthermore, artificial intelligence products and services today represent an annual market of tens of billions of dollars worldwide. This article summarizes the main contributions to the field of artificial intelligence made at the IIIA-CSIC (Artificial Intelligence Research Institute of the Spanish Scientific Research Council) over the last five years.
Depth-bounded Belief functions
This paper introduces and investigates Depth-bounded Belief functions, a logic-based representation of quantified uncertainty. Depth-bounded Belief functions are based on the framework of Depth-bounded Boolean logics [4], which provide a hierarchy of approximations to classical logic. Similarly, Depth-bounded Belief functions give rise to a hierarchy of increasingly tighter lower and upper bounds on classical measures of uncertainty. This has the rather welcome consequence that “higher logical abilities” lead to sharper uncertainty quantification. In particular, our main results identify the conditions under which Dempster-Shafer Belief functions and probability functions can be represented as limits of suitable sequences of Depth-bounded Belief functions.