Modeling of Phenomena and Dynamic Logic of Phenomena
Modeling of complex phenomena such as the mind presents tremendous
computational complexity challenges. Modeling field theory (MFT) addresses
these challenges in a non-traditional way. The main idea behind MFT is to match
levels of uncertainty of the model (also, problem or theory) with levels of
uncertainty of the evaluation criterion used to identify that model. As a
model becomes more certain, the evaluation criterion is dynamically adjusted
to match the change in the model. This process is called the
Dynamic Logic of Phenomena (DLP) for model construction and it mimics processes
of the mind and natural evolution. This paper provides a formal description of
DLP by specifying its syntax, semantics, and reasoning system. We also outline
links between DLP and other logical approaches. Computational complexity issues
that motivate this work are presented using an example of polynomial models.
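As a rough illustration of the idea described above (my own toy sketch, not the paper's formalism), the loop below fits a linear model while the vagueness of the match criterion, a hypothetical `sigma` parameter, is annealed from fuzzy to crisp, so the certainty of the model and the certainty of the evaluation criterion increase together:

```python
import numpy as np

# Toy sketch of the dynamic-logic idea (an illustration, not the paper's
# formalism): the model parameter and the vagueness of the match
# criterion are refined together, starting fuzzy and ending crisp.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)
y = 2.0 * x + rng.normal(0.0, 0.1, x.size)  # data from a linear model

slope = 0.0   # initially uncertain model
sigma = 4.0   # initially vague (fuzzy) evaluation criterion
for _ in range(30):
    w = np.exp(-((y - slope * x) ** 2) / (2.0 * sigma**2))  # soft match weights
    slope = np.sum(w * x * y) / np.sum(w * x * x)           # weighted refit
    sigma = max(0.1, 0.8 * sigma)  # sharpen the criterion as the model firms up
print(round(slope, 1))  # recovers a slope close to 2.0
```

Starting with a wide `sigma`, almost every data point "matches" the crude model; as `sigma` shrinks, only close matches count, mirroring the matching of uncertainty levels the abstract describes.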
A Dempster-Shafer theory inspired logic
Issues of formalising and interpreting epistemic uncertainty have always played a prominent role in Artificial Intelligence. The Dempster-Shafer (DS) theory of partial beliefs is one of the best-known formalisms for addressing partial knowledge. Just as the DS theory generalises classical probability theory, fuzzy logic provides a reasoning apparatus alternative to Boolean logic.
Both theories feature prominently within the Artificial Intelligence domain, but a unified framework accounting for all aspects of imprecise knowledge is yet to be developed. The fuzzy logic apparatus is often used for reasoning based on vague information, whereas beliefs are often processed with the aid of Boolean logic. The
situation clearly calls for the development of a logic formalism targeted specifically at the needs of the theory of beliefs. Several frameworks exist that interpret epistemic uncertainty through an appropriately defined modal operator. There is an epistemic problem with this kind of framework: while addressing uncertain information, such frameworks also allow non-constructive proofs, and in this sense the number of true statements within them is too large.
In this work, it is argued that an inferential apparatus for the theory of beliefs should follow the premises of Brouwer's intuitionism. A logic refuting tertium non datur is constructed by defining a correspondence between the support functions representing beliefs in the DS theory and semantic models based on intuitionistic Kripke models with weighted nodes. Without additional constraints on the semantic models and without modal operators, the constructed logic is equivalent to the minimal intuitionistic logic. A number of possible constraints are considered, resulting in additional axioms and making the proposed logic intermediate. Further analysis of the properties of the created framework shows that the approach preserves the Dempster-Shafer belief assignments and thus expresses modality through the belief assignments of the formulae within the developed logic.
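To ground the objects the abstract formalises, here is a minimal sketch (the frame and mass values are invented for illustration) of a DS basic belief assignment with the standard belief and plausibility functions; the interval Bel(A) ≤ Pl(A) is exactly the kind of partial knowledge the proposed logic reasons about:

```python
# Minimal sketch of the DS-theory objects referred to above; the frame
# and mass values below are invented for illustration.
frame = frozenset({"a", "b", "c"})  # frame of discernment

# Basic belief assignment: masses on subsets of the frame, summing to 1.
mass = {
    frozenset({"a"}): 0.4,
    frozenset({"a", "b"}): 0.3,
    frame: 0.3,  # mass on the whole frame models ignorance
}

def belief(A):
    """Bel(A): total mass committed to subsets of A."""
    return sum(m for B, m in mass.items() if B <= A)

def plausibility(A):
    """Pl(A): total mass of focal sets compatible with A."""
    return sum(m for B, m in mass.items() if B & A)

A = frozenset({"a"})
print(belief(A), plausibility(A))  # Bel(A) <= Pl(A) brackets the unknown probability
```

Unlike a single probability measure, the mass placed on non-singleton sets leaves the gap between belief and plausibility, which is why Boolean reasoning over beliefs is a poor fit.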
One for all, all for one---von Neumann, Wald, Rawls, and Pareto
Applications of the maximin criterion extend beyond economics to statistics,
computer science, politics, and operations research. However, the maximin
criterion---be it von Neumann's, Wald's, or Rawls'---draws fierce criticism due
to its extremely pessimistic stance. I propose a novel concept, dubbed the
optimin criterion, which is based on (Pareto) optimizing the worst-case payoffs
of tacit agreements. The optimin criterion generalizes and unifies results in
various fields: It not only coincides with (i) Wald's statistical
decision-making criterion when Nature is antagonistic, (ii) the core in
cooperative games when the core is nonempty, though it exists even if the core
is empty, but it also generalizes (iii) Nash equilibrium in n-person
constant-sum games, (iv) stable matchings in matching models, and (v)
competitive equilibrium in the Arrow-Debreu economy. Moreover, every Nash
equilibrium satisfies the optimin criterion in an auxiliary game.
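For contrast with the proposed optimin criterion, the classical maximin (Wald) rule criticised above can be sketched in a few lines; the action names and payoff matrix are invented for the example:

```python
# Hypothetical payoff matrix illustrating the classical maximin (Wald)
# rule that the optimin criterion is contrasted with; numbers invented.
payoffs = {
    "act_1": [10, -5, 3],  # payoffs of each action across states of Nature
    "act_2": [4, 2, 1],
    "act_3": [8, 0, -2],
}

# Wald's rule: choose the action whose worst-case payoff is largest.
worst_case = {act: min(row) for act, row in payoffs.items()}
maximin_act = max(worst_case, key=worst_case.get)
print(maximin_act, worst_case[maximin_act])  # act_2 guarantees a payoff of 1
```

The example also shows the pessimism the abstract mentions: `act_1` is dismissed for a single bad state even though it dominates elsewhere, which is the kind of worst-case rigidity the optimin criterion relaxes.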
Eddington & Uncertainty
Sir Arthur Eddington is considered one of the greatest astrophysicists of the
twentieth century, and yet he gained a stigma when, in the 1930s, he embarked on
a quest to develop a unified theory of gravity and quantum mechanics. His
attempts ultimately proved fruitless and he was unfortunately partially shunned
by some physicists in the latter portion of his career. In addition, some
historians have been less than kind to him regarding this portion of his work.
However, detailed analysis of how this work got started shows that Eddington's
theories were not as outlandish as they are often purported to be. His entire
theory rested on the use of quantum mechanical methods of uncertainty in the
reference frames of relativity. Though the work was ultimately not fruitful, in
hindsight it did foreshadow several later results in physics and his methods
were definitely rigorous. In addition, his philosophy regarding determinism and
uncertainty was actually fairly orthodox for his time. This work begins by
looking at Eddington's life and philosophy and uses this as a basis to explore
his work with uncertainty.
Comment: new version to appear in Physics in Perspective (either the Sept. or Dec. issue).
A literature review of expert problem solving using analogy
We consider software project cost estimation from a problem solving perspective. Taking a cognitive psychological approach, we argue that the algorithmic basis for CBR tools is not representative of human problem solving, and this mismatch could account for inconsistent results. We describe the fundamentals of problem solving, focusing on experts solving ill-defined problems. This is supplemented by a systematic literature review of empirical studies of expert problem solving of non-trivial problems. We identified twelve studies. These studies suggest that analogical reasoning plays an important role in problem solving, but that CBR tools do not model this in a biologically plausible way. For example, the ability to induce structure and therefore find deeper analogies is widely seen as the hallmark of an expert. However, CBR tools fail to provide support for this type of reasoning for prediction. We conclude that this mismatch between experts' cognitive processes and software tools contributes to the erratic performance of analogy-based prediction.
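The "algorithmic basis for CBR tools" criticised above is, in typical effort-estimation tools, surface-similarity retrieval: nearest neighbours on feature vectors, with no induction of deeper structure. A minimal sketch (feature names and numbers invented for illustration):

```python
# Minimal sketch of surface-similarity retrieval as used by typical CBR
# effort-estimation tools; cases and features invented for illustration.
cases = [
    # (features: [size_kloc, team_size, complexity], actual effort in person-months)
    ([10.0, 4, 2], 24.0),
    ([50.0, 9, 3], 120.0),
    ([12.0, 5, 2], 30.0),
]

def estimate(target, k=2):
    """Predict effort as the mean effort of the k most similar past cases."""
    dist = lambda f: sum((a - b) ** 2 for a, b in zip(f, target)) ** 0.5
    nearest = sorted(cases, key=lambda c: dist(c[0]))[:k]
    return sum(effort for _, effort in nearest) / k

print(estimate([11.0, 4, 2]))  # averages the two closest past projects
```

Retrieval here depends only on numeric feature distance, which is precisely the limitation the review identifies: the deeper, structural analogies experts draw on are invisible to this kind of matching.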