Context Dependence, MOPs, WHIMs and procedures: Recanati and Kaplan on Cognitive Aspects in Semantics
After presenting Kripke’s criticism of Frege’s ideas on the context dependence of thoughts, I present two recent attempts at treating cognitive aspects of context-dependent expressions inside a truth-conditional pragmatics or semantics: Recanati’s non-descriptive modes of presentation (MOPs) and Kaplan’s ways of having in mind (WHIMs). After analysing the two attempts and verifying which answers they should give to the problem discussed by Kripke, I suggest a possible interpretation of these attempts: to insert a procedural or algorithmic level into semantic representations of indexicals. That a function may be computed by different procedures might suggest new possibilities for integrating contextual cognitive aspects into model-theoretic semantics.
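The abstract's closing observation — that one function may be computed by different procedures — is the standard distinction between a function's extension and the algorithm that realizes it. A minimal sketch (illustrative only, not from the paper): two procedures with identical input/output behaviour but different algorithmic "modes of presentation".

```python
# Two procedures with the same extension (the function n -> 1 + 2 + ... + n)
# but different algorithmic structure.

def sum_iterative(n: int) -> int:
    """Procedure 1: accumulate term by term (linear time)."""
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

def sum_closed_form(n: int) -> int:
    """Procedure 2: Gauss's closed formula (constant time)."""
    return n * (n + 1) // 2

# Extensionally identical, procedurally distinct:
assert all(sum_iterative(n) == sum_closed_form(n) for n in range(100))
```

On a purely extensional model-theoretic semantics the two are indistinguishable; a procedural level, as the abstract suggests, is what would tell them apart.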
The Mode of Computing
The Turing Machine is the paradigmatic case of computing machines, but there
are others, such as Artificial Neural Networks, Table Computing,
Relational-Indeterminate Computing and diverse forms of analogical computing,
each based on a particular underlying intuition of the phenomenon of
computing. This variety can be captured in terms of system levels,
re-interpreting and generalizing Newell's hierarchy, which includes the
knowledge level at the top and the symbol level immediately below it. In this
re-interpretation the knowledge level consists of human knowledge and the
symbol level is generalized into a new level that here is called The Mode of
Computing. Natural computing performed by the brains of humans and non-human
animals with a developed enough neural system should be understood in terms of
a hierarchy of system levels too. By analogy from standard computing machinery
there must be a system level above the neural circuitry levels and directly
below the knowledge level that is named here The mode of Natural Computing. A
central question for Cognition is the characterization of this mode. The Mode
of Computing provides a novel perspective on the phenomena of computing and
interpreting, on the representational and non-representational views of
cognition, and on consciousness.
Comment: 35 pages, 8 figures
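The contrast the abstract draws between modes of computing can be made concrete with a toy example (illustrative only, not from the paper): the same function realized both as an arithmetic procedure and as Table Computing, where "computing" is pure retrieval of stored input/output pairs with no intermediate procedure at all.

```python
# One function, two "modes of computing" it.

def xor_algorithmic(a: int, b: int) -> int:
    """Mode 1: an arithmetic procedure for XOR."""
    return (a + b) % 2

# Mode 2: Table Computing -- the function exists only as an explicit table
# of input/output pairs; evaluation is lookup, not derivation.
XOR_TABLE = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def xor_table(a: int, b: int) -> int:
    return XOR_TABLE[(a, b)]

# Identical at the knowledge level, different at the mode-of-computing level:
assert all(xor_algorithmic(a, b) == xor_table(a, b)
           for a in (0, 1) for b in (0, 1))
```

The two realizations agree on every input yet sit at different points in the system-level hierarchy the abstract describes: the distinction lives below the knowledge level, in the mode of computing.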
iFair: Learning Individually Fair Data Representations for Algorithmic Decision Making
In a growing number of applications, people are rated and ranked for
algorithmic decision making, typically by machine-learning methods.
Research on how to incorporate fairness into such tasks has prevalently pursued
the paradigm of group fairness: giving adequate success rates to specifically
protected groups. In contrast, the alternative paradigm of individual fairness
has received relatively little attention, and this paper advances this less
explored direction. The paper introduces a method for probabilistically mapping
user records into a low-rank representation that reconciles individual fairness
and the utility of classifiers and rankings in downstream applications. Our
notion of individual fairness requires that users who are similar in all
task-relevant attributes (such as job qualification), disregarding all
potentially discriminating attributes (such as gender), should receive similar
outcomes. We demonstrate the versatility of our method by applying it to
classification and learning-to-rank tasks on a variety of real-world datasets.
Our experiments show substantial improvements over the best prior work for this
setting.
Comment: Accepted at ICDE 2019. Please cite the ICDE 2019 proceedings version.
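The individual-fairness notion the abstract states can be read as a Lipschitz-style condition: users close in task-relevant attributes (with protected attributes dropped) should receive close outcomes. A toy sketch of that condition follows — illustrative only, with made-up records and scores; it is not the paper's iFair algorithm.

```python
from math import dist

# Each record: (years_experience, test_score, gender); gender is protected.
users = [
    (5.0, 0.9, 0),
    (5.0, 0.9, 1),   # identical qualifications, different gender
    (1.0, 0.3, 0),
]
scores = [0.85, 0.60, 0.20]  # hypothetical model outputs

def unfairness(i: int, j: int, L: float = 1.0) -> float:
    """Violation of |f(i) - f(j)| <= L * d(i, j), where d is measured only
    on the task-relevant attributes (the protected column is dropped)."""
    d = dist(users[i][:2], users[j][:2])
    return max(0.0, abs(scores[i] - scores[j]) - L * d)

print(unfairness(0, 1))  # ~0.25: similar users, dissimilar outcomes -> unfair
print(unfairness(0, 2))  # 0.0: the large qualification gap explains the gap
```

A representation-learning method in this spirit would map records into a space where such violations are small while downstream classifiers and rankers stay accurate.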
Probabilistic representations in perception: Are there any, and what would they be?
Nick Shea’s Representation in Cognitive Science commits
him to representations in perceptual processing that are
about probabilities. This commentary concerns how to
adjudicate between this view and an alternative that locates
the probabilities rather in the representational states’
associated “attitudes”. As background and motivation,
evidence for probabilistic representations in perceptual
processing is adduced, and it is shown how, on either
conception, one can address a specific challenge Ned Block
has raised against this evidence.
Towards a Coherent Theory of Physics and Mathematics
As an approach to a Theory of Everything a framework for developing a
coherent theory of mathematics and physics together is described. The main
characteristic of such a theory is discussed: the theory must be valid and
sufficiently strong, and it must maximally describe its own validity and
sufficient strength. The mathematical logical definition of validity is used,
and sufficient strength is seen to be a necessary and useful concept. The
requirement of maximal description of its own validity and sufficient strength
may be useful to reject candidate coherent theories for which the description
is less than maximal. Other aspects of a coherent theory discussed include
universal applicability, the relation to the anthropic principle, and possible
uniqueness. It is suggested that the basic properties of the physical and
mathematical universes are entwined with and emerge with a coherent theory.
Support for this includes the indirect reality status of properties of very
small or very large, far-away systems compared to moderate-sized nearby systems.
Discussion of the necessary physical nature of language includes physical
models of language and a proof that the meaning content of expressions of any
axiomatizable theory seems to be independent of the algorithmic complexity of
the theory. Gödel maps seem to be less useful for a coherent theory than
for purely mathematical theories because all symbols and words of any language
must have representations as states of physical systems already in the domain of
a coherent theory.
Comment: 38 pages, earlier version extensively revised and clarified. Accepted
for publication in Foundations of Physics.
The role of appraisal in emotion
status: published
A literature review of expert problem solving using analogy
We consider software project cost estimation from a problem solving perspective. Taking a cognitive psychological approach, we argue that the algorithmic basis for CBR tools is not representative of human problem solving, and this mismatch could account for inconsistent results. We describe the fundamentals of problem solving, focusing on experts solving ill-defined problems. This is supplemented by a systematic literature review of empirical studies of expert problem solving of non-trivial problems. We identified twelve studies. These studies suggest that analogical reasoning plays an important role in problem solving, but that CBR tools do not model this in a biologically plausible way. For example, the ability to induce structure and therefore find deeper analogies is widely seen as the hallmark of an expert. However, CBR tools fail to provide support for this type of reasoning for prediction. We conclude that this mismatch between experts’ cognitive processes and software tools contributes to the erratic performance of analogy-based prediction.