Reasoning Biases, Non-Monotonic Logics and Belief Revision
A range of formal models of human reasoning have been proposed across fields such as philosophy, logic, artificial intelligence, computer science, psychology, and cognitive science: various logics (epistemic logics; non-monotonic logics), probabilistic systems (most notably, but not exclusively, Bayesian probability theory), belief revision systems, and neural networks, among others. Now, it seems reasonable to require that formal models of human reasoning be (minimally) empirically adequate if they are to be viewed as models of the phenomena in question. How are formal models of human reasoning typically put to empirical test? One way is to isolate a number of key principles of the system and design experiments to gauge the extent to which participants do or do not follow them in reasoning tasks. Another way is to take relevant existing results and check whether a particular formal model predicts them. The present investigation provides an illustration of the second kind of empirical testing by comparing two formal models of reasoning, namely the non-monotonic logic known as preferential logic and a particular version of belief revision theory, screened belief revision, against the reasoning phenomenon known as belief bias in the psychology of reasoning literature: human reasoners typically seek to maintain the beliefs they already hold, and conversely to reject contradicting incoming information. The conclusion of our analysis will be that screened belief revision is more empirically adequate with respect to belief bias than preferential logic and non-monotonic logics in general, as what participants seem to be doing is above all a form of belief management on the basis of background knowledge. The upshot is thus that, while it may offer valuable insights into the nature of human reasoning, preferential logic (and non-monotonic logics in general) is ultimately inadequate as a formal model of the phenomena in question.
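The screened-revision idea described in the abstract can be illustrated with a minimal sketch. All names and the toy contradiction test below are illustrative assumptions, not taken from the paper: incoming information is first checked against a set of entrenched 'core' beliefs; input contradicting the core is rejected (mirroring belief bias), while otherwise ordinary revision proceeds.

```python
def screened_revision(beliefs, core, new_info, contradicts):
    """Hypothetical sketch of screened belief revision:
    reject input that contradicts any core belief; otherwise
    drop conflicting non-core beliefs and add the input."""
    if any(contradicts(new_info, b) for b in core):
        return beliefs  # belief bias: contradicting input is rejected outright
    kept = {b for b in beliefs if not contradicts(new_info, b)}
    return kept | {new_info}

def contradicts(a, b):
    """Toy propositional literals: 'p' and '~p' contradict each other."""
    return a == '~' + b or b == '~' + a

beliefs = {'p', 'q'}
core = {'q'}
print(screened_revision(beliefs, core, '~q', contradicts))  # -> {'p', 'q'} (rejected)
print(screened_revision(beliefs, core, '~p', contradicts))  # -> {'q', '~p'} (revised)
```

The sketch separates the screening step (a test against the core) from revision proper, which is the structural feature the abstract credits with matching the belief-bias data.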
Dimensions of Neural-symbolic Integration - A Structured Survey
Research on integrated neural-symbolic systems has made significant progress
in the recent past. In particular the understanding of ways to deal with
symbolic knowledge within connectionist systems (also called artificial neural
networks) has reached a critical mass which enables the community to strive for
applicable implementations and use cases. Recent work has covered a great
variety of logics used in artificial intelligence and provides a multitude of
techniques for dealing with them within the context of artificial neural
networks. We present a comprehensive survey of the field of neural-symbolic
integration, including a new classification of systems according to their
architectures and abilities.
Fewer epistemological challenges for connectionism
Seventeen years ago, John McCarthy wrote the note 'Epistemological challenges for connectionism' as a response to Paul Smolensky's paper 'On the proper treatment of connectionism'. I will discuss the extent to which the four key challenges put forward by McCarthy have been solved, and what the new challenges ahead are. I argue that there are fewer epistemological challenges for connectionism, but progress has been slow. Nevertheless, there is now strong indication that neural-symbolic integration can provide effective systems of expressive reasoning and robust learning, due to recent developments in the field.
Neurons and symbols: a manifesto
We discuss the purpose of neural-symbolic integration, including its principles, mechanisms and applications. We outline a cognitive computational model for neural-symbolic integration, position the model in the broader context of multi-agent systems, machine learning and automated reasoning, and list some of the challenges for the area of neural-symbolic computation to achieve the promise of effective integration of robust learning and expressive reasoning under uncertainty.
Bounded Rationality and Heuristics in Humans and in Artificial Cognitive Systems
In this paper I will present an analysis of the impact that the notion of 'bounded rationality',
introduced by Herbert Simon in his book 'Administrative Behavior', produced in the
field of Artificial Intelligence (AI). In particular, by focusing on the field of Automated
Decision Making (ADM), I will show how the introduction of the cognitive dimension into
the study of choice of a rational (natural) agent indirectly determined, in the AI field, the
development of a line of research aiming at the realisation of artificial systems whose decisions
are based on the adoption of powerful shortcut strategies (known as heuristics) that yield
'satisficing' - i.e. non-optimal - solutions to problem solving. I will show how the
'heuristic approach' to problem solving made it possible, in AI, to tackle problems of combinatorial
complexity in real-life situations, and still represents an important strategy for the design
and implementation of intelligent systems.
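The satisficing idea from Simon that the abstract refers to can be sketched in a few lines. The function names and aspiration-level framing below are illustrative assumptions, not drawn from the paper: instead of scanning all options for the optimum, the agent accepts the first option whose value meets an aspiration level.

```python
def satisfice(options, value, aspiration):
    """Satisficing search (illustrative sketch): return the first option
    whose value reaches the aspiration level, rather than examining
    every option to find the global optimum."""
    for opt in options:
        if value(opt) >= aspiration:
            return opt
    return None  # no option is good enough

options = [3, 7, 5, 9]
print(satisfice(options, lambda x: x, aspiration=6))  # -> 7, not the optimum 9
```

The point of the shortcut is computational: a satisficer stops early and so sidesteps the combinatorial cost of exhaustive optimisation, at the price of possibly missing the best option.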