How much of commonsense and legal reasoning is formalizable? A review of conceptual obstacles
Fifty years of effort in artificial intelligence (AI) and the formalization of legal reasoning have produced both successes and failures. Considerable success in organizing and displaying evidence and its interrelationships has been accompanied by failure to achieve the original ambition of AI as applied to law: fully automated legal decision-making. The obstacles to formalizing legal reasoning have proved to be the same ones that make the formalization of commonsense reasoning so difficult, and are most evident where legal reasoning has to meld with the vast web of ordinary human knowledge of the world. Underlying many of the problems is the mismatch between the discreteness of symbol manipulation and the continuous nature of imprecise natural language, of degrees of similarity and analogy, and of probabilities.
Typicality, graded membership, and vagueness
This paper addresses theoretical problems arising from the vagueness of language terms, and intuitions of the vagueness of the concepts to which they refer. It is argued that the central intuitions of prototype theory are sufficient to account for both typicality phenomena and psychological intuitions about degrees of membership in vaguely defined classes. The first section explains the importance of the relation between degrees of membership and typicality (or goodness of example) in conceptual categorization. The second and third sections address arguments advanced by Osherson and Smith (1997), and Kamp and Partee (1995), that the two notions of degree of membership and typicality must relate to fundamentally different aspects of conceptual representations. A version of prototype theory—the Threshold Model—is proposed to counter these arguments, and three possible solutions to the problems of logical self-contradiction and tautology for vague categorizations are outlined. In the final section graded membership is related to the social construction of conceptual boundaries maintained through language use.
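The core idea of a threshold model of this kind can be sketched as follows: typicality is a continuous similarity-to-prototype score, and degree of membership is obtained by passing that score through a threshold with a graded transition zone. This is a minimal illustrative sketch, not the paper's actual model; the feature sets, the similarity measure, and the `threshold`/`width` parameters are all assumptions chosen for the example.

```python
def typicality(item_features, prototype_features):
    """Toy typicality: fraction of prototype features the item shares."""
    shared = len(set(item_features) & set(prototype_features))
    return shared / len(prototype_features)

def graded_membership(typ, threshold=0.5, width=0.3):
    """Map typicality to degree of membership.

    Items well above the threshold are full members (1.0), items well
    below are non-members (0.0), and items in the transition zone of
    the given width receive a graded, intermediate degree of membership.
    """
    lo, hi = threshold - width / 2, threshold + width / 2
    if typ >= hi:
        return 1.0
    if typ <= lo:
        return 0.0
    return (typ - lo) / (hi - lo)

# Hypothetical feature sets for illustration only.
bird_prototype = {"feathers", "wings", "flies", "lays_eggs", "sings"}
robin = {"feathers", "wings", "flies", "lays_eggs", "sings"}
penguin = {"feathers", "wings", "lays_eggs", "swims"}

print(graded_membership(typicality(robin, bird_prototype)))    # full member
print(graded_membership(typicality(penguin, bird_prototype)))  # graded member
```

On this sketch a highly typical item (robin) gets membership 1.0, while a less typical item (penguin) lands in the transition zone and receives an intermediate degree of membership, which is how the model separates typicality effects from crisp category boundaries.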
Concepts and Their Dynamics: A Quantum-Theoretic Modeling of Human Thought
We analyze different aspects of our quantum modeling approach to human concepts, focusing on the quantum effects of contextuality, interference, entanglement, and emergence, and illustrating how each appears in specific situations in the dynamics of human concepts and their combinations. We relate our approach, which rests on an ontology of a concept as an entity in a state that changes under the influence of a context, to the main traditional concept theories, i.e. prototype theory, exemplar theory, and theory theory. We ponder the question of why quantum theory performs so well in modeling human concepts, and shed light on this question by analyzing the role of complex amplitudes, showing how they allow one to describe interference in the statistics of measurement outcomes, whereas in the traditional theories the statistics of outcomes originates in classical probability weights, without the possibility of interference. The relevance of complex numbers, the appearance of entanglement, and the role of Fock space in explaining contextual emergence, all unique features of the quantum modeling, are explicitly revealed in this paper by analyzing human concepts and their dynamics. (Comment: 31 pages, 5 figures)
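The distinction the abstract draws between complex amplitudes and classical probability weights can be made concrete with a small numerical sketch: mixing two outcomes with classical weights just averages their probabilities, while superposing two complex amplitudes produces an extra interference term. The amplitude values below are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Two complex amplitudes for the same outcome, e.g. a concept elicited
# in two different contexts (illustrative values only).
psi_a = 0.6 * np.exp(1j * 0.0)           # amplitude via context A
psi_b = 0.8 * np.exp(1j * np.pi / 3)     # amplitude via context B

# Classical mixture: probability weights add; no interference is possible.
p_classical = 0.5 * abs(psi_a) ** 2 + 0.5 * abs(psi_b) ** 2

# Quantum superposition: amplitudes add before squaring, yielding an
# interference term 2*Re(conj(psi_a)*psi_b) on top of the classical mixture.
psi = (psi_a + psi_b) / np.sqrt(2)
p_quantum = abs(psi) ** 2

interference = p_quantum - p_classical   # equals Re(conj(psi_a) * psi_b)
print(round(p_classical, 4))   # 0.5
print(round(interference, 4))  # 0.24
```

Because the interference term depends on the relative phase between the two amplitudes, it can raise or lower the outcome probability relative to the classical mixture, which is exactly the kind of deviation from classical statistics that these quantum concept models are built to capture.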
The Rumsfeld Effect: The unknown unknown
A set of studies tested whether people can use awareness of ignorance to provide enhanced test consistency over time if they are allowed to place uncertain items into a “don’t know” category. For factual knowledge this did occur, but for a range of other forms of knowledge relating to conceptual knowledge and personal identity, no such effect was seen. Known unknowns would appear to be largely restricted to factual kinds of knowledge.
Nomic Vagueness
If there are fundamental laws of nature, can they fail to be exact? In this paper, I consider the possibility that some fundamental laws are vague. I call this phenomenon nomic vagueness. I propose to characterize nomic vagueness as the existence of borderline lawful worlds. The existence of nomic vagueness raises interesting questions about the mathematical expressibility and metaphysical status of fundamental laws.
For a case study, we turn to the Past Hypothesis, a postulate that (partially) explains the direction of time in our world. We have reasons to take it seriously as a candidate fundamental law of nature. Yet it is vague: it admits borderline (nomologically) possible worlds. An exact version would lead to an untraceable arbitrariness absent in any other fundamental laws. However, the dilemma between nomic vagueness and untraceable arbitrariness is dissolved in a new quantum theory of time’s arrow.
Fundamental Nomic Vagueness
If there are fundamental laws of nature, can they fail to be exact? In this paper, I consider the possibility that some fundamental laws are vague. I call this phenomenon 'fundamental nomic vagueness.' I characterize fundamental nomic vagueness as the existence of borderline lawful worlds and the presence of several other accompanying features. Under certain assumptions, such vagueness prevents the fundamental physical theory from being completely expressible in the mathematical language. Moreover, I suggest that such vagueness can be regarded as 'vagueness in the world.' For a case study, we turn to the Past Hypothesis, a postulate that (partially) explains the direction of time in our world. We have reasons to take it seriously as a candidate fundamental law of nature. Yet it is vague: it admits borderline (nomologically) possible worlds. An exact version would lead to an untraceable arbitrariness absent in any other fundamental laws. However, the dilemma between fundamental nomic vagueness and untraceable arbitrariness is dissolved in a new quantum theory of time's arrow.