Incompatible Multiple Consistent Sets of Histories and Measures of Quantumness
In the consistent histories (CH) approach to quantum theory, probabilities are
assigned to histories subject to a consistency condition of negligible
interference. The approach has the feature that a given physical situation
admits multiple sets of consistent histories that cannot in general be united
into a single consistent set, leading to a number of counter-intuitive or
contrary properties if propositions from different consistent sets are combined
indiscriminately. An alternative viewpoint is proposed in which multiple
consistent sets are classified according to whether or not there exists any
unifying probability for combinations of incompatible sets which replicates the
consistent histories result when restricted to a single consistent set. A
number of examples are exhibited in which this classification can be made, in
some cases with the assistance of the Bell, CHSH or Leggett-Garg inequalities
together with Fine's theorem. When a unifying probability exists logical
deductions in different consistent sets can in fact be combined, an extension
of the "single framework rule". It is argued that this classification coincides
with intuitive notions of the boundary between classical and quantum regimes
and in particular, the absence of a unifying probability for certain
combinations of consistent sets is regarded as a measure of the "quantumness"
of the system. The proposed approach and results are closely related to recent
work on the classification of quasi-probabilities and this connection is
discussed.
Comment: 29 pages. Second revised version with discussion of the sample space
and non-uniqueness of the unifying probability and small errors corrected
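The role of the CHSH inequality and Fine's theorem in deciding whether a unifying probability exists can be illustrated numerically. The sketch below (an illustration of the standard CHSH setup, not the paper's construction) contrasts the classical bound |S| <= 2, obtained by enumerating all deterministic local assignments, with the singlet-state value 2*sqrt(2); by Fine's theorem, violation of the bound means no joint probability over all four observables exists:

```python
import itertools
import math

# Two-outcome (+1/-1) correlations E(a_i, b_j) for the singlet state,
# E = -cos(angle difference); settings chosen to maximise CHSH.
a = [0.0, math.pi / 2]
b = [math.pi / 4, -math.pi / 4]
E = [[-math.cos(ai - bj) for bj in b] for ai in a]
S_quantum = E[0][0] + E[0][1] + E[1][0] - E[1][1]

# Fine's theorem: a joint ("unifying") probability over all four
# observables exists iff the CHSH inequalities hold. Enumerating the
# 16 deterministic local assignments gives the classical bound |S| <= 2.
S_classical = max(
    abs(a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1)
    for a0, a1, b0, b1 in itertools.product([-1, 1], repeat=4)
)

print(round(abs(S_quantum), 3))  # 2.828 > 2: no unifying probability exists
print(S_classical)               # 2
```

Any local hidden-variable model is a mixture of the 16 deterministic assignments, so its CHSH value cannot exceed the maximum over those assignments.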
Re-visions of rationality?
Empirical evidence suggests proponents of the ‘adaptive toolbox’ framework of human judgment need to rethink their vision of rationality
Unifying an Introduction to Artificial Intelligence Course through Machine Learning Laboratory Experiences
This paper presents work on a collaborative project funded by the National Science Foundation that incorporates machine learning as a unifying theme to teach fundamental concepts typically covered in introductory Artificial Intelligence courses. The project involves the development of an adaptable framework for the presentation of core AI topics. This is accomplished through the development, implementation, and testing of a suite of adaptable, hands-on laboratory projects that can be closely integrated into the AI course. Through the design and implementation of learning systems that enhance commonly deployed applications, our model acknowledges that intelligent systems are best taught through their application to challenging problems. The goals of the project are to (1) enhance the student learning experience in the AI course, (2) increase student interest and motivation to learn AI by providing a framework for the presentation of the major AI topics that emphasizes the strong connection between AI and computer science and engineering, and (3) highlight the bridge that machine learning provides between AI technology and modern software engineering
Fuzzy Logic in Clinical Practice Decision Support Systems
Computerized clinical guidelines can provide significant benefits to health outcomes and costs; however, their effective implementation presents significant problems. The vagueness and ambiguity inherent in natural (textual) clinical guidelines are not readily amenable to formulating automated alerts or advice. Fuzzy logic allows us to formalize the treatment of vagueness in a decision support architecture. This paper discusses sources of fuzziness in clinical practice guidelines. We consider how fuzzy logic can be applied and give a set of heuristics for the clinical guideline knowledge engineer for addressing uncertainty in practice guidelines. We describe the specific applicability of fuzzy logic to the decision support behavior of Care Plan On-Line, an intranet-based chronic care planning system for General Practitioners
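How fuzzy logic formalizes a vague guideline phrase can be sketched with membership functions and a min-based AND rule. The fuzzy sets, thresholds, and rule below are hypothetical illustrations (not taken from Care Plan On-Line or any actual guideline):

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside [a, d], 1 on [b, c], linear ramps between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical fuzzy sets for guideline phrases such as
# "elevated systolic blood pressure" and "older patient".
def elevated_bp(systolic):
    return trapezoid(systolic, 120, 140, 200, 220)

def older_patient(age):
    return trapezoid(age, 50, 65, 120, 130)

# Rule firing strength for "IF BP elevated AND patient older THEN alert":
# conjunction modelled as min, a common fuzzy-logic choice.
def alert_strength(systolic, age):
    return min(elevated_bp(systolic), older_patient(age))

print(alert_strength(150, 70))  # 1.0: both conditions fully satisfied
print(alert_strength(130, 70))  # 0.5: borderline blood pressure
```

Graded membership lets the decision support system fire alerts with a strength rather than forcing a crisp threshold onto vague guideline text.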
Default Logic in a Coherent Setting
In this talk - based on the results of a forthcoming paper (Coletti,
Scozzafava and Vantaggi 2002), presented also by one of us at the Conference on
"Non Classical Logic, Approximate Reasoning and Soft-Computing" (Anacapri,
Italy, 2001) - we discuss the problem of representing default rules by means of
a suitable coherent conditional probability, defined on a family of conditional
events. An event is singled out (in our approach) by a proposition, that is, a
statement that can be either true or false; a conditional event is consequently
defined by means of two propositions and is a 3-valued entity, the third value
being (in this context) a conditional probability
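The three-valued nature of a conditional event E|H can be made concrete: it is true when both E and H hold, false when H holds but E fails, and undetermined ("void") when H fails, with the third value played by a conditional probability. A minimal sketch, with hypothetical atoms chosen purely for illustration:

```python
# A conditional event E|H is 3-valued: True when E and H both hold,
# False when H holds but E fails, and None ("void") when H fails.
def conditional_event(e, h):
    if not h:
        return None  # void: the conditioning event did not occur
    return e

# Default rule "birds normally fly", read as a conditional event
# fly | bird, evaluated over three hypothetical cases.
cases = [
    dict(bird=True,  fly=True),   # a flying bird
    dict(bird=True,  fly=False),  # a penguin
    dict(bird=False, fly=False),  # not a bird: conditional is void
]
values = [conditional_event(c["fly"], c["bird"]) for c in cases]
print(values)  # [True, False, None]
```

In the coherent setting, the default rule is then represented by requiring the conditional probability P(fly | bird) to be close to 1, a constraint that only bears on the non-void cases.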
Language-based Abstractions for Dynamical Systems
Ordinary differential equations (ODEs) are the primary means of modelling
dynamical systems in many natural and engineering sciences. The number of
equations required to describe a system with high heterogeneity limits our
ability to perform analyses effectively. This has motivated a large body
of research, across many disciplines, into abstraction techniques that provide
smaller ODE systems while preserving the original dynamics in some appropriate
sense. In this paper we give an overview of a recently proposed
computer-science perspective to this problem, where ODE reduction is recast to
finding an appropriate equivalence relation over ODE variables, akin to
classical models of computation based on labelled transition systems.
Comment: In Proceedings QAPL 2017, arXiv:1707.0366
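The idea of reducing an ODE system via an equivalence relation over its variables can be shown on a toy example (an illustrative exact lumping, not the paper's algorithm): in the system s' = -s, x1' = a*s - x1, x2' = (1-a)*s - x2, the variables x1 and x2 can be aggregated into y = x1 + x2, which obeys the closed equation y' = s - y for any split parameter a:

```python
# Euler integration of a full ODE system and its lumped reduction,
# checking that the aggregate variable y = x1 + x2 evolves identically.
def euler(f, state, dt, steps):
    for _ in range(steps):
        state = [v + dt * dv for v, dv in zip(state, f(state))]
    return state

a, dt, steps = 0.3, 1e-4, 20000  # integrate up to t = 2

# Full system: state = [s, x1, x2]
full = euler(lambda v: [-v[0], a * v[0] - v[1], (1 - a) * v[0] - v[2]],
             [1.0, 0.0, 0.0], dt, steps)

# Reduced system: state = [s, y] with y = x1 + x2
reduced = euler(lambda v: [-v[0], v[0] - v[1]],
                [1.0, 0.0], dt, steps)

print(abs((full[1] + full[2]) - reduced[1]) < 1e-9)  # True: sums agree
```

The equivalence class {x1, x2} is exactly lumpable here because the derivative of the sum depends only on the sum and on s, never on how mass is split between x1 and x2.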