A half century of progress towards a unified neural theory of mind and brain with applications to autonomous adaptive agents and mental disorders
Invited article for the book Artificial Intelligence in the Age of Neural
Networks and Brain Computing, R. Kozma, C. Alippi, Y. Choe, and F. C.
Morabito, Eds. Cambridge, MA: Academic Press.

This article surveys some of the main design principles, mechanisms, circuits, and architectures that have been discovered during a half century of systematic research aimed at developing a unified theory that links mind and brain, and shows how psychological functions arise as emergent properties of brain mechanisms. The article describes a theoretical method that has enabled such a theory to be developed in stages by carrying out a kind of conceptual evolution. It also describes revolutionary computational paradigms like Complementary Computing and Laminar Computing that constrain the kind of unified theory that can describe the autonomous adaptive intelligence that emerges from advanced brains. Adaptive Resonance Theory, or ART, is one of the core models that has been discovered in this way. ART proposes how advanced brains learn to attend, recognize, and predict objects and events in a changing world that is filled with unexpected events. ART is not, however, a “theory of everything” if only because, due to Complementary Computing, different matching and learning laws tend to support perception and cognition on the one hand, and spatial representation and action on the other. The article mentions why a theory of this kind may be useful in the design of autonomous adaptive agents in engineering and technology. It also notes how the theory has led to new mechanistic insights about mental disorders such as autism, medial temporal amnesia, Alzheimer’s disease, and schizophrenia, along with mechanistically informed proposals about how their symptoms may be ameliorated.
The Dark Side of a Patchwork Universe
While observational cosmology has progressed rapidly in recent years, it has
revealed a serious dilemma called dark energy: an unknown source of exotic
energy with negative pressure driving the current accelerating phase of the
universe. All attempts so far to find a convincing theoretical explanation
have failed, so
that one of the last hopes is the yet to be developed quantum theory of
gravity. In this article, loop quantum gravity is considered as a candidate,
with an emphasis on properties which might play a role for the dark energy
problem. Its basic feature is the discrete structure of space, often associated
with quantum theories of gravity on general grounds. This gives rise to
well-defined matter Hamiltonian operators and thus sheds light on conceptual
questions related to the cosmological constant problem. It also implies typical
quantum geometry effects which, from a more phenomenological point of view, may
result in dark energy. In particular the latter scenario allows several
non-trivial tests which can be made more precise by detailed observations in
combination with a quantitative study of numerical quantum gravity. If the
speculative possibility of a loop quantum gravitational origin of dark energy
turns out to be realized, a program as outlined here will help to hammer out
our ideas for a quantum theory of gravity, and at the same time allow
predictions for the distant future of our universe.

Comment: 24 pages, 2 figures. Contribution to the special issue on Dark Energy
by Gen. Rel. Grav.
What Is a Macrostate? Subjective Observations and Objective Dynamics
We consider the question of whether thermodynamic macrostates are objective
consequences of dynamics, or subjective reflections of our ignorance of a
physical system. We argue that they are both; more specifically, that the set
of macrostates forms the unique maximal partition of phase space which 1) is
consistent with our observations (a subjective fact about our ability to
observe the system) and 2) obeys a Markov process (an objective fact about the
system's dynamics). We review the ideas of computational mechanics, an
information-theoretic method for finding optimal causal models of stochastic
processes, and argue that macrostates coincide with the "causal states" of
computational mechanics. Defining a set of macrostates thus consists of an
inductive process where we start with a given set of observables, and then
refine our partition of phase space until we reach a set of states which
predict their own future, i.e. which are Markovian. Macrostates arrived at in
this way are provably optimal statistical predictors of the future values of
our observables.

Comment: 15 pages, no figures
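The refinement procedure the abstract describes can be sketched on a toy finite, deterministic system, where "states which predict their own future" degenerates to the block-level dynamics being well defined. This is a minimal sketch under invented assumptions: the state space, dynamics, and observable below are illustrative, and the paper itself works with continuous phase space and stochastic dynamics, where refinement targets Markovianity rather than determinism.

```python
def macrostates(states, step, observe):
    """Coarsest refinement of the observation partition on which the
    induced block dynamics is deterministic (a toy stand-in for the
    Markov condition)."""
    block = {s: observe(s) for s in states}  # subjective starting point
    while True:
        # signature of a state: (its block, the block of its successor)
        sig = {s: (block[s], block[step(s)]) for s in states}
        ids = {v: i for i, v in enumerate(sorted(set(sig.values())))}
        new = {s: ids[sig[s]] for s in states}
        # the block count never decreases; an unchanged count means fixpoint
        if len(set(new.values())) == len(set(block.values())):
            return new
        block = new

# Four microstates: 0 and 1 are fixed points, 2 falls into 3, 3 is fixed.
step = {0: 0, 1: 1, 2: 3, 3: 3}.__getitem__
blocks = macrostates(range(4), step, lambda s: 0 if s < 3 else 1)
# 0 and 1 share a macrostate; 2 is split off from them even though it looks
# the same now, because its observation is about to change.
```

The split of state 2 is the point of the construction: observables alone put it with 0 and 1, but the dynamics forces a finer partition before the coarse-grained process predicts its own future.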
Explanatory Challenges in Metaethics
There are several important arguments in metaethics that rely on explanatory considerations. Gilbert Harman has presented a challenge to the existence of moral facts that depends on the claim that the best explanation of our moral beliefs does not involve moral facts. The Reliability Challenge against moral realism depends on the claim that moral realism is incompatible with there being a satisfying explanation of our reliability about moral truths. The purpose of this chapter is to examine these and related arguments. In particular, this chapter will discuss four kinds of arguments – Harman’s Challenge, evolutionary debunking arguments, irrelevant influence arguments, and the Reliability Challenge – understood as arguments against moral realism. The main goals of this chapter are (i) to articulate the strongest version of these arguments; (ii) to present and assess the central epistemological principles underlying these arguments; and (iii) to determine what a realist would have to do to adequately respond to these arguments.
Beyond revealed preference: choice-theoretic foundations for behavioral welfare economics
We propose a broad generalization of standard choice-theoretic welfare economics that encompasses a wide variety of nonstandard behavioral models. Our approach exploits the coherent aspects of choice that those positive models typically attempt to capture. It replaces the standard revealed preference relation with an unambiguous choice relation: roughly, x is (strictly) unambiguously chosen over y (written xP*y) iff y is never chosen when x is available. Under weak assumptions, P* is acyclic and therefore suitable for welfare analysis; it is also the most discerning welfare criterion that never overrules choice. The resulting framework generates natural counterparts for the standard tools of applied welfare economics and is easily applied in the context of specific behavioral theories, with novel implications. Though not universally discerning, it lends itself to principled refinements.
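The definition of P* is concrete enough to compute from a finite choice dataset. The sketch below is illustrative, not from the paper: it assumes single-valued choices recorded as (menu, chosen) pairs, and the function names are invented. It builds P* directly from the definition and then checks acyclicity by cycle detection.

```python
def p_star(choice_data, alternatives):
    """Pairs (x, y) with x P* y: y is never chosen from a menu
    in which x is available. choice_data is a list of (menu, chosen)."""
    pairs = set()
    for x in alternatives:
        for y in alternatives:
            if x != y and all(chosen != y
                              for menu, chosen in choice_data if x in menu):
                pairs.add((x, y))
    return pairs

def is_acyclic(relation):
    """Depth-first search for a directed cycle in a set of (x, y) edges."""
    succ = {}
    for x, y in relation:
        succ.setdefault(x, []).append(y)
    done, on_path = set(), set()
    def dfs(node):
        if node in on_path:
            return False          # cycle found
        if node in done:
            return True
        on_path.add(node)
        ok = all(dfs(n) for n in succ.get(node, []))
        on_path.discard(node)
        done.add(node)
        return ok
    return all(dfs(n) for n in list(succ))

# Pairwise choices form a revealed-preference cycle (a over b, b over c,
# c over a), but the grand-menu choice disciplines P*:
data = [({"a", "b"}, "a"), ({"b", "c"}, "b"),
        ({"a", "c"}, "c"), ({"a", "b", "c"}, "a")]
rel = p_star(data, {"a", "b", "c"})  # {("a", "b"), ("b", "c")}
```

In this example the revealed preference relation cycles, yet P* is acyclic (and deliberately incomplete: it is silent on a versus c, since c is sometimes chosen when a is available), matching the abstract's description of a criterion that never overrules choice.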
Soft Contract Verification
Behavioral software contracts are a widely used mechanism for governing the
flow of values between components. However, run-time monitoring and enforcement
of contracts impose significant overhead and delay the discovery of faulty
components until run-time.
To overcome these issues, we present soft contract verification, which aims
to statically prove either complete or partial contract correctness of
components, written in an untyped, higher-order language with first-class
contracts. Our approach uses higher-order symbolic execution, leveraging
contracts as a source of symbolic values including unknown behavioral values,
and employs an updatable heap of contract invariants to reason about
flow-sensitive facts. We prove the symbolic execution soundly approximates the
dynamic semantics and that verified programs can't be blamed.
The approach is able to analyze first-class contracts, recursive data
structures, unknown functions, and control-flow-sensitive refinements of
values, which are all idiomatic in dynamic languages. It makes effective use of
an off-the-shelf solver to decide problems without heavy encodings. The
approach is competitive with a wide range of existing tools (including type
systems, flow analyzers, and model checkers) on their own benchmarks.

Comment: ICFP '14, September 1-6, 2014, Gothenburg, Sweden
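One idea from the abstract, contracts as a source of symbolic values, can be illustrated with a heavily simplified, first-order toy. This is not the paper's system (which performs static higher-order symbolic execution with an updatable heap of invariants); the `Unknown` class and `check` function are invented names, and the sketch only shows how an opaque value can accumulate contract refinements so that later checks are discharged without running it.

```python
class Unknown:
    """A symbolic value known only through the contracts it has passed."""
    def __init__(self):
        self.refinements = set()  # predicates this value is known to satisfy

def check(value, predicate):
    """Return 'proved', 'refuted', or 'unknown' for predicate on value."""
    if isinstance(value, Unknown):
        if predicate in value.refinements:
            return "proved"       # discharged by an earlier contract check
        # Otherwise assume the check succeeds (the pass branch) and record
        # the refinement for subsequent, flow-sensitive reasoning.
        value.refinements.add(predicate)
        return "unknown"
    return "proved" if predicate(value) else "refuted"

is_pos = lambda n: isinstance(n, int) and n > 0
x = Unknown()
check(x, is_pos)   # -> "unknown": nothing is known about x yet
check(x, is_pos)   # -> "proved": the earlier refinement discharges it
check(7, is_pos)   # -> "proved" on a concrete value
check(-3, is_pos)  # -> "refuted" on a concrete value
```

The second call returning "proved" is the payoff: once a contract has been assumed on the pass branch, downstream checks of the same contract need no run-time monitoring, which is the flavor of overhead the verification approach aims to eliminate.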