ToyArchitecture: Unsupervised Learning of Interpretable Models of the World
Research in Artificial Intelligence (AI) has focused mostly on two extremes:
either on small improvements in narrow AI domains, or on universal theoretical
frameworks which are usually uncomputable, incompatible with theories of
biological intelligence, or which lack practical implementations. The goal of this
work is to combine the main advantages of the two: to follow a big picture
view, while providing a particular theory and its implementation. In contrast
with purely theoretical approaches, the resulting architecture should be usable
in realistic settings, but also form the core of a framework containing all the
basic mechanisms, into which it should be easier to integrate additional
required functionality.
In this paper, we present a novel, purposely simple, and interpretable
hierarchical architecture which combines multiple different mechanisms into one
system: unsupervised learning of a model of the world, learning the influence
of one's own actions on the world, model-based reinforcement learning,
hierarchical planning and plan execution, and symbolic/sub-symbolic integration
in general. The learned model is stored in the form of hierarchical
representations with the following properties: 1) they are increasingly
abstract, but can retain details when needed, and 2) they are easy to
manipulate in their local and symbolic-like form, thus also allowing one to
observe the learning process at each level of abstraction. On all levels of the
system, the representation of the data can be interpreted in both a symbolic
and a sub-symbolic manner. This enables the architecture to learn efficiently
using sub-symbolic methods and to employ symbolic inference.
Comment: Revision: changed the pdftitle
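The symbolic/sub-symbolic duality described above can be illustrated with a minimal sketch (hypothetical names and fixed prototypes for illustration; the paper's actual architecture is far richer): each level of the hierarchy stores prototype vectors, so a percept can be read either as a discrete symbol (the index of the nearest prototype) or as a continuous vector (the prototype itself).

```python
import math

# One level of a toy hierarchy. Prototypes would normally be learned
# (e.g., by unsupervised clustering); here they are fixed for illustration.
PROTOTYPES = {0: (0.0, 0.0), 1: (1.0, 1.0)}

def encode(vector):
    """Symbolic view: the index of the nearest prototype."""
    return min(PROTOTYPES, key=lambda s: math.dist(vector, PROTOTYPES[s]))

def decode(symbol):
    """Sub-symbolic view: the continuous prototype behind a symbol."""
    return PROTOTYPES[symbol]

print(encode((0.9, 1.1)))  # -> 1 (nearest to prototype (1.0, 1.0))
print(decode(1))           # -> (1.0, 1.0)
```

The same stored state thus supports both sub-symbolic learning (operating on vectors) and symbolic manipulation (operating on indices), which is the integration the abstract describes.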
Learning Models for Following Natural Language Directions in Unknown Environments
Natural language offers an intuitive and flexible means for humans to
communicate with the robots that we will increasingly work alongside in our
homes and workplaces. Recent advancements have given rise to robots that are
able to interpret natural language manipulation and navigation commands, but
these methods require a prior map of the robot's environment. In this paper, we
propose a novel learning framework that enables robots to successfully follow
natural language route directions without any previous knowledge of the
environment. The algorithm utilizes spatial and semantic information that the
human conveys through the command to learn a distribution over the metric and
semantic properties of spatially extended environments. Our method uses this
distribution in place of the latent world model and interprets the natural
language instruction as a distribution over the intended behavior. A novel
belief space planner reasons directly over the map and behavior distributions
to solve for a policy using imitation learning. We evaluate our framework on a
voice-commandable wheelchair. The results demonstrate that by learning and
performing inference over a latent environment model, the algorithm is able to
successfully follow natural language route directions within novel, extended
environments.
Comment: ICRA 201
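The central idea of replacing a known map with a learned distribution over possible worlds can be sketched as a discrete Bayesian update (a hypothetical toy setup; the paper's planner reasons over far richer map and behavior distributions): each candidate map is weighted by how well it explains the spatial and semantic cues conveyed in the command.

```python
def update_belief(prior, likelihoods):
    """Bayes' rule over a discrete set of map hypotheses."""
    posterior = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(posterior)
    return [p / z for p in posterior]

# Two hypothetical maps; a command like "go past the kitchen" is far more
# consistent with map 0 (kitchen ahead) than with map 1 (no kitchen).
prior = [0.5, 0.5]
likelihoods = [0.9, 0.1]
belief = update_belief(prior, likelihoods)
print(belief)  # -> [0.9, 0.1]
```

A belief-space planner would then choose actions against this posterior rather than against any single map, which is what allows the system to act without prior knowledge of the environment.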
Electrodynamics of a Cosmic Dark Fluid
Cosmic Dark Fluid is considered as a non-stationary medium in which
electromagnetic waves propagate and magneto-electric field structures emerge
and evolve. A medium-type representation of the Dark Fluid allows us to bring
into the analysis the concepts and mathematical formalism elaborated in the
framework of classical covariant electrodynamics of continua, and to
distinguish dark analogs of well-known medium effects, such as optical
activity, pyro-electricity, piezo-magnetism, electro- and magneto-striction,
and dynamo-optical activity. The Dark Fluid is assumed to be formed by a duet
of Dark Matter (a pseudoscalar axionic constituent) and Dark Energy (a scalar
element); accordingly, we distinguish electrodynamic effects induced by these
two constituents of the Dark Fluid. The review contains discussions of ten
models, which describe electrodynamic effects induced by Dark Matter and/or
Dark Energy. The models are accompanied by examples of exact solutions to the
correspondingly extended master equations; applications are considered for
cosmology and space-times with spherical and pp-wave symmetries. In these
applications we focus attention on three main electromagnetic phenomena
induced by the Dark Fluid: first, the emergence of Longitudinal
Magneto-Electric Clusters; second, the generation of anomalous electromagnetic
responses; third, the formation of Dark Epochs in the history of the Universe.
Comment: 39 pages, 0 figures, replaced by the version published in MDPI
Journal "Symmetry" (Special Issue: Symmetry: Feature Papers 2016); typos
corrected
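The pseudoscalar (axionic) coupling mentioned above can be sketched in its simplest flat-space form, as in standard axion electrodynamics (this is a generic textbook sketch, not the review's full covariant formalism; g denotes a generic axion-photon coupling and a(t, x) the pseudoscalar field):

```latex
\begin{align*}
\nabla \cdot \mathbf{E} &= \rho - g\, \nabla a \cdot \mathbf{B}, \\
\nabla \times \mathbf{B} - \partial_t \mathbf{E} &= \mathbf{J}
  + g\left(\dot{a}\, \mathbf{B} + \nabla a \times \mathbf{E}\right), \\
\nabla \cdot \mathbf{B} &= 0, \\
\nabla \times \mathbf{E} + \partial_t \mathbf{B} &= 0.
\end{align*}
```

The axion-dependent source terms on the right-hand sides are what mix electric and magnetic components, which is the mechanism behind the magneto-electric structures the abstract describes.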
Collaborative design: managing task interdependencies and multiple perspectives
This paper focuses on two characteristics of collaborative design with
respect to cooperative work: the importance of task interdependencies linked
to the nature of design problems, and the fundamental function of the
cooperative arrangement of design work, which is the confrontation and
combination of perspectives. These two intrinsic characteristics of design
work call for specific cooperative processes: coordination processes to manage
task interdependencies, and the establishment of common ground and negotiation
mechanisms to manage the integration of multiple perspectives in design.
Untenable nonstationarity: An assessment of the fitness for purpose of trend tests in hydrology
The detection and attribution of long-term patterns in hydrological time series have been important research topics for decades. A significant portion of the literature regards such patterns as ‘deterministic components’ or ‘trends’ even though the complexity of hydrological systems does not allow easy deterministic explanations and attributions. Consequently, trend estimation techniques have been developed to make and justify statements about tendencies in the historical data, which are often used to predict future events. Testing trend hypotheses on observed time series is widespread in the hydro-meteorological literature, mainly due to the interest in detecting consequences of human activities on the hydrological cycle. This analysis usually relies on the application of null hypothesis significance tests (NHSTs) for slowly-varying and/or abrupt changes, such as Mann-Kendall, Pettitt, or similar, to summary statistics of hydrological time series (e.g., annual averages, maxima, or minima). However, the reliability of this application has seldom been explored in detail. This paper discusses misuse, misinterpretation, and logical flaws of NHST for trends in the analysis of hydrological data from three different points of view: historic-logical, semantic-epistemological, and practical. Based on a review of NHST rationale, and basic statistical definitions of stationarity, nonstationarity, and ergodicity, we show that even if the empirical estimation of trends in hydrological time series is always feasible from a numerical point of view, it is uninformative and does not allow the inference of nonstationarity without assuming a priori additional information on the underlying stochastic process, according to deductive reasoning. This prevents the use of trend NHST outcomes to support nonstationary frequency analysis and modeling.
We also show that the correlation structures characterizing hydrological time series might easily be underestimated, further compromising the attempt to draw conclusions about trends spanning the period of records. Moreover, even though adjusting procedures accounting for correlation have been developed, some of them are insufficient or are applied only to some tests, while some others are theoretically flawed but still widely applied. In particular, using 250 unimpacted stream flow time series across the conterminous United States (CONUS), we show that the test results can dramatically change if the sequences of annual values are reproduced starting from daily stream flow records, whose larger sizes enable a more reliable assessment of the correlation structures.
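The core statistical point, that trend tests applied to correlated series reject the 'no trend' hypothesis far more often than the nominal rate, can be illustrated with a minimal Mann-Kendall sketch (standard no-ties formulas; a random walk stands in here for a strongly correlated but trend-free process):

```python
import math
import random

def mann_kendall(x):
    """Mann-Kendall S statistic and normal-approximation Z (no ties)."""
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# A random walk has no deterministic trend, yet |Z| > 1.96 ("significant at
# the 5% level") occurs far more often than the nominal 5% rate, because the
# test's null distribution assumes independent data.
random.seed(1)
trials, rejections = 200, 0
for _ in range(trials):
    walk, level = [], 0.0
    for _ in range(100):
        level += random.gauss(0.0, 1.0)
        walk.append(level)
    if abs(mann_kendall(walk)[1]) > 1.96:
        rejections += 1
print(rejections / trials)  # well above the nominal 0.05
```

This is exactly the inflation of apparent significance under underestimated correlation structure that the abstract warns about.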
Insolubility Theorems and EPR Argument
I wish to thank in particular Arthur Fine for very perceptive comments on a previous draft of this paper. Many thanks also to Theo Nieuwenhuizen for inspiration, to Max Schlosshauer for correspondence, to two anonymous referees for shrewd observations, and to audiences at Aberdeen, Cagliari and Oxford (in particular to Harvey Brown, Elise Crull, Simon Saunders, Chris Timpson and David Wallace) for stimulating questions. This paper was written during my tenure of a Leverhulme Grant on ‘The Einstein Paradox’: The Debate on Nonlocality and Incompleteness in 1935 (Project Grant nr. F/00 152/AN), and it was revised for publication during my tenure of a Visiting Professorship in the Doctoral School of Philosophy and Epistemology, University of Cagliari (Contract nr. 268/21647).Peer reviewedPostprin
Advancing functional connectivity research from association to causation
Cognition and behavior emerge from brain network interactions, such that investigating causal interactions should be central to the study of brain function. Approaches that characterize statistical associations among neural time series, known as functional connectivity (FC) methods, are likely a good starting point for estimating brain network interactions. Yet only a subset of FC methods ('effective connectivity') is explicitly designed to infer causal interactions from statistical associations. Here we incorporate best practices from diverse areas of FC research to illustrate how FC methods can be refined to improve inferences about neural mechanisms, with properties of causal neural interactions as a common ontology to facilitate cumulative progress across FC approaches. We further demonstrate how the most common FC measures (correlation and coherence) reduce the set of likely causal models, facilitating causal inferences despite major limitations. Alternative FC measures are suggested that can immediately begin improving causal inferences beyond these common FC measures.
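The claim that even plain correlation constrains the set of likely causal models can be illustrated with a toy simulation (a generic example, not the paper's method): a causal chain X -> Y -> Z induces a nonzero X-Z correlation, so observing that correlation rules out models with no causal path or common cause between X and Z, while leaving the chain, the reversed chain, and confounder models in the candidate set.

```python
import math
import random

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

random.seed(0)
n = 20000
# Generative chain X -> Y -> Z with unit-variance Gaussian noise at each step.
x = [random.gauss(0, 1) for _ in range(n)]
y = [xi + random.gauss(0, 1) for xi in x]
z = [yi + random.gauss(0, 1) for yi in y]

# Theory for this chain: corr(X, Z) = 1 / sqrt(3), about 0.577. The measured
# correlation is informative (it excludes "no connection") but cannot by
# itself orient the causal arrows.
r_xz = pearson(x, z)
print(round(r_xz, 2))
```

Extra constraints (timing, interventions, conditional independencies) are what the 'effective connectivity' subset of FC methods brings in to shrink the candidate set further.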
A framework for the local information dynamics of distributed computation in complex systems
The nature of distributed computation has often been described in terms of
the component operations of universal computation: information storage,
transfer and modification. We review the first complete framework that
quantifies each of these individual information dynamics on a local scale
within a system, and describes the manner in which they interact to create
non-trivial computation where "the whole is greater than the sum of the parts".
We describe the application of the framework to cellular automata, a simple yet
powerful model of distributed computation. This is an important application,
because the framework is the first to provide quantitative evidence for several
important conjectures about distributed computation in cellular automata: that
blinkers embody information storage, particles are information transfer agents,
and particle collisions are information modification events. The framework is
also shown to contrast the computations conducted by several well-known
cellular automata, highlighting the importance of information coherence in
complex computation. The results reviewed here provide important quantitative
insights into the fundamental nature of distributed computation and the
dynamics of complex systems, as well as impetus for the framework to be applied
to the analysis and design of other systems.
Comment: 44 pages, 8 figures
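The local information dynamics mentioned above can be illustrated with a minimal estimator of local active information storage (k = 1 history, plug-in probability estimates; a toy sketch of one of the framework's measures, not the paper's full implementation): for a perfectly periodic 0101... sequence, knowing the previous symbol fully predicts the next one, so each step stores close to 1 bit.

```python
import math
from collections import Counter

def local_active_info(seq, k=1):
    """Local active information storage a(n) = log2[ p(x_n | past_k) / p(x_n) ],
    with probabilities estimated by plug-in counts from the sequence itself."""
    pasts = [tuple(seq[i - k:i]) for i in range(k, len(seq))]
    nexts = seq[k:]
    joint = Counter(zip(pasts, nexts))
    past_counts = Counter(pasts)
    next_counts = Counter(nexts)
    total = len(nexts)
    locals_ = []
    for p, x in zip(pasts, nexts):
        p_cond = joint[(p, x)] / past_counts[p]   # p(x_n | past)
        p_marg = next_counts[x] / total           # p(x_n)
        locals_.append(math.log2(p_cond / p_marg))
    return locals_

# An alternating sequence (a "blinker"-like pattern): the past fully
# determines the next symbol, so every local value is close to 1 bit
# (exactly 1 bit in the infinite-sequence limit; finite counts bias it).
ais = local_active_info([0, 1] * 50)
print(min(ais), max(ais))
```

Localizing the measure in this way, rather than averaging over the whole system, is what lets the framework attribute storage, transfer, and modification to individual sites and times in a cellular automaton.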