Intrinsic Motivation Systems for Autonomous Mental Development
Exploratory activities seem to be intrinsically rewarding
for children and crucial for their cognitive development.
Can a machine be endowed with such an intrinsic motivation
system? This is the question we study in this paper, presenting a number of computational systems that try to capture this drive towards novel or curious situations. After discussing related research coming from developmental psychology, neuroscience, developmental robotics, and active learning, this paper presents the mechanism of Intelligent Adaptive Curiosity, an intrinsic motivation system which pushes a robot towards situations in which it maximizes its learning progress. This drive makes the robot focus on situations which are neither too predictable nor too unpredictable, thus permitting autonomous mental development. The complexity of the robot's activities autonomously increases and complex developmental sequences self-organize without being constructed in a supervised manner. Two experiments are presented illustrating the stage-like organization emerging with this mechanism. In one of them, a physical robot is placed on a baby play mat with objects that it can learn to manipulate. Experimental results show that the robot first spends time in situations
which are easy to learn, then shifts its attention progressively to situations of increasing difficulty, avoiding situations in which nothing can be learned. Finally, these various results are discussed in relation to more complex forms of behavioral organization and data coming from developmental psychology.
Key words: Active learning, autonomy, behavior, complexity,
curiosity, development, developmental trajectory, epigenetic
robotics, intrinsic motivation, learning, reinforcement learning,
values
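The learning-progress drive described in the abstract can be illustrated with a toy sketch. This is not the authors' implementation: the situations, constants, and the simple running-estimate learner below are all invented for illustration. The key idea it does reproduce is that each "situation" is scored by the recent drop in prediction error, so the learner prefers situations where it is currently improving and abandons both mastered and unlearnable ones.

```python
import random

random.seed(1)

N_SITU = 3            # situations 0 (easy) and 1 (harder) are learnable; 2 is pure noise
TRUE = [0.5, -0.3]    # hypothetical true outcome values for the learnable situations
LR = [0.5, 0.05]      # situation 1 converges more slowly
WINDOW = 10

est = [0.0, 0.0]                        # the learner's current predictions
errors = [[] for _ in range(N_SITU)]    # per-situation prediction-error history
visits = [0] * N_SITU

def progress(s):
    """Learning progress: recent drop in mean prediction error for situation s."""
    h = errors[s]
    if len(h) < 2 * WINDOW:
        return float('inf')             # force initial exploration of each situation
    old = sum(h[-2 * WINDOW:-WINDOW]) / WINDOW
    new = sum(h[-WINDOW:]) / WINDOW
    return old - new

for t in range(600):
    # Epsilon-greedy choice of the situation with maximal learning progress.
    if random.random() < 0.1:
        s = random.randrange(N_SITU)
    else:
        s = max(range(N_SITU), key=progress)
    visits[s] += 1
    if s < 2:
        err = abs(TRUE[s] - est[s])
        est[s] += LR[s] * (TRUE[s] - est[s])    # simple online update toward the target
    else:
        err = abs(random.random() - 0.5)        # unlearnable: error never shrinks
    errors[s].append(err)

print("visits per situation:", visits)
```

Because progress (not raw error) drives selection, the noise situation offers a persistently high error but near-zero progress, so the greedy choice drifts away from it once its history is long enough.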
Including Limited Partners in the Diversity Jurisdiction Analysis
This paper presents the results of the Dynamic Pricing Challenge, held on the occasion of the 17th INFORMS Revenue Management and Pricing Section Conference on June 29–30, 2017 in Amsterdam, The Netherlands. For this challenge, participants submitted algorithms for pricing and demand learning whose numerical performance was analyzed in simulated market environments. This allows consideration of market dynamics that are not analytically tractable or cannot be empirically analyzed due to practical complications. Our findings indicate that the relative performance of algorithms varies substantially across different market dynamics, which confirms the intrinsic complexity of pricing and learning in the presence of competition.
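A minimal example of what "pricing and demand learning" means here, under an assumed linear demand model (none of the challenge's actual algorithms or market environments are reproduced): the seller fits demand from observed price/sales pairs by least squares and then charges the revenue-maximizing price for the fitted model, with a short forced-exploration phase and small dithering so the fit keeps improving.

```python
import random

random.seed(0)

# Hypothetical demand model, unknown to the seller: d = A - B * p + noise.
A_TRUE, B_TRUE = 10.0, 2.0

def fit_linear(xs, ys):
    """Ordinary least squares for y = a - b * x (note the negative slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    b = -cov / var
    return my + b * mx, b        # (a, b)

prices, demands = [], []
revenue = 0.0
for t in range(200):
    if t < 10:
        p = random.uniform(0.5, 4.5)        # forced exploration phase
    else:
        a, b = fit_linear(prices, demands)
        p = a / (2 * b)                     # maximizes p * (a - b * p)
        p += random.uniform(-0.1, 0.1)      # dithering keeps the regressor informative
    d = max(0.0, A_TRUE - B_TRUE * p + random.gauss(0, 0.5))
    prices.append(p)
    demands.append(d)
    revenue += p * d
```

With these constants the true revenue-maximizing price is A_TRUE / (2 * B_TRUE) = 2.5, and the certainty-equivalent policy converges near it; the exploration/dithering trade-off is exactly where competing algorithms in such challenges differ.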
Image patch analysis and clustering of sunspots: a dimensionality reduction approach
Sunspots, as seen in white light or continuum images, are associated with
regions of high magnetic activity on the Sun, visible on magnetogram images.
Their complexity is correlated with explosive solar activity and so classifying
these active regions is useful for predicting future solar activity. Current
classification of sunspot groups is visually based and suffers from bias.
Supervised learning methods can reduce human bias but fail to optimally
capitalize on the information present in sunspot images. This paper uses two
image modalities (continuum and magnetogram) to characterize the spatial and
modal interactions of sunspot and magnetic active region images and presents a
new approach to cluster the images. Specifically, in the framework of image
patch analysis, we estimate the number of intrinsic parameters required to
describe the spatial and modal dependencies, the correlation between the two
modalities and the corresponding spatial patterns, and examine the phenomena at
different scales within the images. To do this, we use linear and nonlinear
intrinsic dimension estimators, canonical correlation analysis, and
multiresolution analysis of intrinsic dimension.
Comment: 5 pages, 7 figures, accepted to ICIP 201
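One of the tools named above, canonical correlation analysis between two image modalities, can be sketched on synthetic data. The two "modalities" below are stand-ins for continuum and magnetogram patch features, generated from a shared 2-D latent signal plus independent noise (all dimensions and noise levels are invented); the CCA itself is the textbook construction, whitening each view and taking the SVD of the cross-covariance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "modalities" driven by a shared 2-D latent signal plus noise.
n = 500
z = rng.normal(size=(n, 2))                            # shared latent factors
X = z @ rng.normal(size=(2, 6)) + 0.05 * rng.normal(size=(n, 6))
Y = z @ rng.normal(size=(2, 5)) + 0.05 * rng.normal(size=(n, 5))

def cca_correlations(X, Y):
    """Classical CCA: whiten each view, then SVD of the cross-covariance."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)

    def inv_sqrt(C):
        # Inverse matrix square root via the symmetric eigendecomposition.
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Cxx = Xc.T @ Xc / n
    Cyy = Yc.T @ Yc / n
    Cxy = Xc.T @ Yc / n
    M = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    return np.linalg.svd(M, compute_uv=False)          # canonical correlations

corrs = cca_correlations(X, Y)
print(np.round(corrs, 3))
```

Two canonical correlations come out close to 1 (the shared latent directions) and the rest close to 0 (independent noise), which is the kind of cross-modal structure the paper quantifies on real sunspot patches.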
Intrinsic Dimension Estimation
It has long been thought that high-dimensional data encountered in many
practical machine learning tasks have low-dimensional structure, i.e., the
manifold hypothesis holds. A natural question, thus, is to estimate the
intrinsic dimension of a given population distribution from a finite sample. We
introduce a new estimator of the intrinsic dimension and provide finite sample,
non-asymptotic guarantees. We then apply our techniques to get new sample
complexity bounds for Generative Adversarial Networks (GANs) depending only on
the intrinsic dimension of the data.
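The abstract does not specify its new estimator, so as background, here is a short sketch of a standard nearest-neighbor approach to the same problem: the Levina-Bickel maximum-likelihood estimator (with the usual k-2 small-sample correction), applied to a 2-D manifold embedded in 10 dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mle_intrinsic_dim(X, k=10):
    """Levina-Bickel MLE of intrinsic dimension from k-nearest-neighbor distances."""
    # Full pairwise distance matrix (fine for small samples like this one).
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    D.sort(axis=1)
    knn = D[:, 1:k + 1]                            # drop the zero self-distance
    # Per-point estimate: (k-2) over the summed log-ratios of the
    # k-th neighbor distance to each closer neighbor distance.
    logs = np.log(knn[:, -1:] / knn[:, :-1])
    return ((k - 2) / logs.sum(axis=1)).mean()

# Data living on a 2-D subspace of a 10-D ambient space.
z = rng.normal(size=(800, 2))
X = np.hstack([z, np.zeros((800, 8))])
d_hat = mle_intrinsic_dim(X)
print(round(d_hat, 2))
```

The estimate lands near 2 despite the 10-D ambient representation, which is precisely the gap the manifold hypothesis exploits and that the paper's GAN sample-complexity bounds depend on.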