The Maryland Virtual Demonstrator Environment for Robot Imitation Learning
Robot imitation learning, where a robot autonomously generates actions
required to accomplish a task demonstrated by a human, has emerged as a
potential replacement for a more conventional hand-coded approach to
programming robots. Many past studies in imitation learning have human
demonstrators perform tasks in the real world. However, this approach is
generally expensive and requires high-quality image processing and complex
understanding of human motion. To address this issue, we developed a
simulated environment for imitation learning, in which the visual properties
of objects are simplified to lower the barrier to image processing. The
user is provided with a graphical user interface (GUI) to demonstrate
tasks by manipulating objects in the environment, from which a simulated
robot in the same environment can learn. We hypothesize that in many
situations, imitation learning can be significantly simplified while
being more effective when based solely on the objects being manipulated
rather than on the demonstrator's body and motions. For this reason, the
demonstrator in the environment is not embodied, and a demonstration as
seen by the robot consists of sequences of object movements. A
programming interface in Matlab is provided for researchers and
developers to write code that controls the robot's behaviors. An XML
interface is also provided to generate objects that form task-specific
scenarios. This report describes the features and usage of the software.
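As a flavor of what such a controller script might look like, here is a
minimal sketch; the function names (connectToEnvironment, loadDemonstration,
moveObjectTo, disconnect) are hypothetical placeholders invented for
illustration, not the actual interface documented in the report:

    % Minimal sketch of a Matlab controller script (hypothetical API names;
    % the real interface is documented in the report itself).
    env  = connectToEnvironment();          % attach to the running simulation
    demo = loadDemonstration('stack.dem');  % recorded sequence of object movements
    for k = 1:numel(demo.steps)
        step = demo.steps{k};
        % Replay each demonstrated object displacement with the simulated robot.
        moveObjectTo(env, step.objectId, step.targetPose);
    end
    disconnect(env);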
SMILE: Simulator for Maryland Imitation Learning Environment
As robot imitation learning is beginning to replace conventional
hand-coded approaches in programming robot behaviors, much work is
focusing on learning from the actions of demonstrators. We hypothesize
that in many situations, procedural tasks can be learned more
effectively by observing object behaviors while completely ignoring the
demonstrator's motions. To support the study of this hypothesis, and of robot
imitation learning in general, we built SMILE, a software system providing a
simulated 3D environment. In this virtual environment, both a
simulated robot and a user-controlled demonstrator can manipulate
various objects on a tabletop. The demonstrator is not embodied in
SMILE, and therefore a recorded demonstration appears as if the objects
move on their own. In addition to recording demonstrations, SMILE also
allows programming the simulated robot via Matlab scripts, as well as
creating highly customizable objects for task scenarios via XML. This
report describes the features and usage of SMILE.
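By way of illustration, a task-scenario object declaration might look roughly
like the XML below; the element and attribute names here are guesses for
illustration only, not SMILE's actual schema:

    <!-- Hypothetical object declaration; SMILE's real schema may differ. -->
    <object id="redBlock" shape="box">
      <size x="0.05" y="0.05" z="0.05"/>
      <color r="1.0" g="0.0" b="0.0"/>
      <position x="0.20" y="0.00" z="0.00"/>
    </object>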
Development of a Large-Scale Integrated Neurocognitive Architecture - Part 2: Design and Architecture
In Part 1 of this report, we outlined a framework for creating an intelligent agent
based upon modeling the large-scale functionality of the human brain. Building on
those results, we begin Part 2 by specifying the behavioral requirements of a
large-scale neurocognitive architecture. The core of our long-term approach remains
focused on creating a network of neuromorphic regions that provide the mechanisms
needed to meet these requirements. However, for the short term of the next few years,
it is likely that optimal results will be obtained by using a hybrid design that
also includes symbolic methods from AI/cognitive science and control processes from the
field of artificial life. We accordingly propose a three-tiered architecture that
integrates these different methods, and describe an ongoing computational study of a
prototype 'mini-Roboscout' based on this architecture. We also examine the implications
of some non-standard computational methods for developing a neurocognitive agent.
This examination includes computational experiments assessing the effectiveness of
genetic programming as a design tool for recurrent neural networks for sequence
processing, and experiments measuring the speed-up obtained for adaptive neural
networks when they are executed on a graphical processing unit (GPU) rather than a
conventional CPU. We conclude that the implementation of a large-scale neurocognitive
architecture is feasible, and outline a roadmap for achieving this goal.
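As a rough illustration of the kind of GPU measurement described, the sketch
below times a dense neural-layer update on CPU versus GPU using Matlab's
gpuArray (Parallel Computing Toolbox); this is an assumed reconstruction for
illustration, not the study's actual benchmark:

    % Assumed CPU-vs-GPU timing sketch for a dense neural-layer update;
    % not the benchmark used in the study.
    n = 4096;                            % units per layer
    W = rand(n); x = rand(n, 1);         % random weights and input
    tic; for k = 1:100, y = tanh(W * x); end; cpuTime = toc;
    Wg = gpuArray(W); xg = gpuArray(x);  % copy data to the GPU once
    tic; for k = 1:100, yg = tanh(Wg * xg); end
    wait(gpuDevice); gpuTime = toc;      % wait for asynchronous GPU work
    fprintf('CPU %.3fs, GPU %.3fs, speed-up %.1fx\n', ...
            cpuTime, gpuTime, cpuTime / gpuTime);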
Two-dimensional wave patterns of spreading depolarization: retracting, re-entrant, and stationary waves
We present spatio-temporal characteristics of spreading depolarizations (SD)
in two experimental systems: retracting SD wave segments observed with
intrinsic optical signals in chicken retina, and spontaneously occurring
re-entrant SD waves that repeatedly spread across gyrencephalic feline cortex
observed by laser speckle flowmetry. A mathematical framework of
reaction-diffusion systems with augmented transmission capabilities is
developed to explain the emergence and transitions between these patterns. Our
prediction is that the observed patterns are reaction-diffusion patterns
controlled and modulated by weak nonlocal coupling. The described
spatio-temporal characteristics of SD are of considerable clinical relevance under
conditions of migraine and stroke. In stroke, the emergence of re-entrant SD
waves is believed to worsen outcome. In migraine, retracting SD wave segments
cause neurological symptoms, and transitions to stationary SD wave patterns may
cause persistent symptoms without evidence of infarction on noninvasive
imaging.
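A generic form of this model class, written in our own notation rather than
necessarily the authors', is an activator-inhibitor reaction-diffusion system
augmented by a weak nonlocal coupling term:

    \begin{aligned}
    \partial_t u &= f(u,v) + D\,\nabla^2 u
                   + \beta \int_\Omega K(\lVert\mathbf{r}-\mathbf{r}'\rVert)\,
                     u(\mathbf{r}',t)\,d\mathbf{r}' ,\\
    \partial_t v &= \varepsilon\, g(u,v),
    \end{aligned}

where u is the fast activator (the depolarization variable), v a slow recovery
variable, D the diffusion coefficient, K a spatial coupling kernel, and \beta
the weak nonlocal coupling strength whose magnitude and sign would control the
transitions between retracting, re-entrant, and stationary wave patterns.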
Mirror Neurons, Prediction and Hemispheric Coordination: The Prioritizing of Intersubjectivity over 'Intrasubjectivity'
We observe that approaches to intersubjectivity involving mirror neurons, and those involving emulation
and prediction, have eclipsed discussion of the same mechanisms as a means of achieving coordination between the two hemispheres of the human brain. We explore some implications of the suggestion that mutual modelling by the two situated hemispheres (each hemisphere ‘second guessing’ the other) is a productive place to start in understanding the phylogenetic and ontogenetic development of cognition and of intersubjectivity.