
    The Maryland Virtual Demonstrator Environment for Robot Imitation Learning

    Robot imitation learning, where a robot autonomously generates the actions required to accomplish a task demonstrated by a human, has emerged as a potential replacement for the more conventional hand-coded approach to programming robots. Many past studies in imitation learning have human demonstrators perform tasks in the real world. However, this approach is generally expensive and requires high-quality image processing and complex human motion understanding. To address this issue, we developed a simulated environment for imitation learning in which the visual properties of objects are simplified to lower the barriers to image processing. The user is provided with a graphical user interface (GUI) to demonstrate tasks by manipulating objects in the environment, from which a simulated robot in the same environment can learn. We hypothesize that in many situations, imitation learning can be significantly simplified, and made more effective, when based solely on the objects being manipulated rather than on the demonstrator's body and motions. For this reason, the demonstrator in the environment is not embodied, and a demonstration as seen by the robot consists of sequences of object movements. A programming interface in Matlab is provided for researchers and developers to write code that controls the robot's behaviors. An XML interface is also provided to generate the objects that form task-specific scenarios. This report describes the features and usage of the software.
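    As a rough illustration of what such an XML interface for task-specific scenarios might look like, consider the sketch below. The tag and attribute names (scenario, object, position, etc.) are hypothetical, chosen only for this summary; the actual schema is defined by the software itself.

    ```xml
    <!-- Hypothetical scenario file: tag and attribute names are illustrative
         only, not the schema actually defined by the software. -->
    <scenario name="stack-two-blocks">
      <object id="blockA" shape="cube" size="0.05" color="red">
        <position x="0.10" y="0.00" z="0.025"/>
      </object>
      <object id="blockB" shape="cube" size="0.05" color="blue">
        <position x="-0.10" y="0.00" z="0.025"/>
      </object>
    </scenario>
    ```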

    SMILE: Simulator for Maryland Imitation Learning Environment

    As robot imitation learning begins to replace conventional hand-coded approaches to programming robot behaviors, much work is focusing on learning from the actions of demonstrators. We hypothesize that in many situations, procedural tasks can be learned more effectively by observing object behaviors while completely ignoring the demonstrator's motions. To support the study of this hypothesis, and robot imitation learning in general, we built a software system named SMILE, a simulated 3D environment. In this virtual environment, both a simulated robot and a user-controlled demonstrator can manipulate various objects on a tabletop. The demonstrator is not embodied in SMILE, and therefore a recorded demonstration appears as if the objects move on their own. In addition to recording demonstrations, SMILE allows programming the simulated robot via Matlab scripts, as well as creating highly customizable objects for task scenarios via XML. This report describes the features and usage of SMILE.

    Integration of Gravitational Torques in Cerebellar Pathways Allows for the Dynamic Inverse Computation of Vertical Pointing Movements of a Robot Arm

    Several authors have suggested that gravitational forces are centrally represented in the brain for the planning, control, and sensorimotor prediction of movements. Furthermore, some studies proposed that the cerebellum computes the inverse dynamics (internal inverse model), whereas others suggested that it computes sensorimotor predictions (internal forward model). This study proposes a model of cerebellar pathways deduced from both biological and physical constraints. The model learns the dynamic inverse computation of the effect of gravitational torques from its sensorimotor predictions, without calculating an explicit inverse computation. Using supervised learning, this model learns to control an anthropomorphic robot arm actuated by two antagonistic McKibben artificial muscles. This was achieved by using internal parallel feedback loops containing neural networks that anticipate the sensorimotor consequences of the neural commands. The artificial neural network architecture was similar to the large-scale connectivity of the cerebellar cortex. Movements in the sagittal plane were performed during three sessions combining different initial positions, amplitudes, and directions of movement to vary the effects of the gravitational torques applied to the robotic arm. The results show that this model acquired an internal representation of the gravitational effects during vertical arm pointing movements. This is consistent with the proposal that the cerebellar cortex contains an internal representation of gravitational torques which is encoded through a learning process. Furthermore, this model suggests that the cerebellum performs the inverse dynamics computation based on sensorimotor predictions. This highlights the importance of sensorimotor predictions of gravitational torques acting on upper limb movements performed in the gravitational field.
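    The core control idea in this abstract, deriving an inverse-dynamics command from a learned forward prediction rather than from an explicitly trained inverse model, can be sketched with a deliberately simplified toy. Everything below is an illustrative assumption, not the authors' cerebellar architecture or robot: a scalar plant with a hidden constant gravity-like bias, a linear forward model updated by the sensory prediction error (LMS), and a controller that inverts the forward model to choose each command.

    ```python
    # Toy sketch (not the authors' model): a forward model learns the effect of a
    # constant gravity-like torque, and commands are obtained by inverting that
    # learned prediction -- no explicit inverse model is ever trained.
    import random

    random.seed(0)

    GRAVITY = -0.3           # hidden constant bias acting on the plant
    a, b, c = 1.0, 1.0, 0.0  # forward-model parameters: x_pred = a*x + b*u + c
    LR = 0.02                # learning rate for the prediction error

    def plant(x, u):
        """True one-step dynamics, unknown to the controller."""
        return x + u + GRAVITY

    errors = []
    for trial in range(2000):
        x = random.uniform(-1.0, 1.0)
        target = random.uniform(-1.0, 1.0)

        # Inverse through the forward model: pick u so that the *predicted*
        # next state equals the target.
        u = (target - a * x - c) / b

        x_next = plant(x, u)
        pred = a * x + b * u + c

        # LMS update of the forward model from the sensory prediction error.
        e = pred - x_next
        a -= LR * e * x
        b -= LR * e * u
        c -= LR * e

        errors.append(abs(x_next - target))

    final_err = sum(errors[-100:]) / 100  # mean reach error after learning
    ```

    As the forward model absorbs the gravity term into `c`, the inverted prediction automatically compensates for it, so the reaching error shrinks without any separate inverse computation, which is the flavor of the claim being modeled.
    
    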

    Functional near-infrared spectroscopy-based correlates of prefrontal cortical dynamics during a cognitive-motor executive adaptation task

    This study investigated changes in brain hemodynamics, as measured by functional near-infrared spectroscopy (fNIR), during performance of a cognitive-motor adaptation task. The adaptation task involved learning a novel visuo-motor transformation (a 60-degree counterclockwise screen-cursor rotation), which required inhibition of a pre-potent visuo-motor response. A control group experienced a familiar transformation and thus did not face any executive challenge. Analysis of the experimental group's hemodynamic responses revealed that the performance enhancement was associated with a monotonic reduction in the oxygenation level in the prefrontal cortex. This finding confirms and extends functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) studies of visuo-motor adaptation and learning. The changes in prefrontal brain activation suggest an initial recruitment of frontal executive functioning to inhibit pre-potent visuo-motor mappings, followed by a progressive de-recruitment of the same prefrontal regions. The prefrontal hemodynamic changes observed in the experimental group translated into enhanced motor performance, revealed by reductions in movement time, movement extent, root mean square error, and directional error. These kinematic adaptations are consistent with the acquisition of an internal model of the novel visuo-motor transformation. No comparable change was observed in the control group for either the hemodynamics or the kinematics. This study 1) extends our understanding of frontal executive processes from the cognitive to the cognitive-motor domain and 2) suggests that optical brain imaging can be employed to provide hemodynamic-based biomarkers to assess and monitor the level of adaptive cognitive-motor performance.


    Pointing errors for simulation 2.

    Average RMSE_D (D) and RMSE_S (S) for each mass condition for sessions I (SI), II (SII), and III (SIII). Training: training set. Iep (inter- and extrapolated positions): test set. M_i_T (0 ≤ i ≤ 5): masses used during the training set. M_i_Iep (0 ≤ i ≤ 5): masses used during the test set. Average Iep: RMSE values for the test set averaged across SI, SII, and SIII. Aver M0–5_T and Aver M0–5_Iep: RMSE_D and RMSE_S values for the training (_T) and test (_Iep) sets, respectively, averaged across the different mass conditions.

    Comparison between simulated and robot movements.

    Distribution of the RMSE_D (left column) and RMSE_S (right column) across the three sessions, for simulation 1 (A) and the robotic experiment (B). Both types of error are plotted as a function of movement amplitude during session I (i.e., intra- and extrapolated positions), and as a function of initial position and movement amplitude during sessions II and III, respectively.