77 research outputs found

    The Maryland Virtual Demonstrator Environment for Robot Imitation Learning

    Robot imitation learning, where a robot autonomously generates the actions required to accomplish a task demonstrated by a human, has emerged as a potential replacement for the more conventional hand-coded approach to programming robots. Many past studies in imitation learning have human demonstrators perform tasks in the real world. However, this approach is generally expensive and requires high-quality image processing and complex understanding of human motion. To address these issues, we developed a simulated environment for imitation learning in which the visual properties of objects are simplified to lower the barrier of image processing. The user is provided with a graphical user interface (GUI) for demonstrating tasks by manipulating objects in the environment, from which a simulated robot in the same environment can learn. We hypothesize that in many situations, imitation learning can be significantly simplified, and made more effective, when based solely on the objects being manipulated rather than on the demonstrator's body and motions. For this reason, the demonstrator in the environment is not embodied, and a demonstration as seen by the robot consists of sequences of object movements. A programming interface in Matlab is provided for researchers and developers to write code that controls the robot's behaviors, and an XML interface is provided for generating the objects that form task-specific scenarios. This report describes the features and usage of the software.
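    As an illustration of the XML interface, a task-scenario file might look roughly like the sketch below; the element and attribute names here are hypothetical placeholders, not the schema actually used by the software.

        <!-- Hypothetical scenario: two blocks on the tabletop for a stacking task -->
        <scenario name="block-stacking">
            <object id="cube1" shape="box" color="red">
                <size x="0.05" y="0.05" z="0.05"/>
                <position x="0.10" y="0.00" z="0.025"/>
            </object>
            <object id="cube2" shape="box" color="blue">
                <size x="0.05" y="0.05" z="0.05"/>
                <position x="-0.10" y="0.00" z="0.025"/>
            </object>
        </scenario>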

    SMILE: Simulator for Maryland Imitation Learning Environment

    As robot imitation learning begins to replace conventional hand-coded approaches to programming robot behaviors, much work focuses on learning from the actions of demonstrators. We hypothesize that in many situations, procedural tasks can be learned more effectively by observing object behaviors while completely ignoring the demonstrator's motions. To support the study of this hypothesis, and of robot imitation learning in general, we built a software system named SMILE, a simulated 3D environment in which both a simulated robot and a user-controlled demonstrator can manipulate various objects on a tabletop. The demonstrator is not embodied in SMILE, so a recorded demonstration appears as if the objects move on their own. In addition to recording demonstrations, SMILE allows programming the simulated robot via Matlab scripts, as well as creating highly customizable objects for task scenarios via XML. This report describes the features and usage of SMILE.
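    A robot-control script in Matlab might be organized roughly as follows; every function name in this sketch (smile_connect, smile_load_demo, smile_pick, smile_place) is a hypothetical placeholder rather than the actual SMILE API, whose real calls are documented in the report.

        % Hypothetical sketch: replay a recorded demonstration with the simulated robot.
        % All smile_* function names are placeholders, not the real SMILE API.
        sim  = smile_connect();                  % attach to the running simulation
        demo = smile_load_demo(sim, 'stacking'); % load a recorded demonstration
        for k = 1:numel(demo.moves)
            m = demo.moves(k);                   % one object movement: id, start, goal
            smile_pick(sim, m.object);           % grasp the object named in the move
            smile_place(sim, m.goal);            % place it at the demonstrated goal pose
        end
        smile_disconnect(sim);                   % release the simulation handle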

    Two-dimensional wave patterns of spreading depolarization: retracting, re-entrant, and stationary waves

    We present spatio-temporal characteristics of spreading depolarizations (SD) in two experimental systems: retracting SD wave segments observed with intrinsic optical signals in chicken retina, and spontaneously occurring re-entrant SD waves that repeatedly spread across gyrencephalic feline cortex, observed by laser speckle flowmetry. A mathematical framework of reaction-diffusion systems with augmented transmission capabilities is developed to explain the emergence of, and transitions between, these patterns. Our prediction is that the observed patterns are reaction-diffusion patterns controlled and modulated by weak nonlocal coupling. The described spatio-temporal characteristics of SD have important clinical relevance under the conditions of migraine and stroke. In stroke, the emergence of re-entrant SD waves is believed to worsen outcome. In migraine, retracting SD wave segments cause neurological symptoms, and transitions to stationary SD wave patterns may cause persistent symptoms without evidence of infarction on noninvasive imaging.
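    As a sketch of the kind of model meant here (the specific form below is a generic activator-inhibitor system with an added nonlocal term, assumed for illustration rather than taken from the paper), weak nonlocal coupling can be written as an integral term of small strength K acting on the activator u:

        \begin{align}
            \frac{\partial u}{\partial t} &= f(u,v) + D \nabla^2 u
                + K \int_\Omega w\bigl(|\mathbf{x}-\mathbf{x}'|\bigr)\, u(\mathbf{x}',t)\, d\mathbf{x}', \\
            \frac{\partial v}{\partial t} &= \varepsilon\, g(u,v),
        \end{align}

    where f and g are the local reaction kinetics, D is the activator's diffusion coefficient, and the kernel w sets the spatial range of the weak coupling that controls whether SD wave segments retract, re-enter, or become stationary.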

    Learning Competition and Cooperation
