8,739 research outputs found

    Attraction of Spiral Waves by Localized Inhomogeneities with Small-World Connections in Excitable Media

    Full text link
    Trapping and un-trapping of spiral tips in a two-dimensional homogeneous excitable medium with local small-world connections is studied by numerical simulation. In a homogeneous medium, which can be simulated on a lattice with regular nearest-neighbor connections, the spiral wave is in the meandering regime. When the topology of a small region is changed from regular connections to small-world connections, the tip of the spiral wave is attracted to the small-world region, where the average path length declines with the introduction of long-distance connections. The same trapping phenomenon also occurs in regular lattices when the diffusion coefficient of the small region is increased. These results can be explained by the eikonal equation and the relation between core radius and diffusion coefficient.
    Comment: 5 pages, 4 figures
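    A minimal sketch of the second mechanism mentioned above, assuming a standard Barkley-type excitable medium rather than the authors' model: a spiral is launched on a regular lattice and the diffusion coefficient is raised inside a small circular patch. All parameter values, the patch geometry, and the use of D * laplacian(u) in place of the full divergence form where D varies are illustrative assumptions. The link to the explanation in the abstract is the eikonal relation, in which a front's normal velocity is reduced by the product of diffusion coefficient and front curvature, so regions of larger D change how the tip rotates.

```python
# Minimal sketch, not the authors' code: Barkley-type excitable medium on a
# regular 2D lattice, with the diffusion coefficient raised inside a small
# circular patch. Parameters and patch geometry are assumptions.
import numpy as np

def laplacian(f):
    """Five-point Laplacian with reflecting (no-flux) boundaries."""
    fp = np.pad(f, 1, mode="edge")
    return (fp[:-2, 1:-1] + fp[2:, 1:-1] +
            fp[1:-1, :-2] + fp[1:-1, 2:] - 4.0 * f)

def simulate(n=200, steps=20000, dt=0.01, dx=0.5,
             a=0.75, b=0.06, eps=0.06, D0=1.0, D_patch=2.0):
    u = np.zeros((n, n))
    v = np.zeros((n, n))
    # Cross-field initial condition: launches a single spiral near the center.
    u[:, : n // 2] = 1.0
    v[n // 2:, :] = a / 2.0

    # Diffusion map: background D0, larger D_patch inside a small circular region.
    yy, xx = np.mgrid[0:n, 0:n]
    patch = (xx - 0.7 * n) ** 2 + (yy - 0.5 * n) ** 2 < (0.1 * n) ** 2
    D = np.where(patch, D_patch, D0)

    for _ in range(steps):
        # Explicit Euler step; D * laplacian(u) approximates div(D grad u).
        du = D * laplacian(u) / dx ** 2 + u * (1.0 - u) * (u - (v + b) / a) / eps
        u, v = u + dt * du, v + dt * (u - v)
    return u, v
```

    Tracking the tip (e.g., the intersection of level sets of u and v) over the run would show whether it drifts toward and circulates around the high-D patch.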

    Robot Composite Learning and the Nunchaku Flipping Challenge

    Full text link
    Advanced motor skills are essential for robots to physically coexist with humans. Much research on robot dynamics and control has achieved advanced robot motor capabilities, but mostly through heavily case-specific engineering. Meanwhile, in terms of robots acquiring skills in a general manner, robot learning from human demonstration (LfD) has made great progress, but it still has limitations in handling dynamic skills and compound actions. In this paper, we present a composite learning scheme that goes beyond LfD and integrates robot learning from human definition, demonstration, and evaluation. The method tackles advanced motor skills that require dynamic time-critical maneuvers, complex contact control, and the handling of partly rigid, partly soft objects. We also introduce the "nunchaku flipping challenge", an extreme test that places hard requirements on all three of these aspects. Continuing from our previous presentations, this paper introduces the latest update of the composite learning scheme and the physical success of the nunchaku flipping challenge.
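    The abstract describes the scheme only at a high level; the following is a hypothetical skeleton, not the authors' implementation, showing one way the three forms of human input could be combined: a definition stage supplying structure and hard constraints, a demonstration stage fitting a nominal policy, and an evaluation stage refining it from human-assigned scores. All class and function names are assumptions.

```python
# Illustrative skeleton only: composite learning from definition,
# demonstration, and evaluation. Names and update rules are assumptions.
import numpy as np
from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass
class SkillDefinition:
    """Human 'definition': phase structure and hard admissibility constraints."""
    phases: Sequence[str]
    is_admissible: Callable[[np.ndarray], bool]

def fit_from_demonstrations(demos: List[np.ndarray]) -> np.ndarray:
    """Human 'demonstration': fit a nominal policy (here simply the mean trajectory)."""
    return np.mean(np.stack(demos), axis=0)

def refine_from_evaluation(policy: np.ndarray,
                           rollout: Callable[[np.ndarray], np.ndarray],
                           score: Callable[[np.ndarray], float],
                           iters: int = 50,
                           sigma: float = 0.05) -> np.ndarray:
    """Human 'evaluation': keep perturbed policies whose rollouts score higher."""
    best, best_score = policy, score(rollout(policy))
    for _ in range(iters):
        candidate = best + sigma * np.random.randn(*best.shape)
        s = score(rollout(candidate))
        if s > best_score:
            best, best_score = candidate, s
    return best

def composite_learning(definition: SkillDefinition,
                       demos: List[np.ndarray],
                       rollout: Callable[[np.ndarray], np.ndarray],
                       score: Callable[[np.ndarray], float]) -> np.ndarray:
    """Combine the three forms of human input into one learning pass."""
    nominal = fit_from_demonstrations(demos)
    if not definition.is_admissible(nominal):
        raise ValueError("demonstrated policy violates the defined constraints")
    return refine_from_evaluation(nominal, rollout, score)
```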

    Multiform Adaptive Robot Skill Learning from Humans

    Full text link
    Object manipulation is a basic element of everyday human life. Robotic manipulation has progressed from maneuvering single-rigid-body objects with firm grasps to maneuvering soft objects and handling contact-rich actions. Meanwhile, technologies such as robot learning from demonstration have enabled humans to train robots intuitively. This paper discusses a new level of robotic learning-based manipulation. In contrast to the single form of learning from demonstration, we propose a multiform learning approach that integrates additional forms of skill acquisition, including adaptive learning from definition and evaluation. Moreover, going beyond state-of-the-art techniques for handling purely rigid or soft objects in a pseudo-static manner, our work allows robots to learn to handle partly rigid, partly soft objects with time-critical skills and sophisticated contact control. Such robotic manipulation capability opens a variety of new possibilities in human-robot interaction.
    Comment: Accepted to the 2017 Dynamic Systems and Control Conference (DSCC), Tysons Corner, VA, October 11-1

    Optical Flow Guided Feature: A Fast and Robust Motion Representation for Video Action Recognition

    Full text link
    Motion representation plays a vital role in human action recognition in videos. In this study, we introduce a novel compact motion representation for video action recognition, named Optical Flow guided Feature (OFF), which enables the network to distill temporal information through a fast and robust approach. The OFF is derived from the definition of optical flow and is orthogonal to the optical flow. The derivation also provides theoretical support for using the difference between two frames. By directly calculating pixel-wise spatiotemporal gradients of the deep feature maps, the OFF can be embedded in any existing CNN-based video action recognition framework with only a slight additional cost. It enables the CNN to extract spatial and temporal information simultaneously, in particular the temporal information between frames. This simple but powerful idea is validated by experimental results. The network with OFF fed only by RGB inputs achieves a competitive accuracy of 93.3% on UCF-101, which is comparable with the result obtained by two streams (RGB and optical flow), but is 15 times faster. Experimental results also show that OFF is complementary to other motion modalities such as optical flow. When the proposed method is plugged into the state-of-the-art video action recognition framework, it achieves 96.0% and 74.2% accuracy on UCF-101 and HMDB-51, respectively. The code for this project is available at https://github.com/kevin-ssy/Optical-Flow-Guided-Feature.
    Comment: CVPR 2018. Code available at https://github.com/kevin-ssy/Optical-Flow-Guided-Feature
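    The description above is concrete enough for a sketch: OFF stacks pixel-wise spatial gradients of a frame's feature map with the temporal difference to the next frame's feature map. Below is a hedged PyTorch approximation of that computation, not the official code (which is at the linked repository); the Sobel kernels and the function name are assumptions.

```python
# Hedged sketch, not the official implementation: assemble an OFF-style tensor
# from the feature maps of two consecutive frames by stacking per-channel
# spatial gradients with the temporal difference.
import torch
import torch.nn.functional as F

def off_from_features(feat_t: torch.Tensor, feat_t1: torch.Tensor) -> torch.Tensor:
    """feat_t, feat_t1: (N, C, H, W) feature maps of frames t and t+1."""
    c = feat_t.shape[1]
    sobel = torch.tensor([[-1., 0., 1.],
                          [-2., 0., 2.],
                          [-1., 0., 1.]], device=feat_t.device)
    kx = sobel.view(1, 1, 3, 3).repeat(c, 1, 1, 1)            # depthwise x-gradient kernels
    ky = sobel.t().reshape(1, 1, 3, 3).repeat(c, 1, 1, 1)     # depthwise y-gradient kernels
    # Pixel-wise spatial gradients of the current frame's features (per channel).
    fx = F.conv2d(feat_t, kx, padding=1, groups=c)
    fy = F.conv2d(feat_t, ky, padding=1, groups=c)
    # Temporal difference between the two frames' features.
    ft = feat_t1 - feat_t
    # Concatenate the three components along the channel dimension.
    return torch.cat([fx, fy, ft], dim=1)
```

    In a two-frame setup, the returned tensor would be fed to a lightweight sub-network alongside the ordinary RGB stream, which is consistent with the "slight additional cost" claim in the abstract.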