
    Advanced Mobile Robotics: Volume 3

    Mobile robotics is a challenging field with great potential. It covers disciplines including electrical engineering, mechanical engineering, computer science, cognitive science, and social science. It is essential to the design of automated robots, in combination with artificial intelligence, vision, and sensor technologies. Mobile robots are widely used for surveillance, guidance, transportation, and entertainment tasks, as well as medical applications. This Special Issue concentrates on recent developments in mobile robots and the research surrounding them, with the aim of advancing work on the fundamental problems these robots present. Multidisciplinary and integrative contributions, including navigation, learning and adaptation, networked systems, biologically inspired robots, and cognitive methods, are welcome, from both a research and an application perspective.

    Computer-aided position planning of miniplates to treat facial bone defects

    In this contribution, a software system for computer-aided position planning of miniplates to treat facial bone defects is proposed. The bone plates used intra-operatively have to be passively adapted to the underlying bone contours for adequate bone fragment stabilization. However, this procedure can lead to frequent intra-operative material readjustments, especially in complex surgical cases. Our approach fits a selection of common implant models to the surgeon's desired position in a 3D computer model, with respect to the surrounding anatomical structures, while always allowing adjustment of both the direction and the position of the osteosynthesis material. Using the proposed software, surgeons can pre-plan the form and morphology of the resulting implant with the aid of a computer-visualized model within a few minutes. Furthermore, the resulting model can be stored in the STL file format, the format commonly used for 3D printing. With this technology, surgeons can print the virtually generated implant or create an individually designed bending tool. This method yields osteosynthesis materials adapted to the surrounding anatomy and requires a minimal amount of money and time. Comment: 19 pages, 13 figures, 2 tables.
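
    A minimal sketch of the STL-export step described above, using the numpy-stl package (an assumption; the abstract does not name the toolkit). The geometry is a hypothetical placeholder for the planned implant surface, not the paper's actual model.

    ```python
    # Minimal sketch: exporting a planned implant surface to STL with numpy-stl.
    # The geometry below is a hypothetical placeholder (a flat rectangular plate),
    # not the paper's actual implant model.
    import numpy as np
    from stl import mesh  # pip install numpy-stl

    # Two triangles forming a 40 mm x 8 mm plate in the z = 0 plane.
    vertices = np.array([
        [0.0, 0.0, 0.0],
        [40.0, 0.0, 0.0],
        [40.0, 8.0, 0.0],
        [0.0, 8.0, 0.0],
    ])
    faces = np.array([[0, 1, 2], [0, 2, 3]])

    plate = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
    for i, face in enumerate(faces):
        plate.vectors[i] = vertices[face]

    # The STL file can then be 3D-printed or used to design a bending tool.
    plate.save("planned_miniplate.stl")
    ```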

    Translating Videos to Commands for Robotic Manipulation with Deep Recurrent Neural Networks

    We present a new method to translate videos to commands for robotic manipulation using deep Recurrent Neural Networks (RNNs). Our framework first extracts deep features from the input video frames with a deep Convolutional Neural Network (CNN). Two RNN layers with an encoder-decoder architecture are then used to encode the visual features and sequentially generate the output words of the command. We demonstrate that translation accuracy can be improved by allowing a smooth transition between the two RNN layers and by using a state-of-the-art feature extractor. Experimental results on our new challenging dataset show that our approach outperforms recent methods by a fair margin. Furthermore, we combine the proposed translation module with a vision and planning system to let a robot perform various manipulation tasks. Finally, we demonstrate the effectiveness of our framework on the full-size humanoid robot WALK-MAN.
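
    A minimal PyTorch sketch of the pipeline the abstract describes: CNN frame features feed an RNN encoder, and an RNN decoder emits the command tokens. The backbone, layer sizes, and module names are illustrative assumptions, not the authors' implementation.

    ```python
    # Minimal sketch (assumed, not the authors' code): CNN frame features feed an
    # RNN encoder; an RNN decoder then emits command tokens one word at a time.
    import torch
    import torch.nn as nn
    from torchvision import models

    class Video2Command(nn.Module):
        def __init__(self, vocab_size, hidden=512):
            super().__init__()
            cnn = models.resnet18(weights=None)                    # placeholder feature extractor
            self.cnn = nn.Sequential(*list(cnn.children())[:-1])   # drop the classifier head
            self.encoder = nn.LSTM(512, hidden, batch_first=True)
            self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
            self.embed = nn.Embedding(vocab_size, hidden)
            self.out = nn.Linear(hidden, vocab_size)

        def forward(self, frames, captions):
            # frames: (B, T, 3, H, W); captions: (B, L) token ids of the target command
            b, t = frames.shape[:2]
            feats = self.cnn(frames.flatten(0, 1)).flatten(1)      # (B*T, 512)
            feats = feats.view(b, t, -1)
            _, state = self.encoder(feats)                         # encode the video
            dec_in = self.embed(captions)                          # teacher forcing
            dec_out, _ = self.decoder(dec_in, state)               # encoder state seeds decoder
            return self.out(dec_out)                               # (B, L, vocab_size)
    ```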

    Head-mounted Sensory Augmentation Device: Comparing Haptic and Audio Modality

    This paper investigates and compares the effectiveness of the haptic and audio modalities for navigation in low-visibility environments using a sensory augmentation device. A second-generation head-mounted vibrotactile interface was developed as a sensory augmentation prototype to help users navigate in such environments. In our experiment, a subject navigates along a wall relying on haptic or audio feedback as navigation commands. The haptic/audio feedback is presented to the subjects according to wall-distance information measured by a set of 12 ultrasound sensors placed around a helmet and processed by a classification algorithm based on a multilayer perceptron neural network. Results showed that the haptic modality leads to significantly lower route deviation during navigation compared to auditory feedback. Furthermore, the NASA TLX questionnaire showed that subjects reported a lower cognitive workload with the haptic modality, although both modalities were able to guide the users along the wall.
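
    A minimal sketch of the classification step described above: a multilayer perceptron mapping the 12 helmet ultrasound readings to a discrete navigation command. The layer sizes, command set, and synthetic training data are illustrative assumptions, not the paper's configuration.

    ```python
    # Minimal sketch (assumed): an MLP that maps 12 ultrasound distances to a
    # navigation command. Data, labels, and command set are purely illustrative.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    COMMANDS = ["keep_course", "turn_left", "turn_right"]  # hypothetical command set

    rng = np.random.default_rng(0)
    X = rng.uniform(0.1, 4.0, size=(500, 12))      # 12 ultrasound distances in metres
    y = rng.integers(0, len(COMMANDS), size=500)   # placeholder labels

    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
    clf.fit(X, y)

    # At run time each new sensor frame is classified, and the resulting command
    # is rendered either as vibrotactile or as audio feedback.
    reading = rng.uniform(0.1, 4.0, size=(1, 12))
    print(COMMANDS[int(clf.predict(reading)[0])])
    ```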

    A Developmental Organization for Robot Behavior

    This paper focuses on exploring how learning and development can be structured in synthetic (robot) systems. We present a developmental assembler for constructing reusable and temporally extended actions in sequence. The discussion adopts the traditions of dynamic pattern theory, in which behavior is an artifact of coupled dynamical systems with a number of controllable degrees of freedom. In our model, the events that delineate control decisions are derived from the pattern of (dis)equilibria on a working subset of sensorimotor policies. We show how this architecture can be used to accomplish sequential knowledge gathering and representation tasks, and provide examples of the kind of developmental milestones that this approach has already produced in our lab.

    Read and Reap the Rewards: Learning to Play Atari with the Help of Instruction Manuals

    High sample complexity has long been a challenge for RL. Humans, on the other hand, learn to perform tasks not only from interaction or demonstrations, but also by reading unstructured text documents such as instruction manuals. Instruction manuals and wiki pages are among the most abundant data that could inform agents of valuable features, policies, or task-specific environmental dynamics and reward structures. We therefore hypothesize that the ability to utilize human-written instruction manuals to assist in learning policies for specific tasks should lead to a more efficient and better-performing agent. We propose the Read and Reward framework, which speeds up RL algorithms on Atari games by reading manuals released by the Atari game developers. Our framework consists of a QA Extraction module that extracts and summarizes relevant information from the manual, and a Reasoning module that evaluates object-agent interactions based on that information. An auxiliary reward is then provided to a standard A2C RL agent when an interaction is detected. Experimentally, various RL algorithms obtain significant improvements in performance and training speed when assisted by our design.
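
    A minimal sketch of the auxiliary-reward idea described above: a Gymnasium wrapper that adds a bonus to the environment reward whenever an interaction is detected. The `detect_interaction` function and the bonus value are hypothetical stand-ins for the paper's QA Extraction and Reasoning modules.

    ```python
    # Minimal sketch (assumed, not the authors' code): add a manual-derived
    # auxiliary reward whenever an object-agent interaction is detected.
    import gymnasium as gym
    import numpy as np

    def detect_interaction(obs: np.ndarray) -> bool:
        # Placeholder criterion; the real Reasoning module consults the manual.
        return bool(obs.mean() > 100)

    class ReadAndRewardWrapper(gym.Wrapper):
        def __init__(self, env: gym.Env, bonus: float = 0.1):
            super().__init__(env)
            self.bonus = bonus

        def step(self, action):
            obs, reward, terminated, truncated, info = self.env.step(action)
            if detect_interaction(np.asarray(obs)):
                reward += self.bonus   # auxiliary reward on detected interaction
            return obs, reward, terminated, truncated, info

    # The wrapped environment can then be trained with any standard A2C
    # implementation without modifying the agent itself.
    ```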