
    Object-Oriented Dynamics Learning through Multi-Level Abstraction

    Object-based approaches for learning action-conditioned dynamics have demonstrated promise for generalization and interpretability. However, existing approaches suffer from structural limitations and optimization difficulties in common environments with multiple dynamic objects. In this paper, we present a novel self-supervised learning framework, called Multi-level Abstraction Object-oriented Predictor (MAOP), which employs a three-level learning architecture that enables efficient object-based dynamics learning from raw visual observations. We also design a spatial-temporal relational reasoning mechanism for MAOP to support instance-level dynamics learning and handle partial observability. Our results show that MAOP significantly outperforms previous methods in sample efficiency and in generalization to novel environments when learning environment models. We also demonstrate that the learned dynamics models enable planning in unseen environments comparable to planning with the true environment models. In addition, MAOP learns semantically and visually interpretable disentangled representations. Comment: Accepted to the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI), 2020
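
    To make the object-based, action-conditioned setup concrete, here is a minimal sketch of per-object relational dynamics prediction. The slot-style object states, the pairwise relation network, and every layer size are illustrative assumptions for this sketch; it is not MAOP's three-level architecture.

```python
# Sketch of object-based, action-conditioned dynamics prediction
# (illustrative; all dimensions and layers are assumptions).
import torch
import torch.nn as nn

class ObjectDynamics(nn.Module):
    def __init__(self, obj_dim=32, act_dim=4, hidden=64):
        super().__init__()
        # Pairwise relational reasoning: effect of object j on object i.
        self.relation = nn.Sequential(
            nn.Linear(2 * obj_dim, hidden), nn.ReLU(), nn.Linear(hidden, obj_dim))
        # Action-conditioned update applied to every object with shared weights.
        self.dynamics = nn.Sequential(
            nn.Linear(2 * obj_dim + act_dim, hidden), nn.ReLU(), nn.Linear(hidden, obj_dim))

    def forward(self, objects, action):
        # objects: (batch, n_obj, obj_dim); action: (batch, act_dim)
        b, n, d = objects.shape
        src = objects.unsqueeze(2).expand(b, n, n, d)   # sender states
        dst = objects.unsqueeze(1).expand(b, n, n, d)   # receiver states
        effects = self.relation(torch.cat([src, dst], -1)).sum(2)  # aggregate senders
        act = action.unsqueeze(1).expand(b, n, -1)
        return objects + self.dynamics(torch.cat([objects, effects, act], -1))
```

    Because the same relation and dynamics networks are applied to every object, the parameter count is independent of the number of objects, which is one reason object-based models generalize across scenes.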

    Deep Visual Foresight for Planning Robot Motion

    A key challenge in scaling up robot learning to many skills and environments is removing the need for human supervision, so that robots can collect their own data and improve their own performance without being limited by the cost of requesting human feedback. Model-based reinforcement learning holds the promise of enabling an agent to learn to predict the effects of its actions, which could provide flexible predictive models for a wide range of tasks and environments without detailed human supervision. We develop a method for combining deep action-conditioned video prediction models with model-predictive control that uses entirely unlabeled training data. Our approach requires neither a calibrated camera, an instrumented training set-up, nor precise sensing and actuation. Our results show that our method enables a real robot to perform nonprehensile manipulation -- pushing objects -- and can handle novel objects not seen during training. Comment: ICRA 2017. Supplementary video: https://sites.google.com/site/robotforesight
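
    The planning loop described above can be sketched as follows: sample candidate action sequences, roll each through the learned video predictor, and execute the first action of the best sequence before replanning. Here `predict_video` and the pixel-difference cost are hypothetical stand-ins, not the paper's exact predictor, cost, or sampler.

```python
# Random-shooting model-predictive control with a learned video predictor
# (a minimal sketch; predict_video and the cost are assumed placeholders).
import numpy as np

def plan_action(predict_video, current_frame, goal_pixel_map,
                horizon=10, n_samples=100, act_dim=4, rng=None):
    rng = rng or np.random.default_rng()
    # Sample candidate action sequences uniformly in normalized action space.
    candidates = rng.uniform(-1.0, 1.0, size=(n_samples, horizon, act_dim))
    best_cost, best_seq = np.inf, None
    for seq in candidates:
        frames = predict_video(current_frame, seq)        # (horizon, H, W, C)
        cost = np.abs(frames[-1] - goal_pixel_map).sum()  # illustrative pixel cost
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq[0]  # execute the first action, then replan (MPC)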

    Back to the basis - observations support spherically closed dynamic space

    A holistic view of the cosmological appearance and development of space is obtained by studying space as the spherically closed surface of a 4-sphere in a zero-energy balance between motion and gravitation. Such an approach re-establishes Einstein's original view of the cosmological structure of the universe, but instead of forcing space to be static with a cosmological constant, it lets space contract or expand while constantly maintaining a balance between the energies of motion and gravitation within the structure. In spherically closed dynamic space the fourth dimension is purely metric in nature; time can be treated as a universal scalar, and the line element cdt in the fourth dimension acquires the meaning of the distance that space moves at velocity c in the time differential dt. The rest energy of matter appears as the energy of motion due to the motion of space in the direction of the 4-radius of the structure. All velocities in space are related to the 4-velocity of space, and the local state of rest appears as a property of the local energy system rather than as the state of an observer. Relativistic phenomena and cosmological predictions can be derived in closed mathematical form, and the picture of cosmology is clarified; the Euclidean appearance of distant space is predicted, and no dark energy or free parameters are needed to explain the magnitude/redshift relations of distant objects. Comment: 19 pages, 13 figures, presented at the 1st Crisis in Cosmology Conference (CCC-I), Monção, Portugal, June 23-25, 2005
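
    For readers who want the zero-energy balance the abstract invokes written in symbols, here is an illustrative LaTeX rendering. The symbols (M for the mass in space, R_4 for the 4-radius, M'' for the mass equivalence of the whole structure) are assumptions inferred from the abstract's wording, not a quotation of the paper's equations.

```latex
% Illustrative zero-energy balance between motion and gravitation
% for spherically closed space (symbols assumed from the abstract):
\[
  E_{\mathrm{motion}} + E_{\mathrm{gravitation}}
  = M c^{2} - \frac{G M M''}{R_4} = 0 ,
\]
% so the rest energy Mc^2 is read as the energy of motion of space
% moving at velocity c along the 4-radius, and the fourth-dimensional
% line element c\,dt is the distance space travels in the time dt.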

    Egocentric Vision-based Future Vehicle Localization for Intelligent Driving Assistance Systems

    Predicting the future location of vehicles is essential for safety-critical applications such as advanced driver assistance systems (ADAS) and autonomous driving. This paper introduces a novel approach to simultaneously predicting both the location and scale of target vehicles in the first-person (egocentric) view of an ego-vehicle. We present a multi-stream recurrent neural network (RNN) encoder-decoder model that captures object location and scale and pixel-level observations in separate streams for future vehicle localization. We show that incorporating dense optical flow improves prediction results significantly, since it captures information about motion as well as appearance change. We also find that explicitly modeling the future motion of the ego-vehicle improves prediction accuracy, which could be especially beneficial in intelligent and automated vehicles that have motion planning capability. To evaluate the performance of our approach, we present a new dataset of first-person videos collected from a variety of scenarios at road intersections, which are particularly challenging moments for prediction because vehicle trajectories are diverse and dynamic. Comment: To appear at ICRA 2019
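
    A minimal sketch of a multi-stream encoder-decoder of the kind described above: one recurrent stream encodes past bounding boxes (location and scale), another encodes pooled optical-flow features, and a decoder conditioned on planned ego-motion emits future box offsets. The concatenation-based fusion, the residual decoding, and all dimensions are assumptions for illustration, not the paper's exact model.

```python
# Illustrative multi-stream RNN encoder-decoder for future bounding boxes.
import torch
import torch.nn as nn

class FutureBoxPredictor(nn.Module):
    def __init__(self, box_dim=4, flow_dim=64, ego_dim=6, hidden=128, horizon=10):
        super().__init__()
        self.horizon = horizon
        self.box_enc = nn.GRU(box_dim, hidden, batch_first=True)    # location/scale stream
        self.flow_enc = nn.GRU(flow_dim, hidden, batch_first=True)  # optical-flow stream
        self.dec = nn.GRUCell(ego_dim, 2 * hidden)   # ego-motion conditions the decoder
        self.head = nn.Linear(2 * hidden, box_dim)   # per-step box offset

    def forward(self, past_boxes, past_flow, future_ego):
        # past_boxes: (B, T, 4); past_flow: (B, T, flow_dim); future_ego: (B, horizon, ego_dim)
        _, h_box = self.box_enc(past_boxes)
        _, h_flow = self.flow_enc(past_flow)
        h = torch.cat([h_box[-1], h_flow[-1]], -1)   # fuse the two streams
        box, outputs = past_boxes[:, -1], []
        for t in range(self.horizon):
            h = self.dec(future_ego[:, t], h)
            box = box + self.head(h)                 # predict residual offsets
            outputs.append(box)
        return torch.stack(outputs, 1)               # (B, horizon, 4)
```

    Decoding residual offsets rather than absolute coordinates is a common choice in trajectory prediction, since it keeps early predictions anchored to the last observed box.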

    Learning Particle Dynamics for Manipulating Rigid Bodies, Deformable Objects, and Fluids

    Real-life control tasks involve materials of various kinds---rigid or soft bodies, liquids, gases---each with distinct physical behaviors. This poses challenges to traditional rigid-body physics engines. Particle-based simulators have been developed to model the dynamics of these complex scenes; however, because they rely on approximation techniques, their simulations often deviate from real-world physics, especially in the long term. In this paper, we propose to learn a particle-based simulator for complex control tasks. Combining learning with particle-based systems brings two major benefits: first, the learned simulator, like other particle-based systems, applies broadly to objects of different materials; second, the particle-based representation poses a strong inductive bias for learning: particles of the same type share the same dynamics. This enables the model to quickly adapt to new environments of unknown dynamics within a few observations. We demonstrate robots achieving complex manipulation tasks using the learned simulator, such as manipulating fluids and deformable foam, with experiments both in simulation and in the real world. Our study helps lay the foundation for robot learning of dynamic scenes with particle-based representations. Comment: Accepted to ICLR 2019. Project Page: http://dpi.csail.mit.edu Video: https://www.youtube.com/watch?v=FrPpP7aW3L
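
    As a reading aid, here is a minimal sketch of one learned particle-dynamics step in the spirit described above: effects propagate along particle-pair edges, and a single shared network updates every particle, which is what lets particles of the same type share the same dynamics. Edge construction (e.g. a radius graph) is left to the caller, and all sizes are illustrative assumptions rather than the paper's architecture.

```python
# One message-passing step over a particle graph (illustrative sketch).
import torch
import torch.nn as nn

class ParticleStep(nn.Module):
    def __init__(self, state_dim=6, hidden=64):
        super().__init__()
        # Edge network: effect of a sender particle on a receiver particle.
        self.edge = nn.Sequential(nn.Linear(2 * state_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden))
        # Node network: shared update applied to every particle.
        self.node = nn.Sequential(nn.Linear(state_dim + hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, state_dim))

    def forward(self, states, senders, receivers):
        # states: (N, state_dim); senders/receivers: (E,) index tensors
        msgs = self.edge(torch.cat([states[senders], states[receivers]], -1))
        agg = torch.zeros(states.size(0), msgs.size(1), device=states.device)
        agg.index_add_(0, receivers, msgs)                        # sum incoming effects
        return states + self.node(torch.cat([states, agg], -1))  # residual update
```

    Stacking several such steps lets effects propagate across the particle graph, which is how collisions and contacts in one region can influence distant particles over a rollout.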