237 research outputs found

    TossingBot: Learning to Throw Arbitrary Objects with Residual Physics

    Full text link
    We investigate whether a robot arm can learn to pick and throw arbitrary objects into selected boxes quickly and accurately. Throwing has the potential to increase the physical reachability and picking speed of a robot arm. However, precisely throwing arbitrary objects in unstructured settings presents many challenges: from acquiring reliable pre-throw conditions (e.g. initial pose of object in manipulator) to handling varying object-centric properties (e.g. mass distribution, friction, shape) and dynamics (e.g. aerodynamics). In this work, we propose an end-to-end formulation that jointly learns to infer control parameters for grasping and throwing motion primitives from visual observations (images of arbitrary objects in a bin) through trial and error. Within this formulation, we investigate the synergies between grasping and throwing (i.e., learning grasps that enable more accurate throws) and between simulation and deep learning (i.e., using deep networks to predict residuals on top of control parameters predicted by a physics simulator). The resulting system, TossingBot, is able to grasp and throw arbitrary objects into boxes located outside its maximum reach range at 500+ mean picks per hour (600+ grasps per hour with 85% throwing accuracy), and generalizes to new objects and target locations. Videos are available at https://tossingbot.cs.princeton.edu. Comment: Summary video: https://youtu.be/f5Zn2Up2RjQ. Project webpage: https://tossingbot.cs.princeton.edu
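    The residual-physics idea above can be illustrated with a minimal sketch: a simple ballistic model supplies a nominal release speed for a target landing distance, and a learned correction is added on top. The projectile model, function names, and the stubbed-out residual below are illustrative assumptions, not the authors' implementation.

```python
import math

GRAVITY = 9.81  # m/s^2

def ballistic_release_speed(horizontal_dist, release_height, angle_rad=math.pi / 4):
    """Nominal release speed from a simple projectile model (drag ignored).

    Solves for v in x(t) = v*cos(a)*t, y(t) = h + v*sin(a)*t - 0.5*g*t^2 = 0
    at landing, given a fixed release angle a, release height h above the
    landing plane, and desired horizontal distance d to the target box.
    """
    cos_a, sin_a = math.cos(angle_rad), math.sin(angle_rad)
    denom = 2 * cos_a * (horizontal_dist * sin_a + release_height * cos_a)
    if denom <= 0:
        raise ValueError("target not reachable with this release angle")
    return math.sqrt(GRAVITY * horizontal_dist ** 2 / denom)

def predicted_residual(visual_features):
    """Placeholder for the learned per-object velocity correction.

    In the paper this residual comes from a deep network conditioned on
    visual observations of the grasped object; here it is stubbed out.
    """
    return 0.0

def throwing_speed(horizontal_dist, release_height, visual_features=None):
    # Physics-based estimate plus learned residual correction
    v_physics = ballistic_release_speed(horizontal_dist, release_height)
    return v_physics + predicted_residual(visual_features)

if __name__ == "__main__":
    print(round(throwing_speed(horizontal_dist=0.8, release_height=0.3), 3))
```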

    Financial sentiment analysis using FinBERT with application in predicting stock movement

    Full text link
    We apply sentiment analysis in a financial context using FinBERT and build a deep neural network model based on an LSTM to predict the movement of the financial market. We apply this model to a stock news dataset and compare its effectiveness to BERT, LSTM, and a classical ARIMA model. We find that sentiment is an effective factor in predicting market movement. We also propose several methods to improve the model. Comment: CS224U project
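    A rough sketch of the kind of pipeline the abstract describes, under assumptions: the publicly available ProsusAI/finbert checkpoint scores headline sentiment, and a small LSTM maps a window of daily sentiment features to an up/down movement prediction. The checkpoint name, feature construction, and hyperparameters are guesses, not the authors' setup.

```python
import torch
import torch.nn as nn
from transformers import pipeline

# FinBERT sentiment classifier; "ProsusAI/finbert" is an assumed checkpoint
finbert = pipeline("text-classification", model="ProsusAI/finbert")

def daily_sentiment(headlines):
    """Average signed sentiment over one day's headlines (positive minus negative)."""
    scores = []
    for result in finbert(headlines):
        sign = {"positive": 1.0, "negative": -1.0, "neutral": 0.0}[result["label"]]
        scores.append(sign * result["score"])
    return sum(scores) / max(len(scores), 1)

class MovementLSTM(nn.Module):
    """Binary up/down classifier over a window of daily sentiment features."""
    def __init__(self, input_size=1, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):              # x: (batch, days, input_size)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])      # logit for "market moves up"

# Example: a 5-day window of sentiment scores -> probability of upward movement
window = torch.tensor([[[0.4], [0.1], [-0.2], [0.3], [0.5]]])
print(torch.sigmoid(MovementLSTM()(window)))
```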

    Learning Synergies between Pushing and Grasping with Self-supervised Deep Reinforcement Learning

    Full text link
    Skilled robotic manipulation benefits from complex synergies between non-prehensile (e.g. pushing) and prehensile (e.g. grasping) actions: pushing can help rearrange cluttered objects to make space for arms and fingers; likewise, grasping can help displace objects to make pushing movements more precise and collision-free. In this work, we demonstrate that it is possible to discover and learn these synergies from scratch through model-free deep reinforcement learning. Our method involves training two fully convolutional networks that map from visual observations to actions: one infers the utility of pushes for a dense pixel-wise sampling of end effector orientations and locations, while the other does the same for grasping. Both networks are trained jointly in a Q-learning framework and are entirely self-supervised by trial and error, where rewards are provided from successful grasps. In this way, our policy learns pushing motions that enable future grasps, while learning grasps that can leverage past pushes. During picking experiments in both simulation and real-world scenarios, we find that our system quickly learns complex behaviors amid challenging cases of clutter, and achieves better grasping success rates and picking efficiencies than baseline alternatives after only a few hours of training. We further demonstrate that our method is capable of generalizing to novel objects. Qualitative results (videos), code, pre-trained models, and simulation environments are available at http://vpg.cs.princeton.edu. Comment: To appear at the International Conference on Intelligent Robots and Systems (IROS) 2018. Project webpage: http://vpg.cs.princeton.edu Summary video: https://youtu.be/-OkyX7Zlhi
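    The action-selection step implied by the abstract can be sketched as follows: each primitive has a dense Q-value map over pixels and discretized end-effector rotations, and the executed action is the overall argmax across both maps. The array shapes and 16-rotation discretization are assumptions for illustration.

```python
import numpy as np

def select_action(q_push, q_grasp):
    """Pick the primitive, rotation bin, and pixel with the highest Q-value.

    q_push, q_grasp: arrays of shape (num_rotations, H, W) holding dense
    pixel-wise Q estimates for each discretized end-effector orientation.
    """
    best_push = np.unravel_index(np.argmax(q_push), q_push.shape)
    best_grasp = np.unravel_index(np.argmax(q_grasp), q_grasp.shape)
    if q_push[best_push] >= q_grasp[best_grasp]:
        return ("push",) + best_push
    return ("grasp",) + best_grasp

# Toy example with random Q maps over 16 rotations on a 224x224 heightmap
rng = np.random.default_rng(0)
action = select_action(rng.random((16, 224, 224)), rng.random((16, 224, 224)))
print(action)  # e.g. ('grasp', rotation_index, row, col)
```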

    Variable Selection for Case-Cohort Studies with Failure Time Outcome

    Get PDF
    Case-cohort designs are widely used in large cohort studies to reduce the cost associated with covariate measurement. In many such studies the number of covariates is very large, so an efficient variable selection method is necessary. In this paper, we study the properties of variable selection using the smoothly clipped absolute deviation penalty in a case-cohort design with a diverging number of parameters. We establish the consistency and asymptotic normality of the maximum penalized pseudo-partial likelihood estimator, and show that the proposed variable selection procedure is consistent and has an asymptotic oracle property. Simulation studies compare the finite sample performance of the procedure with Akaike information criterion- and Bayesian information criterion-based tuning parameter selection methods. We make recommendations for use of the procedures in case-cohort studies, and apply them to the Busselton Health Study.
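    For readers unfamiliar with the penalty mentioned above, here is a small sketch of the smoothly clipped absolute deviation (SCAD) penalty in its standard form (Fan and Li, 2001). The paper applies it to a penalized pseudo-partial likelihood, which this generic snippet does not reproduce; a = 3.7 is the conventional default.

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty evaluated elementwise.

    Linear (lasso-like) near zero, a quadratic blend for moderate values,
    and constant beyond a*lam, so large coefficients are not over-shrunk.
    """
    t = np.abs(np.asarray(theta, dtype=float))
    small = lam * t
    middle = (2 * a * lam * t - t ** 2 - lam ** 2) / (2 * (a - 1))
    large = np.full_like(t, lam ** 2 * (a + 1) / 2)
    return np.where(t <= lam, small, np.where(t <= a * lam, middle, large))

print(scad_penalty([0.05, 0.5, 3.0], lam=0.2))
```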

    Im2Pano3D: Extrapolating 360 Structure and Semantics Beyond the Field of View

    Full text link
    We present Im2Pano3D, a convolutional neural network that generates a dense prediction of 3D structure and a probability distribution of semantic labels for a full 360 panoramic view of an indoor scene when given only a partial observation (<= 50%) in the form of an RGB-D image. To make this possible, Im2Pano3D leverages strong contextual priors learned from large-scale synthetic and real-world indoor scenes. To ease the prediction of 3D structure, we propose to parameterize 3D surfaces with their plane equations and train the model to predict these parameters directly. To provide meaningful training supervision, we use multiple loss functions that consider both pixel level accuracy and global context consistency. Experiments demonstrate that Im2Pano3D is able to predict the semantics and 3D structure of the unobserved scene with more than 56% pixel accuracy and less than 0.52m average distance error, which is significantly better than alternative approaches. Comment: Video summary: https://youtu.be/Au3GmktK-S
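    The plane-equation parameterization mentioned above can be illustrated with a small sketch: if a pixel's surface is predicted as a plane n . x = q with unit normal n and offset q, the depth along that pixel's viewing ray r follows in closed form as d = q / (n . r). This conversion is a generic illustration, not the network's exact output format.

```python
import numpy as np

def depth_from_plane(normal, offset, ray_dir, eps=1e-6):
    """Recover depth along a viewing ray from a predicted plane equation.

    The plane is n . x = q; a point along the ray is x = d * r for a unit
    ray direction r from the camera center, so d = q / (n . r).
    """
    denom = float(np.dot(normal, ray_dir))
    if abs(denom) < eps:
        return float("inf")  # ray (nearly) parallel to the predicted plane
    return offset / denom

# A wall 2 m straight ahead, viewed along the optical axis
print(depth_from_plane(np.array([0.0, 0.0, 1.0]), 2.0, np.array([0.0, 0.0, 1.0])))
```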

    Fundamental trends within falling match rates: Insights from the past decade of Canadian residency matching data

    Get PDF
    Background: The number of unmatched Canadian Medical Graduates (CMGs) has risen dramatically over the last decade. To identify long-term solutions to this problem, an understanding of the factors contributing to these rising unmatched rates is critical. Methods: Using match and electives data from 2009-2019, we employed machine learning algorithms to identify three clusters of disciplines with distinct trends in match and electives behaviours. We assessed the relationships between unmatched rates, competitiveness, rates of parallel planning, and program selection practices at a discipline level. Results: Across Canada, growth in CMGs has outpaced growth in residency seats, narrowing the seat-to-applicant ratio. Yet not all disciplines have been affected equally: a subset of surgical disciplines experienced a consistent decline in residency seats over time. Applicants to these disciplines are also at disproportionate risk of becoming unmatched, and this is associated with lower rates of parallel planning as quantified through clinical electives and match applications. This, in turn, is associated with the program selection practices of these disciplines. Conclusion: Long-term solutions to the unmatched CMG crisis require more nuance than indiscriminately increasing residency seats and should consider cluster-specific match ratios as well as regulations around clinical electives and program selection practices.
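    The clustering step in the Methods can be sketched, purely illustratively, as k-means over standardized discipline-level features; the actual algorithm and feature set are not specified in the abstract, so the feature names and data below are hypothetical placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# rows = disciplines; columns = e.g. seat-to-applicant ratio, unmatched rate,
# electives diversity, mean number of parallel applications (all made up here)
rng = np.random.default_rng(42)
discipline_features = rng.random((30, 4))

X = StandardScaler().fit_transform(discipline_features)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels)  # cluster assignment for each discipline
```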

    Rearrangement Planning for General Part Assembly

    Full text link
    Most successes in autonomous robotic assembly have been restricted to a single target or category. We propose to investigate general part assembly, the task of creating novel target assemblies with unseen part shapes. As a fundamental step toward a general part assembly system, we tackle the task of determining the precise poses of the parts in the target assembly, which we term "rearrangement planning". We present General Part Assembly Transformer (GPAT), a transformer-based model architecture that accurately predicts part poses by inferring how each part shape corresponds to the target shape. Our experiments on both 3D CAD models and real-world scans demonstrate GPAT's generalization abilities to novel and diverse target and part shapes. Comment: Project website: https://general-part-assembly.github.io
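    A hedged sketch of the interface the abstract implies: a transformer attends over per-part shape embeddings together with a target-assembly embedding and regresses a pose per part. The layer sizes, token layout, and pose parameterization below are assumptions, not the GPAT architecture.

```python
import torch
import torch.nn as nn

class PoseRegressionTransformer(nn.Module):
    """Toy stand-in: attend over part and target embeddings, then regress a
    pose (3-D translation + quaternion) for each part token."""
    def __init__(self, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.pose_head = nn.Linear(d_model, 7)  # (tx, ty, tz, qx, qy, qz, qw)

    def forward(self, part_tokens, target_token):
        # Concatenate the target-assembly token with one token per part shape
        tokens = torch.cat([target_token, part_tokens], dim=1)
        encoded = self.encoder(tokens)
        return self.pose_head(encoded[:, 1:])  # one pose per part token

model = PoseRegressionTransformer()
parts = torch.randn(1, 5, 128)   # 5 part-shape embeddings
target = torch.randn(1, 1, 128)  # target-assembly embedding
print(model(parts, target).shape)  # torch.Size([1, 5, 7])
```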

    VIRDO++: Real-World, Visuo-tactile Dynamics and Perception of Deformable Objects

    Full text link
    Deformable object manipulation can benefit from representations that seamlessly integrate vision and touch while handling occlusions. In this work, we present a novel approach for, and real-world demonstration of, multimodal visuo-tactile state-estimation and dynamics prediction for deformable objects. Our approach, VIRDO++, builds on recent progress in multimodal neural implicit representations for deformable object state-estimation [1] via a new formulation for deformation dynamics and a complementary state-estimation algorithm that (i) maintains a belief over deformations, and (ii) enables practical real-world application by removing the need for privileged contact information. In the context of two real-world robotic tasks, we show: (i) high-fidelity cross-modal state-estimation and prediction of deformable objects from partial visuo-tactile feedback, and (ii) generalization to unseen objects and contact formations.
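    A toy sketch of the neural-implicit idea underlying the abstract: an MLP maps a 3D query point plus latent codes for object shape and contact/force to a signed distance, so deformed geometry is represented implicitly. Dimensions and the two-code conditioning are illustrative assumptions, not the VIRDO++ model.

```python
import torch
import torch.nn as nn

class ImplicitDeformableSDF(nn.Module):
    """Toy neural implicit representation: signed distance at a 3-D query
    point, conditioned on an object shape code and a contact/force code."""
    def __init__(self, shape_dim=64, force_dim=32, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + shape_dim + force_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points, shape_code, force_code):
        # points: (batch, N, 3); latent codes are broadcast to every query point
        n = points.shape[1]
        codes = torch.cat([shape_code, force_code], dim=-1)
        codes = codes.unsqueeze(1).expand(-1, n, -1)
        return self.net(torch.cat([points, codes], dim=-1)).squeeze(-1)

sdf = ImplicitDeformableSDF()
pts = torch.randn(1, 100, 3)
print(sdf(pts, torch.randn(1, 64), torch.randn(1, 32)).shape)  # (1, 100)
```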