
    Imprecise dynamic walking with time-projection control

    We present a new walking foot-placement controller based on 3LP, a 3D model of bipedal walking composed of three pendulums that simulate falling, swing and torso dynamics. Taking advantage of the linear equations and closed-form solutions of the 3LP model, our proposed controller projects intermediate states of the biped back to the beginning of the phase, for which a discrete LQR controller is designed. After the projection, a proper control policy is generated by this LQR controller and used at the intermediate time. This control paradigm reacts to disturbances immediately and includes rules to account for swing dynamics and leg retraction. We apply it to a simulated Atlas robot in position control, always commanded to perform in-place walking. The stance hip joint keeps the torso upright to let the robot fall naturally, and the swing hip joint tracks the desired footstep location. Combined with simple Center of Pressure (CoP) damping rules in the low-level controller, our foot placement enables the robot to recover from strong pushes and produce periodic walking gaits when subject to persistent sources of disturbance, external or internal. These gaits are imprecise, i.e., they emerge from sources of asymmetry rather than from precisely imposing a desired velocity on the robot. In extreme conditions, the restrictive linearity assumptions of the 3LP model are often violated, yet the system remains robust in our simulations. An extensive analysis of closed-loop eigenvalues, viable regions and sensitivity to push timings further demonstrates the strengths of our simple controller.
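The core mechanism described above, projecting an intermediate state back to the start of the phase and evaluating a phase-start discrete LQR policy there, can be illustrated on any linear phase model. The sketch below uses a generic linear-inverted-pendulum-like system with made-up matrices, phase duration and cost weights, and a simplified unforced back-projection; it is not the 3LP model or the controller from the paper.

```python
# A minimal sketch of time-projection control on a generic linear phase model.
import numpy as np
from scipy.linalg import expm, solve_discrete_are

# Continuous-time linear dynamics within one walking phase: x_dot = A_c x + B_c u
A_c = np.array([[0.0, 1.0],
                [9.81 / 1.0, 0.0]])      # e.g. a linear inverted pendulum, g/z0 = 9.81 (assumed)
B_c = np.array([[0.0],
                [1.0]])
T = 0.5                                  # phase (step) duration in seconds (assumed)

# Discretize over one full phase to obtain the step-to-step map for the LQR design
A_d = expm(A_c * T)
B_d = np.linalg.solve(A_c, A_d - np.eye(2)) @ B_c   # zero-order-hold input over the phase

Q = np.eye(2)
R = np.array([[0.1]])
P = solve_discrete_are(A_d, B_d, Q, R)
K = np.linalg.solve(R + B_d.T @ P @ B_d, B_d.T @ P @ A_d)   # discrete LQR gain

def time_projected_control(x_t, t):
    """Project the intermediate state x(t) back to the phase start (unforced
    back-projection) and evaluate the phase-start LQR policy there."""
    x0_equiv = expm(-A_c * t) @ x_t      # closed-form back-projection of the state
    return -K @ x0_equiv

# Example: react to a mid-phase disturbance immediately
x_mid = np.array([0.05, 0.2])            # perturbed state observed at t = 0.2 s
print(time_projected_control(x_mid, 0.2))
```

The point of the projection is that a single LQR gain, designed only for the phase start, can still produce a control command at any intermediate time, which is what lets the controller react to pushes without waiting for the next step.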

    Motion sequence analysis in the presence of figural cues

    Published in final edited form as: Neurocomputing, 2015 January 5, 147: 485–491. The perception of 3-D structure in dynamic sequences is believed to be subserved primarily through the use of motion cues. However, real-world sequences contain many figural shape cues besides the dynamic ones. We hypothesize that if figural cues are perceptually significant during sequence analysis, then inconsistencies in these cues over time would lead to percepts of non-rigidity in sequences showing physically rigid objects in motion. We develop an experimental paradigm to test this hypothesis and present results with two patients with impairments in motion perception due to focal neurological damage, as well as two control subjects. Consistent with our hypothesis, the data suggest that figural cues strongly influence the perception of structure in motion sequences, even to the extent of inducing non-rigid percepts in sequences where motion information alone would yield rigid structures. Beyond helping to probe the issue of shape perception, our experimental paradigm might also serve as a possible perceptual assessment tool in a clinical setting. The authors wish to thank all observers who participated in the experiments reported here. This research and the preparation of this manuscript were supported by the National Institutes of Health RO1 NS064100 grant to LMV. (RO1 NS064100 - National Institutes of Health) Accepted manuscript.

    Learning to Prevent Monocular SLAM Failure using Reinforcement Learning

    Monocular SLAM refers to using a single camera to estimate robot ego-motion while building a map of the environment. While Monocular SLAM is a well-studied problem, automating it by integrating it with trajectory planning frameworks is particularly challenging. This paper presents a novel formulation based on Reinforcement Learning (RL) that generates fail-safe trajectories wherein the SLAM-generated outputs do not deviate largely from their true values. In essence, the RL framework successfully learns the otherwise complex relation between perceptual inputs and motor actions and uses this knowledge to generate trajectories that do not cause SLAM to fail. We show systematically in simulations how the quality of the SLAM estimate improves dramatically when trajectories are computed using RL. Our method scales effectively across Monocular SLAM frameworks, both in simulation and in real-world experiments with a mobile robot. Comment: Accepted at the 11th Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP) 2018. More information can be found at the project page at https://robotics.iiit.ac.in/people/vignesh.prasad/SLAMSafePlanner.html and the supplementary video can be found at https://www.youtube.com/watch?v=420QmM_Z8v
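As a rough illustration of the idea rather than the paper's actual formulation, the sketch below scores candidate motion primitives with a reward that penalizes SLAM degradation, so that a policy trained on such a reward would prefer trajectories that keep SLAM healthy. The action set and the slam_error/goal_progress functions are hypothetical placeholders.

```python
# A minimal sketch: trade off task progress against the risk of SLAM failure.
import numpy as np

ACTIONS = ["forward", "arc_left", "arc_right"]   # discrete motion primitives (assumed)

def slam_error(state, action):
    # Placeholder: deviation of the SLAM estimate from its true value after the action.
    return np.random.rand()

def goal_progress(state, action):
    # Placeholder: how much the action advances the robot toward its goal.
    return np.random.rand()

def reward(state, action, w_slam=2.0):
    # Higher reward for progress, lower reward when the action degrades SLAM.
    return goal_progress(state, action) - w_slam * slam_error(state, action)

# One-step greedy illustration of how the reward shapes action choice.
state = None
scores = {a: reward(state, a) for a in ACTIONS}
print(max(scores, key=scores.get), scores)
```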

    Flight Dynamics-based Recovery of a UAV Trajectory using Ground Cameras

    We propose a new method to estimate the 6-DoF trajectory of a flying object, such as a quadrotor UAV, within a 3D airspace monitored by multiple fixed ground cameras. It is based on a new structure-from-motion formulation for the 3D reconstruction of a single moving point with known motion dynamics. Our main contribution is a new bundle adjustment procedure which, in addition to optimizing the camera poses, regularizes the point trajectory using a prior based on motion dynamics (specifically, flight dynamics). Furthermore, we can infer the underlying control input sent to the UAV's autopilot that determined its flight trajectory. Our method requires neither perfect single-view tracking nor appearance matching across views. For robustness, we allow the tracker to generate multiple detections per frame in each video. The true detections and the data association across videos are estimated using robust multi-view triangulation and subsequently refined during our bundle adjustment procedure. Quantitative evaluation on simulated data and experiments on real videos from indoor and outdoor scenes demonstrate the effectiveness of our method.
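The key idea, augmenting reprojection residuals with a motion-dynamics prior on the point trajectory, can be sketched with a toy least-squares problem. The constant-velocity prior, the orthographic "cameras" and all numbers below are illustrative assumptions, not the paper's flight-dynamics model or bundle adjustment implementation.

```python
# A minimal sketch of trajectory reconstruction with a dynamics prior as a regularizer.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
T = 20
true_traj = np.arange(1, T + 1)[:, None] * np.array([0.1, 0.05, 0.02])  # smooth ground truth

# Two fixed toy "cameras": orthographic projections onto different axis pairs.
P1 = np.array([[1.0, 0, 0], [0, 1.0, 0]])
P2 = np.array([[0, 1.0, 0], [0, 0, 1.0]])
obs1 = true_traj @ P1.T + 0.01 * rng.standard_normal((T, 2))
obs2 = true_traj @ P2.T + 0.01 * rng.standard_normal((T, 2))

def residuals(flat_traj, w_dyn=5.0):
    X = flat_traj.reshape(T, 3)
    r_reproj = np.concatenate([(X @ P1.T - obs1).ravel(),
                               (X @ P2.T - obs2).ravel()])
    # Dynamics prior: penalize second differences (near-constant-velocity motion).
    r_dyn = w_dyn * (X[2:] - 2 * X[1:-1] + X[:-2]).ravel()
    return np.concatenate([r_reproj, r_dyn])

sol = least_squares(residuals, np.zeros(T * 3))
print(np.abs(sol.x.reshape(T, 3) - true_traj).max())
```

In the paper the prior comes from flight dynamics and the camera poses are optimized jointly; here only the trajectory is refined, to keep the regularization idea visible.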

    How hard is it to cross the room? -- Training (Recurrent) Neural Networks to steer a UAV

    This work explores the feasibility of steering a drone with a (recurrent) neural network, based on input from a forward-looking camera, in the context of a high-level navigation task. We set up a generic framework for training a network to perform navigation tasks based on imitation learning. It can be applied to both aerial and land vehicles. As a proof of concept, we apply it to a UAV (Unmanned Aerial Vehicle) in a simulated environment, learning to cross a room containing a number of obstacles. So far, only feedforward neural networks (FNNs) have been used to train UAV control. To cope with more complex tasks, we propose the use of recurrent neural networks (RNNs) instead and successfully train an LSTM (Long Short-Term Memory) network for controlling UAVs. Vision-based control is a sequential prediction problem, known for its highly correlated input data. The correlation makes training a network hard, especially an RNN. To overcome this issue, we investigate an alternative sampling method during training, namely window-wise truncated backpropagation through time (WW-TBPTT). Furthermore, end-to-end training requires a lot of data, which often is not available. Therefore, we compare the performance of retraining only the Fully Connected (FC) and LSTM control layers with networks that are trained end-to-end. Performing the relatively simple task of crossing a room already reveals important guidelines and good practices for training neural control networks. Different visualizations help to explain the learned behavior. Comment: 12 pages, 30 figures
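The sketch below illustrates the general truncated-BPTT idea behind training such an LSTM controller: the long flight sequence is cut into windows, the hidden state is carried across windows but detached so gradients only flow within one window. Network sizes, window length and the random data are assumptions, and the paper's window-wise sampling scheme differs in detail from this sequential loop.

```python
# A minimal sketch of truncated backpropagation through time for an LSTM control net.
import torch
import torch.nn as nn

seq_len, window, feat_dim, ctrl_dim = 200, 20, 32, 4
features = torch.randn(seq_len, 1, feat_dim)      # e.g. visual features from the camera
targets = torch.randn(seq_len, 1, ctrl_dim)       # expert (imitation) control commands

lstm = nn.LSTM(feat_dim, 64)
head = nn.Linear(64, ctrl_dim)
opt = torch.optim.Adam(list(lstm.parameters()) + list(head.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

hidden = None
for start in range(0, seq_len, window):
    x = features[start:start + window]
    y = targets[start:start + window]
    out, hidden = lstm(x, hidden)
    loss = loss_fn(head(out), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Detach so the next window does not backpropagate into this one.
    hidden = tuple(h.detach() for h in hidden)
    print(f"window starting at {start}: loss {loss.item():.4f}")
```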

    On Offline Evaluation of Vision-based Driving Models

    Autonomous driving models should ideally be evaluated by deploying them on a fleet of physical vehicles in the real world. Unfortunately, this approach is not practical for the vast majority of researchers. An attractive alternative is to evaluate models offline, on a pre-collected validation dataset with ground truth annotation. In this paper, we investigate the relation between various online and offline metrics for evaluation of autonomous driving models. We find that offline prediction error is not necessarily correlated with driving quality, and two models with identical prediction error can differ dramatically in their driving performance. We show that the correlation of offline evaluation with driving quality can be significantly improved by selecting an appropriate validation dataset and suitable offline metrics. The supplementary video can be viewed at https://www.youtube.com/watch?v=P8K8Z-iF0cY Comment: Published at the ECCV 2018 conference
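The kind of analysis described here boils down to measuring how well an offline metric predicts an online one across a set of models. The sketch below computes that correlation on synthetic numbers, only to make the setup concrete; the metrics, model count and noise level are all assumptions, not the paper's data.

```python
# A minimal sketch: correlate an offline metric with an online driving-quality metric.
import numpy as np

rng = np.random.default_rng(1)
n_models = 12
offline_error = rng.uniform(0.1, 1.0, n_models)   # e.g. mean absolute steering error offline
# Online quality loosely related to offline error plus noise: two models with the same
# offline error can still drive very differently.
online_success = 1.0 - offline_error + 0.4 * rng.standard_normal(n_models)

corr = np.corrcoef(offline_error, online_success)[0, 1]
print(f"correlation between offline error and driving success: {corr:.2f}")
```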

    Evolution of spiral and scroll waves of excitation in a mathematical model of ischaemic border zone

    Abnormal electrical activity from the boundaries of ischaemic cardiac tissue is recognized as one of the major causes of ischaemia-reperfusion arrhythmias. Here we present a theoretical analysis of the waves of electrical activity that can arise on the boundary of a cardiac cell network upon its recovery from ischaemia-like conditions. The main factors included in our analysis are macroscopic gradients of cell-to-cell coupling and cell excitability, and microscopic heterogeneity of individual cells. The interplay between these factors allows one to explain how spirals form, drift together with the moving boundary, get transiently pinned to local inhomogeneities, and finally penetrate into the bulk of the well-coupled tissue where they reach macroscopic scale. The asymptotic theory of the drift of spiral and scroll waves based on response functions provides an explanation of the drifts involved in this mechanism, with the exception of effects due to the discreteness of cardiac tissue. In particular, this asymptotic theory allows an extrapolation of 2D events into 3D, which has shown that cells within the border zone can give rise to 3D analogues of spirals, the scroll waves. When and if such scroll waves escape into better-coupled tissue, they are likely to collapse due to positive filament tension. However, our simulations have shown that such collapse of newly generated scrolls is not inevitable and that under certain conditions filament tension becomes negative, causing scroll filaments to expand and multiply, leading to a fibrillation-like state within small areas of cardiac tissue. Comment: 26 pages, 13 figures, appendix and 2 movies, as accepted to PLoS ONE 2011/08/0
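To make the border-zone setting concrete, the sketch below simulates a generic excitable medium with a macroscopic gradient in cell-to-cell coupling, using a FitzHugh-Nagumo model and explicit Euler stepping. It is only a toy in the spirit of the setup above, not the cardiac cell model, parameters or analysis from the paper.

```python
# A minimal sketch of an excitable medium with a left-to-right coupling gradient.
import numpy as np

N = 100
u = np.zeros((N, N))                      # activation variable
v = np.zeros((N, N))                      # recovery variable
u[: N // 2, :5] = 1.0                     # stimulus on half of the left edge (broken wavefront)

# Coupling (diffusion) increases from the poorly coupled left side to the bulk on the right.
D = np.linspace(0.02, 0.2, N)[None, :]

a, b, eps, dt = 0.1, 0.5, 0.02, 0.05
for step in range(2000):
    # 5-point Laplacian with no-flux boundaries via edge padding
    up = np.pad(u, 1, mode="edge")
    lap = up[:-2, 1:-1] + up[2:, 1:-1] + up[1:-1, :-2] + up[1:-1, 2:] - 4 * u
    u += dt * (u * (1 - u) * (u - a) - v + D * lap)
    v += dt * eps * (b * u - v)

print(u.mean(), u.max())
```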

    Push recovery with stepping strategy based on time-projection control

    In this paper, we present a simple control framework for on-line push recovery with dynamic stepping properties. Because our robot has relatively heavy legs, we need to take swing dynamics into account and thus use a linear model called 3LP, which is composed of three pendulums to simulate swing and torso dynamics. Based on the 3LP equations, we formulate discrete LQR controllers and use a particular time-projection method to continuously adjust the next footstep location on-line during the motion. This adjustment, which is found based on both pelvis and swing foot tracking errors, naturally takes the swing dynamics into account. Suggested adjustments are added to the Cartesian 3LP gaits and converted to joint-space trajectories through inverse kinematics. Fixed and adaptive foot-lift strategies also ensure sufficient ground clearance in perturbed walking conditions. The proposed structure is robust, yet uses very simple state estimation and basic position tracking. We rely on the physical series elastic actuators to absorb impacts, while introducing simple laws to compensate for their tracking bias. Extensive experiments demonstrate the functionality of the different control blocks and prove the effectiveness of time-projection in extreme push recovery scenarios. We also show self-produced and emergent walking gaits when the robot is subject to continuous dragging forces. These gaits feature dynamic walking robustness due to the relatively soft springs in the ankles and the absence of any Zero Moment Point (ZMP) control in our proposed architecture. Comment: 20 pages, journal paper
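The last step mentioned above, converting an adjusted Cartesian footstep target into joint-space commands through inverse kinematics, can be sketched with a planar two-link leg. The segment lengths, the nominal footstep and the adjustment below are made-up illustration values; the robot in the paper is a full humanoid with series elastic actuators.

```python
# A minimal sketch: planar two-link leg IK for an adjusted footstep target.
import numpy as np

L1, L2 = 0.4, 0.4                        # thigh and shank lengths (assumed)

def leg_ik(x, z):
    """Hip and knee angles placing the foot at (x, z) relative to the hip (planar)."""
    d2 = x * x + z * z
    c_knee = (d2 - L1**2 - L2**2) / (2 * L1 * L2)
    c_knee = np.clip(c_knee, -1.0, 1.0)  # clamp for unreachable targets
    knee = np.arccos(c_knee)             # 0 = straight leg
    hip = np.arctan2(x, -z) - np.arctan2(L2 * np.sin(knee), L1 + L2 * np.cos(knee))
    return hip, knee

# Nominal footstep plus an on-line adjustment suggested by the stepping controller
nominal = np.array([0.20, -0.75])
adjustment = np.array([0.07, 0.0])       # e.g. step further forward after a push
hip, knee = leg_ik(*(nominal + adjustment))
print(np.degrees(hip), np.degrees(knee))
```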