4,496 research outputs found

    Stable Torque Optimization for Redundant Robots Using a Short Preview

    Get PDF
    We consider the known phenomenon of torque oscillations and motion instabilities that occur in redundant robots during the execution of sufficiently long Cartesian trajectories when the joint torque is instantaneously minimized. In the framework of online local redundancy resolution methods, we propose basic variations of the minimum torque scheme to address this issue. Either the joint torque norm is minimized over two successive discrete-time samples using a short preview window, or we minimize the norm of the difference with respect to a desired momentum-damping joint torque, or the two schemes are combined. The resulting local control methods are all formulated as well-posed linear-quadratic problems, and their closed-form solutions also generate low joint velocities while addressing the primary torque optimization objectives. Stable and consistent behaviors are obtained along short or long Cartesian position trajectories, as illustrated with simulations on a 3R planar arm and with experiments on a 7R KUKA LWR robot.
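
    As context for the baseline this paper improves on, the sketch below (not taken from the paper) shows the instantaneous minimum torque norm resolution whose instability over long trajectories motivates the proposed preview and momentum-damping variants. The dynamics terms M and c, the task Jacobian J, and the drift term J̇q̇ are assumed to be supplied by an external robot model.

```python
import numpy as np

def min_torque_resolution(M, c, J, dJdq, xdd_des):
    """Instantaneous minimum-norm torque resolution (baseline scheme).

    Solves  min ||tau||^2  s.t.  J qdd = xdd_des - dJ/dt qd,
    with dynamics  M qdd + c = tau  (c collects Coriolis/gravity terms).
    Returns the joint torque tau and the resulting joint acceleration qdd.
    """
    b = xdd_des - dJdq                 # required task-space acceleration
    A = J @ np.linalg.inv(M)           # maps torque to task acceleration
    # minimum-norm solution of A @ tau = b + A @ c
    tau = np.linalg.pinv(A) @ (b + A @ c)
    qdd = np.linalg.solve(M, tau - c)
    return tau, qdd
```

    The paper's variants keep this linear-quadratic structure but replace the single-sample cost with a two-sample preview cost, a momentum-damping reference torque, or their combination.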

    Geometry-aware Manipulability Learning, Tracking and Transfer

    Full text link
    Body posture influences human and robot performance in manipulation tasks, as appropriate poses facilitate motion or force exertion along different axes. In robotics, manipulability ellipsoids arise as a powerful descriptor to analyze, control, and design robot dexterity as a function of the articulatory joint configuration. This descriptor can be designed according to different task requirements, such as tracking a desired position or applying a specific force. In this context, this paper presents a novel manipulability transfer framework, a method that allows robots to learn and reproduce manipulability ellipsoids from expert demonstrations. The proposed learning scheme is built on a tensor-based formulation of a Gaussian mixture model that takes into account that manipulability ellipsoids lie on the manifold of symmetric positive definite matrices. Learning is coupled with a geometry-aware tracking controller that allows robots to follow a desired profile of manipulability ellipsoids. Extensive evaluations in simulation with redundant manipulators, a robotic hand, and humanoid agents, as well as an experiment with two real dual-arm systems, validate the feasibility of the approach. Comment: Accepted for publication in the Intl. Journal of Robotics Research (IJRR). Website: https://sites.google.com/view/manipulability. Code: https://github.com/NoemieJaquier/Manipulability. 24 pages, 20 figures, 3 tables, 4 appendices.
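
    To make the central objects concrete, here is a minimal illustrative sketch (not the paper's code, which is linked above): the velocity manipulability ellipsoid computed from a task Jacobian, and a log-Euclidean discrepancy between a desired and a current ellipsoid, the kind of geometry-aware error that a tracking controller on the SPD manifold can drive to zero.

```python
import numpy as np
from scipy.linalg import logm

def velocity_manipulability(J):
    """Velocity manipulability ellipsoid M = J J^T for task Jacobian J."""
    return J @ J.T

def log_euclidean_error(M_des, M_cur):
    """Log-Euclidean discrepancy between two SPD manipulability ellipsoids.

    A geometry-aware alternative to the plain Euclidean difference
    M_des - M_cur, respecting the fact that ellipsoids live on the
    manifold of symmetric positive definite matrices.
    """
    E = logm(M_des) - logm(M_cur)
    return E, np.linalg.norm(E, "fro")
```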

    Optimal redundancy control for robot manipulators

    Get PDF
    Optimal control for kinematically redundant robots is addressed for two different optimization problems. In the first optimization problem, we consider the minimization of the transfer time along a given Cartesian path for a redundant robot. This problem can be solved in two steps, by separating the generation of a joint path associated to the Cartesian path from the exact minimization of motion time under kinematic/dynamic bounds along the obtained parametrized joint path. Multiple sub-optimal solutions can be found, depending on how redundancy is locally resolved in the joint space within the first step. A solution method that works at the acceleration level is proposed, using weighted pseudoinversion, optimizing an inertia-related criterion, and including null-space damping. The obtained results demonstrate consistently good behavior and decidedly faster motion times in comparison with related methods proposed in the literature. The motion time obtained with the proposed method is close to the global time-optimal solution along the same Cartesian path. Furthermore, reasonable tracking control performance is obtained in the experimentally executed motions. In the second optimization problem, we consider the known phenomenon of torque oscillations and motion instabilities that occur in redundant robots during the execution of sufficiently long Cartesian trajectories when the joint torque is instantaneously minimized. In the framework of online local redundancy resolution methods, we propose basic variations of the minimum torque scheme to address this issue. Either the joint torque norm is minimized over two successive discrete-time samples using a short preview window, or we minimize the norm of the difference with respect to a desired momentum-damping joint torque, or the two schemes are combined. The resulting local control methods are all formulated as well-posed linear-quadratic problems, and their closed-form solutions also generate low joint velocities while addressing the primary torque optimization objectives. Stable and consistent behaviors are obtained along short or long Cartesian position trajectories. For the two optimization problems addressed in this thesis, the results are obtained using three different robot systems, namely a 3R planar arm, a 6R Universal Robots UR10, and a 7R KUKA LWR robot.
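
    For the first problem, the thesis resolves redundancy at the acceleration level with a weighted pseudoinverse and null-space damping. The sketch below is a generic version of that scheme, not the thesis implementation (the exact inertia-related weighting and damping gain may differ); the weight W, Jacobian J, drift term J̇q̇, and joint velocity qd are assumed to come from a robot model.

```python
import numpy as np

def accel_redundancy_resolution(J, dJdq, xdd_des, qd, W, kd=1.0):
    """Acceleration-level redundancy resolution with a weighted
    pseudoinverse and null-space damping.

    Solves  min 0.5 qdd^T W qdd  s.t.  J qdd = xdd_des - dJ/dt qd,
    then adds a damping acceleration -kd*qd projected into the null space.
    Choosing W as the joint-space inertia matrix gives an inertia-related
    criterion in the spirit of the scheme described above.
    """
    b = xdd_des - dJdq
    Winv = np.linalg.inv(W)
    JW = Winv @ J.T @ np.linalg.inv(J @ Winv @ J.T)   # weighted pseudoinverse
    N = np.eye(J.shape[1]) - JW @ J                   # null-space projector
    return JW @ b + N @ (-kd * qd)
```

    The returned joint acceleration tracks the Cartesian task exactly while the damped null-space term keeps joint velocities bounded along long paths.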

    A Parallelized and Layered Model for the Shallow-Water Equations

    Get PDF
    An energy- and enstrophy-conserving and optimally-dispersive numerical scheme for the shallow-water equations is accelerated through implementation in the GPU environment. Previous research showed the viability of the numerical scheme under standard shallow-water test cases, but was limited in applications by computation time constraints. We overcome these limitations by parallelizing the numerical computation in the GPU environment. We also extend the capabilities of the implementation to support not just a single shallow-water layer, but multiple layers. These improvements significantly expand the range of tests that can be used to exercise the model, and enable better understanding of the power of the numerical scheme at large scales.
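
    The conserving, optimally-dispersive scheme of the thesis is not reproduced here. As a hedged illustration of the data-parallel structure that makes GPU acceleration effective, the sketch below advances a single-layer 1-D linearized shallow-water system one forward-backward step on a periodic staggered grid using whole-array operations.

```python
import numpy as np

def fb_step(h, u, H, g, dt, dx):
    """One forward-backward time step of the 1-D linearized shallow-water
    equations on a periodic staggered (C) grid:

        du/dt = -g dh/dx,    dh/dt = -H du/dx

    h[i] sits at cell centres, u[i] at the face between h[i] and h[i+1].
    Every update is a whole-array expression, the data-parallel pattern a
    GPU port parallelizes across grid points (and, in the layered model,
    across layers).
    """
    u_new = u - g * dt / dx * (np.roll(h, -1) - h)          # update velocities
    h_new = h - H * dt / dx * (u_new - np.roll(u_new, 1))   # then heights
    return h_new, u_new
```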

    Multi-GPU Acceleration of Iterative X-ray CT Image Reconstruction

    Get PDF
    X-ray computed tomography is a widely used medical imaging modality for screening and diagnosing diseases and for image-guided radiation therapy treatment planning. Statistical iterative reconstruction (SIR) algorithms have the potential to significantly reduce image artifacts by minimizing a cost function that models the physics and statistics of the data acquisition process in X-ray CT. SIR algorithms have superior performance compared to traditional analytical reconstructions for a wide range of applications, including nonstandard geometries arising from irregular sampling, limited angular range, missing data, and low-dose CT. The main hurdle for the widespread adoption of SIR algorithms in multislice X-ray CT reconstruction problems is their slow convergence rate and associated computational time. We seek to design and develop fast parallel SIR algorithms for clinical X-ray CT scanners. Each of the following approaches is implemented on real clinical helical CT data acquired from a Siemens Sensation 16 scanner and compared to the straightforward implementation of the Alternating Minimization (AM) algorithm of O’Sullivan and Benac [1]. We parallelize the computationally expensive projection and backprojection operations by exploiting the massively parallel hardware architecture of 3 NVIDIA TITAN X Graphics Processing Unit (GPU) devices with CUDA programming tools and achieve an average speedup of 72X over a straightforward CPU implementation. We implement a multi-GPU based voxel-driven multislice analytical reconstruction algorithm called Feldkamp-Davis-Kress (FDK) [2] and achieve an average overall speedup of 1382X over the baseline CPU implementation by using 3 TITAN X GPUs. Moreover, we propose a novel adaptive surrogate-function based optimization scheme for the AM algorithm, resulting in more aggressive update steps in every iteration. On average, we double the convergence rate of our baseline AM algorithm and also improve image quality by using the adaptive surrogate function. We extend the multi-GPU and adaptive surrogate-function based acceleration techniques to dual-energy reconstruction problems as well. Furthermore, we design and develop a GPU-based deep Convolutional Neural Network (CNN) to denoise simulated low-dose X-ray CT images. Our experiments show significant improvements in image quality with our proposed deep CNN-based algorithm against some widely used denoising techniques, including Block Matching 3-D (BM3D) and Weighted Nuclear Norm Minimization (WNNM). Overall, we have developed novel, fast, parallel, computationally efficient methods to perform multislice statistical reconstruction and image-based denoising on clinically-sized datasets.
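
    The projection and backprojection operators are the kernels whose cost dominates SIR and that the multi-GPU implementation parallelizes. The toy 2-D parallel-beam sketch below is illustrative only: the clinical system is helical and multislice, and the GPU code parallelizes over rays and voxels rather than looping in Python, but the structure of the two operators is the same.

```python
import numpy as np
from scipy.ndimage import rotate

def forward_project(image, angles_deg):
    """Toy 2-D parallel-beam forward projection (a crude Radon transform).

    For each view, rotate the image and sum along columns. These per-view,
    per-ray sums are independent work items, which is what a GPU projector
    exploits.
    """
    return np.stack([rotate(image, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def back_project(sinogram, angles_deg, shape):
    """Matching unfiltered backprojection: smear each view back and un-rotate."""
    recon = np.zeros(shape)
    for proj, a in zip(sinogram, angles_deg):
        smear = np.tile(proj, (shape[0], 1))            # constant along rays
        recon += rotate(smear, -a, reshape=False, order=1)
    return recon
```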