
    Hand posture prediction using neural networks within a biomechanical model

    This paper proposes the use of artificial neural networks (ANNs) within the framework of a biomechanical hand model for grasping. The ANNs enhance the model's capabilities by substituting estimated data for the experimental inputs required by the grasping algorithm, namely the tentative grasping posture and the most open posture during grasping. As a consequence, the grasping algorithm predicts more realistic grasping postures, along with the contact information required by the dynamic biomechanical model (contact points and normals). Several neural network architectures are tested and compared in terms of prediction errors, with encouraging results. The performance of the overall proposal is also shown through simulation, in which a grasping experiment is replicated and compared to real grasping data collected with a data glove device.
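
    To make the idea concrete, the following is a minimal sketch of the kind of mapping such a network learns: a small multilayer perceptron that predicts a set of joint angles (a tentative grasping posture) from a few object descriptors. The features, joint count and training data are invented for illustration and do not reproduce the paper's architectures or inputs.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Toy dataset (hypothetical): object diameter [m] and weight [kg] -> 20 joint angles [rad].
    X = rng.uniform([0.03, 0.1], [0.09, 1.0], size=(200, 2))
    Y = np.tanh(X @ rng.normal(size=(2, 20))) * 0.8   # synthetic "postures"

    # Small multilayer perceptron regressor standing in for the paper's ANNs.
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
    net.fit(X, Y)

    # Predict a tentative grasping posture for an unseen cylinder.
    posture = net.predict([[0.06, 0.5]])[0]
    print("predicted joint angles (rad):", np.round(posture, 3))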

    Grasp modelling with a biomechanical model of the hand

    The use of a biomechanical model for human grasp modelling is presented. A previously validated biomechanical model of the hand was used. The equilibrium of the grasped object was added to the model through a soft contact model, and a grasping posture generation algorithm was also incorporated. All the geometry was represented using spherical extensions of polytopes (s-topes) for efficient collision detection. The model was used to simulate an experiment in which a subject was asked to grasp two cylinders of different diameters and weights. Different objective functions were tested to solve the indeterminate problem, and the normal finger forces estimated by the model were compared with those measured experimentally. The popular objective function, the sum of squared muscle stresses, was shown to be unsuitable for grasping simulation on its own, and needs at least to be complemented by task-dependent grasp quality measures.
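
    The indeterminate force-distribution problem mentioned above can be illustrated with a much-reduced sketch: choosing finger normal forces that keep an object in equilibrium while minimising a sum-of-squares objective. The two-finger planar setup, friction coefficient and object weight below are assumptions; the paper's model minimises squared muscle stresses within a full biomechanical hand model.

    import numpy as np
    from scipy.optimize import minimize

    mu = 0.8        # assumed friction coefficient at the soft contacts
    weight = 4.0    # assumed weight of the grasped cylinder [N]

    # Stand-in objective: sum of squared contact forces (the paper minimises squared muscle stresses).
    def objective(f):           # f = [n1, n2], the two finger normal forces
        return float(np.sum(f ** 2))

    constraints = [
        # Opposing normal forces must cancel (horizontal equilibrium).
        {"type": "eq",   "fun": lambda f: f[0] - f[1]},
        # Available friction must support the object's weight (vertical equilibrium).
        {"type": "ineq", "fun": lambda f: mu * (f[0] + f[1]) - weight},
    ]

    res = minimize(objective, x0=[1.0, 1.0], bounds=[(0.0, None)] * 2,
                   constraints=constraints, method="SLSQP")
    print("normal forces [N]:", np.round(res.x, 3))   # about 2.5 N at each contact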

    Predictive and Multi-rate Sensor-Based Planning under Uncertainty

    In this paper, a general formulation of a predictive and multi-rate (MR) reactive planning method for intelligent vehicles (IVs) is introduced. The method handles path planning and trajectory planning for IVs in dynamic environments with uncertainty, in which the kinodynamic vehicle constraints are also taken into account. It is based on the potential field projection (PFP) method, which combines the classical potential field (PF) method with MR Kalman filter estimation. PFP takes into account the future object trajectories and their associated uncertainties, which distinguishes it from other look-ahead approaches. Here, a new PF is included in the Lagrange-Euler formulation in a natural way, accounting for the vehicle dynamics. The resulting accelerations are translated into control inputs that are considered in the estimation process. This leads to the generation of a local trajectory in real time (RT) that fully meets the constraints imposed by the kinematic and dynamic models of the IV. The properties of the method are demonstrated by simulation with MATLAB and C++ applications. Very good performance and execution times are achieved, even in challenging situations. In a scenario with 100 obstacles, a local trajectory is obtained in less than 1 s, which is suitable for RT applications.
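
    As an illustration of the look-ahead idea (not the paper's PFP formulation), the sketch below evaluates a simple potential field against each obstacle's predicted future position, with the position uncertainty, e.g. a Kalman filter standard deviation, inflating the obstacle's region of influence. All gains, the prediction horizon and the point-mass model are assumed values.

    import numpy as np

    def pf_acceleration(p, goal, obstacles, horizon=1.0, k_att=1.0, k_rep=2.0):
        """p, goal: 2-D positions; obstacles: list of (position, velocity, sigma)."""
        acc = k_att * (goal - p)                   # attractive pull toward the goal
        for pos, vel, sigma in obstacles:
            pred = pos + vel * horizon             # predicted obstacle position at the horizon
            d = p - pred
            dist = np.linalg.norm(d)
            influence = 2.0 + 3.0 * sigma          # uncertainty widens the repulsive region
            if 1e-6 < dist < influence:
                acc += k_rep * (1.0 / dist - 1.0 / influence) * d / dist**2
        return acc

    p = np.array([0.0, 0.0])
    goal = np.array([10.0, 0.0])
    obstacles = [(np.array([2.5, 0.3]), np.array([-0.5, 0.0]), 0.3)]   # position, velocity, std. dev.
    print("commanded acceleration:", np.round(pf_acceleration(p, goal, obstacles), 3))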

    Distance computation between non-holonomic motions with constant accelerations

    A method is presented for computing the distance between two moving robots, or between a mobile robot and a dynamic obstacle, whose motions are linear or arc-like with constant accelerations. This distance is obtained without stepping or discretizing the motions of the robots or obstacles. The robots and obstacles are modelled by convex hulls. The technique obtains the future instant in time at which two moving objects will be at their minimum translational distance, i.e. at their minimum separation or maximum penetration (if they will collide). This distance and the corresponding instant are computed in parallel. The method is intended to be run each time new information from the world is received and can therefore be used for generating collision-free trajectories for non-holonomic mobile robots. This work was partially funded by the Spanish government CICYT projects DPI2010-20814-C02-02 and DPI2011-28507-C02-01.
    Bernabeu Soler, E. J.; Valera Fernández, Á.; Gómez Moreno, J. (2013). Distance computation between non-holonomic motions with constant accelerations. International Journal of Advanced Robotic Systems, 10:1-15. doi:10.5772/56760
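
    The core computation can be illustrated for point centres: with constant accelerations the relative position is quadratic in time, so the squared distance is a quartic whose minimum follows from the real roots of a cubic, with no time-stepping. Treating the objects as points rather than the convex hulls used in the paper is a simplification made only for this sketch.

    import numpy as np

    def closest_instant(dp0, dv0, da, t_max=10.0):
        """Relative motion dp(t) = dp0 + dv0*t + 0.5*da*t**2 (constant accelerations).
        Returns (t*, minimum distance) on [0, t_max] without discretizing the motion."""
        a0 = np.asarray(dp0, float)
        a1 = np.asarray(dv0, float)
        a2 = 0.5 * np.asarray(da, float)
        # d/dt |dp(t)|^2 is a cubic in t; its real roots are the candidate instants.
        coeffs = [2 * a2 @ a2, 3 * a1 @ a2, a1 @ a1 + 2 * a0 @ a2, a0 @ a1]
        roots = np.roots(coeffs)
        cands = [0.0, t_max] + [r.real for r in roots
                                if abs(r.imag) < 1e-9 and 0.0 <= r.real <= t_max]
        dist = lambda t: np.linalg.norm(a0 + a1 * t + a2 * t * t)
        t_star = min(cands, key=dist)
        return t_star, dist(t_star)

    # Example: relative motion of a robot decelerating toward an accelerating obstacle.
    t, d = closest_instant(dp0=[5.0, 0.0], dv0=[-2.0, 0.5], da=[0.4, 0.0])
    print(f"closest at t = {t:.2f} s, distance = {d:.2f} m")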

    Towards a Realistic and Self-Contained Biomechanical Model of the Hand


    Implementation of a real time Hough transform using FPGA technology

    This thesis is concerned with the modelling, design and implementation of efficient architectures for performing the Hough Transform (HT) on mega-pixel resolution real-time images using Field Programmable Gate Array (FPGA) technology. Although the HT has been around for many years and a number of algorithms have been developed, it still remains a significant bottleneck in many image processing applications. Although the basic idea of the HT is to locate any curve in an image that can be parameterized in a suitable parameter space (e.g. straight lines, polynomials or circles), the research presented in this thesis focuses only on the location of straight lines in binary images. The HT algorithm uses an accumulator array (accumulator bins) to detect the existence of a straight line in an image. As the image needs to be binarized, a novel generic synchronization circuit for windowing operations was designed to perform edge detection. An edge detection method of special interest, the Canny method, is used, and its hardware design and implementation are presented in this thesis. As each image pixel can be processed independently, parallel processing can be performed. However, the main disadvantage of the HT is its large storage and computational requirements. This thesis presents new, state-of-the-art hardware implementations that minimize the computational cost, using the Hybrid-Logarithmic Number System (Hybrid-LNS) to calculate the HT for fixed bit-width architectures. It is shown that using the Hybrid-LNS the computational cost is minimized while the precision of the HT algorithm is maintained. Advances in FPGA technology now make it possible to implement functions such as the HT in reconfigurable fabrics. Methods for storing large arrays on FPGAs are presented, allowing data from a 1024 x 1024 pixel camera to be processed at rates of up to 25 frames per second.
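
    For reference, the software form of the accumulator-based line Hough transform that the thesis maps to hardware is sketched below: each edge pixel votes for every (rho, theta) line passing through it, and peaks in the accumulator correspond to detected lines. The image size and quantization are illustrative, and the Hybrid-LNS arithmetic of the hardware version is not modelled.

    import numpy as np

    def hough_lines(edges, n_theta=180):
        """Line Hough transform of a binary edge image; returns (accumulator, thetas, rho offset)."""
        h, w = edges.shape
        diag = int(np.ceil(np.hypot(h, w)))            # maximum possible |rho|
        thetas = np.deg2rad(np.arange(n_theta))
        acc = np.zeros((2 * diag, n_theta), dtype=np.uint32)   # accumulator bins
        ys, xs = np.nonzero(edges)                     # coordinates of edge pixels
        for x, y in zip(xs, ys):
            # rho = x*cos(theta) + y*sin(theta); offset by diag to index the array.
            rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
            acc[rhos, np.arange(n_theta)] += 1         # one vote per theta
        return acc, thetas, diag

    # Tiny example: a diagonal line of edge pixels.
    img = np.zeros((64, 64), dtype=np.uint8)
    for i in range(64):
        img[i, i] = 1
    acc, thetas, diag = hough_lines(img)
    r, t = np.unravel_index(np.argmax(acc), acc.shape)
    print(f"strongest line: rho = {r - diag}, theta = {np.degrees(thetas[t]):.0f} deg")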