
    A quantitative taxonomy of human hand grasps

    Background: Proper modeling of human grasping and hand movements is fundamental for robotics, prosthetics, physiology and rehabilitation. The taxonomies of hand grasps proposed in the scientific literature so far are based on qualitative analyses of the movements and are thus usually not quantitatively justified. Methods: This paper presents, to the best of our knowledge, the first quantitative taxonomy of hand grasps based on biomedical data. The taxonomy is based on electromyography and kinematic data recorded from 40 healthy subjects performing 20 unique hand grasps. For each subject, a set of hierarchical trees is computed for several signal features. The trees are then combined, first into modality-specific (i.e. muscular and kinematic) taxonomies of hand grasps and then into a general quantitative taxonomy of hand movements. The modality-specific taxonomies provide similar results despite describing different parameters of hand movements, one muscular and the other kinematic. Results: The general taxonomy merges the kinematic and muscular descriptions into a comprehensive hierarchical structure. The results clarify what has been proposed in the literature so far and partially confirm the qualitative parameters used to create previous taxonomies of hand grasps. According to the results, hand movements can be divided into five categories defined by overall grasp shape, finger positioning and muscular activation. Part of the results appears qualitatively in accordance with previous descriptions of kinematic hand grasping synergies. Conclusions: The taxonomy of hand grasps proposed in this paper clarifies with quantitative measurements what has previously been proposed on a qualitative basis, and thus has a potential impact on several scientific fields.
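    As a rough illustration of how such a taxonomy can be derived, the sketch below clusters per-grasp feature vectors into a hierarchical tree with SciPy and cuts it into five categories; the random feature matrix, grasp labels, and the choice of average linkage are illustrative assumptions, not the paper's actual pipeline.

    # Minimal sketch: building a hierarchical grasp taxonomy from feature vectors.
    # The feature matrix and grasp names below are placeholders, not the paper's data.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    grasps = [f"grasp_{i}" for i in range(20)]   # 20 hand grasps, as in the study
    features = rng.normal(size=(20, 16))         # one feature vector per grasp (e.g. mean EMG per channel)

    # Agglomerative clustering yields the hierarchical tree (dendrogram) of grasps.
    tree = linkage(features, method="average", metric="euclidean")

    # Cutting the tree at a chosen level yields a small number of movement
    # categories, analogous to the five categories reported in the paper.
    categories = fcluster(tree, t=5, criterion="maxclust")
    for name, cat in zip(grasps, categories):
        print(name, "-> category", cat)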

    Enhancing Generalizable 6D Pose Tracking of an In-Hand Object with Tactile Sensing

    While holding and manipulating an object, humans track the object's state through vision and touch in order to achieve complex tasks. Most robotics research, however, perceives object states from visual signals alone, greatly limiting robotic manipulation abilities. This work presents a tactile-enhanced generalizable 6D pose tracking design named TEG-Track to track previously unseen in-hand objects. TEG-Track extracts tactile kinematic cues of an in-hand object from consecutive tactile sensing signals. These cues are incorporated into a geometric-kinematic optimization scheme to enhance existing generalizable visual trackers. To test our method in real scenarios and enable future studies on generalizable visual-tactile tracking, we collect a real visual-tactile in-hand object pose tracking dataset. Experiments show that TEG-Track significantly improves state-of-the-art generalizable 6D pose trackers in both synthetic and real cases.
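    The abstract does not spell out the optimization, but the core idea can be sketched as follows: propagate the previous pose with a tactile-estimated twist, then blend that prediction with the visual tracker's estimate. The constant-twist motion model and the fixed blending weight are assumptions for illustration; TEG-Track's actual geometric-kinematic optimization is richer.

    # Minimal sketch: blending a visual 6D pose estimate with a tactile kinematic cue.
    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def propagate(prev_pos, prev_rot, lin_vel, ang_vel, dt):
        """Predict the next pose from a tactile-estimated twist (lin_vel, ang_vel)."""
        pos = prev_pos + lin_vel * dt
        rot = R.from_rotvec(ang_vel * dt) * prev_rot
        return pos, rot

    def fuse(vis_pos, vis_rot, pred_pos, pred_rot, w=0.5):
        """Blend visual and tactile-predicted poses (w = trust in the visual tracker)."""
        pos = w * vis_pos + (1 - w) * pred_pos
        delta = vis_rot * pred_rot.inv()       # relative rotation between the two estimates
        rot = R.from_rotvec(w * delta.as_rotvec()) * pred_rot
        return pos, rot

    prev_pos, prev_rot = np.zeros(3), R.identity()
    pred_pos, pred_rot = propagate(prev_pos, prev_rot,
                                   lin_vel=np.array([0.01, 0, 0]),
                                   ang_vel=np.array([0, 0, 0.1]), dt=0.1)
    vis_pos, vis_rot = np.array([0.0012, 0, 0]), R.from_rotvec([0, 0, 0.011])
    pos, rot = fuse(vis_pos, vis_rot, pred_pos, pred_rot)
    print(pos, rot.as_rotvec())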

    Physics-based motion planning for grasping and manipulation

    This thesis develops a series of knowledge-oriented, physics-based motion planning algorithms for grasping and manipulation in cluttered and uncertain environments. The main idea is to use high-level knowledge-based reasoning to define the manipulation constraints that determine how the robot should interact with the objects in the environment. These interactions are modeled by incorporating the physics-based model of rigid-body dynamics into planning. The first part of the thesis focuses on techniques for integrating knowledge with physics-based motion planning. Knowledge is represented in terms of ontologies, and a Prolog-based inference process is introduced to derive the manipulation constraints. These constraints are used in the state-validation procedure of sampling-based kinodynamic motion planners, whose state propagator is replaced by a physics engine that takes care of the kinodynamic and physics-based constraints. To make the interactions human-like, a low-level physics-based reasoning process is introduced that dynamically varies the control bounds by evaluating the physical properties of the objects; as a result, power-efficient motion plans are obtained. Furthermore, a framework is presented to incorporate linear temporal logic within physics-based motion planning to handle complex temporal goals. The second part of the thesis develops physics-based motion planning approaches for cluttered and uncertain environments. Uncertainty is considered in 1) the objects' poses, due both to sensing and to complex robot-object or object-object interactions; 2) the contact dynamics (such as the friction coefficient); and 3) the robot controls. The solution is framed with sampling-based kinodynamic motion planners that solve the problem in open loop, i.e., they account for uncertainty while planning and compute a solution that successfully moves the robot from the start to the goal configuration even when the system is uncertain. To implement the above approaches, a knowledge-oriented physics-based motion planning tool is presented, developed by extending The Kautham Project, a C++ tool for sampling-based motion planning. Finally, current research challenges and future directions for extending these approaches are discussed.
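    A minimal sketch of the core mechanism follows: the planner's state propagator is a physics step, and a state is accepted only if it respects the knowledge-derived constraints across noisy control samples (the open-loop robustness idea). The physics_step and violates_constraints callbacks are hypothetical placeholders, not The Kautham Project's actual C++ interfaces.

    # Minimal sketch: a physics engine as the state propagator of a sampling-based
    # kinodynamic planner, with an open-loop robustness check under control noise.
    import numpy as np

    rng = np.random.default_rng(0)

    def physics_step(state, control, dt):
        """Placeholder rigid-body propagation: state' = f(state, control)."""
        return state + control * dt           # a real planner would call the engine

    def violates_constraints(state):
        """Placeholder for knowledge-derived manipulation constraints."""
        return np.any(np.abs(state) > 1.0)    # e.g. keep objects inside the workspace

    def propagate_robust(state, control, dt, n_particles=20, noise=0.05):
        """Propagate under control uncertainty; valid only if all particles are."""
        outcomes = []
        for _ in range(n_particles):
            noisy = control + rng.normal(scale=noise, size=control.shape)
            nxt = physics_step(state, noisy, dt)
            if violates_constraints(nxt):
                return None                   # reject: not robust to uncertainty
            outcomes.append(nxt)
        return np.mean(outcomes, axis=0)      # nominal successor state

    print(propagate_robust(np.zeros(3), np.array([0.5, 0.1, 0.0]), dt=0.1))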

    Advancing Brain-Computer Interface System Performance in Hand Trajectory Estimation with NeuroKinect

    Brain-computer interface (BCI) technology enables direct communication between the brain and external devices, allowing individuals to control their environment using brain signals. However, existing BCI approaches face three critical challenges that hinder their practicality and effectiveness: a) time-consuming preprocessing algorithms, b) inappropriate loss functions, and c) unintuitive hyperparameter settings. To address these limitations, we present NeuroKinect, a deep-learning model for accurate reconstruction of hand kinematics from electroencephalography (EEG) signals. NeuroKinect is trained on the Grasp and Lift (GAL) task data with a minimal preprocessing pipeline, improving computational efficiency. A notable improvement introduced by NeuroKinect is a novel loss function, denoted $\mathcal{L}_{\text{Stat}}$, which addresses the discrepancy between correlation and mean squared error in hand kinematics prediction. Furthermore, our study emphasizes the scientific intuition behind parameter selection: we analyze the spatial and temporal dynamics of the motor movement task using event-related potentials and brain source localization (BSL), which provides valuable insight into optimal parameter selection and improves the overall performance and accuracy of NeuroKinect. Our model demonstrates strong correlations between predicted and actual hand movements, with mean Pearson correlation coefficients of 0.92 (±0.015), 0.93 (±0.019), and 0.83 (±0.018) for the X, Y, and Z dimensions, respectively. The precision of NeuroKinect is evidenced by low mean squared errors (MSE) of 0.016 (±0.001), 0.015 (±0.002), and 0.017 (±0.005) for the X, Y, and Z dimensions, respectively.
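    The abstract does not give the exact form of $\mathcal{L}_{\text{Stat}}$. One plausible reading, sketched below, combines the mean squared error with a (1 - Pearson correlation) term per output dimension, so that minimizing error does not degrade correlation. The exact formula and the weight alpha are assumptions, not taken from the paper.

    # Hypothetical sketch of a correlation-aware loss in the spirit of L_Stat:
    # MSE alone can be minimized by flat predictions with poor correlation, so a
    # (1 - Pearson r) term is added per output dimension.
    import torch

    def stat_loss(pred, target, alpha=0.5, eps=1e-8):
        """pred, target: (batch, 3) hand positions for the X, Y, Z dimensions."""
        mse = torch.mean((pred - target) ** 2)
        pc = pred - pred.mean(dim=0)
        tc = target - target.mean(dim=0)
        r = (pc * tc).sum(dim=0) / (pc.norm(dim=0) * tc.norm(dim=0) + eps)
        return alpha * mse + (1 - alpha) * (1 - r).mean()

    pred = torch.randn(64, 3, requires_grad=True)
    target = torch.randn(64, 3)
    loss = stat_loss(pred, target)
    loss.backward()
    print(float(loss))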

    Goal-Directed Planning for Habituated Agents by Active Inference Using a Variational Recurrent Neural Network

    It is crucial to ask how agents can achieve goals by generating action plans using only partial models of the world acquired through habituated sensory-motor experience. Although many existing robotics studies use a forward-model framework, such models have generalization issues when the degrees of freedom are high. The current study shows that the predictive coding (PC) and active inference (AIF) frameworks, which employ a generative model, can generalize better by learning a prior distribution in a low-dimensional latent state space that represents probabilistic structure extracted from well-habituated sensory-motor trajectories. In the proposed model, learning is carried out by inferring optimal latent variables, as well as synaptic weights, that maximize the evidence lower bound, while goal-directed planning is accomplished by inferring latent variables that maximize the estimated lower bound. The proposed model was evaluated on both simple and complex robotic tasks in simulation, demonstrating sufficient generalization from limited training data when an intermediate value is set for a regularization coefficient. Furthermore, comparative simulations show that the proposed model outperforms a conventional forward model in goal-directed planning, since the learned prior confines the search for motor plans to the range of habituated trajectories.
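    A minimal sketch of planning-as-inference in this setting: with the trained decoder's weights frozen, gradient descent adjusts only the latent variable so the decoded trajectory reaches the goal while staying close to the learned prior. The tiny decoder, the standard-normal prior, and the regularization weight are illustrative assumptions, not the paper's architecture.

    # Minimal sketch of goal-directed planning by latent inference.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    decoder = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 10 * 2))
    for p in decoder.parameters():
        p.requires_grad_(False)              # planning updates only z, not weights

    goal = torch.tensor([0.8, -0.3])         # desired final 2D end-effector position
    z = torch.zeros(4, requires_grad=True)
    opt = torch.optim.Adam([z], lr=0.1)

    for step in range(200):
        traj = decoder(z).view(10, 2)        # decoded 10-step motor trajectory
        goal_err = ((traj[-1] - goal) ** 2).sum()
        prior_reg = 0.01 * (z ** 2).sum()    # pull toward a standard-normal prior
        loss = goal_err + prior_reg          # negated estimated lower bound
        opt.zero_grad()
        loss.backward()
        opt.step()

    print("final position:", decoder(z).view(10, 2)[-1].detach())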

    Dynamics of neurological and behavioural recovery after stroke


    Image Understanding at the GRASP Laboratory

    Research in the GRASP Laboratory has two main themes: parameterized multi-dimensional segmentation and robust decision making under uncertainty. The multi-dimensional approach interweaves segmentation with representation: the data are explained as the best fit of a limited number of parametric primitives based on the physical and geometric properties of objects. We use primitives at the volumetric level, the surface level, and the occluding-contour level, and combine the results. Robust decision making allows us to combine data from multiple sensors. Sensor measurements have bounds derived from the physical limitations of the sensors, and we use this information without making a priori assumptions about the distributions within those intervals or about the probability of a given result.
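    A small sketch of this distribution-free idea: each sensor reports a bounded interval guaranteed by its physical limits to contain the true value, and fusion intersects the intervals, with an empty intersection flagging inconsistent sensors. The interval representation and the consistency check are illustrative assumptions, not the laboratory's specific method.

    # Minimal sketch of distribution-free sensor fusion: the fused estimate is the
    # intersection of the sensors' bounding intervals. No assumption is made about
    # how values are distributed inside each interval.
    def fuse_intervals(intervals):
        """intervals: list of (low, high) bounds, one per sensor."""
        low = max(lo for lo, _ in intervals)
        high = min(hi for _, hi in intervals)
        if low > high:
            raise ValueError("inconsistent sensors: empty intersection")
        return low, high

    # Three sensors measuring the same depth (meters), each with its own bound.
    print(fuse_intervals([(1.0, 1.6), (1.2, 1.8), (0.9, 1.5)]))  # -> (1.2, 1.5)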