Design and experimental evaluation of a new modular underactuated multi-fingered robot hand
© IMechE 2020. In this paper, a modular underactuated multi-fingered robot hand is proposed. The hand can be freely configured with different numbers and arrangements of modular fingers according to the needs of the task. Driving motion is achieved through a rigid screw-and-connecting-rod structure. A finger-connecting mechanism on the palm accommodates the modular fingers' installation, drive, rotation, and sensor connections. The fingertips are made of hollow rubber to improve grasp stability. The design of the hand and an analysis of its kinematics and grasping process are described in detail. Finally, a prototype is developed and a grasping test is carried out. Experimental results demonstrate that the proposed modular structure is sound, and that the hand's adaptability and flexibility meet the requirements of various grasping modes in practice.
Biomimetic grasp planning for cortical control of a robotic hand
In this paper we outline a grasp planning system designed to augment the cortical control of a prosthetic arm and hand. A key aspect of this system is its ability to combine online user input and autonomous planning to enable the execution of stable grasping tasks. While user input can ultimately be of any modality, the system is designed to adapt to partial or noisy information obtained from grasp-related activity in the primate motor cortex. First, principal component analysis is applied to the observed kinematics of physiologic grasping to reduce the dimensionality of hand posture space and simplify the planning task for online use. The planner then accepts control input in this reduced-dimensionality space and uses it as a seed for a hand posture optimization algorithm based on simulated annealing. We present two applications of this algorithm, using data collected from both primate and human subjects during grasping, to demonstrate its ability to synthesize stable grasps from partial control input in real or near-real time.
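The pipeline this abstract describes — PCA over recorded grasp kinematics, then simulated annealing seeded in the reduced space — can be sketched roughly as follows. The synthetic data, the stand-in cost function, and all names and parameters here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset: 500 recorded postures of a 20-DOF hand.
postures = rng.normal(size=(500, 20))

# PCA via SVD of the mean-centred data; keep two components
# as the low-dimensional posture space.
mean = postures.mean(axis=0)
_, _, vt = np.linalg.svd(postures - mean, full_matrices=False)
basis = vt[:2]

def to_joint_space(z):
    """Map a 2-D reduced posture back to the full 20-D joint space."""
    return mean + z @ basis

def grasp_cost(z):
    """Stand-in grasp-quality cost: distance of the posture from a target."""
    target = np.ones(20)
    return np.sum((to_joint_space(z) - target) ** 2)

def anneal(seed, iters=2000, temp0=1.0):
    """Minimal simulated annealing in the reduced space."""
    z, c = seed.copy(), grasp_cost(seed)
    best_z, best_c = z, c
    for k in range(iters):
        temp = temp0 * (1.0 - k / iters) + 1e-9
        cand = z + rng.normal(scale=0.1, size=z.shape)
        cc = grasp_cost(cand)
        # Accept improvements always, worse moves with Boltzmann probability.
        if cc < c or rng.random() < np.exp((c - cc) / temp):
            z, c = cand, cc
            if c < best_c:
                best_z, best_c = z, c
    return best_z, best_c

seed = np.zeros(2)          # e.g. a posture decoded from noisy cortical input
z_opt, c_opt = anneal(seed)
```

In the paper's setting the seed would come from decoded motor-cortex activity rather than zeros, and the cost would measure grasp stability in simulation.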
Humanoid Robots
For many years, humans have tried in every way to recreate the complex mechanisms that form the human body. The task is extremely complicated and the results are not yet fully satisfactory. However, with increasing technological advances grounded in theoretical and experimental research, we have managed, to some extent, to copy or imitate certain systems of the human body. This research is intended not only to create humanoid robots, a great part of them autonomous systems, but also to deepen our knowledge of the systems that form the human body, with possible applications in rehabilitation technology, drawing together studies related not only to robotics but also to biomechanics, biomimetics, and cybernetics, among other areas. This book presents a series of studies inspired by this ideal and carried out by researchers worldwide, who analyze and discuss diverse subjects related to humanoid robots. The contributions explore aspects of robotic hands, learning, language, vision, and locomotion.
Periscope: A Robotic Camera System to Support Remote Physical Collaboration
We investigate how robotic camera systems can offer new capabilities to
computer-supported cooperative work through the design, development, and
evaluation of a prototype system called Periscope. With Periscope, a local
worker completes manipulation tasks with guidance from a remote helper who
observes the workspace through a camera mounted on a semi-autonomous robotic
arm that is co-located with the worker. Our key insight is that the helper, the
worker, and the robot should all share responsibility of the camera view--an
approach we call shared camera control. Using this approach, we present a set
of modes that distribute the control of the camera between the human
collaborators and the autonomous robot depending on task needs. We demonstrate
the system's utility and the promise of shared camera control through a
preliminary study where 12 dyads collaboratively worked on assembly tasks.
Finally, we discuss design and research implications of our work for future
robotic camera systems that facilitate remote collaboration.
Comment: This is a pre-print of the article accepted for publication in PACM HCI and will be presented at CSCW 202
ISMCR 1994: Topical Workshop on Virtual Reality. Proceedings of the Fourth International Symposium on Measurement and Control in Robotics
This symposium on measurement and control in robotics included sessions on: (1) rendering, including tactile perception and applied virtual reality; (2) applications in simulated medical procedures and telerobotics; (3) tracking sensors in a virtual environment; (4) displays for virtual reality applications; (5) sensory feedback including a virtual environment application with partial gravity simulation; and (6) applications in education, entertainment, technical writing, and animation
Reconstruction and recognition of confusable models using three-dimensional perception
Perception is one of the key topics in robotics research. It concerns the processing of external sensor data and its interpretation. The drive toward fully autonomous robots makes it crucial to help them perform tasks more reliably, flexibly, and efficiently. As these platforms obtain more refined manipulation capabilities, they also require expressive and comprehensive environment models: for manipulation and affordance purposes, their models must cover each of the objects present in the world, together with their location, pose, shape, and other aspects.
The aim of this dissertation is to provide a solution to several of the challenges that arise in the object-grasping problem, with the goal of improving the autonomy of the mobile manipulator robot MANFRED-2. Through the analysis and interpretation of 3D perception, this thesis first addresses the localization of supporting planes in the scenario. Since the environment contains many other things apart from the planar surface, the problem in cluttered scenes is solved by means of Differential Evolution, a particle-based evolutionary algorithm that iteratively evolves a population of candidate solutions toward the one that yields the lowest value of the cost function.
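The Differential Evolution step described above can be sketched as follows. The plane parameterization, bounds, and L1 cost here are illustrative assumptions for demonstration, not the dissertation's actual code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic scene: 300 points near the plane z = 0.1x - 0.2y + 0.5, plus clutter.
xy = rng.uniform(-1, 1, size=(300, 2))
z = 0.1 * xy[:, 0] - 0.2 * xy[:, 1] + 0.5 + rng.normal(scale=0.01, size=300)
clutter = rng.uniform(-1, 1, size=(60, 3))
points = np.vstack([np.column_stack([xy, z]), clutter])

def cost(theta):
    """Mean absolute residual of all points w.r.t. the candidate plane."""
    a, b, c = theta
    return np.abs(points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c)).mean()

def differential_evolution(cost, dim=3, pop_size=30, gens=200, F=0.7, CR=0.9):
    """DE/rand/1 with binomial crossover and greedy one-to-one selection."""
    pop = rng.uniform(-1, 1, size=(pop_size, dim))      # initial population
    fitness = np.array([cost(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice(pop_size, size=3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])  # differential mutation
            cross = rng.random(dim) < CR                # binomial crossover mask
            cross[rng.integers(dim)] = True             # keep at least one gene
            trial = np.where(cross, mutant, pop[i])
            f = cost(trial)
            if f <= fitness[i]:                         # replace only if no worse
                pop[i], fitness[i] = trial, f
    best = int(np.argmin(fitness))
    return pop[best], fitness[best]

theta, best_cost = differential_evolution(cost)
```

The L1 cost keeps the fit reasonably robust to the clutter points; the dissertation's formulation may differ in both encoding and cost.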
Since the final purpose of this thesis is to provide valuable information for grasping applications, a complete model reconstructor has been developed. The proposed method offers many features, such as robustness against abrupt rotations, multi-dimensional optimization, feature extensibility, compatibility with other scan matching techniques, management of uncertain information, and an initialization process that reduces convergence time. It has been designed around an evolutionary scan matching optimizer that takes into account the surface features of the object, its global form, and texture and color information.
The last challenge tackled concerns the recognition problem. In order to supply the robot with useful information about the environment, a meta-classifier that efficiently discerns the observed objects has been implemented. It is capable of distinguishing between confusable objects, such as mugs or dishes with similar shapes but different sizes or colors.
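A toy illustration of the underlying idea — objects with nearly identical shape descriptors are disambiguated by size and colour cues — might look like this. The feature layout and the nearest-centroid rule are assumptions for demonstration, not the dissertation's classifier:

```python
import numpy as np

# Feature vector per object: [shape descriptor (2 dims), height_cm, mean_hue].
classes = {
    "mug":        np.array([0.9, 0.4, 10.0, 0.05]),  # reddish mug
    "blue_mug":   np.array([0.9, 0.4, 10.0, 0.60]),  # same shape, blue
    "small_dish": np.array([0.2, 0.8,  3.0, 0.05]),  # same colour, flat
}
names = list(classes)
centroids = np.stack([classes[n] for n in names])

# Scale each feature so shape alone cannot dominate the distance.
scale = centroids.std(axis=0) + 1e-9

def classify(feature):
    """Nearest-centroid decision in the scaled feature space."""
    d = np.linalg.norm((centroids - feature) / scale, axis=1)
    return names[int(np.argmin(d))]

# A mug-shaped observation that is blue: shape ties, colour breaks the tie.
obs = np.array([0.88, 0.41, 9.8, 0.58])
print(classify(obs))   # → blue_mug
```

In the thesis the shape, size, and colour cues would come from the 3D reconstruction rather than hand-set numbers, and the combining rule is a learned meta-classifier rather than nearest centroid.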
The contributions presented in this thesis have been fully implemented and empirically evaluated on the platform. A continuous grasping pipeline has been developed, covering perception through grasp planning and including visual recognition of confusable objects. For that purpose, an indoor environment with several objects on a table is set up near the robot. Items are recognized against a database and, if one is chosen, the robot computes how to grasp it, taking into account the kinematic restrictions of the anthropomorphic
hand and the 3D model for this particular object.
Robot manipulation in human environments
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (p. 211-228).
Human environments present special challenges for robot manipulation. They are often dynamic, difficult to predict, and beyond the control of a robot engineer. Fortunately, many characteristics of these settings can be used to a robot's advantage. Human environments are typically populated by people, and a robot can rely on the guidance and assistance of a human collaborator. Everyday objects exhibit common, task-relevant features that reduce the cognitive load required for the object's use. Many tasks can be achieved through the detection and control of these sparse perceptual features. And finally, a robot is more than a passive observer of the world. It can use its body to reduce its perceptual uncertainty about the world. In this thesis we present advances in robot manipulation that address the unique challenges of human environments. We describe the design of a humanoid robot named Domo, develop methods that allow Domo to assist a person in everyday tasks, and discuss general strategies for building robots that work alongside people in their homes and workplaces.
by Aaron Ladd Edsinger, Ph.D.
A Comprehensive Review of Data-Driven Co-Speech Gesture Generation
Gestures that accompany speech are an essential part of natural and efficient
embodied human communication. The automatic generation of such co-speech
gestures is a long-standing problem in computer animation and is considered an
enabling technology in film, games, virtual social spaces, and for interaction
with social robots. The problem is made challenging by the idiosyncratic and
non-periodic nature of human co-speech gesture motion, and by the great
diversity of communicative functions that gestures encompass. Gesture
generation has seen surging interest recently, owing to the emergence of more
and larger datasets of human gesture motion, combined with strides in
deep-learning-based generative models, that benefit from the growing
availability of data. This review article summarizes co-speech gesture
generation research, with a particular focus on deep generative models. First,
we articulate the theory describing human gesticulation and how it complements
speech. Next, we briefly discuss rule-based and classical statistical gesture
synthesis, before delving into deep learning approaches. We employ the choice
of input modalities as an organizing principle, examining systems that generate
gestures from audio, text, and non-linguistic input. We also chronicle the
evolution of the related training data sets in terms of size, diversity, motion
quality, and collection method. Finally, we identify key research challenges in
gesture generation, including data availability and quality; producing
human-like motion; grounding the gesture in the co-occurring speech in
interaction with other speakers, and in the environment; performing gesture
evaluation; and integration of gesture synthesis into applications. We
highlight recent approaches to tackling the various key challenges, as well as
the limitations of these approaches, and point toward areas of future
development.
Comment: Accepted for EUROGRAPHICS 202