163 research outputs found

    A Multi-Sensorial Hybrid Control for Robotic Manipulation in Human-Robot Workspaces

    Autonomous manipulation in semi-structured environments where human operators can interact is an increasingly common task in robotic applications. This paper describes an intelligent multi-sensorial approach that addresses this task by providing a multi-robot platform with a high degree of autonomy and the capability to perform complex tasks. The proposed sensorial system is composed of a hybrid visual servo control to efficiently guide the robot towards the object to be manipulated, an inertial motion-capture system and an indoor localization system to avoid possible collisions between human operators and robots sharing the same workspace, and a tactile sensing algorithm to correctly manipulate the object. The proposed controller employs the whole multi-sensorial system and combines the measurements of each sensor during the two phases of the robot task: a first phase in which the robot approaches the object to be grasped, and a second phase in which the object is manipulated. In both phases, the unexpected presence of humans is taken into account. The paper also presents the successful results obtained in several experimental setups, which verify the validity of the proposed approach.
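
    A minimal sketch of how such a two-phase, human-aware controller could be organised (this is an illustrative reconstruction, not the paper's implementation; all names, gains and thresholds are assumptions):

        import numpy as np

        SAFE_DISTANCE = 0.5  # metres; hypothetical safety margin around the robot

        def control_step(image_error, tactile_forces, phase, human_distance):
            """Return a command vector and the (possibly updated) task phase."""
            if phase == "approach":
                command = -0.5 * np.asarray(image_error)       # visual-servo style correction
                if np.linalg.norm(image_error) < 1e-2:
                    phase = "manipulation"                     # target reached in the image
            else:
                desired_force = 2.0                            # N, illustrative grasp set-point
                command = 0.1 * (desired_force - np.asarray(tactile_forces))
            if human_distance < SAFE_DISTANCE:
                command = np.zeros_like(command)               # freeze while a human is too close
            return command, phase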

    Control of Redundant Joint Structures Using Image Information During the Tracking of Non-Smooth Trajectories

    Visual information is increasingly used in a large number of applications to guide joint structures. This paper proposes an image-based controller that allows a joint structure to be guided when its number of degrees of freedom is greater than that required for the task. In this case, the controller resolves the redundancy by combining two tasks: the primary task performs the guidance using image information, and the secondary task determines the most adequate posture of the joint structure, resolving the joint redundancy left by the image-space task. The proposed guidance method also employs a smoothing Kalman filter, both to detect the moments when abrupt changes occur in the tracked trajectory and to estimate and compensate for these changes. Furthermore, a direct visual control approach is proposed that integrates the visual information provided by this smoothing Kalman filter, which permits correct tracking when the measurements are noisy. All the contributions are integrated in an application that requires tracking the faces of children with Asperger syndrome.
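
    The primary/secondary task combination described above is usually realised with a null-space projection; a minimal sketch under that standard formulation (the paper's exact controller may differ) is:

        import numpy as np

        def redundant_velocity(J, image_error, q_dot_secondary, gain=1.0):
            """Joint velocities: primary image-space task plus secondary posture task.

            J               : m x n image Jacobian / interaction matrix (n > m, redundant)
            image_error     : m-vector of image-feature errors
            q_dot_secondary : n-vector favouring a preferred joint posture
            """
            J_pinv = np.linalg.pinv(J)
            primary = -gain * J_pinv @ image_error          # drive the image features to their target
            null_proj = np.eye(J.shape[1]) - J_pinv @ J     # project into the primary task's null space
            return primary + null_proj @ q_dot_secondary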

    Haptic Device Design and Teleoperation Control Algorithms for Mobile Manipulators

    The increasing need for teleoperated robotic systems means that mobile platforms (terrestrial, aerial or underwater) with integrated manipulation capabilities, provided e.g. by robotic arms with suitable grasping/manipulation tools, are more and more often used as slave devices. Despite this, research on the teleoperation of robotic systems has mainly focused on the control of either fixed-base manipulators or mobile robots, without considering the integration of these two types of systems in a single device. Such combined robotic devices are usually referred to as mobile manipulators: systems composed of a robotic manipulator and a mobile platform (on which the arm is mounted) whose purpose is to enlarge the manipulator's workspace. The combination of a mobile platform and a serial manipulator creates redundancy: a particular point in space can be reached by moving the manipulator, by moving the mobile platform, or by a combined motion of both, so a synchronized motion of the two devices needs to be addressed. Although haptic devices explicitly oriented to the control of mobile manipulators need to be designed, there are no commercial solutions yet; it is therefore often necessary to control such combined systems with traditional haptic devices not specifically designed for mobile manipulators. The research activity presented in this Ph.D. thesis focuses, in the first place, on the design of a teleoperation control scheme that allows the simultaneous control of both the manipulator and the mobile platform by means of a single haptic device characterized by a fixed base and an open kinematic chain. Secondly, the design of a novel cable-driven haptic device has been addressed; investigating the use of twisted-string actuation in force rendering is the most interesting challenge of the latter activity.
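
    One common way to split a single master command between the arm and the base is to weight it by how close the arm is to its workspace limits; a minimal sketch of that idea (purely illustrative, not the thesis' scheme; names and thresholds are assumptions):

        import numpy as np

        def split_command(v_master, arm_extension, ext_min=0.2, ext_max=0.9):
            """Blend one haptic velocity command between arm and mobile base.

            v_master      : 3-vector translational velocity from the haptic device
            arm_extension : current arm reach, normalised to [0, 1] of its workspace
            """
            # The more extended the arm, the more of the motion the base absorbs.
            w = np.clip((arm_extension - ext_min) / (ext_max - ext_min), 0.0, 1.0)
            v_arm = (1.0 - w) * np.asarray(v_master)
            v_base = w * np.asarray(v_master)
            return v_arm, v_base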

    Dynamic visual servo control of a 4-axis joint tool to track image trajectories during machining complex shapes

    A large part of the new generation of computer numerical control systems has adopted an architecture based on robotic systems. This architecture improves the implementation of many manufacturing processes in terms of flexibility, efficiency, accuracy and speed. This paper presents a 4-axis robot tool based on a joint structure whose primary use is to machine complex shapes in non-contact processes. A new dynamic visual controller is proposed to control the 4-axis joint structure, where image information is used in the control loop to guide the robot tool in the machining task. In addition, this controller eliminates the chaotic joint behaviour that appears when tracking the quasi-repetitive trajectories required in machining processes. Moreover, this robot tool can be coupled to a manipulator robot to form a multi-robot platform for complex manufacturing tasks: the robot tool could machine a piece grasped from the workspace by the manipulator robot, and the manipulator robot could in turn be guided using the visual information provided by the robot tool, yielding an intelligent multi-robot platform controlled by a single camera. This work was funded by the Ministry of Science and Innovation of the Spanish Government through research projects DPI2011-22766 and DPI2012-32390.
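
    For reference, a minimal sketch of a classic image-based visual servoing law that could drive such a joint structure from image features; the paper's dynamic controller works at a lower (dynamic) level and additionally damps the chaotic joint behaviour, which this sketch does not model:

        import numpy as np

        def ibvs_joint_velocity(L, J, feature_error, gain=0.8):
            """Map an image-feature error to joint velocities.

            L             : interaction matrix relating camera twist to feature motion
            J             : robot Jacobian mapping joint velocities to camera twist
            feature_error : current minus desired image features
            """
            camera_twist = -gain * np.linalg.pinv(L) @ feature_error   # desired camera motion
            return np.linalg.pinv(J) @ camera_twist                    # corresponding joint motion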

    Conceptual development from the perspective of a brain-inspired robotic architecture

    Concepts are central to reasoning and intelligent behaviour. Scientific evidence shows that conceptual development is fundamental for the emergence of high-level cognitive phenomena. Here, we model such phenomena in a brain-inspired cognitive robotic model and examine how the robot can learn, categorise, and abstract concepts to voluntarily control behaviour. The paper argues that such competence arises with sufficient conceptual content from physical and social experience; hence senses, motor abilities and language all contribute to a robot's intelligent behaviour. To this aim, we devised a method for attaining concepts which computationally reproduces the steps of the inductive-thinking strategy of the Concept Attainment Model (CAM). Initially, the robot is guided by a tutor through socio-centric cues to attain concepts, and is then tested on using these concepts consistently to solve complex tasks. We demonstrate how the robot uses language to create new categories by abstraction in response to human language-directed instructions. Linguistic stimuli also change the representations of the robot's experiences and generate more complex representations for further concepts. Most notably, this work shows that this competence emerges from the robot's ability to understand the concepts in a way similar to human understanding. Such understanding was also maintained when concepts were expressed in multilingual lexicalisations, showing that labels represent concepts and allowing the model to adapt to unfamiliar contingencies for which it had no directly related experience. The work concludes that language is an essential component of conceptual development, scaffolding the cognitive continuum of a robot from low- to high-level cognitive skills, including its ability to understand.
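
    The inductive step of a Concept Attainment Model style learner can be illustrated very simply: hypothesise a concept as the attributes shared by positive exemplars and absent from negative ones. This sketch is purely illustrative; the robot model in the paper grounds such attributes in sensorimotor and linguistic experience rather than symbolic sets:

        def attain_concept(positive_exemplars, negative_exemplars):
            """Each exemplar is a set of attribute strings; return the hypothesised concept."""
            hypothesis = set.intersection(*positive_exemplars)   # attributes all positives share
            for neg in negative_exemplars:
                hypothesis -= neg                                # drop attributes seen in negatives
            return hypothesis

        # Example (hypothetical attributes): learn "cup" from labelled exemplars
        # attain_concept([{"graspable", "concave", "has_handle"},
        #                 {"graspable", "concave"}],
        #                [{"graspable", "flat"}])  ->  {"concave"}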

    Robot Manipulators

    Robot manipulators are developing more in the direction of industrial robots than of human workers. Recently, the applications of robot manipulators have been broadening, for example Da Vinci as a medical robot, ASIMO as a humanoid robot, and so on. There are many research topics within the field of robot manipulators, e.g. motion planning, cooperation with humans, and fusion with external sensors such as vision, haptics and force. These include both technical problems in industry and theoretical problems in academia. This book is a collection of papers presenting the latest research issues from around the world.

    Design and Control of Robotic Hands (Progettazione e Controllo di Mani Robotiche)

    The application of dexterous robotic hands outside research laboratories has been limited by the intrinsic complexity of these devices, which is directly reflected in an economically unreasonable cost and low overall reliability. The research reported in this thesis shows how the problem of complexity in the design of robotic hands can be tackled by taking advantage of modern technologies (i.e. rapid prototyping), leading to innovative concepts for the design of the mechanical structure and the actuation and sensory systems. The solutions adopted drastically reduce prototyping and production costs and increase reliability by reducing the number of parts required and averaging their individual reliability factors. In order to derive guidelines for the design process, the problem of robotic grasping and manipulation by a dual arm/hand system has been reviewed; in this way, the requirements that should be fulfilled at the hardware level to guarantee successful execution of the task have been highlighted. The contribution of this research on the manipulation-planning side focuses on the redundancy resolution that arises in the execution of a task by a dexterous arm/hand system. In the literature, the problem of coordinating arm and hand during the manipulation of an object has been widely analysed in theory but often demonstrated experimentally only on simplified robotic setups. Our aim is to cover this gap and evaluate the approach experimentally on a complex system such as an anthropomorphic arm/hand system.

    Object Handovers: a Review for Robotics

    This article surveys the literature on human-robot object handovers. A handover is a collaborative joint action in which one agent, the giver, gives an object to another agent, the receiver. The physical exchange starts when the receiver first contacts the object held by the giver and ends when the giver fully releases the object to the receiver. However, important cognitive and physical processes begin before the physical exchange, including reaching an implicit agreement on the location and timing of the exchange. From this perspective, we structure our review into the two main phases delimited by these events: 1) a pre-handover phase, and 2) the physical exchange. We focus our analysis on the two actors (giver and receiver) and report the state of the art of robotic givers (robot-to-human handovers) and robotic receivers (human-to-robot handovers). We report a comprehensive list of qualitative and quantitative metrics commonly used to assess the interaction. While focusing our review on the cognitive level (e.g., prediction, perception, motion planning, learning) and the physical level (e.g., motion, grasping, grip release) of the handover, we also briefly discuss the concepts of safety, social context, and ergonomics. We compare the behaviours displayed during human-to-human handovers with the state of the art of robotic assistants, and identify the major areas of improvement needed for robotic assistants to reach performance comparable to human interactions. Finally, we propose a minimal set of metrics that should be used to enable a fair comparison among approaches.
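
    As a concrete illustration of the grip-release part of the physical exchange, a common strategy in the handover literature is to scale the giver's grip force with the share of the object's weight it still carries; a minimal sketch of that rule (illustrative only, not a specific method from the review; thresholds and force values are assumptions):

        def giver_grip_force(load_on_giver, object_weight, f_max=5.0, f_min=0.3):
            """Return the grip force (N) the robotic giver should apply.

            As the receiver takes up the object's weight, the load measured at the
            giver's wrist drops and the grip force is reduced proportionally, down
            to full release once the load is (near) zero.
            """
            share = max(0.0, min(1.0, load_on_giver / object_weight))
            return 0.0 if share < 0.05 else f_min + (f_max - f_min) * share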