
    Intuitive human interaction with an arm robot for severely handicapped people - A one click approach.

    Assistance to disabled people is still a domain in which much progress needs to be made. The more severe the handicap, the more complex the assistive devices become, which calls for greater effort to simplify the interaction between the user and these devices. In this document we propose a solution that reduces the interaction between a user and a robotic arm. The system is equipped with two cameras: one is fixed on top of the wheelchair (eye-to-hand) and the other is mounted on the end effector of the robotic arm (eye-in-hand). The two cameras cooperate to reduce the grasping task to a single click. The method is generic: it requires no markers on the object, no geometric model, and no database. It thus provides a tool applicable to any kind of graspable object. The paper first gives an overview of existing grasping tools for disabled people and then proposes a novel approach toward intuitive human-machine interaction.
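
    A minimal sketch of how such a one-click request could be turned into a motion command, assuming a classic image-based visual-servoing law for a single point feature; the function name, gain, and pinhole parameters below are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: one-click target -> eye-in-hand camera velocity via classic
# image-based visual servoing (IBVS) for a single point feature.
# All names, the gain, and the intrinsics are illustrative assumptions.
import numpy as np

def ibvs_point_velocity(click_uv, current_uv, Z, fx, fy, cx, cy, gain=0.5):
    """Return a 6-vector camera velocity (vx, vy, vz, wx, wy, wz).

    click_uv   : (u, v) pixel the user clicked (desired feature position)
    current_uv : (u, v) pixel where the target currently appears
    Z          : rough estimate of the target depth along the optical axis [m]
    fx, fy, cx, cy : pinhole intrinsics of the eye-in-hand camera
    """
    # Normalised image coordinates of the current and desired feature
    x, y = (current_uv[0] - cx) / fx, (current_uv[1] - cy) / fy
    xd, yd = (click_uv[0] - cx) / fx, (click_uv[1] - cy) / fy
    error = np.array([x - xd, y - yd])

    # Interaction (image Jacobian) matrix of a point feature at depth Z
    L = np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,     -(1 + x**2), y],
        [0.0,      -1.0 / Z, y / Z, 1 + y**2,  -x * y,      -x],
    ])

    # Proportional control law: v = -gain * pinv(L) * error
    return -gain * np.linalg.pinv(L) @ error
```

    In a two-camera setup like the one described, the eye-to-hand view would supply the clicked target while the eye-in-hand camera tracks it during the approach.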

    Real-Time Mapping Using Stereoscopic Vision Optimization

    This research focuses on efficient methods of generating 2D maps from stereo vision in real time. Instead of attempting to locate edges between objects, we assume that the representative surfaces of objects in a view provide enough information to generate a map while taking less time to locate during processing. Since all real-time vision processing endeavors are extremely computationally intensive, numerous optimization techniques are applied to allow for a real-time application: horizontal spike smoothing for post-disparity noise, masks to focus on close-proximity objects, melding for object synthesis, and rectangular fitting for object extraction under a planar assumption. Additionally, traditional image transformation mechanisms such as rotation, translation, and scaling are integrated. Results from our research are an encouraging 10 Hz with no vision post-processing and accuracy up to 11 feet. Finally, vision mapping results are compared to simultaneously collected sonar data in three unique experimental settings.
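
    As a rough illustration of the pipeline sketched in the abstract, the snippet below computes a block-matching disparity map, applies a horizontal median filter as a stand-in for spike smoothing, and thresholds disparity to mask close-proximity objects; the parameter values and helper name are assumptions, not the authors' code.

```python
# Hedged sketch: stereo disparity -> horizontal spike smoothing -> close-object
# mask. Parameter values and the function name are illustrative assumptions.
import cv2
import numpy as np
from scipy.ndimage import median_filter

def close_object_mask(left_gray, right_gray, min_disparity=32.0):
    # Block-matching disparity; OpenCV returns fixed-point values scaled by 16.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # Horizontal spike smoothing: a 1x5 median along each scan line removes
    # isolated disparity spikes without blurring vertical object boundaries.
    smoothed = median_filter(disparity, size=(1, 5))

    # Large disparity means small depth, so thresholding keeps nearby objects.
    return smoothed >= min_disparity
```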

    Image-Guided Robot-Assisted Techniques with Applications in Minimally Invasive Therapy and Cell Biology

    There are several situations where tasks can be performed better robotically than manually. Among these are situations (a) where high accuracy and robustness are required, (b) where difficult or hazardous working conditions exist, and (c) where very large or very small motions or forces are involved. Recent advances in technology have resulted in smaller robots with higher accuracy and reliability. As a result, robotics is finding more and more applications in biomedical engineering. Medical robotics and cell micro-manipulation are two of these applications, involving interaction with delicate living organs at very different scales. The availability of a wide range of imaging modalities, from ultrasound and X-ray fluoroscopy to high-magnification optical microscopes, makes it possible to use imaging as a powerful means to guide and control robot manipulators. This thesis comprises three parts, each focusing on one application of image-guided robotics in biomedical engineering. Vascular catheterization: a robotic system was developed to insert a catheter through the vasculature and guide it to a desired point via visual servoing. The system provides shared control with the operator to perform a task semi-automatically or through master-slave control; it controls the catheter tip with high accuracy while reducing X-ray exposure to the clinicians and providing a more ergonomic situation for the cardiologists. Cardiac catheterization: a master-slave robotic system was developed to accurately control a steerable catheter to touch and ablate faulty regions on the inner walls of a beating heart in order to treat arrhythmia. The system facilitates making contact with a target point in a beating heart chamber through master-slave control with coordinated visual feedback. Live neuron micro-manipulation: a microscope image-guided robotic system was developed to provide shared control over multiple micro-manipulators to touch cell membranes in order to perform patch-clamp electrophysiology. Image-guided robot-assisted techniques with master-slave control were implemented in each case to provide shared control between a human operator and a robot. The results show increased accuracy and reduced operation time in all three cases.
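
    The shared-control theme common to the three systems can be illustrated with a simple blend of the operator's scaled master motion and an automatic image-based correction; the blending weight, motion scale, and function name below are assumptions, not the thesis implementation.

```python
# Hedged sketch of one shared-control step: blend the operator's scaled hand
# motion with an automatic correction that pulls the instrument tip toward the
# image target. Weights, scales, and names are illustrative assumptions.
import numpy as np

def shared_control_step(master_delta_mm, tip_error_px, px_to_mm,
                        motion_scale=0.2, autonomy_weight=0.5):
    """Return the incremental motion command sent to the slave [mm].

    master_delta_mm : operator hand displacement since the last cycle [mm]
    tip_error_px    : (target - tip) error measured in the image [px]
    px_to_mm        : calibration factor from image pixels to slave millimetres
    """
    operator_cmd = motion_scale * np.asarray(master_delta_mm, dtype=float)
    autonomous_cmd = px_to_mm * np.asarray(tip_error_px, dtype=float)
    return (1.0 - autonomy_weight) * operator_cmd + autonomy_weight * autonomous_cmd
```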

    Robot Assisted Object Manipulation for Minimally Invasive Surgery

    Robotic systems have an increasingly important role in facilitating minimally invasive surgical treatments. In robot-assisted minimally invasive surgery, surgeons remotely control instruments from a console to perform operations inside the patient. However, despite the advanced technological status of surgical robots, fully autonomous systems with decision-making capabilities are not yet available. In 2017, Yang et al. proposed a structure to classify research efforts toward the autonomy achievable with surgical robots. Six levels were identified: no autonomy, robot assistance, task autonomy, conditional autonomy, high autonomy, and full autonomy. All commercially available platforms in robot-assisted surgery are still at level 0 (no autonomy). Although increasing the level of autonomy remains an open challenge, its adoption could introduce multiple benefits, such as decreasing surgeons’ workload and fatigue and delivering a consistent quality of procedures. Ultimately, allowing the surgeons to interpret the ample and intelligent information from the system will enhance the surgical outcome and positively reflect on both patients and society. Three main aspects are required to introduce automation into surgery: the surgical robot must move with high precision, have motion planning capabilities, and understand the surgical scene. Besides these main factors, depending on the type of surgery, other aspects such as compliance and stiffness might also play a fundamental role. This thesis addresses three technological challenges encountered when trying to achieve the aforementioned goals, in the specific case of robot-object interaction: first, how to overcome the inaccuracy of cable-driven systems when executing fine and precise movements; second, how to plan different tasks in dynamically changing environments; and lastly, how the understanding of a surgical scene can be used to solve more than one manipulation task. To address the first challenge, a control scheme relying on accurate calibration is implemented to execute the pick-up of a surgical needle. Regarding the planning of surgical tasks, two approaches are explored: one is learning from demonstration to pick and place a surgical object, and the other is using a gradient-based approach to trigger a smoother object-repositioning phase during intraoperative procedures. Finally, to improve scene understanding, this thesis focuses on developing a simulation environment where multiple tasks can be learned based on the surgical scene and then transferred to the real robot. Experiments proved that automation of the pick-and-place task for different surgical objects is possible. The robot successfully and autonomously picked up a suturing needle, positioned a surgical device for intraoperative ultrasound scanning, and manipulated soft tissue for intraoperative organ retraction. Although automation of surgical subtasks has been demonstrated in this work, several challenges remain open, such as the ability of the developed algorithms to generalise across different environmental conditions and different patients.
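
    As a hedged illustration of the gradient-triggered repositioning idea, the sketch below fires a repositioning phase when a tracked cost starts growing too quickly and then generates a smooth minimum-jerk trajectory toward the new pose; the threshold, the jerk-minimising profile, and the names are assumptions, not the thesis code.

```python
# Hedged sketch: gradient-based trigger plus a smooth repositioning trajectory.
# Threshold, step count, and names are illustrative assumptions.
import numpy as np

def should_reposition(cost_history, grad_threshold=0.05):
    # Trigger when the cost of keeping the current pose rises quickly,
    # approximated by a finite difference over the last two samples.
    if len(cost_history) < 2:
        return False
    return (cost_history[-1] - cost_history[-2]) > grad_threshold

def minimum_jerk(start, goal, steps=100):
    # Minimum-jerk interpolation: smooth profile with zero velocity and
    # acceleration at both ends, often used for gentle repositioning motions.
    start, goal = np.asarray(start, dtype=float), np.asarray(goal, dtype=float)
    s = np.linspace(0.0, 1.0, steps)
    profile = 10 * s**3 - 15 * s**4 + 6 * s**5
    return start + profile[:, None] * (goal - start)
```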

    Visual Perception For Robotic Spatial Understanding

    Humans understand the world through vision without much effort. We perceive the structure, objects, and people in the environment and pay little direct attention to most of it, until it becomes useful. Intelligent systems, especially mobile robots, have no such biologically engineered vision mechanism to take for granted. In contrast, we must devise algorithmic methods of taking raw sensor data and converting it to something useful very quickly. Vision is such a necessary part of building a robot or any intelligent system that is meant to interact with the world that it is somewhat surprising we don't have off-the-shelf libraries for this capability. Why is this? The simple answer is that the problem is extremely difficult. There has been progress, but the current state of the art is impressive and depressing at the same time. We now have neural networks that can recognize many objects in 2D images, in some cases performing better than a human. Some algorithms can also provide bounding boxes or pixel-level masks to localize the object. We have visual odometry and mapping algorithms that can build reasonably detailed maps over long distances with the right hardware and conditions. On the other hand, we have robots with many sensors and no efficient way to compute their relative extrinsic poses for integrating the data in a single frame. The same networks that produce good object segmentations and labels in a controlled benchmark still miss obvious objects in the real world and have no mechanism for learning on the fly while the robot is exploring. Finally, while we can detect pose for very specific objects, we don't yet have a mechanism that detects pose that generalizes well over categories or that can describe new objects efficiently. We contribute algorithms in four of the areas mentioned above. First, we describe a practical and effective system for calibrating many sensors on a robot with up to 3 different modalities. Second, we present our approach to visual odometry and mapping that exploits the unique capabilities of RGB-D sensors to efficiently build detailed representations of an environment. Third, we describe a 3-D over-segmentation technique that utilizes the models and ego-motion output in the previous step to generate temporally consistent segmentations with camera motion. Finally, we develop a synthesized dataset of chair objects with part labels and investigate the influence of parts on RGB-D based object pose recognition using a novel network architecture we call PartNet.
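
    A minimal sketch of the frame-integration problem the calibration work targets: back-projecting an RGB-D depth image into a point cloud and mapping it into a common robot frame with a calibrated extrinsic transform. The function names and the assumption of known intrinsics are illustrative, not the author's implementation.

```python
# Hedged sketch: depth image -> camera-frame point cloud -> common robot frame.
# Intrinsics and the extrinsic transform are assumed to be known/calibrated.
import numpy as np

def depth_to_points(depth_m, fx, fy, cx, cy):
    """Back-project a depth image (metres) to an Nx3 point cloud (camera frame)."""
    v, u = np.indices(depth_m.shape)          # pixel row/column grids
    z = depth_m.ravel()
    valid = z > 0                             # drop missing depth readings
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    return np.stack([x, y, z], axis=1)[valid]

def to_robot_frame(points_cam, T_robot_cam):
    """Apply a 4x4 extrinsic transform (robot <- camera) to the point cloud."""
    homogeneous = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (T_robot_cam @ homogeneous.T).T[:, :3]
```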

    Autonomous model building using vision and manipulation

    It is often the case that robotic systems require models in order to successfully control themselves and to interact with the world. Models take many forms and include kinematic models to plan motions, dynamics models to understand the interaction of forces, and models of 3D geometry to check for collisions, to name but a few. Traditionally, models are provided to the robotic system by the designers that build the system. However, for long-term autonomy it becomes important for the robot to be able to build and maintain models of itself, and of objects it might encounter. In this thesis, the argument for enabling robotic systems to autonomously build models is advanced and explored. The main contribution of this research is to show how a layered approach can be taken to building models. Thus a robot, starting with a limited amount of information, can autonomously build a number of models, including a kinematic model, which describes the robot’s body and allows it to plan and perform future movements. Key to the incremental, autonomous approach is the use of exploratory actions. These are actions that the robot can perform in order to gain some more information, either about itself or about an object with which it is interacting. A method is then presented whereby a robot, after being powered on, can home its joints using just vision, i.e. traditional methods such as absolute encoders or limit switches are not required. The ability to interact with objects in order to extract information is one of the main advantages that a robotic system has over a purely passive system when attempting to learn about or build models of objects. In light of this, the next contribution of this research is to look beyond the robot’s body and to present methods with which a robot can autonomously build models of objects in the world around it. The first class of objects examined is flat-pack cardboard boxes, a class of articulated objects with a number of interesting properties. It is shown how exploratory actions can be used to build a model of a flat-pack cardboard box and to locate any hinges the box may have. Specifically, it is shown how, when interacting with an object, a robot can combine haptic feedback from force sensors with visual feedback from a camera to obtain more information from an object than would be possible using just a single sensor modality. The final contribution of this research is to present a series of exploratory actions for a robotic text-reading system that allow text to be found and read from an object. The text-reading system highlights how models of objects can take many forms, from a representation of their physical extents to the text that is written on them.
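
    The fusion of haptic and visual feedback during an exploratory push can be sketched as a simple rule over synchronised force and vision logs; the thresholds and helper name below are illustrative assumptions, not the thesis code.

```python
# Hedged sketch: interpret one exploratory push from synchronised force and
# vision measurements. Thresholds and names are illustrative assumptions.
import numpy as np

def classify_push(forces_n, panel_angles_deg,
                  force_limit=5.0, angle_change_limit=2.0):
    """Coarsely label the touched surface after an exploratory push.

    forces_n         : contact-force magnitudes logged during the push [N]
    panel_angles_deg : visually tracked panel orientation during the push [deg]
    """
    force_rose = float(np.max(forces_n)) > force_limit
    panel_moved = abs(panel_angles_deg[-1] - panel_angles_deg[0]) > angle_change_limit

    if panel_moved and not force_rose:
        return "articulated"   # panel swings freely: likely a hinge
    if force_rose and not panel_moved:
        return "rigid"         # surface resists the push without moving
    return "inconclusive"      # ambiguous: try a different exploratory action
```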

    Interactions Between Humans and Robots


    Advances in Human-Robot Interaction

    Rapid advances in the field of robotics have made it possible to use robots not just in industrial automation but also in entertainment, rehabilitation, and home service. Since robots will likely affect many aspects of human existence, fundamental questions of human-robot interaction must be formulated and, if at all possible, resolved. Some of these questions are addressed in this collection of papers by leading HRI researchers.