1,340 research outputs found

    Virtual Sensor for Kinematic Estimation of Flexible Links in Parallel Robots

    The control of flexible-link parallel manipulators is still an open area of research, with endpoint trajectory tracking being one of the main challenges in this type of robot. The flexibility and deformation of the limbs make estimating the Tool Centre Point (TCP) position a challenging task. Authors have proposed different approaches to estimate this deformation and deduce the location of the TCP. However, most of these approaches require expensive measurement systems or the use of high-computational-cost integration methods. This work presents a novel approach based on a virtual sensor which, according to simulation results, can not only precisely estimate the deformation of the flexible links in control applications (less than 2% error), but also its derivatives (less than 6% error in velocity and 13% error in acceleration). The validity of the proposed Virtual Sensor is tested on a Delta Robot, where the position of the TCP is estimated from the Virtual Sensor measurements with less than 0.03% error in comparison with the flexible model developed in the ADAMS multibody software. This work was supported in part by the Spanish Ministry of Economy and Competitiveness under grant BES-2013-066142, the UPV/EHU's PPG17/56 projects, the Spanish Ministry of Economy and Competitiveness (MINECO & FEDER) DPI-2012-32882 project, and the Basque Country Government (GV/EJ) under the PRE-2014-1-152 and BFI-2012-223 grants and recognized research group IT914-16
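The paper's virtual sensor itself is model-specific, but the general idea — recovering link deformation and its time derivatives from an inexpensive measurement — can be sketched. The cantilever beam model, the strain input, and all parameter names below are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def tip_deflection(strain, thickness, length):
    """Tip deflection of a slender cantilever link from surface strain at
    its root (Euler-Bernoulli, end load): kappa = 2*eps/h, w = kappa*L^2/3.
    This beam model is an assumption for illustration, not the paper's."""
    kappa = 2.0 * strain / thickness       # root curvature from strain
    return kappa * length ** 2 / 3.0       # tip deflection

def deformation_states(strain_samples, dt, thickness, length):
    """Deflection plus finite-difference velocity and acceleration --
    the derivative estimates a virtual sensor would publish."""
    w = np.array([tip_deflection(e, thickness, length)
                  for e in strain_samples])
    v = np.gradient(w, dt)                 # d(deflection)/dt
    a = np.gradient(v, dt)                 # second derivative
    return w, v, a
```

In practice the numerical differentiation would be combined with filtering, since raw finite differences amplify sensor noise.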

    Towards the development of safe, collaborative robotic freehand ultrasound

    The use of robotics in medicine is of growing importance for modern health services, as robotic systems have the capacity to improve upon human tasks, thereby enhancing the treatment ability of a healthcare provider. In the medical sector, ultrasound imaging is an approach free of the high cost and radiation emissions associated with MRI and CT imaging, respectively. Over the past two decades, considerable effort has been invested into freehand ultrasound robotics research and development. However, this research has focused on the feasibility of the application, not on robotic fundamentals such as motion control, calibration, and contextual awareness. Instead, much of the work is concentrated on custom-designed robots, ultrasound image generation and visual servoing, or teleoperation. Research on these topics often suffers from important limitations that impede use in an adaptable, scalable, real-world manner. In particular, while custom robots may be designed for a specific application, commercial collaborative robots are a more robust and economical solution. Various robotic ultrasound studies have shown the feasibility of using basic force control, but rarely explore controller tuning in the context of patient safety and deformable skin in an unstructured environment. Moreover, many studies evaluate novel visual servoing approaches but do not consider the practicality of relying on external measurement devices for motion control. These studies neglect the importance of robot accuracy and calibration, which allow a system to safely navigate its environment while reducing the imaging errors associated with positioning. Hence, while the feasibility of robotic ultrasound has been the focal point of previous studies, there is a lack of attention to what occurs between system design and image output.
This thesis addresses limitations of the current literature through three distinct contributions. Given the force-controlled nature of an ultrasound robot, the first contribution presents a closed-loop calibration approach using impedance control and low-cost equipment. Accuracy is a fundamental requirement for high-quality ultrasound image generation and targeting, especially when following a specified path along a patient or synthesizing 2D slices into a 3D ultrasound image. However, even though most industrial robots are inherently precise, they are not necessarily accurate. While robot calibration itself has been extensively studied, many approaches rely on expensive and highly delicate equipment. As demonstrated through an experimental study and validated with a laser tracker, the proposed method is comparable in quality to traditional laser-tracker calibration: the absolute accuracy of a collaborative robot was improved to a maximum error of 0.990 mm, representing a 58.4% improvement over the nominal model. The second contribution explores collisions and contact events, which are a natural by-product of applications involving physical human-robot interaction (pHRI) in unstructured environments. Robot-assisted medical ultrasound is an example of a task where simply stopping the robot upon contact detection may not be an appropriate reaction strategy. Thus, the robot should have an awareness of body contact location to properly plan force-controlled trajectories along the human body using the imaging probe. This is especially true for remote ultrasound systems, where safety and manipulability are important elements to consider when operating a remote medical system through a communication network. A framework is proposed for robot contact classification using the built-in sensor data of a collaborative robot. Unlike previous studies, this classification does not discern between intended and unintended contact scenarios, but rather classifies what was involved in the contact event. The classifier can discern different ISO/TS 15066:2016-specific body areas along a human-model leg with 89.37% accuracy. Altogether, this contact distinction framework allows for more complex reaction strategies and tailored robot behaviour during pHRI.
Lastly, given that the success of an ultrasound task depends on the capability of the robot system to handle pHRI, pure motion control is insufficient. Force control techniques are necessary to achieve effective and adaptable behaviour of a robotic system in the unstructured ultrasound environment while also ensuring safe pHRI. While force control does not require explicit knowledge of the environment, the control parameters must be tuned to achieve an acceptable dynamic behaviour. The third contribution proposes a simple and effective online tuning framework for force-based robotic freehand ultrasound motion control. Within the context of medical ultrasound, different human body locations have different stiffnesses and will require unique tunings. Through real-world experiments with a collaborative robot, the framework tuned motion control for optimal and safe trajectories along a human leg phantom. The optimization process successfully reduced the mean absolute error (MAE) of the motion contact force to 0.537 N through the evolution of eight motion control parameters. Furthermore, contextual awareness through motion classification can offer a framework for pHRI optimization and safety through predictive motion behaviour, with a future goal of autonomous pHRI. As such, a classification pipeline, trained using the tuning-process motion data, was able to reliably classify the future force-tracking quality of a motion session with an accuracy of 91.82%
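The thesis's eight-parameter online tuner is not reproduced here, but the shape of the problem — minimise the mean absolute force-tracking error (MAE) over candidate controller gains — can be sketched on a toy 1-D admittance loop. The contact model, stiffness value, gain ranges, and random search are all assumptions for illustration:

```python
import random

def simulate_contact(kp, kd, target=5.0, steps=200, dt=0.01):
    """Toy 1-D force-tracking loop: an admittance law pushes a probe into a
    spring-like surface. Returns the mean absolute force error (MAE)."""
    k_env = 800.0                      # assumed environment stiffness (N/m)
    x, v = 0.0, 0.0
    errors = []
    for _ in range(steps):
        f = max(0.0, k_env * x)        # contact force; no pulling
        e = target - f
        v = kp * e - kd * v            # velocity command from force error
        x += v * dt
        errors.append(abs(e))
    return sum(errors) / len(errors)

def tune(trials=50, seed=0):
    """Random-search tuning of (kp, kd) starting from a hand-tuned default,
    as a stand-in for the thesis's online optimizer."""
    rng = random.Random(seed)
    best = (simulate_contact(1e-3, 0.5), (1e-3, 0.5))
    for _ in range(trials):
        kp, kd = rng.uniform(1e-4, 1e-2), rng.uniform(0.0, 0.9)
        mae = simulate_contact(kp, kd)
        if mae < best[0]:
            best = (mae, (kp, kd))
    return best
```

The real framework evolves eight parameters on hardware rather than two on a toy model, but the evaluate-then-search structure is the same.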

    A Cost-Effective Haptic Device for Assistive and Rehabilitation Purposes

    With the growing population of the elderly, the need for assistance has also increased considerably, especially for tasks such as cleaning and reaching and grasping objects, among others. There are numerous assistive devices on the market for this group of people. However, they are either too expensive or require overwhelming user effort to operate. Therefore, the presented research is primarily concerned with developing a low-cost, easy-to-use assistive device that lets the elderly reach and grasp objects through an intuitive interface for the control of a slave anthropomorphic robotic arm (teleoperator). The system also implements haptic feedback technology that enables the user to maneuver the grasping task in a realistic manner. A bilateral master-slave robotic system combined with haptic feedback technology has been designed, built and tested to determine the suitability of this device for the chosen application. The final prototype consists primarily of off-the-shelf components programmed in such a way as to provide accurate teleoperation and haptic feedback to the user. While the nature of the project as a prototype precluded any patient trials, testing of the final system has shown that a fairly low-cost device can be capable of providing the user the ability to remotely control a robotic arm for reaching and grasping objects with accurate force feedback
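As a sketch of the bilateral loop described above — hand motion forwarded to the slave arm, sensed grip force returned as haptic feedback — consider a minimal mapping with a saturated feedback torque. The scale factors, gains, and limits are invented for illustration; the actual prototype's values are not given in the abstract:

```python
def slave_target(master_angle_deg, scale=1.0, lo=-90.0, hi=90.0):
    """Forward a master joint angle to the slave arm, scaled and clipped
    to the slave joint's range (all limits are assumed values)."""
    return max(lo, min(hi, scale * master_angle_deg))

def feedback_torque(grip_force_n, gain=0.05, max_torque=0.3):
    """Map the gripper's sensed contact force (N) to a haptic motor
    torque (N*m), saturated so the master stays comfortable to hold."""
    return max(-max_torque, min(max_torque, gain * grip_force_n))
```

Saturating the feedback torque is a common safeguard in low-cost haptic hardware: it keeps a sensor spike from jerking the user's hand.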

    Doctor of Philosophy

    The need for position and orientation information in a wide variety of applications has led to the development of equally varied methods for providing it. Amongst the alternatives, inertial navigation is a solution that offers self-contained operation and provides angular rate, orientation, acceleration, velocity, and position information. Until recently, the size, cost, and weight of inertial sensors have limited their use to vehicles with relatively large payload capacities and instrumentation budgets. However, the development of microelectromechanical system (MEMS) inertial sensors now offers the possibility of using inertial measurement in smaller, even human-scale, applications. Though much progress has been made toward this goal, there are still many obstacles. While operating independently from any outside reference, inertial measurement suffers from unbounded errors that grow at rates up to cubic in time. Since the reduced size and cost of these new miniaturized sensors come at the expense of accuracy and stability, the problem of error accumulation becomes more acute. Nevertheless, researchers have demonstrated that useful results can be obtained in real-world applications. The research presented herein provides several contributions to the development of human-scale inertial navigation. A calibration technique allowing complex sensor models to be identified using inexpensive hardware and linear solution techniques has been developed. This is shown to provide significant improvements in the accuracy of the calibrated outputs from MEMS inertial sensors. Error correction algorithms based on easily identifiable characteristics of the sensor outputs have also been developed. These are demonstrated in both one- and three-dimensional navigation. The results show significant improvements in the levels of accuracy that can be obtained using these inexpensive sensors.
The algorithms also eliminate empirical, application-specific simplifications and heuristics, upon which many existing techniques have depended, and make inertial navigation a more viable solution for tracking the motion around us
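The unbounded error growth mentioned above is easy to demonstrate: a constant accelerometer bias, integrated twice, produces a position error that grows quadratically in time (an uncorrected gyro bias additionally tilts the perceived gravity vector, pushing growth toward cubic). A minimal dead-reckoning sketch, with an invented bias value:

```python
def dead_reckon_position_error(accel_bias, dt, steps):
    """Double-integrate a zero true acceleration plus a constant bias b;
    the accumulated position error approaches 0.5 * b * t**2."""
    v = p = 0.0
    for _ in range(steps):
        v += accel_bias * dt   # velocity error grows linearly in time
        p += v * dt            # position error grows quadratically
    return p
```

With a 0.01 m/s² bias, well within MEMS grade, 10 s of uncorrected dead reckoning already accumulates roughly half a metre of position error, which is why the calibration and error-correction algorithms described above matter.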

    Image guided robotic assistance for the diagnosis and treatment of tumor

    The aim of this thesis is to demonstrate the feasibility and potential of introducing robotics and image guidance into the overall oncologic workflow, from the diagnosis to the treatment phase. The popularity of robotics in the operating room has grown in recent years. Currently the most popular system is the da Vinci telemanipulator (Intuitive Surgical), which is based on master-slave control for minimally invasive surgery and is used in several surgical fields such as urology, general surgery, gynecology and cardiothoracic surgery. An accurate study of this system, from a technological point of view, has been conducted, addressing all drawbacks and advantages of this system. The da Vinci System creates an immersive operating environment for the surgeon by providing both high-quality stereo visualization and a human-machine interface that directly connects the surgeon’s hands to the motion of the surgical tool tips inside the patient’s body. It has undoubted advantages for the surgeon’s work and for the patient’s health, at least for some interventions, while its very high costs leave many doubts on its price-benefit ratio. In the robotic surgery field many researchers are working on the optimization and miniaturization of robot mechanics, while others are trying to obtain smart functionalities to realize robotic systems that, “knowing” the patient anatomy from radiological images, can assist the surgeon in an active way. Regarding the second point, image-guided systems can be useful to plan and control medical robot motion and to provide the surgeon with pre-operative and intra-operative images with augmented reality visualization to enhance his/her perceptual capacities and, as a consequence, to improve the quality of treatments. To demonstrate this thesis, some prototypes have been designed, implemented and tested.
The development of image-guided medical devices comprising augmented reality, virtual navigation and robotic surgical features requires several problems to be addressed. The first is the choice of the robotic platform and of the image source to employ. An industrial anthropomorphic arm has been used as the testing platform. The idea of integrating industrial robot components into the clinical workflow is supported by the da Vinci technical analysis. The algorithms and methods developed, regarding in particular robot calibration, are based on literature theories and on easy integration into the clinical scenario, and can be adapted to any anthropomorphic arm. In this way this work can be integrated with lightweight robots, for industrial or clinical use, able to work in close contact with humans, which will become numerous in the near future. Regarding the medical image source, it was decided to work with ultrasound imaging. Two-dimensional ultrasound imaging is widely used in clinical practice because it is not dangerous for the patient, inexpensive and compact, and it is a highly flexible imaging modality that allows users to study many anatomic structures. It is routinely used for diagnosis and as guidance in percutaneous treatments. However, the use of 2D ultrasound imaging presents some disadvantages that demand great ability of the user: the clinician must mentally integrate many images to reconstruct a complete idea of the anatomy in 3D. Furthermore, the freehand control of the probe makes it difficult to identify anatomic positions and orientations and to reposition the probe to reach a particular location. To overcome these problems, an image-guided system has been developed that fuses 2D US real-time images with routine CT or MRI 3D images, previously acquired from the patient, to enhance clinician orientation and probe guidance.
The implemented algorithms for robot calibration and US image guidance have been used to realize two applications responding to specific clinical needs: the first to speed up the execution of routine and very recurrent procedures like percutaneous biopsy or ablation; the second to improve a completely non-invasive new type of treatment for solid tumors, HIFU (High Intensity Focused Ultrasound). An ultrasound-guided robotic system has been developed to assist the clinician in executing complicated biopsies, or percutaneous ablations, in particular for deep abdominal organs. An integrated system was developed that provides the clinician with two types of assistance: a mixed reality visualization allows accurate and easy planning of the needle trajectory and verification of target reaching, while the robot arm, equipped with a six-degree-of-freedom force sensor, allows precise positioning of the needle holder and lets the clinician adjust, by means of a cooperative control, the planned trajectory to overcome needle deflection and target motion. The second application consists of an augmented reality navigation system for HIFU treatment. HIFU represents a completely non-invasive method for treatment of solid tumors, hemostasis and other vascular features in human tissues. The technology for HIFU treatments is still evolving, and the systems available on the market have some limitations and drawbacks. A disadvantage emerging from our experience with the machinery available in our hospital (JC200 therapeutic system, Haifu (HIFU) Tech Co., Ltd, Chongqing), which is similar to other analogous machines, is the long time required to perform the procedure due to the difficulty of finding the target using the remote motion of an ultrasound probe under the patient. This problem has been addressed by developing an augmented reality navigation system to enhance US guidance during HIFU treatments, allowing easy target localization.
The system was implemented using an additional freehand ultrasound probe coupled with a localizer and CT fused imaging. It offers a simple and economical solution for easy HIFU target localization. This thesis demonstrates the utility and usability of robots for the diagnosis and treatment of tumors; in particular, the combination of automatic positioning and cooperative control allows the surgeon and the robot to work in synergy. Further, the work demonstrates the feasibility and potential of using a mixed reality navigation system to facilitate target localization and consequently to reduce session times, to increase the number of possible diagnoses/treatments and to decrease the risk of potential errors. The proposed solutions for the integration of robotics and image guidance into the overall oncologic workflow take into account currently available technologies, traditional clinical procedures and cost minimization
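A core building block of the US/CT-MRI fusion described above is rigid registration of corresponding points (e.g. fiducials) between the two image spaces. A minimal least-squares sketch using the standard Kabsch/SVD solution follows; the thesis's actual registration pipeline is not reproduced here:

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid transform (R, t) with q_i ~ R @ p_i + t,
    given corresponding (N, 3) point sets P and Q (Kabsch algorithm)."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)   # centre both clouds
    H = Pc.T @ Qc                                     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t
```

With noiseless, exactly corresponding fiducials this recovers the transform exactly; with real measurements it gives the least-squares best fit.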

    Vision based robot manipulator error compensation

    Currently, one of the most important obstacles to a wider utilization of robots in industry is their accuracy. This is a critical issue for applications that allow no margin of error, such as medical applications or precision machining. One possibility is to solve the problem at its root by manufacturing better robots, but that would require substantial investments of money and time. A viable alternative is to improve robot accuracy through a calibration process, either by creating a model of the robot that closely represents the real robot or by studying the error model of the robot to compensate it directly. The aim of this study is to develop a correction algorithm that improves robot accuracy by remapping the robot workspace through the use of computer vision and machine learning techniques. Different correction techniques were tested against each other to find the one that would bring the robot accuracy closest to its repeatability, together with some important additions to seamlessly include the correction algorithm in an existing pipeline
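One common way to "remap the workspace", as described above, is to measure the pose error on a calibration grid (here observed by a vision system) and subtract a locally interpolated error from each commanded target. The grid layout and the bilinear scheme below are illustrative assumptions, not the study's chosen technique:

```python
import numpy as np

def bilinear(x, y, gx, gy, E):
    """Bilinear interpolation of grid values E (len(gx) x len(gy)) at (x, y)."""
    i = int(np.clip(np.searchsorted(gx, x) - 1, 0, len(gx) - 2))
    j = int(np.clip(np.searchsorted(gy, y) - 1, 0, len(gy) - 2))
    tx = (x - gx[i]) / (gx[i + 1] - gx[i])
    ty = (y - gy[j]) / (gy[j + 1] - gy[j])
    return ((1 - tx) * (1 - ty) * E[i, j] + tx * (1 - ty) * E[i + 1, j]
            + (1 - tx) * ty * E[i, j + 1] + tx * ty * E[i + 1, j + 1])

def corrected_command(target, gx, gy, err_x, err_y):
    """Subtract the interpolated (x, y) position error from the target,
    so the robot's realized pose lands closer to the requested one."""
    x, y = target
    return (x - bilinear(x, y, gx, gy, err_x),
            y - bilinear(x, y, gx, gy, err_y))
```

More flexible regressors (the "machine learning techniques" of the abstract) replace the bilinear lookup when the error field is not smooth on the grid scale.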

    Aspects of an open architecture robot controller and its integration with a stereo vision sensor.

    The work presented in this thesis attempts to improve the performance of industrial robot systems in a flexible manufacturing environment by addressing a number of issues related to external sensory feedback and sensor integration, robot kinematic positioning accuracy, and robot dynamic control performance. To provide a powerful control algorithm environment and support for external sensor integration, a transputer-based open architecture robot controller is developed. It features high computational power, user accessibility at various robot control levels and external sensor integration capability. Additionally, an on-line trajectory adaptation scheme is devised and implemented in the open architecture robot controller, enabling real-time trajectory alteration of robot motion in response to external sensory feedback. An in-depth discussion is presented on integrating a stereo vision sensor with the robot controller to perform external sensor guided robot operations. Key issues for such a vision-based robot system are precise synchronisation between the vision system and the robot controller, and correct target position prediction to counteract the inherent time delay in image processing. These were successfully addressed in a demonstrator system based on a Puma robot. Efforts have also been made to improve the Puma robot's kinematic and dynamic performance. A simple, effective, on-line algorithm is developed for solving the inverse kinematics problem of a calibrated industrial robot to improve robot positioning accuracy. On the dynamic control side, a robust adaptive robot tracking control algorithm is derived that has improved performance compared to a conventional PID controller while exhibiting relatively modest computational complexity.
Experiments have been carried out to validate the open architecture robot controller and demonstrate the performance of the inverse kinematics algorithm, the adaptive servo control algorithm, and the on-line trajectory generation. By integrating the open architecture robot controller with a stereo vision sensor system, robot visual guidance has been achieved, with experimental results showing that the integrated system is capable of detecting, tracking and intercepting random objects moving along 3D trajectories at velocities up to 40 mm/s
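The target-prediction step mentioned above — compensating the vision pipeline's inherent processing delay — can be sketched as constant-velocity extrapolation over the known latency. The function name and numbers are illustrative, not taken from the thesis:

```python
def predict_target(p_prev, p_curr, dt_meas, latency):
    """Extrapolate a tracked object's position forward by the vision
    latency, assuming constant velocity between the last two frames."""
    return tuple(c + (c - p) / dt_meas * latency
                 for p, c in zip(p_prev, p_curr))
```

With an object moving at 40 mm/s (the interception speed reported above) and, say, a 50 ms processing delay, ignoring the delay would place the tool 2 mm behind the target, so even this simple predictor materially tightens the intercept.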

    Design and Evaluation of a Contact-Free Interface for Minimally Invasive Robotics Assisted Surgery

    Robotic-assisted minimally invasive surgery (RAMIS) is becoming increasingly common for many surgical procedures. These minimally invasive techniques offer the benefits of reduced patient recovery time, mortality and scarring compared to traditional open surgery. Teleoperated procedures have the added advantages of increased visualization and enhanced accuracy for the surgeon through tremor filtering and scaling down of hand motions. There are, however, still limitations in these techniques preventing the widespread growth of the technology. In RAMIS, the surgeon is limited in their movement by the operating console or master device, and the cost of robotic surgery is often too high to justify for many procedures. Sterility issues arise as well, as the surgeon must be in contact with the master device, preventing a smooth transition between traditional and robotic modes of surgery. This thesis outlines the design and analysis of a novel method of interaction with the da Vinci Surgical Robot. Using the da Vinci Research Kit (DVRK), an open-source research platform for the da Vinci robot, an interface was developed for controlling the robotic arms with the Leap Motion Controller. This small device uses infrared LEDs and two cameras to detect the 3D positions of the hands and fingers. This hand data is mapped to the da Vinci surgical tools in real time, providing the surgeon with an intuitive method of controlling the instruments. An analysis of the tracking workspace is provided to give a solution to occlusion issues: multiple sensors are fused together in order to increase the range of trackable motion over a single sensor. Additional work involves replacing the current viewing screen with a virtual reality (VR) headset (Oculus Rift), to provide the surgeon with a stereoscopic 3D view of the surgical site without the need for a large monitor.
The headset also provides the user with a more intuitive and natural method of positioning the camera during surgery, using the natural motions of the head. The large master console of the da Vinci system has been replaced with an inexpensive vision-based tracking system and VR headset, allowing the surgeon to operate the da Vinci Surgical Robot with more natural movements. A preliminary evaluation of the system is provided, with recommendations for future work
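The core of such an interface — hand positions streamed from the tracker, scaled down and tremor-filtered before driving the tool — can be sketched as below. The scaling factor, filter constant, and class shape are assumptions for illustration, not the DVRK/Leap implementation:

```python
class HandToToolMapper:
    """Maps raw hand positions to tool displacements: offsets are taken
    from a clutch-in reference pose, scaled down (motion scaling), and
    smoothed with a first-order low-pass filter (tremor reduction)."""

    def __init__(self, scale=0.2, alpha=0.3):
        self.scale = scale          # hand-to-tool motion scaling
        self.alpha = alpha          # EMA filter constant, 0 < alpha <= 1
        self.ref = None             # hand pose captured at clutch-in
        self.filt = (0.0, 0.0, 0.0)

    def update(self, hand_pos):
        """Feed one tracker sample; returns the filtered tool offset."""
        if self.ref is None:
            self.ref = tuple(hand_pos)          # first sample sets reference
        delta = tuple((h - r) * self.scale
                      for h, r in zip(hand_pos, self.ref))
        self.filt = tuple(self.alpha * d + (1 - self.alpha) * f
                          for d, f in zip(delta, self.filt))
        return self.filt
```

The scale factor implements the "scaling down hand motions" benefit mentioned above, and the exponential moving average is one simple form of tremor filtering; a clinical system would use a better-characterised filter.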

    Chartopolis - A Self Driving Car Test Bed

    This thesis presents an autonomous vehicle test bed which can be used to conduct studies on the interaction between human-driven vehicles and autonomous vehicles on the road. The test bed makes use of a fleet of robots, each a microcosm of an autonomous vehicle, performing all the vital tasks such as lane following, obeying traffic signals and avoiding collisions with other vehicles on the road. The robots use real-time image processing and closed-loop control techniques to achieve automation. The test bed also features a manual control mode where a user can choose to control the car with a joystick by viewing a video relayed to the control station. Stochastic rogue-vehicle processes will be introduced into the system to emulate random behaviors in an autonomous vehicle. The test bed was used to perform a comparative study of the driving capabilities of the miniature self-driving car and a human driver. Masters Thesis, Electrical Engineering, 201
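The closed-loop lane following described above typically reduces, at its core, to steering proportionally against the lane-centre offset found by image processing, plus a heading term. The gains, units, and saturation limit below are invented for illustration:

```python
def steering_command(lane_offset_px, heading_err_rad,
                     kp_offset=0.004, kp_heading=0.8, limit=0.5):
    """Proportional lane keeping: steer (rad) against the lateral pixel
    offset of the detected lane centre and the heading error, saturated
    at the steering servo limit. All gains are assumed values."""
    u = -kp_offset * lane_offset_px - kp_heading * heading_err_rad
    return max(-limit, min(limit, u))
```

In a miniature test bed, tuning these two gains trades off lane-centre hold against oscillation at the small scale's fast dynamics.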