84 research outputs found

    Smart Navigation in Surgical Robotics

    Minimally invasive surgery, and laparoscopic surgery in particular, has profoundly changed the way abdominal surgical procedures are performed. Laparoscopic surgery has since evolved toward even less invasive techniques, such as Single Port Access Surgery. In this technique a single incision is made, through which the instruments and the laparoscopic camera are introduced via a single multi-port trocar. Its main advantages are a shorter hospital stay for the patient and better cosmetic results, since the trocar is usually inserted through the navel, leaving the scar hidden there. However, the fact that the instruments are introduced through the same trocar makes the intervention more demanding for the surgeon, who needs specific skills for this type of procedure. This thesis addresses the navigation of surgical instruments with teleoperated robotic platforms in single-port surgery. Specifically, it proposes a navigation method with a virtual remote center of rotation that coincides with the insertion point of the instruments (the fulcrum point). This point is estimated from the forces exerted by the abdomen on the surgical instruments, measured by force sensors mounted at the base of the instruments. Because the instruments also interact with soft tissue inside the abdomen, which would distort the estimate of the insertion point, a method is needed to detect this situation. To this end, a tissue-interaction detector based on hidden Markov models was trained to recognize four generic gestures.
The thesis also proposes the use of haptic guidance to improve the surgeon's experience when using teleoperated robotic platforms. Specifically, Learning from Demonstration is employed to generate forces that guide the surgeon through specific tasks. The proposed navigation method was implemented on the CISOBOT surgical platform developed at the Universidad de Málaga. The experimental results validate both the proposed navigation method and the soft-tissue interaction detector. In addition, a preliminary study of the haptic guidance system was carried out: a generic task, peg insertion, was used for the experiments demonstrating that the proposed method can solve this task and similar ones.
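The fulcrum-point estimate described above can be posed as a small least-squares problem: each force/torque pair measured at the instrument base satisfies τ = r × F, where r is the lever arm from the sensor to the insertion point. The following is a minimal NumPy sketch of that idea, not the thesis's implementation; it assumes ideal measurements and ignores instrument weight and the soft-tissue interaction forces that the HMM-based detector is meant to filter out:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix so that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0,  -v[2],  v[1]],
                     [v[2],  0.0,  -v[0]],
                     [-v[1], v[0],  0.0]])

def estimate_fulcrum(forces, torques):
    """Least-squares lever arm r (sensor frame -> insertion point).

    Each sample obeys tau_i = r x F_i, i.e. skew(F_i) @ r = -tau_i;
    stacking the samples gives an overdetermined linear system in r.
    A single sample leaves the component of r along F unobservable,
    so forces in varying directions are required.
    """
    A = np.vstack([skew(F) for F in forces])      # (3N, 3)
    b = np.hstack([-t for t in torques])          # (3N,)
    r, *_ = np.linalg.lstsq(A, b, rcond=None)
    return r
```

In practice the estimate would be updated recursively and suspended whenever the tissue-interaction detector reports contact, since those samples violate the pure-fulcrum model.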

    Proceedings of the 1st Standardized Knowledge Representation and Ontologies for Robotics and Automation Workshop

    Welcome to the IEEE-ORA (Ontologies for Robotics and Automation) IROS workshop. This is the 1st edition of the workshop on Standardized Knowledge Representation and Ontologies for Robotics and Automation. The IEEE-ORA 2014 workshop was held on 18 September 2014 in Chicago, Illinois, USA. In the IEEE-ORA IROS workshop, 10 contributions were presented from 7 countries in North and South America, Asia and Europe. The presentations took place in the afternoon, from 1:30 PM to 5:00 PM. The first session was dedicated to “Standards for Knowledge Representation in Robotics”, with presentations from the IEEE working group on standards for robotics and automation and from ISO TC 184/SC2/WG7. The second session was dedicated to “Core and Application Ontologies”, with presentations on core robotics ontologies as well as on industrial and robot-assisted surgery ontologies. Three posters were presented on emergent applications of ontologies in robotics. We would like to express our thanks to all participants: first of all to the authors, whose quality work is the essence of this workshop; next, to all the members of the international program committee, who helped us with their expertise and valuable time. We would also like to deeply thank the IEEE-IROS 2014 organizers for hosting this workshop. Our deep gratitude goes to the IEEE Robotics and Automation Society, which sponsors the IEEE-ORA group activities, and to the scientific organizations that kindly agreed to sponsor the workshop authors' work.

    Automatic multi-camera hand-eye calibration for robotic workcells

    Human-robot collaboration (HRC) is an increasingly successful research field, widely investigated for several industrial tasks. Collaborative robots can physically interact with humans in a shared environment while guaranteeing a high level of human safety throughout the working process. This can be achieved through a vision system equipped with a single camera or a multi-camera setup, which provides the manipulator with essential information about the surrounding workspace and human behavior, ensuring collision avoidance with objects and human operators. However, to guarantee human safety and a working system in which the robot arm is aware of the surrounding environment and can monitor operator motions, a reliable hand-eye calibration is needed. A further improvement for a truly safe human-robot collaboration scenario is offered by multi-camera hand-eye calibration: the additional sensors give the robot a greater ability to avoid collisions, thanks to a constant and more reliable view of the robot arm and its whole workspace. This thesis focuses on the development of an automatic multi-camera calibration method for robotic workcells that guarantees high human safety and an accurate working system. The proposed method has two main properties. First, it is automatic: it exploits the robot arm, with a planar target attached to its end-effector, to accomplish the image acquisition phase necessary for the calibration, which is generally carried out with manual procedures. This approach removes inaccurate human intervention as much as possible and speeds up the whole calibration process. Second, our approach enables the calibration of a multi-camera system for robotic workcells that are larger than those commonly considered in the literature.
Our multi-camera hand-eye calibration method was tested through several experiments with the Franka Emika Panda robot arm and with different sensors: the Microsoft Kinect V2, the Intel RealSense depth camera D455 and the Intel RealSense LiDAR camera L515, in order to prove its flexibility and to determine which hardware devices achieve the highest calibration accuracy. Accurate results are obtained even in large robotic workcells where cameras are placed at a distance of d = 3 m from the robot arm, with a reprojection error below 1 pixel, whereas other state-of-the-art methods cannot even guarantee a proper calibration at these distances. Moreover, our method was compared against other single- and multi-camera calibration techniques and achieved the highest accuracy among the methods found in the literature, which mainly focus on the calibration between a single camera and the robot arm.
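Hand-eye calibration of this kind is commonly posed as the homogeneous matrix equation A_i X = X B_i over pairs of robot and camera motions. The sketch below is a generic two-step (rotation-then-translation, Tsai-Lenz-style) NumPy solver, not the method proposed in the thesis; in an eye-to-hand workcell such as this one, the A_i would come from robot motions and the B_i from the cameras observing the planar target on the end-effector:

```python
import numpy as np

def rot_axis(R):
    """Unnormalized rotation axis of R (valid for angles away from 0 and pi)."""
    return np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])

def solve_ax_xb(As, Bs):
    """Two-step least-squares solution of A_i X = X B_i (4x4 homogeneous)."""
    # Rotation: conjugation maps rotation axes, axis(A_i) = R_X @ axis(B_i),
    # so R_X is the Wahba/Kabsch fit between the two sets of unit axes.
    H = np.zeros((3, 3))
    for A, B in zip(As, Bs):
        a, b = rot_axis(A[:3, :3]), rot_axis(B[:3, :3])
        H += np.outer(b / np.linalg.norm(b), a / np.linalg.norm(a))
    U, _, Vt = np.linalg.svd(H)
    Rx = Vt.T @ U.T
    if np.linalg.det(Rx) < 0:          # guard against reflections
        Vt[-1] *= -1
        Rx = Vt.T @ U.T
    # Translation: (R_A - I) t_X = R_X t_B - t_A, stacked over all motions.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.hstack([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    tx, *_ = np.linalg.lstsq(C, d, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = Rx, tx
    return X
```

At least two motions with non-parallel rotation axes are required; with noisy real data, more motion pairs improve the estimate, which is precisely why an automated acquisition procedure is valuable.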

    Robot Simulation for Control Design


    Aerospace medicine and biology: A continuing bibliography with indexes (supplement 376)

    This bibliography lists 265 reports, articles and other documents introduced into the NASA Scientific and Technical Information System during Jun. 1993. Subject coverage includes: aerospace medicine and physiology, life support systems and man/system technology, protective clothing, exobiology and extraterrestrial life, planetary biology, and flight crew behavior and performance

    Towards the development of safe, collaborative robotic freehand ultrasound

    The use of robotics in medicine is of growing importance for modern health services, as robotic systems have the capacity to improve upon human tasks, thereby enhancing the treatment ability of a healthcare provider. In the medical sector, ultrasound imaging is inexpensive compared to MRI and free of the high radiation emissions associated with CT imaging. Over the past two decades, considerable effort has been invested in freehand ultrasound robotics research and development. However, this research has focused on the feasibility of the application rather than on robotic fundamentals such as motion control, calibration, and contextual awareness. Instead, much of the work concentrates on custom-designed robots, ultrasound image generation and visual servoing, or teleoperation. Research on these topics often suffers from important limitations that impede its use in an adaptable, scalable, real-world manner. In particular, while custom robots may be designed for a specific application, commercial collaborative robots are a more robust and economical solution. Likewise, various robotic ultrasound studies have shown the feasibility of basic force control, but rarely explore controller tuning in the context of patient safety and deformable skin in an unstructured environment. Moreover, many studies evaluate novel visual servoing approaches, but do not consider the practicality of relying on external measurement devices for motion control. These studies neglect the importance of robot accuracy and calibration, which allow a system to safely navigate its environment while reducing the imaging errors associated with positioning. Hence, while the feasibility of robotic ultrasound has been the focal point of previous studies, there is a lack of attention to what occurs between system design and image output.
This thesis addresses limitations of the current literature through three distinct contributions. Given the force-controlled nature of an ultrasound robot, the first contribution presents a closed-loop calibration approach using impedance control and low-cost equipment. Accuracy is a fundamental requirement for high-quality ultrasound image generation and targeting, especially when following a specified path along a patient or synthesizing 2D slices into a 3D ultrasound image. However, even though most industrial robots are inherently precise, they are not necessarily accurate. While robot calibration itself has been extensively studied, many approaches rely on expensive and highly delicate equipment. As demonstrated through an experimental study and validated with a laser tracker, the proposed method is comparable in quality to traditional laser-tracker calibration: the absolute accuracy of a collaborative robot was improved to a maximum error of 0.990 mm, a 58.4% improvement over the nominal model. The second contribution explores collisions and contact events, which are a natural by-product of applications involving physical human-robot interaction (pHRI) in unstructured environments. Robot-assisted medical ultrasound is an example of a task where simply stopping the robot upon contact detection may not be an appropriate reaction strategy. Thus, the robot should be aware of the body contact location in order to plan force-controlled trajectories along the human body with the imaging probe. This is especially true for remote ultrasound systems, where safety and manipulability are important elements when operating a remote medical system over a communication network. A framework is proposed for robot contact classification using the built-in sensor data of a collaborative robot. Unlike previous studies, this classification does not discern between intended and unintended contact scenarios, but rather classifies what was involved in the contact event. The classifier can discern different ISO/TS 15066:2016-specific body areas along a human-model leg with 89.37% accuracy. Altogether, this contact distinction framework allows for more complex reaction strategies and tailored robot behaviour during pHRI. Lastly, given that the success of an ultrasound task depends on the capability of the robot system to handle pHRI, pure motion control is insufficient. Force control techniques are necessary to achieve effective and adaptable behaviour of a robotic system in the unstructured ultrasound environment while also ensuring safe pHRI. While force control does not require explicit knowledge of the environment, the control parameters must be tuned to achieve an acceptable dynamic behaviour. The third contribution proposes a simple and effective online tuning framework for force-based robotic freehand ultrasound motion control. In the context of medical ultrasound, different human body locations have different stiffnesses and require unique tunings. Through real-world experiments with a collaborative robot, the framework tuned the motion control for optimal and safe trajectories along a human leg phantom. The optimization process successfully reduced the mean absolute error (MAE) of the motion contact force to 0.537 N through the evolution of eight motion control parameters. Furthermore, contextual awareness through motion classification can offer a framework for pHRI optimization and safety through predictive motion behaviour, with a future goal of autonomous pHRI. As such, a classification pipeline, trained on the tuning-process motion data, was able to reliably classify the future force-tracking quality of a motion session with an accuracy of 91.82%.
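As a toy illustration of force-control tuning against a compliant surface (a 1-D sketch, not the thesis's eight-parameter evolutionary tuner), consider an admittance-style loop pressing a probe into a spring-like "skin", scored by the MAE of the contact-force error; the hypothetical `tune` helper simply picks the candidate gain with the lowest MAE:

```python
import numpy as np

def run_session(gain, k_env=800.0, f_des=5.0, dt=0.002, steps=500):
    """Simulate 1-D force regulation against an elastic environment.

    The probe position x advances with velocity proportional to the force
    error (an admittance-style law); the environment pushes back like a
    spring of stiffness k_env once contact is made. Returns the mean
    absolute force-tracking error, used as the tuning objective.
    """
    x, errs = 0.0, []
    for _ in range(steps):
        f = k_env * max(x, 0.0)      # elastic contact force
        e = f_des - f                # force-tracking error
        errs.append(abs(e))
        x += dt * gain * e           # velocity proportional to force error
    return float(np.mean(errs))

def tune(candidates):
    """Grid-search stand-in for the online optimization of control gains."""
    return min(candidates, key=run_session)
```

Too low a gain converges slowly (high MAE); too high a gain oscillates against the stiff surface, which on a real patient is exactly the unsafe behaviour the tuning framework is meant to avoid.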

    Modeling and experimental validation of a parallel microrobot for biomanipulation

    The main purpose of this project is the development of a geometrical model of a commercial micropositioner (SmarPod 115.25, SmarAct GmbH). The SmarPod has parallel kinematics and, being vacuum-compatible, is employed for precise and accurate positioning of samples under the SEM microscope in various applications. Geometrical modeling is the preliminary step to fully understanding, and possibly improving, the robot's closed-loop behaviour in terms of task precision when the manufacturer does not provide sufficient documentation. The robotic system in this case is a "black box" from which information must be extracted. This step is essential to improve the reliability of bio-microsystem manipulation and characterization. A detailed model of the microrobot is needed to cope with the typical lack of sensing at the microscale, as it allows a precise 3D reconstruction of the manipulation set-up in suitable software. Virtual Reality (VR) and simulation, carried out here in the Blender environment, likewise prove to be essential tools in microsystem task planning. Blender is a professional free and open-source 3D computer graphics package, and it proved to be a basic instrument for validating the microrobot's model, and even for simplifying it in the case of complex system geometries.

    Industrial and Medical Cyber-Physical Systems: Tackling User Requirements and Challenges in Robotics

    Robotics is one of the major megatrends unfolding these days. Clearly, robots are capable of doing much more outside the factories than ever imagined, and that has a great impact on the whole of society. This chapter provides some practical updates and guidelines on a few exciting aspects of automated technologies: applied robotics in industry, in service and personal use, and in the operating theaters, performing not only teleoperated surgeries but complex, delicate procedures as well. However, building reliable autonomous systems is not easy, and for a while yet human operators will be required as a fallback option. Ensuring the safety of such hybrid control systems is complex and requires novel human-machine interfaces. Situation awareness remains a key issue, keeping humans in the loop. Arguably, the social robotics sector is growing much faster than any industrial one, and, as predicted, there will soon be robots in every household and all around us.

    Robot Assisted Laser Osteotomy

    Within the scope of this thesis, the world's first robot system was developed that facilitates laser osteotomy in arbitrary geometries with an overall accuracy below 0.5 mm. Methods of computer- and robot-assisted surgery were reconsidered and composed into a workflow. Adequate calibration and registration methods are proposed. Furthermore, a methodology was developed for transferring geometrically defined cutting trajectories into pulse sequences and optimized execution plans.
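One simple way to turn a geometric cutting trajectory into a pulse sequence is to resample the path at an arc-length spacing derived from the laser spot diameter and a desired spot overlap. The sketch below illustrates that idea under assumed parameter names; it is not the optimized execution planner developed in the thesis:

```python
import numpy as np

def path_to_pulses(waypoints, spot_diameter=0.3, overlap=0.5):
    """Resample a polyline cutting trajectory into laser pulse positions.

    Consecutive spots are placed so that they overlap by the given
    fraction of the spot diameter, i.e. centre-to-centre spacing is
    spot_diameter * (1 - overlap). Units follow the input (e.g. mm).
    """
    pts = np.asarray(waypoints, dtype=float)
    seg = np.diff(pts, axis=0)
    seglen = np.linalg.norm(seg, axis=1)
    s = np.concatenate([[0.0], np.cumsum(seglen)])   # arc length at waypoints
    step = spot_diameter * (1.0 - overlap)           # pulse spacing
    stations = np.arange(0.0, s[-1] + 1e-9, step)    # arc-length sample points
    # Interpolate each coordinate linearly along arc length.
    return np.column_stack([np.interp(stations, s, pts[:, k])
                            for k in range(pts.shape[1])])
```

A real planner would additionally account for ablation depth per pulse (multiple passes), pulse repetition rate, and the robot's achievable scan speed along the trajectory.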

    Robotics-Assisted Needle Steering for Percutaneous Interventions: Modeling and Experiments

    Needle insertion and guidance play an important role in medical procedures such as brachytherapy and biopsy. Flexible needles have the potential to facilitate precise targeting and to avoid collisions during medical interventions, while reducing trauma to the patient and post-puncture issues. Nevertheless, error introduced during guidance degrades the effectiveness of the planned therapy or diagnosis. Although steering with flexible bevel-tip needles provides great mobility and dexterity, a major barrier is the complexity of needle-tissue interaction, which does not lend itself to intuitive control. To overcome this problem, a robotic system can be employed to perform trajectory planning and tracking by manipulating the needle base. This research project focuses on a control-theoretic approach and draws on the rich literature of control and systems theory to model needle-tissue interaction and needle flexion, and then to design a robotics-based strategy for needle insertion and steering. The resulting solutions will directly benefit a wide range of needle-based interventions. The outcome of this computer-assisted approach will not only enable efficient preoperative trajectory planning, but will also provide more insight into needle-tissue interaction, which will be helpful in developing advanced intraoperative algorithms for needle steering. Experimental validation of the proposed methodologies was carried out on a state-of-the-art 5-DOF robotic system designed and constructed in-house, primarily for prostate brachytherapy. The system is equipped with a Nano43 6-DOF force/torque sensor (ATI Industrial Automation) to measure forces and torques acting on the needle shaft. In our setup, an Aurora electromagnetic tracker (Northern Digital Inc.) is the sensing device used to measure needle deflection.
A multi-threaded application for control, sensor readings, data logging and communication over Ethernet was developed using Microsoft Visual C++ 2005, MATLAB 2007 and the QuaRC Toolbox (Quanser Inc.). Various artificial phantoms were developed to create a realistic medium in terms of elasticity and insertion force ranges; however, they simulated a uniform environment without exhibiting the complexities of organic tissue. Experiments were also conducted on beef liver, fresh chicken breast, beef, and ham, to investigate the behavior of a variety of biological tissues.
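Bevel-tip needle motion is often abstracted in the needle-steering literature by a constant-curvature kinematic model: while being inserted, the tip traces an arc of natural curvature set by the bevel, and rotating the needle about its axis flips the steering direction. A minimal 2-D forward-Euler sketch of this standard model (an illustration of the literature's abstraction, not this project's own interaction model):

```python
import math

def steer_2d(x, y, theta, insert_len, kappa, steps=100):
    """Integrate a 2-D constant-curvature (unicycle) bevel-tip model.

    (x, y, theta) is the tip pose; during insertion of length insert_len
    the heading rotates at rate kappa per unit arc length. Rotating the
    needle 180 degrees about its axis corresponds to negating kappa.
    """
    ds = insert_len / steps
    for _ in range(steps):
        x += ds * math.cos(theta)
        y += ds * math.sin(theta)
        theta += ds * kappa
    return x, y, theta
```

Planners built on this model interleave insertion segments with axial rotations (sign flips of kappa) to reach a target while respecting obstacle constraints; the real controller must additionally compensate for tissue deformation and needle flexion, which is where the force/torque and electromagnetic measurements above come in.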