11 research outputs found

    A Multi-Robot Cooperation Framework for Sewing Personalized Stent Grafts

    Full text link
    This paper presents a multi-robot system for manufacturing personalized medical stent grafts. The proposed system adopts a modular design comprising a (personalized) mandrel module, a bimanual sewing module, and a vision module. The mandrel module incorporates the personalized geometry of patients, while the bimanual sewing module adopts a learning-by-demonstration approach to transfer human hand-sewing skills to the robots. The human demonstrations were first observed by the vision module and then encoded using a statistical model to generate the reference motion trajectories. During autonomous robot sewing, the vision module coordinates the multi-robot collaboration. Experimental results show that the robots can adapt to generalized stent designs. The proposed system can also be used for other manipulation tasks, especially for flexible production of customized products where bimanual or multi-robot cooperation is required. Comment: 10 pages, 12 figures, accepted by IEEE Transactions on Industrial Informatics. Key words: modularity, medical device customization, multi-robot system, robot learning, visual servoing, robot sewing
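    The abstract describes encoding human demonstrations with a statistical model to produce reference motion trajectories. As a minimal sketch of that idea (the paper's actual model is not specified here), time-aligned demonstrations can be encoded as per-timestep Gaussians, with the mean serving as the reference trajectory; the data below is synthetic and for illustration only.

```python
import numpy as np

def encode_demonstrations(demos):
    """Encode time-aligned demonstrations as per-timestep Gaussians.

    demos: array of shape (n_demos, n_steps, n_dims); alignment of the
    demonstrations (e.g. by dynamic time warping) is assumed done.
    Returns (mean, std): the reference trajectory and its variability.
    """
    demos = np.asarray(demos, dtype=float)
    mean = demos.mean(axis=0)   # reference motion trajectory
    std = demos.std(axis=0)     # variability across demonstrations
    return mean, std

# Three noisy synthetic demonstrations of a 1-D stroke
t = np.linspace(0.0, 1.0, 50)
rng = np.random.default_rng(0)
demos = np.stack([np.sin(np.pi * t) + 0.01 * rng.standard_normal(50)
                  for _ in range(3)])[..., None]

ref, var = encode_demonstrations(demos)
print(ref.shape)  # (50, 1)
```

    Richer models such as Gaussian mixture regression additionally condition on time or task variables; the per-timestep mean above is the simplest special case.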

    Accelerating Surgical Robotics Research: A Review of 10 Years With the da Vinci Research Kit

    Get PDF
    Robotic-assisted surgery is now well-established in clinical practice and has become the gold-standard clinical treatment option for several clinical indications. The field of robotic-assisted surgery is expected to grow substantially in the next decade, with a range of new robotic devices emerging to address unmet clinical needs across different specialities. A vibrant surgical robotics research community is pivotal for conceptualizing such new systems as well as for developing and training the engineers and scientists to translate them into practice. The da Vinci Research Kit (dVRK), an academic and industry collaborative effort to re-purpose decommissioned da Vinci surgical systems (Intuitive Surgical Inc, CA, USA) as a research platform for surgical robotics research, has been a key initiative for lowering the barrier to entry for new research groups in surgical robotics. In this paper, we present an extensive review of the publications that have been facilitated by the dVRK over the past decade. We classify research efforts into different categories and outline some of the major challenges and needs for the robotics community to maintain this initiative and build upon it.

    Real-time surgical tool tracking and pose estimation using a hybrid cylindrical marker.

    Get PDF
    PURPOSE: To provide an integrated visualisation of intraoperative ultrasound and endoscopic images for intraoperative guidance, real-time tracking of the ultrasound probe is required. State-of-the-art methods are suitable for planar targets, whereas most laparoscopic ultrasound probes are cylindrical objects. A tracking framework for cylindrical objects with a large workspace will improve the usability of intraoperative ultrasound guidance. METHODS: A hybrid marker design that combines circular dots and chessboard vertices is proposed to facilitate the tracking of cylindrical tools. The circular dots placed over the curved surface are used for pose estimation. The chessboard vertices provide additional information for resolving the pose ambiguity that arises from using planar model points under a monocular camera. Furthermore, temporal information between consecutive images is used to minimise tracking failures while maintaining real-time computational performance. RESULTS: Detailed validation confirms that our hybrid marker provides a large workspace for different tool sizes (6-14 mm in diameter). The tracking framework allows translational movements between 40 and 185 mm along the depth direction and rotational motion around three local orthogonal axes up to [Formula: see text]. Comparative studies with the current state of the art confirm that our approach outperforms existing methods, providing nearly 100% detection rates and accurate pose estimation with mean errors of 2.8 mm and 0.72 [Formula: see text]. The tracking algorithm runs at 20 frames per second for [Formula: see text] image resolution videos. CONCLUSION: Experiments show that the proposed hybrid marker can be applied to a wide range of surgical tools with superior detection rates and pose estimation accuracies. Both the qualitative and quantitative results demonstrate that our framework can be used not only for assisting intraoperative ultrasound guidance but also for tracking general surgical tools in minimally invasive surgery (MIS).
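    Planar model points under a monocular camera yield two geometrically plausible pose candidates; the paper resolves this with chessboard vertices, but a common generic tactic is to keep the candidate with the lower reprojection error. A minimal sketch with a pinhole camera model (synthetic intrinsics and poses, not the paper's actual setup):

```python
import numpy as np

def project(K, R, t, pts3d):
    """Project 3-D points with a pinhole camera (intrinsics K, pose [R|t])."""
    cam = R @ pts3d.T + t[:, None]   # points in camera frame, shape (3, N)
    uv = K @ cam                     # homogeneous image coordinates
    return (uv[:2] / uv[2]).T        # pixel coordinates, shape (N, 2)

def pick_pose(K, candidates, pts3d, observed_uv):
    """Return the (R, t) candidate with the smallest mean reprojection error."""
    errs = [np.linalg.norm(project(K, R, t, pts3d) - observed_uv, axis=1).mean()
            for R, t in candidates]
    best = int(np.argmin(errs))
    return candidates[best], errs[best]

K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
pts3d = np.array([[0.0, 0, 0], [0.1, 0, 0], [0.1, 0.1, 0], [0, 0.1, 0]])  # planar marker
R_true, t_true = np.eye(3), np.array([0.0, 0.0, 0.5])
c, s = np.cos(0.3), np.sin(0.3)
R_flip = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])  # spurious second candidate

observed = project(K, R_true, t_true, pts3d)
(R_best, t_best), err = pick_pose(K, [(R_flip, t_true), (R_true, t_true)], pts3d, observed)
```

    With noise-free observations the correct candidate scores (near-)zero error; in practice the extra chessboard vertices, as in the paper, make the decision robust when both candidates reproject similarly well.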

    Applied Deep Learning in Orthopaedics

    Get PDF
    The reemergence of deep learning in recent years has led to its successful application in a wide variety of fields. As a subfield of machine learning, deep learning offers an array of powerful algorithms for data-driven applications, and orthopaedics stands to benefit from its potential. This thesis investigated applications of deep learning in orthopaedics through three distinct projects. First, algorithms were developed for the automatic segmentation of the structures of the knee from MRI. The resulting algorithms can accurately segment full MRI scans in a matter of seconds. Structures reconstructed from predicted segmentation maps yielded, on average, submillimeter geometric errors when compared to geometries from ground-truth segmentation maps on a test set. The resulting frameworks can further be applied to develop algorithms for automatic segmentation of other anatomies and modalities in the future. Next, neural networks (NNs) were developed and evaluated for the prediction of muscle and joint reaction forces of patients performing activities of daily living (ADLs) in a gait lab environment. The performance of these models demonstrates the potential of NNs to supplement traditional gait lab data collection, with implications for new gait lab workflows with reduced hardware and time requirements. Additionally, the models performed activity classification using standard gait lab data with near-perfect accuracy. Lastly, a deep learning-based computer vision system was developed for the detection and 6-degree-of-freedom (6-DoF) pose estimation of two surgical tracking tools routinely used in total knee replacement (TKR). The resulting model demonstrated competitive object detection capabilities and translation errors of only a few centimeters for the pose estimation task. A preliminary evaluation of the system shows promise for its application in skill assessment and operations research. The development of these three projects represents a significant step towards the adoption of deep learning methodologies in orthopaedics and shows potential for additional future applications.
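    The segmentation project above is evaluated against ground-truth segmentation maps. A standard overlap metric for such comparisons (a general illustration, not necessarily the metric used in the thesis, which reports geometric errors) is the Dice coefficient:

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred, bool)
    truth = np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

# Two hypothetical 8x8 masks: each covers 16 pixels, overlapping in 9
pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True
truth = np.zeros((8, 8), bool); truth[3:7, 3:7] = True
print(round(dice(pred, truth), 4))  # 0.5625
```

    A Dice score of 1.0 means perfect overlap; surface-distance metrics like those in the thesis complement it by measuring boundary error in physical units.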

    Multimodal optical systems for clinical oncology

    Get PDF
    This thesis presents three multimodal optical (light-based) systems designed to improve the capabilities of existing optical modalities for cancer diagnostics and theranostics. Optical diagnostic and therapeutic modalities have seen tremendous success in improving the detection, monitoring, and treatment of cancer. For example, optical spectroscopies can accurately distinguish between healthy and diseased tissues, fluorescence imaging can light up tumours for surgical guidance, and laser systems can treat many epithelial cancers. However, despite these advances, prognoses for many cancers remain poor, positive margin rates following resection remain high, and visual inspection and palpation remain crucial for tumour detection. The synergistic combination of multiple optical modalities, as presented here, offers a promising solution. The first multimodal optical system (Chapter 3) combines Raman spectroscopic diagnostics with photodynamic therapy using a custom-built multimodal optical probe. Crucially, this system demonstrates the feasibility of nanoparticle-free theranostics, which could simplify the clinical translation of cancer theranostic systems without sacrificing diagnostic or therapeutic benefit. The second system (Chapter 4) applies computer vision to Raman spectroscopic diagnostics to achieve spatial spectroscopic diagnostics. It provides an augmented reality display of the surgical field-of-view, overlaying spatially co-registered spectroscopic diagnoses onto imaging data. This enables the translation of Raman spectroscopy from a 1D technique to a 2D diagnostic modality and overcomes the trade-off between diagnostic accuracy and field-of-view that has limited optical systems to date. The final system (Chapter 5) integrates fluorescence imaging and Raman spectroscopy for fluorescence-guided spatial spectroscopic diagnostics. 
This facilitates macroscopic tumour identification to guide accurate spectroscopic margin delineation, enabling the spectroscopic examination of suspicious lesions across large tissue areas. Together, these multimodal optical systems demonstrate that the integration of multiple optical modalities has the potential to improve patient outcomes through enhanced tumour detection and precision-targeted therapies.

    Smart Camera Robotic Assistant for Laparoscopic Surgery

    Get PDF
    In the last decades, laparoscopic surgery has become daily practice in operating rooms worldwide, and its evolution is tending towards less invasive techniques. In this scenario, robotics has found a wide field of application, from slave robotic systems that replicate the movements of the surgeon to autonomous robots able to assist the surgeon in certain maneuvers or to perform autonomous surgical tasks. However, these systems require the direct supervision of the surgeon, and their capacity to make decisions and adapt to dynamic environments is very limited. This PhD dissertation presents the design and implementation of a smart camera robotic assistant to collaborate with the surgeon in a real surgical environment. First, it presents the design of a novel camera robotic assistant able to augment the capabilities of current vision systems. This robotic assistant is based on an intra-abdominal camera robot, which is completely inserted into the patient's abdomen and can be moved freely along the abdominal cavity by means of magnetic interaction with an external magnet. To provide the camera with autonomy of motion, the external magnet is coupled to the end effector of a robotic arm, which controls the shift of the camera robot along the abdominal wall. This way, the robotic assistant proposed in this dissertation has six degrees of freedom, which provide a wider field of view than traditional vision systems as well as different perspectives of the operating area. On the other hand, the intelligence of the system is based on a cognitive architecture specially designed for autonomous collaboration with the surgeon in real surgical environments. The proposed architecture simulates the behavior of a human assistant, with a natural and intuitive human-robot interface for communication between the robot and the surgeon. The cognitive architecture also includes learning mechanisms to adapt the behavior of the robot to the different ways of working of surgeons, and to improve the robot's behavior through experience, in a similar way as a human assistant would. The theoretical concepts of this dissertation have been validated both through in-vitro experimentation in the medical robotics labs of the University of Malaga and through in-vivo experimentation with pigs at the IACE Center (Instituto Andaluz de Cirugía Experimental), performed by expert surgeons.

    Development of a Feedback System for Optimizing Laparoscopic Instrument Guidance through the Integration of Automated Image Classification

    Get PDF
    During laparoscopic procedures, accidental injuries to adjacent tissue structures can occur, especially when the working instrument is outside the field of view of the laparoscopic camera. The starting point of this work was the quantitative and qualitative investigation of the occurrence of these situations, referred to as adverse events (AE), during laparoscopic cholecystectomy in a realistic training setting. Furthermore, a functional prototype was developed to demonstrate the feasibility of a context-sensitive, audiovisual feedback system based on an automated binary classification of the underlying image data. The goal was to detect AE in real time during the procedure and report them to the surgical team. The evaluation took place in a randomized controlled study with 24 medical students (12 each in the intervention and control groups), each of whom performed four consecutive laparoscopic cholecystectomies in a standardized training environment. The intervention group used the feedback system. The primary endpoint was the incidence of AE. In total, 2,895 AE were registered in the overall population, with a median of 20.5 AE per procedure. The developed binary image classification application correctly assigned only 33.9% of these. The comparative evaluation of the intervention and control groups showed no statistically significant differences with respect to the primary endpoint. It is concluded that the developed classification and feedback system cannot influence the occurrence of AE. In principle, however, the high number of AE, combined with the sometimes severe consequences of iatrogenic injuries during laparoscopic procedures for affected patients known from the literature, points to the need for additional safety concepts. In this regard, the further developments initiated using artificial intelligence technologies can be considered promising.
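    The study above reports a median AE count per procedure and a binary classification accuracy. As a minimal sketch of how such counts map to the reported metrics (with hypothetical per-procedure counts and frame labels, not the study's actual data):

```python
from statistics import median

# Hypothetical per-procedure adverse-event (AE) counts, illustration only;
# the study itself reports 2,895 AE in total and a median of 20.5 per procedure.
ae_counts = [18, 25, 20, 21, 30, 15, 22, 19]
print(sum(ae_counts), median(ae_counts))  # 170 20.5

def accuracy(pred, truth):
    """Fraction of frames for which the binary classifier agrees with the label."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

# Hypothetical binary frame labels (1 = AE visible, 0 = no AE)
pred  = [1, 0, 1, 1, 0, 0]
truth = [1, 1, 0, 1, 0, 1]
print(accuracy(pred, truth))  # 0.5
```

    The study's 33.9% figure shows the classifier performing well below the level needed for useful real-time feedback, which is consistent with the null result on the primary endpoint.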