Recent advances in robot-assisted echography: Combining perception, control and cognition
Echography imaging is an important technique frequently used in medical diagnostics due to its low cost, non-ionising nature, and practical convenience. Owing to the shortage of skilled sonographers and the strain injuries physicians sustain from scanning many patients, robot-assisted echography (RAE) systems have attracted growing attention in recent decades. This study presents a thorough review of recent research advances in the perception, control, and cognition techniques used in RAE systems. The survey introduces representative system structures, applications, projects, and products. It summarises the challenges and key technological issues faced by traditional RAE systems and how current artificial intelligence and collaborative robots (cobots) attempt to overcome them. Finally, the study identifies significant future research directions in this field: cognitive computing, operational skill transfer, and commercially feasible system design.
Robotic Ultrasound Imaging: State-of-the-Art and Future Perspectives
Ultrasound (US) is one of the most widely used modalities for clinical
intervention and diagnosis due to the merits of providing non-invasive,
radiation-free, and real-time images. However, free-hand US examinations are
highly operator-dependent. Robotic US Systems (RUSS) aim to overcome this
shortcoming by offering reproducibility, while also improving dexterity and
enabling intelligent, anatomy- and disease-aware imaging. In addition to
enhancing diagnostic outcomes, RUSS also holds the potential to provide medical
interventions for populations suffering from the shortage of experienced
sonographers. In this paper, we categorize RUSS as teleoperated or autonomous.
Regarding teleoperated RUSS, we summarize their technical developments and
clinical evaluations. This survey then focuses on the review of
recent work on autonomous robotic US imaging. We demonstrate that machine
learning and artificial intelligence are the key techniques enabling
intelligent, patient- and process-specific, motion- and deformation-aware
robotic image acquisition. We also show that the research on artificial intelligence
for autonomous RUSS has directed the research community toward understanding
and modeling expert sonographers' semantic reasoning and action. Here, we call
this process the recovery of the "language of sonography". This side result of
research on autonomous robotic US acquisitions could be considered as valuable
and essential as the progress made in the robotic US examination itself. This
article will provide both engineers and clinicians with a comprehensive
understanding of RUSS by surveying underlying techniques.
Comment: Accepted by Medical Image Analysis.
Intelligent Robotic Sonographer: Mutual Information-based Disentangled Reward Learning from Few Demonstrations
Ultrasound (US) imaging is widely used for biometric measurement and
diagnosis of internal organs due to the advantages of being real-time and
radiation-free. However, the resulting images depend heavily on the operator's
experience, leading to high inter-operator variability. In this work, an
intelligent robotic sonographer is proposed to autonomously "explore" target
anatomies and navigate a US probe to a relevant 2D plane by learning from
experts. The
underlying high-level physiological knowledge from experts is inferred by a
neural reward function, using a ranked pairwise image-comparison approach in a
self-supervised fashion. This process can be referred to as understanding the
"language of sonography". To generalize across inter-patient variations, a
network estimates mutual information to explicitly disentangle task-related
features from domain features in latent space.
In addition, a Gaussian distribution-based filter is developed to automatically
evaluate the quality of the expert's demonstrations and take it into account.
The robotic localization is carried out in a coarse-to-fine manner based on the
predicted reward associated with B-mode images. To demonstrate the performance of
the proposed approach, representative experiments for the "line" target and
"point" target are performed on a vascular phantom and two ex-vivo animal organ
phantoms (a chicken heart and a lamb kidney), respectively. The results
demonstrate that the proposed framework works robustly on different kinds of
known and unseen phantoms.
From teleoperation to autonomous robot-assisted microsurgery: A survey
Robot-assisted microsurgery (RAMS) has many benefits compared to traditional microsurgery. Microsurgical platforms with advanced control strategies, high-quality micro-imaging modalities, and micro-sensing systems are worth developing to further enhance the clinical outcomes of RAMS. Within only a few decades, microsurgical robotics has evolved into a rapidly developing research field attracting increasing attention worldwide. Despite the appreciated benefits, significant challenges remain to be solved. In this review paper, the emerging concepts and achievements of RAMS are presented. We introduce the development of RAMS from teleoperation toward autonomous systems, and highlight upcoming research opportunities that require joint efforts from both clinicians and engineers to pursue further outcomes for RAMS in the years to come.
Probabilistic Learning by Demonstration from Complete and Incomplete Data
In recent years we have observed a convergence of the fields of robotics and machine learning, initiated by technological advances bringing AI closer to the physical world. A prerequisite for successful applications, however, is the formulation of reliable and precise offline algorithms that require minimal tuning, fast and adaptive online algorithms, and effective ways of rectifying corrupt demonstrations. In this work we aim to address some of those challenges.
We begin by employing two offline algorithms for the purpose of Learning by Demonstration (LbD): a Bayesian non-parametric approach, able to infer the optimal model size without compromising the model's descriptive power, and a quantum-statistical extension to the mixture model, able to achieve high precision for a given model size. We explore the efficacy of these algorithms in several one- and multi-shot LbD applications, achieving very promising results in terms of speed and accuracy.
Acknowledging that more realistic robotic applications also require more adaptive algorithmic approaches, we then introduce an online learning algorithm for quantum mixtures based on online EM. The method exhibits high stability and precision, outperforming well-established online algorithms, as demonstrated on several regression benchmark datasets and a multi-shot trajectory LbD case study.
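As an illustration of the online-EM idea (sketched here for a classical 1-D Gaussian mixture, not the quantum mixtures studied in this work), the stepwise variant interleaves a single-point E-step with a running-average update of the sufficient statistics; all data and initial values below are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data stream: two well-separated 1-D Gaussian clusters.
stream = np.concatenate([rng.normal(-3.0, 0.5, 500), rng.normal(3.0, 0.5, 500)])
rng.shuffle(stream)

K = 2
pi = np.full(K, 1.0 / K)      # mixing weights
mu = np.array([-1.0, 1.0])    # deliberately rough initial means
var = np.ones(K)              # variances

# Running sufficient statistics for each component.
s0 = pi.copy()
s1 = pi * mu
s2 = pi * (var + mu**2)

for t, x in enumerate(stream, start=1):
    # E-step on the single new point: posterior responsibilities.
    ll = -0.5 * ((x - mu) ** 2 / var + np.log(2 * np.pi * var))
    r = pi * np.exp(ll - ll.max())   # subtract max for numerical safety
    r /= r.sum()
    # Stepwise (online EM) update of the sufficient statistics.
    eta = (t + 10) ** -0.6           # decaying step size; +10 damps early steps
    s0 = (1 - eta) * s0 + eta * r
    s1 = (1 - eta) * s1 + eta * r * x
    s2 = (1 - eta) * s2 + eta * r * x**2
    # M-step: re-estimate parameters from the running statistics.
    pi = s0 / s0.sum()
    mu = s1 / s0
    var = np.maximum(s2 / s0 - mu**2, 1e-3)

print("estimated means:", np.sort(mu))  # should approach -3 and 3
```

Each data point is touched once, which is what makes the scheme suitable for streaming demonstrations; the decaying step size trades early adaptivity for eventual stability.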
Finally, aiming to account for data corruption due to sensor failures or occlusions, we propose a model for automatically rectifying damaged sequences in an unsupervised manner. Our approach takes into account the sequential nature of the data, the redundancy manifested among repetitions of the same task, and the potential for knowledge transfer across different tasks. We have devised a temporal factor model, with each factor modelling a single basic pattern in time, the factors collectively forming a dictionary of fundamental trajectories shared across sequences. We have evaluated our method on a number of real-life datasets.
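A minimal sketch of the dictionary idea, under the simplifying assumptions that the temporal factors are fixed smooth basis functions rather than learned ones and that the corruption is a known missing segment: fit the factor coefficients on the observed samples, then reconstruct the gap from the dictionary.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 100
t = np.linspace(0.0, 1.0, T)

# Hypothetical shared dictionary of smooth temporal factors, standing in
# for the learned "fundamental trajectories".
D = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
              np.sin(4 * np.pi * t), t], axis=1)    # shape (T, 4)

# A new demonstration is a combination of the factors plus sensor noise,
# with a segment lost to occlusion.
coef_true = np.array([1.5, -0.7, 0.4, 2.0])
seq = D @ coef_true + 0.01 * rng.normal(size=T)
observed = np.ones(T, dtype=bool)
observed[40:60] = False                              # the damaged segment

# Rectification: fit factor coefficients on the observed samples only,
# then fill the gap from the dictionary.
coef, *_ = np.linalg.lstsq(D[observed], seq[observed], rcond=None)
rectified = seq.copy()
rectified[~observed] = D[~observed] @ coef

err = np.max(np.abs(rectified[~observed] - (D @ coef_true)[~observed]))
print(f"max reconstruction error in the gap: {err:.3f}")
```

The redundancy argument from the abstract is what makes this work: because the factors are shared across many repetitions and tasks, a short occluded segment is strongly constrained by the surviving samples of the same sequence.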
A Comprehensive Survey of the Tactile Internet: State of the Art and Research Directions
The Internet has made several giant leaps over the years, from a fixed to a
mobile Internet, then to the Internet of Things, and now to a Tactile Internet.
The Tactile Internet goes far beyond data, audio and video delivery over fixed
and mobile networks, and even beyond allowing communication and collaboration
among things. It is expected to enable haptic communication and allow skill set
delivery over networks. Some examples of potential applications are
tele-surgery, vehicle fleets, augmented reality and industrial process
automation. Several papers already cover many of the Tactile Internet-related
concepts and technologies, such as haptic codecs, applications, and supporting
technologies. However, none of them offers a comprehensive survey of the
Tactile Internet, including its architectures and algorithms. Furthermore, none
of them provides a systematic and critical review of the existing solutions. To
address these lacunae, we provide a comprehensive survey of the architectures
and algorithms proposed to date for the Tactile Internet. In addition, we
critically review them using a well-defined set of requirements and discuss
some of the lessons learned as well as the most promising research directions.
Robotic Assistant Systems for Otolaryngology-Head and Neck Surgery
Recently, there has been a significant movement in otolaryngology-head and neck surgery (OHNS) toward minimally invasive techniques, particularly those utilizing natural orifices. While these techniques can reduce the risk of complications of classic open approaches to the surgical site, such as scarring, infection, and damage to healthy tissue, significant challenges remain in both visualization and manipulation, including poor sensory feedback, reduced visibility, a limited working area, and decreased precision due to long instruments. This work presents two robotic assistance systems that help to overcome different aspects of these challenges.
The first is the Robotic Endo-Laryngeal Flexible (Robo-ELF) Scope, which assists surgeons in manipulating flexible endoscopes. Flexible endoscopes can provide superior visualization compared to microscopes or rigid endoscopes by allowing views not constrained by line-of-sight. However, they are seldom used in the operating room due to the difficulty of manually manipulating and stabilizing them precisely for long periods of time. The Robo-ELF Scope enables stable, precise robotic manipulation of flexible scopes and frees the surgeon's hands to operate bimanually. The Robo-ELF Scope has been demonstrated and evaluated in human cadavers and is moving toward a human subjects study.
The second is the Robotic Ear Nose and Throat Microsurgery System (REMS), which assists surgeons in manipulating rigid instruments and endoscopes. There are two main types of challenges involved in manipulating rigid instruments: reduced precision from hand tremor amplified by long instruments, and difficulty navigating through complex anatomy surrounded by sensitive structures. The REMS enables precise manipulation by allowing the surgeon to hold the surgical instrument while filtering unwanted movement such as hand tremor. The REMS also enables augmented navigation by calculating the position of the instrument with high accuracy and combining this information with registered preoperative imaging data to enforce virtual safety barriers around sensitive anatomy. The REMS has been demonstrated and evaluated in user studies with synthetic phantoms and human cadavers.