
    Robotic Ultrasound Imaging: State-of-the-Art and Future Perspectives

    Ultrasound (US) is one of the most widely used modalities for clinical intervention and diagnosis owing to its non-invasive, radiation-free, real-time imaging. However, free-hand US examinations are highly operator-dependent. Robotic US systems (RUSS) aim to overcome this shortcoming by offering reproducibility, while also improving dexterity and enabling intelligent, anatomy- and disease-aware imaging. Beyond enhancing diagnostic outcomes, RUSS also hold the potential to deliver medical interventions to populations suffering from a shortage of experienced sonographers. In this paper, we categorize RUSS as teleoperated or autonomous. For teleoperated RUSS, we summarize their technical developments and clinical evaluations. The survey then reviews recent work on autonomous robotic US imaging. We show that machine learning and artificial intelligence are the key techniques enabling intelligent, patient- and process-specific, motion- and deformation-aware robotic image acquisition. We also show that research on artificial intelligence for autonomous RUSS has directed the community toward understanding and modeling expert sonographers' semantic reasoning and actions; we call this process the recovery of the "language of sonography". This side result of research on autonomous robotic US acquisition could prove as valuable as the progress made in robotic US examination itself. The article provides both engineers and clinicians with a comprehensive understanding of RUSS by surveying its underlying techniques. (Comment: Accepted by Medical Image Analysis.)

    Kinematic optimization for the design of a collaborative robot end-effector for tele-echography

    Tele-examination based on robotic technologies is a promising way to address the worsening shortage of physicians. Echocardiography is among the examinations that would benefit most from robotic solutions. However, most state-of-the-art solutions rely on the development of bespoke robotic arms rather than exploiting commercial-off-the-shelf (COTS) arms to reduce costs and make such systems affordable. In this paper, we address this problem by studying the design of an end-effector for tele-echography to be mounted on two popular, low-cost collaborative robots, the Universal Robots UR5 and the Franka Emika Panda. For the UR5, we investigate the possibility of adding a seventh rotational degree of freedom. The design is obtained by kinematic optimization, with a manipulability measure as the objective function. The optimization domain includes the position of the patient with respect to the robot base and the pose of the end-effector frame. Constraints include full coverage of the examination area, the ability to orient the probe correctly, keeping the base of the robot far enough from the patient's head, and a suitable distance from singularities. The results show that adding a degree of freedom improves manipulability by 65% and that adding a custom-designed actuated joint is better than adopting a native seven-degree-of-freedom robot.
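A common choice for the manipulability objective in this kind of kinematic optimization is Yoshikawa's measure, w = sqrt(det(J Jᵀ)), which vanishes at singularities. A minimal sketch for an illustrative planar serial arm (the arm model and all link lengths are hypothetical, not the UR5/Panda kinematics used in the paper):

```python
import numpy as np

def planar_jacobian(q, link_lengths):
    """Positional Jacobian of a planar serial arm (illustrative model)."""
    n = len(q)
    J = np.zeros((2, n))
    cum = np.cumsum(q)  # absolute angle of each link
    for i in range(n):
        # Column i sums the contributions of links i..n-1
        J[0, i] = -np.sum(link_lengths[i:] * np.sin(cum[i:]))
        J[1, i] = np.sum(link_lengths[i:] * np.cos(cum[i:]))
    return J

def yoshikawa_manipulability(J):
    """w = sqrt(det(J J^T)); zero at singularities."""
    return np.sqrt(max(np.linalg.det(J @ J.T), 0.0))
```

A fully stretched-out arm is singular (w = 0), while a bent configuration yields w > 0; the optimizer in the paper searches patient placement and end-effector pose to keep such a measure high over the whole examination area.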

    Towards the development of safe, collaborative robotic freehand ultrasound

    The use of robotics in medicine is of growing importance for modern health services, as robotic systems can improve upon human task execution and thereby enhance the treatment ability of a healthcare provider. In the medical sector, ultrasound imaging is an inexpensive modality that avoids the ionizing radiation associated with CT and the high cost of MRI. Over the past two decades, considerable effort has been invested into freehand ultrasound robotics research and development. However, this research has focused on the feasibility of the application rather than on robotic fundamentals such as motion control, calibration, and contextual awareness. Instead, much of the work concentrates on custom-designed robots, ultrasound image generation and visual servoing, or teleoperation. Research on these topics often suffers from important limitations that impede use in an adaptable, scalable, real-world manner. In particular, while custom robots may be designed for a specific application, commercial collaborative robots are a more robust and economical solution. Various robotic ultrasound studies have shown the feasibility of basic force control, but rarely explore controller tuning in the context of patient safety and deformable skin in an unstructured environment. Moreover, many studies evaluate novel visual servoing approaches without considering the practicality of relying on external measurement devices for motion control. These studies neglect the importance of robot accuracy and calibration, which allow a system to safely navigate its environment while reducing the imaging errors associated with positioning. Hence, while the feasibility of robotic ultrasound has been the focal point of previous studies, there is a lack of attention to what occurs between system design and image output. 
This thesis addresses limitations of the current literature through three distinct contributions. Given the force-controlled nature of an ultrasound robot, the first contribution presents a closed-loop calibration approach using impedance control and low-cost equipment. Accuracy is a fundamental requirement for high-quality ultrasound image generation and targeting, especially when following a specified path along a patient or synthesizing 2D slices into a 3D ultrasound image. However, even though most industrial robots are inherently precise, they are not necessarily accurate. While robot calibration itself has been extensively studied, many approaches rely on expensive and highly delicate equipment. As demonstrated through an experimental study and validated with a laser tracker, the proposed method is comparable in quality to traditional laser-tracker calibration: the absolute accuracy of a collaborative robot was improved to a maximum error of 0.990mm, a 58.4% improvement over the nominal model. The second contribution explores collisions and contact events, which are a natural by-product of applications involving physical human-robot interaction (pHRI) in unstructured environments. Robot-assisted medical ultrasound is an example of a task where simply stopping the robot upon contact detection may not be an appropriate reaction strategy. The robot should instead be aware of the body contact location so it can properly plan force-controlled trajectories along the human body with the imaging probe. This is especially true for remote ultrasound systems, where safety and manipulability are important when operating a remote medical system through a communication network. A framework is proposed for robot contact classification using the built-in sensor data of a collaborative robot. Unlike previous studies, this classification does not discern between intended and unintended contact scenarios, but rather classifies what was involved in the contact event. The classifier can discern different ISO/TS 15066:2016-specific body areas along a human-model leg with 89.37% accuracy. Altogether, this contact distinction framework allows for more complex reaction strategies and tailored robot behaviour during pHRI. Lastly, given that the success of an ultrasound task depends on the capability of the robot system to handle pHRI, pure motion control is insufficient. Force control techniques are necessary to achieve effective and adaptable behaviour in the unstructured ultrasound environment while also ensuring safe pHRI. While force control does not require explicit knowledge of the environment, the control parameters must be tuned to achieve acceptable dynamic behaviour. The third contribution proposes a simple and effective online tuning framework for force-based robotic freehand ultrasound motion control. Within the context of medical ultrasound, different human body locations have different stiffnesses and require unique tunings. Through real-world experiments with a collaborative robot, the framework tuned motion control for optimal and safe trajectories along a human leg phantom. The optimization process successfully reduced the mean absolute error (MAE) of the motion contact force to 0.537N through the evolution of eight motion control parameters. Furthermore, contextual awareness through motion classification can offer a framework for pHRI optimization and safety through predictive motion behaviour, with a future goal of autonomous pHRI. As such, a classification pipeline trained on the tuning-process motion data was able to reliably classify the future force-tracking quality of a motion session with an accuracy of 91.82%.
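The tuning loop described in the third contribution amounts to: run a force-controlled motion, score it by the mean absolute force-tracking error (MAE), and let an optimizer adjust the control parameters. A minimal sketch with a one-dimensional velocity-based force controller pressing on a simulated spring-like skin (the environment stiffness, gains, and grid search are all hypothetical stand-ins for the thesis's evolutionary tuning of eight parameters):

```python
import numpy as np

def simulate_contact(kp, ki, k_env=500.0, f_ref=5.0, dt=0.002, steps=2000):
    """1-D force tracking: PI law produces a velocity command,
    contact force comes from a spring-like environment."""
    x, integ = 0.0, 0.0
    errors = []
    for _ in range(steps):
        f = k_env * max(x, 0.0)        # contact force from skin deflection
        e = f_ref - f                  # force-tracking error
        integ += e * dt
        v = kp * e + ki * integ        # velocity command
        x += v * dt
        errors.append(abs(e))
    return np.mean(errors)             # MAE, the tuning objective

def grid_tune():
    """Crude stand-in for the evolutionary parameter tuning."""
    return min((simulate_contact(kp, ki), kp, ki)
               for kp in (0.0005, 0.001, 0.002)
               for ki in (0.001, 0.01, 0.05))
```

The returned MAE plays the role of the 0.537N objective reported above; a real implementation would evaluate each candidate tuning on the physical robot rather than a spring model.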

    Toward Fully Automated Robotic Platform for Remote Auscultation

    Since most developed countries face an increasing number of patients per healthcare worker due to declining birth rates and aging populations, relatively simple and safe diagnostic tasks may need to be performed with robotics and automation technologies, without specialists or hospitals. This study presents an automated robotic platform for remote auscultation, a highly cost-effective screening tool for detecting abnormal clinical signs. The platform comprises a 6-degree-of-freedom cooperative robotic arm, a light detection and ranging (LiDAR) camera, and a spring-based mechanism holding an electronic stethoscope. It enables autonomous stethoscope positioning based on external body information acquired through LiDAR camera-based multi-way registration, and ensures safe, flexible contact by keeping the contact force within a set range through the passive mechanism. Our preliminary results confirm that the platform can estimate the landing positions required for cardiac examinations from the depth and landmark information of the body surface, and can handle the stethoscope while maintaining the contact force without relying on the push-in displacement of the robotic arm. (Comment: 8 pages, 11 figures.)

    Recent advances in robot-assisted echography: Combining perception, control and cognition

    Echography imaging is a technique frequently used in medical diagnostics due to its low cost, non-ionising character, and practical convenience. Owing to the shortage of skilled technicians and the repetitive strain injuries physicians sustain from scanning many patients, robot-assisted echography (RAE) systems have gained great attention in recent decades. This study presents a thorough review of recent research advances in the perception, control, and cognition techniques used in RAE systems. The survey introduces representative system structures, applications, projects, and products. Challenges and key technological issues faced by traditional RAE systems, and how current artificial intelligence and cobots attempt to overcome them, are summarised. Furthermore, the study identifies cognitive computing, operational skills transfer, and commercially feasible system design as significant future research directions in this field.

    Design and validation of a system for controlling a robot for 3D ultrasound scanning of the lower limbs

    Peripheral arterial disease (PAD) is a common circulatory problem characterised by arterial narrowing or stenosis, usually in the lower limbs (i.e., legs). Without sufficient blood supply, a PAD patient may suffer intermittent claudication or even require amputation. Given PAD's high prevalence yet low public awareness in its early stages, diagnosis is very important. Among the most common medical imaging technologies for PAD diagnosis, ultrasound has the advantages of low cost and absence of radiation. Traditional ultrasound scanning is performed by sonographers and causes musculoskeletal disorders in the operators. In addition, the data obtained from manual operation are unsuitable for the three-dimensional reconstruction of the artery needed for further study. Medical ultrasound robots relieve sonographers of routine lifting strain and provide accurate data for three-dimensional reconstruction. However, most existing medical ultrasound robots are designed for other purposes and are unsuited to PAD diagnosis in the lower limbs. In this study, we present a novel medical ultrasound robot designed for PAD diagnosis in the lower limbs. The robot platform and system setup are illustrated. Its forward and inverse kinematic models are solved by decomposing a complex parallel robot into several simple assemblies. Singularity issues and the workspace are also discussed. Robots must meet certain accuracy requirements to perform dedicated tasks, so our robot is calibrated by direct measurement with a laser tracker. The calibration method is easy to implement, requiring neither advanced calibration knowledge nor heavy computation. The result shows that, as an early prototype, the robot has noticeable manufacturing and assembly errors, and that the implemented calibration method greatly improves its accuracy. A force control design is essential when the robot needs to interact with an object or environment. Variable admittance controllers are implemented to adapt to the variable stiffness encountered in human-robot interaction. An intuitive application of passivity theory is proposed to ensure that the admittance model possesses the passivity property. Finally, experiments involving human interaction demonstrate the effectiveness of the proposed control design.
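An admittance law of the general form M v̇ + D(t) v = F_ext underlies such controllers; varying D(t) adapts the robot's compliance, and passivity is preserved by constraining how the parameters may vary. A minimal discrete-time sketch of one such step, using a conservative lower damping bound in place of the paper's passivity construction (all numbers and the velocity-based damping schedule are hypothetical):

```python
import numpy as np

def variable_admittance_step(v, f_ext, dt=0.002, m=2.0,
                             d_min=5.0, d_max=40.0, v_ref=0.05):
    """One step of M*dv/dt + D*v = f_ext with velocity-scheduled damping.

    Damping is lowered at higher speeds for responsiveness but never
    drops below d_min, a conservative bound kept so the admittance
    model stays dissipative (a crude stand-in for a passivity proof).
    """
    d = np.clip(d_max * (1.0 - abs(v) / v_ref), d_min, d_max)
    dv = (f_ext - d * v) / m           # admittance dynamics
    return v + dv * dt, d
```

With the robot at rest, an external force of 10N yields a small commanded velocity at maximum damping; as speed builds, damping falls toward d_min and the robot feels lighter to the operator.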

    Design and integration of a parallel, soft robotic end-effector for extracorporeal ultrasound

    Objective: In this work we address limitations in state-of-the-art ultrasound robots by designing and integrating a novel soft robotic system for ultrasound imaging. It employs the inherent qualities of soft fluidic actuators to establish safe, adaptable interaction between ultrasound probe and patient. Methods: We acquire clinical data to determine the movement ranges and force levels required in prenatal foetal ultrasound imaging and design the soft robotic end-effector accordingly. We verify its mechanical characteristics, derive and validate a kinetostatic model, and demonstrate controllability and imaging capabilities on an ultrasound phantom. Results: The soft robot exhibits the desired stiffness characteristics; it reaches 100% of the required workspace when no external force is present and 95% of the workspace when its compliance is considered. The model predicts the end-effector pose with a mean error of 1.18+/-0.29mm in position and 0.92+/-0.47deg in orientation. The derived controller tracks a target pose efficiently, with an average position error of 0.39mm, both with and without externally applied loads. Ultrasound images acquired with the system are of quality equal to a manual sonographer scan. Conclusion: The system withstands loads commonly applied during foetal ultrasound scans and remains controllable with a motion range similar to manual scanning. Significance: The proposed soft robot presents a safe, cost-effective solution to offloading sonographers in day-to-day scanning routines. The design and modelling paradigms are highly generalizable and particularly suitable for designing soft robots for physical interaction tasks.

    Robotic-assisted approaches for image-controlled ultrasound procedures

    Integrated master's thesis, Biomedical Engineering and Biophysics (Clinical Engineering and Medical Instrumentation), Universidade de Lisboa, Faculdade de Ciências, 2019. Ultrasound (US) imaging is currently one of the most widely deployed imaging modalities in medicine, for several reasons. Compared with modalities such as computed tomography (CT) and magnetic resonance imaging (MRI), its combination of portability and low cost with real-time acquisition gives it enormous flexibility of application, from routine diagnostics in gynaecology and obstetrics to high-precision tasks such as image-guided surgery or brachytherapy in oncology. Unlike its counterparts, however, owing to the physical principle from which the images derive, image quality depends heavily on the operator's dexterity in placing and orienting the US probe over the correct region of interest (ROI), and on their ability to interpret the acquired images and spatially locate structures in the patient's body. To make diagnostic procedures less error-prone and image-guided procedures more precise, this imaging modality is increasingly coupled with a robotic approach whose control is based on the acquired image. This enables semi-autonomous, fully autonomous, or user-cooperative diagnostic and therapeutic systems, a task that draws on multiple fields, including computer vision, image processing, and control theory. 
    In such approaches the US probe acts as a camera into the patient's body, and control is based on parameters such as the spatial information of a target structure present in the acquired image. This information, extracted through several image-processing stages, is used as feedback in the control loop of the robotic system. Spatial-information extraction and control must be as autonomous and fast as possible, so as to yield a system capable of acting in situations that require real-time response. The goal of this project was therefore to develop, implement, and validate, in MATLAB, the foundations of a semi-autonomous, image-based control approach for a robotic US system, enabling the tracking of target structures and the automation of general diagnostic procedures with this imaging modality. To this end, a semi-autonomous program was implemented that tracks contours in US images and outputs their position and orientation in the image. The program was designed to be compatible with a real-time approach using a SONOSITE TITAN acquisition system, whose frame rate is 25 fps. It relies heavily on computer-vision concepts such as moment computation and active contours, the latter being the main engine of the tracking tool. Broadly, the program can be described as a contour-tracking implementation based on active contours. Such contours benefit from an underlying physical model that lets them be attracted to, and converge on, particular image features, such as lines, edges, corners, or specific regions, through the minimisation of an energy functional defined over their boundary. 
    To simplify and speed up the implementation, this dynamic model parameterises the contours with harmonic functions, so the system variables are Fourier descriptors. Because it rests on a least-energy principle, the system fits the Euler-Lagrange formulation of physical systems, from which systems of differential equations describing the evolution of a contour over time can be derived. This evolution depends not only on the contour's internal energy, due to tension and cohesion forces between points, but also on external forces that guide it across the image. These external forces are chosen according to the contour's purpose and are generally derived from image information such as intensities, gradients, and higher-order derivatives. Finally, the system is implemented with an explicit Euler method, which discretises the system and yields an iterative expression for its evolution from a previous state to a future one, accounting for the external effects of the image. After implementation, the performance of the semi-automatic tracking program was validated along two lines: the robustness of contour tracking when coupled to a US probe, and the temporal efficiency of the program and its compatibility with real-time acquisition systems. Before validation, the acquisition system was first spatially calibrated in a simple manner, using an acrylic N-wire phantom capable of producing recognisable patterns in the ultrasound image. Vertical, horizontal, and diagonal patterns were used for calibration, of which the first two were found to produce better estimates of the real inter-pixel spacing of the US image. 
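The explicit-Euler evolution of an active contour can be sketched in a few lines. This point-based version uses second- and fourth-difference internal forces rather than the thesis's Fourier-descriptor parameterisation, and all coefficients are illustrative:

```python
import numpy as np

def snake_step(pts, image_force, alpha=0.1, beta=0.05, dt=0.1):
    """One explicit-Euler step of a closed active contour.

    Internal forces: tension (alpha, second difference) and rigidity
    (beta, fourth difference); the external force is sampled per
    contour point from the image (here an arbitrary callable).
    """
    d2 = np.roll(pts, -1, 0) - 2 * pts + np.roll(pts, 1, 0)
    d4 = (np.roll(pts, -2, 0) - 4 * np.roll(pts, -1, 0) + 6 * pts
          - 4 * np.roll(pts, 1, 0) + np.roll(pts, 2, 0))
    internal = alpha * d2 - beta * d4
    return pts + dt * (internal + image_force(pts))
```

With no external force, tension makes a circular contour shrink slightly at each step; in tracking, the image-derived force balances this and pins the contour to the target boundary.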
    Finally, the program's robustness was tested using 5% (w/w) agar-agar phantoms embedded with hypoechoic structures, simulated by water-filled balloons, built specifically for this purpose. With this setup the program demonstrated satisfactory stability and robustness under various in-plane translations and rotations of the US probe, and also showed promising response to the elongation of structures caused by out-of-plane probe motion. The program's temporal performance was validated standalone, using videos acquired in the previous phase and active-contour models of different levels of detail. The computation time of the algorithm on each video frame was measured and averaged. The result is within the expected range, easily compatible with the current probe setup, whose acquisition rate is 25 fps, and reaching 40-50 fps standalone. Despite promising temporal performance and robustness, the approach still has unsolved limitations: support for simultaneous tracking of multiple contours for more complex target structures; detection and handling of topological events such as contour merging, splitting, and self-intersection; automatic adaptation of the parameters of the equation system to different image noise levels; and the specificity of the image potentials needed for the approach to converge on image regions encoding specific tissue types. 
    Although it could still benefit from improvements, the project achieved its goal, providing an efficient and robust contour-tracking implementation and laying the groundwork on which a fully autonomous US diagnostic system can later be built. It also demonstrated the usefulness of an active-contour approach for building tracking algorithms that are robust to the motion of target structures in the image and compatible with real-time approaches. Ultrasound (US) systems are very popular in the medical field for several reasons. Compared to other imaging techniques such as CT or MRI, the combination of low-priced and portable hardware with real-time image acquisition enables great flexibility regarding medical applications, from simple diagnostic tasks to high-precision ones, including those with robotic assistance. Unlike other techniques, image quality and procedure accuracy are highly dependent on user skill in positioning and orienting the ultrasound probe around a region of interest (ROI) for inspection. To make diagnostics less error-prone and guided procedures more precise, and consequently safer, the US approach can be coupled to a robotic system. The probe acts as a camera to the patient's body, and relevant imaging information can be used to control a robotic arm, enabling the creation of semi-autonomous, cooperative, and possibly fully autonomous diagnostics and therapeutics. In this project our aim is to develop a semi-autonomous tool for tracking defined structures of interest within US images, one that outputs meaningful spatial information of a target structure (location of the centre of mass [CM], main orientation, and elongation). Such a tool must meet real-time requirements for future use in autonomous image-guided robotic systems. 
    To this end, the concepts of moment-based visual servoing and active contours are fundamental. Active contours possess an underlying physical model that allows deformation according to image information, such as edges, image regions, and specific image features. Additionally, the mathematical framework of vision-based control lets us establish what information is necessary for controlling a future autonomous system and how that information can be transformed to specify a desired task. Once implemented in MATLAB, the tracking and temporal performance of the approach were tested on custom-built agar-agar phantoms embedded with water-filled balloons, demonstrating stability, robustness to translational and rotational probe motion, and promising capability in responding to target-structure deformations. The temporal performance of the developed framework is also within the expected levels, being compatible with a 25 frames-per-second acquisition setup and handling 50 fps as a standalone tool. Thus, this work lays the foundation for US-guided procedures compatible with real-time approaches on moving and deforming targets.
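The spatial descriptors named above (centre of mass, main orientation, elongation) are exactly what image moments provide. A minimal sketch for a binary mask of the tracked structure (a simplified stand-in for the moment computations described in the thesis):

```python
import numpy as np

def shape_descriptors(mask):
    """Centre of mass, orientation and elongation from image moments."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()          # centre of mass (m10/m00, m01/m00)
    # Normalised central second-order moments
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    # Orientation of the principal axis
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    # Elongation from the eigenvalues of the covariance matrix
    common = np.sqrt(4 * mu11 ** 2 + (mu20 - mu02) ** 2)
    lam1 = (mu20 + mu02 + common) / 2
    lam2 = (mu20 + mu02 - common) / 2
    elong = np.sqrt(lam1 / lam2) if lam2 > 0 else np.inf
    return (cx, cy), theta, elong
```

In moment-based visual servoing, the error between these descriptors and their desired values drives the robot's velocity command.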

    CO-ROBOTIC ULTRASOUND IMAGING: A COOPERATIVE FORCE CONTROL APPROACH

    Ultrasound (US) imaging remains one of the most commonly used imaging modalities in medical practice due to its low cost and safety. However, 63-91% of ultrasonographers develop musculoskeletal disorders from the effort required to perform imaging tasks. Robotic ultrasound (RUS), the application of robotic systems to assist ultrasonographers in scanning procedures, has been proposed in the literature and recently deployed in clinical settings using limited degree-of-freedom (DOF) systems. One example is breast-scanning systems, which allow one-DOF translation of a large ultrasound array to capture breast scans while minimizing sonographer effort and preserving the desired clinical outcome. Recently, the robotics industry has evolved to provide lightweight, compact, accurate, and cost-effective manipulators. We leverage this new reality to provide ultrasonographers with a full 6-DOF system that offers force assistance to facilitate US image acquisition. Admittance control allows smooth human-machine interaction in a desired task; in RUS, force control can assist sonographers in facilitating, and even improving, the imaging results of typical procedures. We propose a new system setup for collaborative force control in US applications. It consists of the 6-DOF UR5 industrial robot and a 6-axis force sensor attached to the robot tooltip, which in turn carries a US probe through a custom-designed attachment mechanism. Additionally, an independent one-axis load cell inside this attachment measures the contact force between the probe and the patient's anatomy in real time, independently of any other forces. As the sonographer guides the US probe, the robot collaborates with the hand motions, following the path of the user. When imaging, the robot can assist the sonographer by augmenting the applied forces, lessening both the physical effort required and the resulting strain. Additional benefits include force and velocity limiting for patient safety, and robot motion constraints for particular imaging tasks. Initial results of a user study show the feasibility of implementing the presented robot-assisted system in a clinical setting.
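The cooperative behaviour described here reduces, per axis, to an admittance map from the sonographer's hand force (measured at the tooltip sensor) to a robot velocity command, wrapped in safety saturation using the independent contact-force reading. A minimal 1-DOF sketch (the gain, force limit, and velocity limit are hypothetical, not the system's actual settings):

```python
def assist_velocity(f_hand, f_contact, f_contact_max=15.0,
                    gain=0.02, v_max=0.05):
    """Map hand force [N] to a velocity command [m/s] with safety limits."""
    if abs(f_contact) >= f_contact_max:
        return 0.0                      # stop: patient contact-force limit hit
    v = gain * f_hand                   # admittance: compliant following
    return max(-v_max, min(v, v_max))   # velocity limit for patient safety
```

A light 1N push yields a gentle 2cm/s motion, a hard push saturates at v_max, and any excessive probe-patient contact force halts motion regardless of the hand input.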