64 research outputs found

    3D laser scanner for underwater manipulation

Nowadays, research in autonomous underwater manipulation has demonstrated only simple applications, such as picking an object from the sea floor, turning a valve, or plugging and unplugging a connector. These are fairly simple tasks compared with those already demonstrated by the mobile robotics community, which include, among others, safe arm motion within areas populated with a priori unknown obstacles and the recognition and localization of objects from their 3D models in order to grasp them. Kinect-like 3D sensors have contributed significantly to the advance of mobile manipulation by providing 3D sensing capabilities in real time at low cost. Unfortunately, the underwater robotics community lacks a 3D sensor with similar capabilities to provide rich 3D information of the workspace. In this paper, we present a new underwater 3D laser scanner and demonstrate its capabilities for underwater manipulation. In order to use this sensor in conjunction with manipulators, a calibration method to find the relative position between the manipulator and the 3D laser scanner is presented. Then, two advanced underwater manipulation tasks beyond the state of the art are demonstrated using two different manipulation systems. First, an eight degrees-of-freedom (DoF) fixed-base manipulator system is used to demonstrate arm motion within a workspace populated with a priori unknown fixed obstacles. Next, an eight-DoF free-floating Underwater Vehicle-Manipulator System (UVMS) is used to autonomously grasp an object from the bottom of a water tank.
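The manipulator-to-scanner calibration mentioned above amounts to estimating a rigid transform between two frames. As a minimal illustration (not the authors' method), given corresponding 3D points expressed in both the scanner frame and the manipulator base frame, the transform can be recovered with an SVD-based least-squares alignment:

```python
import numpy as np

def rigid_transform(P_scanner, P_robot):
    """Estimate R, t such that P_robot ~ R @ P_scanner + t.

    P_scanner, P_robot: (N, 3) arrays of corresponding 3D points
    (e.g., a calibration target touched by the arm and seen by the
    scanner). SVD-based least-squares alignment (Kabsch/Horn).
    """
    c_s = P_scanner.mean(axis=0)                 # centroids
    c_r = P_robot.mean(axis=0)
    H = (P_scanner - c_s).T @ (P_robot - c_r)    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # correction term guards against a reflection solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_r - R @ c_s
    return R, t
```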

    Design and Development of Aerial Robotic Systems for Sampling Operations in Industrial Environment

This chapter describes the development of an autonomous fluid sampling system for outdoor facilities, together with the localization solution to be used. The automated sampling system will be based on collaborative robotics, with a team of a UAV and a UGV platform travelling through a plant to collect water samples. The architecture of the system is described, as well as the hardware present in the UAV and the different software frameworks used. A visual simultaneous localization and mapping (SLAM) technique is proposed to deal with the localization problem, based on the authors' previous works and including several innovations: a new method to initialize the scale using unreliable global positioning system (GPS) measurements, integration of attitude and heading reference system (AHRS) measurements into the recursive state estimation, and a new technique to track features during the delayed feature initialization process. These procedures greatly enhance the robustness and usability of the SLAM technique, as they remove the requirement of assisted scale initialization and reduce the computational effort needed to initialize features. To conclude, results from experiments performed with simulated data and real data captured with a prototype UAV are presented and discussed.
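Monocular SLAM recovers the trajectory only up to an unknown scale; the scale initialization mentioned above can be illustrated by comparing displacement magnitudes of the SLAM trajectory against noisy GPS displacements. A minimal sketch, not the chapter's actual estimator; inputs are assumed time-aligned:

```python
import numpy as np

def estimate_scale(slam_positions, gps_positions):
    """Least-squares scale between an up-to-scale monocular SLAM
    trajectory and noisy GPS fixes (both (N, 3), time-aligned).

    Uses displacement norms between consecutive fixes, so no
    rotational alignment between the two frames is needed, and
    minimizes sum (s * |d_slam_i| - |d_gps_i|)^2 over the scale s.
    """
    n_slam = np.linalg.norm(np.diff(slam_positions, axis=0), axis=1)
    n_gps = np.linalg.norm(np.diff(gps_positions, axis=0), axis=1)
    return float(n_slam @ n_gps) / float(n_slam @ n_slam)
```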

    A Hybrid Visual Control Scheme to Assist the Visually Impaired with Guided Reaching Tasks

In recent years, numerous researchers have been working towards adapting technology developed for robotic control to the creation of high-technology assistive devices for the visually impaired. Such devices have been shown to help visually impaired people live with a greater degree of confidence and independence. However, most prior work has focused primarily on a single problem from mobile robotics, namely navigation in an unknown environment. In this work we address the design and performance of an assistive device application to aid the visually impaired with a guided reaching task. The device follows an eye-in-hand, IBLM visual servoing configuration with a single camera and vibrotactile feedback to the user to direct guided tracking during the reaching task. We present a model for the system that employs a hybrid control scheme based on a Discrete Event System (DES) approach. This approach avoids significant problems inherent in the competing classical control or conventional visual servoing models for upper limb movement found in the literature. The proposed hybrid model parameterizes the partitioning of the image state space, producing a variable-size targeting window for compensatory tracking in the reaching task. The partitioning is created by positioning hypersurface boundaries within the state space which, when crossed, trigger events that cause DES-controller state transitions enabling differing control laws. A set of metrics encompassing accuracy (D), precision (θe), and overall tracking performance (ψ) is also proposed to quantify system performance, so that the effects of parameter variations and alternate controller configurations can be compared. To this end, a prototype called aiReach was constructed and experiments were conducted with participant volunteers, testing the functional use of the system and other supporting aspects of system behaviour. Results are presented validating the system design and demonstrating effective use of a two-parameter partitioning scheme that utilizes a targeting window with an additional hysteresis region to filter perturbations due to natural proprioceptive limitations, enabling precise control of upper limb movement. Results from the experiments show that accuracy performance increased with the dual-parameter hysteresis target window model (0.91 ≤ D ≤ 1, μ(D) = 0.9644, σ(D) = 0.0172) over the single-parameter fixed window model (0.82 ≤ D ≤ 0.98, μ(D) = 0.9205, σ(D) = 0.0297), while the precision metric θe remained relatively unchanged. In addition, the overall tracking performance metric produces scores which correctly rank the performance of the guided reaching tasks from most difficult to easiest.
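The dual-parameter hysteresis target window can be pictured as an inner acquisition radius and a larger release radius around the target. A minimal sketch of such a DES-style switching rule follows; the radii and state names are assumptions for illustration, not the thesis's implementation:

```python
# Minimal sketch of a two-parameter hysteresis targeting window for a
# DES-style controller (illustrative only; r_in < r_out are the two
# parameters).
class HysteresisWindow:
    def __init__(self, r_in, r_out):
        assert r_in < r_out
        self.r_in, self.r_out = r_in, r_out
        self.on_target = False  # current DES state

    def update(self, error_norm):
        """Feed the image-space error norm; returns the DES state.

        The state flips to on-target only inside the inner radius and
        back to tracking only outside the outer radius, so small
        tremor near one boundary cannot cause chatter between the
        two control laws.
        """
        if self.on_target and error_norm > self.r_out:
            self.on_target = False  # event: target lost -> coarse tracking law
        elif not self.on_target and error_norm < self.r_in:
            self.on_target = True   # event: target acquired -> fine/hold law
        return self.on_target
```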

    Robotic-assisted approaches for image-controlled ultrasound procedures

Integrated Master's thesis, Biomedical Engineering and Biophysics (Clinical Engineering and Medical Instrumentation), Universidade de Lisboa, Faculdade de Ciências, 2019.
Ultrasound (US) imaging is currently one of the most widely used imaging modalities in medicine, for several reasons. Compared with modalities such as computed tomography (CT) and magnetic resonance imaging (MRI), the combination of portability and low cost with real-time image acquisition gives it enormous flexibility in medical applications, from routine diagnostics in gynaecology and obstetrics to high-precision tasks such as image-guided surgery or brachytherapy in oncology. Unlike its counterparts, however, because of the physical principle from which the images derive, image quality depends heavily on the operator's dexterity in placing and orienting the US probe over the correct region of interest (ROI), and on the operator's ability to interpret the acquired images and spatially locate structures in the patient's body. To make diagnostic procedures less error-prone and image-guided procedures more precise, and consequently safer, it is increasingly common to couple this imaging modality with a robotic approach controlled from the acquired images. This allows the creation of semi-autonomous, fully autonomous, or user-cooperative diagnostic and therapeutic systems, a task that draws on several fields, including computer vision, image processing, and control theory. In such approaches the US probe acts as a camera into the patient's body, and the control process relies on parameters such as the spatial information of a target structure present in the acquired image. This information, extracted through several image processing stages, is used as feedback in the control loop of the robotic system. Spatial information extraction and control must be as autonomous and as fast as possible, so that the system can act in situations requiring real-time response.
The objective of this project was therefore to develop, implement, and validate, in MATLAB, the foundations of a semi-autonomous image-based control approach for a robotic US system, enabling the tracking of target structures and the automation of general diagnostic procedures with this imaging modality. To this end, a semi-autonomous program was implemented that tracks contours in US images and outputs meaningful spatial information about the target structure: the location of its centre of mass (CM), its main orientation, and its elongation. The program was designed to be compatible with a real-time setup using a SONOSITE TITAN acquisition system, whose image acquisition rate is 25 fps. It relies heavily on computer vision concepts such as moment-based visual servoing and active contours, the latter being the main engine of the tracking tool. Active contours benefit from an underlying physical model that allows them to be attracted to, and converge on, image features such as lines, edges, corners, or specific regions, by minimizing an energy functional defined over the contour. To simplify and speed up the implementation, the dynamic model parameterizes the contours with harmonic functions, so the system variables are Fourier descriptors. Because it rests on a least-energy principle, the system fits the Euler-Lagrange formulation of physical systems, from which systems of differential equations describing the evolution of a contour over time can be derived. This evolution depends not only on the internal energy of the contour itself, through tension and cohesion forces between points, but also on external forces that guide it across the image. These external forces are chosen according to the purpose of the contour and are generally derived from image information such as intensities, gradients, and higher-order derivatives. Finally, the system is discretized with an explicit Euler method, yielding an iterative expression for the evolution from one state to the next that accounts for the external effects of the image.
After implementation, the performance of the semi-automatic tracking program was validated along two lines: the robustness of contour tracking when coupled to a US probe, and the temporal efficiency of the program and its compatibility with real-time acquisition systems. The acquisition system was first spatially calibrated in a simple manner, using an acrylic N-wire phantom capable of producing recognizable patterns in the ultrasound image. Vertical, horizontal, and diagonal patterns were used for calibration; the vertical and horizontal patterns produced the better estimates of the real pixel spacing of the US image. The robustness of the program was then tested on purpose-built 5% (w/w) agar-agar phantoms embedded with hypoechoic structures simulated by water-filled balloons. With this setup the program showed satisfactory stability and robustness under various in-plane translations and rotations of the US probe, and promising results in responding to the elongation of structures caused by out-of-plane probe motion. The temporal performance was evaluated with the program running standalone on videos acquired in the previous phase, for active contour models of different levels of detail. The average computation time per video frame was within the expected range, easily compatible with the current 25 fps probe setup and reaching standalone rates between 40 and 50 fps.
Despite its promising temporal performance and robustness, the approach still has unsolved limitations: support for the simultaneous tracking of multiple contours for more complex target structures; detection and resolution of topological events such as contour merging, splitting, and self-intersection; automatic adaptation of the parameters of the system of equations to different image noise levels; and the specificity of the image potentials, so that the approach converges on image regions encoding specific tissue types. Even though there is room for improvement, the project achieved its goal: an efficient and robust implementation of a contour-tracking program, laying the foundations for future work towards an autonomous US diagnostic system, and demonstrating the usefulness of active contours for building tracking algorithms that are robust to the motion of target structures in the image and compatible with real-time approaches. Thus, this work lays the groundwork for US-guided procedures compatible with real-time approaches on moving and deforming targets.
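The contour evolution described above reduces to an explicit Euler update of the contour under internal (tension and rigidity) forces plus image-derived forces. A minimal point-based sketch follows; the thesis parameterizes contours with Fourier descriptors instead, and `image_force` is a stand-in for the image potentials:

```python
import numpy as np

def snake_step(pts, image_force, alpha=0.1, beta=0.05, dt=0.5):
    """One explicit-Euler step of a closed active contour.

    pts:         (N, 2) contour points
    image_force: callable (N, 2) -> (N, 2), external force sampled at
                 the points (e.g., the gradient of an edge-strength map)
    alpha:       tension weight, beta: rigidity weight
    A point-based stand-in for a Fourier-descriptor formulation.
    """
    # discrete derivatives on a closed contour (circular differences)
    d2 = np.roll(pts, -1, axis=0) - 2 * pts + np.roll(pts, 1, axis=0)
    d4 = np.roll(d2, -1, axis=0) - 2 * d2 + np.roll(d2, 1, axis=0)
    internal = alpha * d2 - beta * d4   # Euler-Lagrange internal force
    return pts + dt * (internal + image_force(pts))
```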

    Object distance measurement using a single camera for robotic applications

Visual servoing is defined as controlling robots using data extracted from the vision system, such as the distance of an object with respect to a reference frame, or the length and width of the object. There are three image-based object distance measurement techniques: i) using two cameras, i.e., stereovision; ii) using a single camera, i.e., monovision; and iii) using a time-of-flight camera. The stereovision method uses two cameras to find the object's depth and is highly accurate. However, it is costly compared to the monovision technique, due to the higher computational burden and the cost of two cameras (rather than one) and related accessories. In addition, in stereovision a larger number of images of the object must be processed in real time, and as the distance of the object from the cameras increases, the measurement accuracy decreases. In the time-of-flight technique, distance information is obtained by measuring the total time for light to travel to and reflect from the object. Its shortcoming is that separating the incoming signal is difficult, since it depends on many parameters such as the intensity of the reflected light, the intensity of the background light, and the dynamic range of the sensor. However, for applications such as rescue robots or object manipulation in home and office environments, the high-accuracy distance measurement provided by stereovision is not required. Instead, the monovision approach is attractive for such applications due to: i) lower cost and lower computational burden; and ii) lower complexity due to the use of only one camera. Using a single camera for distance measurement, object detection, and feature extraction (i.e., finding the length and width of an object) is not yet well researched, and there are very few published works on the topic. Therefore, using this technique for real-world robotics applications requires more research and improvement. This thesis focuses on the development of object distance measurement and feature extraction algorithms, based on image processing techniques, for a single fixed camera and for a single camera with variable pitch angle. Accordingly, two improved object distance measurement algorithms are proposed: one for a camera fixed at a given angle in the vertical plane, and one for a camera rotating in a vertical plane. In the proposed algorithms, as a first step, the object distance and dimensions (length and width) are obtained using existing image processing techniques. Since the results are inaccurate due to lens distortion, noise, variable light intensity, and other uncertainties such as deviation of the object from the optical axis of the camera, in a second step the distance and dimensions obtained from the existing techniques are corrected in the X- and Y-directions, and for the orientation of the object about the Z-axis in the object plane, using experimental data and identification techniques such as the least squares method. Extensive experiments confirmed the gain in accuracy: the measurement error decreased from 9.4 mm to 2.95 mm for distance, from 11.6 mm to 2.2 mm for length, and from 18.6 mm to 10.8 mm for width, a significant improvement over the existing methods.
Furthermore, the improved distance measurement method is computationally efficient and can be used for real-time robotic tasks such as pick-and-place and object manipulation in a home or office environment.
Master's Thesis
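The second-step correction can be illustrated by a least squares fit mapping raw image-based estimates to experimentally measured ground truth. A minimal sketch assuming a linear correction model; the thesis's identified model may include more terms:

```python
import numpy as np

def fit_correction(d_raw, d_true):
    """Fit d_true ~ a * d_raw + b by least squares.

    d_raw:  raw distances from the image-processing pipeline
    d_true: ground-truth distances measured experimentally
    A linear model is assumed here for illustration; the same
    identification step can fit higher-order terms.
    """
    A = np.column_stack([d_raw, np.ones_like(d_raw)])
    (a, b), *_ = np.linalg.lstsq(A, d_true, rcond=None)
    return a, b

# usage: a, b = fit_correction(raw, truth); corrected = a * new_raw + b
```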

    Visuomotor Coordination in Reach-To-Grasp Tasks: From Humans to Humanoids and Vice Versa

Understanding the principles involved in visually based coordinated motor control is one of the most fundamental and most intriguing research problems across a number of areas, including psychology, neuroscience, computer vision, and robotics. Little is known about the computational functions that the central nervous system performs in order to meet the requirements of visually driven reaching and grasping. Additionally, in spite of several decades of advances in the field, the abilities of humanoids to perform similar tasks remain modest when they must operate in unstructured and dynamically changing environments. More specifically, our first focus is understanding the principles involved in human visuomotor coordination. Few behavioral studies have considered visuomotor coordination in natural, unrestricted, head-free movements in complex scenarios such as obstacle avoidance. To fill this gap, we provide an assessment of visuomotor coordination when humans perform prehensile tasks with obstacle avoidance, an issue that has received far less attention. Namely, we quantify the relationships between the gaze and arm-hand systems, so as to inform robotic models, and we investigate how the presence of an obstacle modulates this pattern of correlations. Second, to complement these observations, we provide a robotic model of visuomotor coordination, with and without the presence of obstacles in the workspace. The parameters of the controller are estimated solely from the motion capture data of our human study. This controller has a number of interesting properties: it provides an efficient way to control the gaze, arm, and hand movements in a stable and coordinated manner, and when facing perturbations while reaching and grasping it adapts its behavior almost instantly, while preserving coordination between the gaze, arm, and hand. In the third part of the thesis, we study the neuroscientific literature on primates. We stress the view that the cerebellum uses the cortical reference frame representation; taking this representation into account, the cerebellum performs closed-loop programming of multi-joint movements and movement synchronization between the eye-head system, arm, and hand. Based on this investigation, we propose a functional architecture of the cerebellar-cortical involvement, and we derive a number of improvements to our visuomotor controller for obstacle-free reaching and grasping. Because this model is devised by carefully taking into account the neuroscientific evidence, we are able to provide a number of testable predictions about the functions of the central nervous system in visuomotor coordination. Finally, we tackle the flow of visuomotor coordination in the direction from the arm-hand system to the visual system. We develop two models of motor-primed attention for humanoid robots. Motor-priming of attention is a mechanism that prioritizes visual processing with respect to motor-relevant parts of the visual field. Recent studies in humans and monkeys have shown that the visual attention supporting natural behavior is not defined exclusively in terms of visual saliency in color or texture cues; rather, reachable space and motor plans are the predominant source of this attentional modulation. Here, we show that motor-priming of visual attention can be used to efficiently distribute a robot's computational resources devoted to visual processing.
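Motor-priming of attention can be sketched as re-weighting a bottom-up saliency map by a motor-relevance map such as reachability. The following is an illustrative toy version, not the models developed in the thesis; both input maps are assumed precomputed:

```python
import numpy as np

def motor_primed_attention(saliency, reachability, w=0.7):
    """Re-weight visual saliency by motor relevance.

    saliency:     (H, W) bottom-up saliency map in [0, 1]
    reachability: (H, W) map in [0, 1], e.g. 1 where the pixel projects
                  into the robot's reachable workspace (assumed input)
    w:            how strongly motor plans dominate the modulation
    Returns the priority map and the pixel coordinates in decreasing
    priority, so visual processing can be budgeted accordingly.
    """
    priority = (1 - w) * saliency + w * saliency * reachability
    order = np.argsort(priority, axis=None)[::-1]
    ys, xs = np.unravel_index(order, priority.shape)
    return priority, list(zip(ys, xs))
```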

    Using human-inspired models for guiding robot locomotion

This thesis was carried out within the framework of the European project Koroibot, which aims at developing advanced walking algorithms for humanoid robots. It is organized in three parts. In order to steer robots safely and efficiently among humans, it is necessary to understand the rules, principles, and strategies humans use during locomotion and to transfer them to robots. The goal of this thesis is to investigate and identify human locomotion strategies and to create algorithms that can improve robot capabilities. The first contribution is an analysis of the pedestrian principles that guide collision avoidance strategies. In particular, we observe how humans adapt a goal-directed locomotion task when they have to interfere with a moving obstacle crossing their way. We show differences both in the strategy set by humans to avoid a non-collaborative obstacle with respect to avoiding another human, and in the way humans interact with an object moving in a human-like way. Secondly, we present work done in collaboration with computational neuroscientists. We propose a new approach to synthesize realistic, complex humanoid robot movements from motion primitives. Human walking-to-grasp trajectories were recorded; the whole-body movements were retargeted and scaled to match the humanoid robot kinematics. From this database of movements we extract the motion primitives. We show that these source signals can be expressed as stable solutions of an autonomous dynamical system, which can be regarded as a system of coupled central pattern generators (CPGs). Based on this approach, reactive walking-to-grasp strategies were developed and successfully tested on the humanoid robot HRP-2 at LAAS-CNRS. In the third part of the thesis, we present a new approach to the problem of vision-based steering of a robot subject to nonholonomic constraints through a door. The door is represented by two landmarks located on its vertical supports. The planar geometry built around the door consists of bundles of hyperbolae, ellipses, and orthogonal circles. We show that this geometry can be measured directly in the camera image plane and that the proposed vision-based control strategy can also be related to human behavior. Realistic simulations and experiments are reported to show the effectiveness of our solutions.
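The idea of motion primitives as stable solutions of an autonomous dynamical system can be illustrated with the simplest CPG building block: a chain of coupled phase oscillators. The parameters below are arbitrary placeholders, not those identified from the human data:

```python
import numpy as np

def cpg_step(phases, amps, dt=0.01, omega=2 * np.pi, k=1.0, offsets=None):
    """One Euler step of a chain of coupled phase oscillators (a simple CPG).

    phases:  (N,) oscillator phases, amps: (N,) output amplitudes
    omega:   common intrinsic frequency, k: coupling gain
    offsets: (N,) desired phase offsets between neighbours (default 0)
    Returns updated phases and the periodic output signals (e.g.,
    joint-angle commands). Illustrative parameters only.
    """
    if offsets is None:
        offsets = np.zeros_like(phases)
    # each oscillator is pulled toward a fixed phase lag behind its neighbour
    coupling = k * np.sin(np.roll(phases, 1) - phases - offsets)
    phases = phases + dt * (omega + coupling)
    return phases, amps * np.sin(phases)
```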

    Mobile Robots Navigation

Mobile robot navigation includes different interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, the construction of a spatial representation from the sensory information perceived; (iv) localization, the strategy for estimating the robot's position within the spatial map; (v) path planning, the strategy for finding a path towards a goal location, optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors around the world. Research cases are documented in 32 chapters, organized into seven categories, described next.
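Of the six activities listed, path planning is the easiest to make concrete in a few lines. As a self-contained illustration (not taken from the book), A* search on a 4-connected occupancy grid:

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 4-connected occupancy grid.

    grid: list of lists, 0 = free, 1 = obstacle
    start, goal: (row, col) tuples; returns the path or None.
    Manhattan distance is an admissible heuristic on this grid.
    """
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    g = {start: 0}           # best known cost-to-come
    came = {}                # back-pointers for path reconstruction
    pq = [(h(start), start)]
    while pq:
        _, cur = heapq.heappop(pq)
        if cur == goal:
            path = [cur]
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if not (0 <= nr < len(grid) and 0 <= nc < len(grid[0])):
                continue
            if grid[nr][nc] == 1:
                continue
            ng = g[cur] + 1
            if ng < g.get(nxt, float("inf")):
                g[nxt] = ng
                came[nxt] = cur
                heapq.heappush(pq, (ng + h(nxt), nxt))
    return None  # goal unreachable
```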

    Optimization and validation of a new 3D-US imaging robot to detect, localize and quantify lower limb arterial stenoses

Atherosclerosis is a disease in which the accumulation of lipid plaques causes remodeling and hardening of the vessel wall, leading to a progressive narrowing of the arterial lumen. These lesions are generally located on the coronary, carotid, aortic, renal, digestive, and peripheral arteries. Among peripheral vessels, the lower limb arteries are particularly frequently affected. The severity of arterial lesions is usually evaluated by the degree of stenosis (a reduction of more than 50% of the lumen diameter) using angiography, magnetic resonance angiography (MRA), computed tomography (CT), or ultrasound (US). However, to plan a surgical intervention, a 3D representation of the arterial geometry is preferable. Cross-sectional imaging methods such as MRA and CT are very efficient at generating good-quality three-dimensional images, but their use is expensive and invasive for patients. 3D ultrasound is a promising imaging avenue for locating and quantifying stenoses. This imaging modality offers distinct advantages: convenience, low cost, a non-invasive diagnosis (no irradiation and no nephrotoxic contrast agent), and the option of Doppler analysis to quantify blood flow. Since medical robots have already been used successfully in surgery and orthopedics, our team designed a new 3D-US robotic imaging system to detect and quantify lower limb arterial stenoses. With this new technology, a radiologist manually teaches the robot an ultrasound scanning path over the vessel of interest. The robotic arm then repeats the taught trajectory with high precision, simultaneously controls the ultrasound image acquisition process at a constant sampling step, and safely limits the force applied by the probe on the patient's skin. Consequently, the reconstruction of a 3D arterial geometry of the lower limbs from this system could allow highly reliable localization and quantification of stenoses. The objective of this research project was therefore to validate and optimize this 3D-US robotic imaging system. The reliability of a 3D geometry reconstructed from 2D-US images captured in a robotic reference frame depends considerably on the positioning accuracy and on the calibration procedure. Thus, the positioning accuracy of the robotic arm was evaluated throughout its workspace with a phantom specially designed to mimic the configuration of lower limb arteries (article 1 - chapter 3). In addition, a Z-shaped crossed-wire phantom was designed to ensure a precise calibration of the robotic system (article 2 - chapter 4). These optimal methods were used to validate the system for the clinical application and to find the transformation that converts the coordinates of the 2D ultrasound image into the Cartesian frame of the robotic arm. With these results, any object scanned by the robotic system can be adequately reconstructed in 3D. Multimodal imaging vascular phantoms of lower limb arteries were used to evaluate the accuracy of the 3D representations (article 2 - chapter 4, article 3 - chapter 5). The reconstructed geometries were validated by comparing their surface points with the surface points of the file used to manufacture the vascular phantom.
The accuracy with which the 3D-US robotic imaging system localizes and quantifies stenoses was also determined. These evaluations were repeated in vivo to assess the feasibility of using such a system in the clinic (article 3 - chapter 5).
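The calibration transformation described above, combined with the arm's forward kinematics, lets every ultrasound pixel be placed in the robot's Cartesian frame for 3D reconstruction. A minimal sketch, with frame names and inputs chosen for illustration rather than taken from the thesis:

```python
import numpy as np

def pixel_to_robot(u, v, sx, sy, T_probe_image, T_base_probe):
    """Map a 2D US pixel (u, v) into the robot base frame.

    sx, sy:        calibrated pixel spacings in mm/pixel
    T_probe_image: 4x4 homogeneous transform, image frame -> probe frame
                   (the result of the crossed-wire phantom calibration)
    T_base_probe:  4x4 transform, probe -> robot base, from the arm's
                   forward kinematics at acquisition time
    """
    p_image = np.array([u * sx, v * sy, 0.0, 1.0])  # image plane is z = 0
    p_base = T_base_probe @ T_probe_image @ p_image
    return p_base[:3]
```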