
    An Active Pattern Recognition Architecture for Mobile Robots

    An active, attentionally-modulated recognition architecture is proposed for object recognition and scene analysis. The proposed architecture forms part of navigation and trajectory planning modules for mobile robots. Key characteristics of the system include movement planning and execution based on environmental factors and internal goal definitions. Real-time implementation of the system is based on a space-variant representation of the visual field, as well as an optimal visual processing scheme utilizing separate and parallel channels for the extraction of boundaries and stimulus qualities. A spatial and temporal grouping module (VWM) allows for scene scanning, multi-object segmentation, and featural/object priming. VWM is used to modulate a trajectory formation module capable of redirecting the focus of spatial attention. Finally, an object recognition module based on adaptive resonance theory is interfaced through VWM to the visual processing module. The system is capable of using information from different modalities to disambiguate sensory input. Defense Advanced Research Projects Agency (90-0083); Office of Naval Research (N00014-92-J-1309); Consejo Nacional de Ciencia y Tecnología (63462

    Advanced tracking and image registration techniques for intraoperative radiation therapy

    International Mention in the doctoral degree. Intraoperative electron radiation therapy (IOERT) is a technique used to deliver radiation to the surgically opened tumor bed without irradiating healthy tissue. Treatment planning systems and mobile linear accelerators enable clinicians to optimize the procedure, minimize stress in the operating room (OR) and avoid transferring the patient to a dedicated radiation room. However, placement of the radiation collimator over the tumor bed requires a validation methodology to ensure correct delivery of the dose prescribed in the treatment planning system. In this dissertation, we address three well-known limitations of IOERT: applicator positioning over the tumor bed, docking of the mobile linear accelerator gantry with the applicator, and validation of the prescribed dose delivery. This thesis demonstrates that these limitations can be overcome by positioning the applicator appropriately with respect to the patient’s anatomy. The main objective of the study was to assess technological and procedural alternatives for improving IOERT performance and resolving problems of uncertainty. Image-to-world registration, multicamera optical trackers, multimodal imaging techniques and mobile linear accelerator docking are addressed in the context of IOERT. IOERT is carried out by a multidisciplinary team in a highly complex environment that has special tracking needs owing to the characteristics of its working volume (i.e., large and prone to occlusions), in addition to the requisites of accuracy. The first part of this dissertation presents the validation of a commercial multicamera optical tracker in terms of accuracy, sensitivity to miscalibration, camera occlusions and detection of tools using a feasible surgical setup. It also proposes an automatic miscalibration detection protocol that satisfies the IOERT requirements of automaticity and speed.
    We show that the multicamera tracker is suitable for IOERT navigation and demonstrate the feasibility of the miscalibration detection protocol in clinical setups. Image-to-world registration is one of the main issues in image-guided applications where the field of interest and/or the number of possible anatomical localizations is large, as in IOERT. In the second part of this dissertation, a registration algorithm for image-guided surgery based on line-shaped fiducials (line-based registration) is proposed and validated. Line-based registration decreases acquisition time during surgery and achieves better registration accuracy than other published algorithms. In the third part of this dissertation, we integrate a commercial low-cost ultrasound transducer and a cone-beam CT C-arm with an optical tracker for image-guided interventions, enabling surgical navigation, and explore image-based registration techniques for both modalities. In the fourth part of the dissertation, a navigation system based on optical tracking for the docking of the mobile linear accelerator to the radiation applicator is assessed. This system improves safety and reduces procedure time. The system tracks the prescribed collimator location to compute the movements that the linear accelerator should perform to reach the docking position, and warns the user about potentially unachievable arrangements before the actual procedure. A software application was implemented to use this system in the OR, where it was also evaluated to assess the improvement in docking speed. Finally, in the last part of the dissertation, we present and assess the installation setup for a navigation system in a dedicated IOERT OR, determine the steps necessary for the IOERT process, identify workflow limitations and evaluate the feasibility of integrating the system in a real OR.
    The navigation system safeguards the sterile conditions of the OR, keeps the space available for surgeons clear, and is suitable for any similar dedicated IOERT OR. Official Doctoral Program in Multimedia and Communications. Committee: President: Raúl San José Estépar; Secretary: María Arrate Muñoz Barrutia; Member: Carlos Ferrer Albiac.
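
    The line-based registration (LBR) algorithm itself is not detailed in the abstract. As a generic point of reference (not the thesis's method), image-to-world registration of this kind builds on the standard paired-point rigid registration, i.e., the Horn/Kabsch least-squares solution; a minimal numpy sketch, with illustrative names, might look like:

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid registration (Kabsch/Horn) between paired point
    sets P (image space) and Q (world/tracker space), both of shape (N, 3).
    Returns R, t such that Q ~= R @ p + t for each paired point."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the optimal orthogonal matrix.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

    Line-shaped fiducials generalize this idea by matching points to lines rather than to paired points, which is what reduces intraoperative acquisition time.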

    PACE Technical Report Series, Volume 5: Mission Formulation Studies

    This chapter summarizes the mission architecture for the Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) mission, ranging from its scientific rationale to the history of its realized conception to its present-day organization and management. This volume in the PACE Technical Report series focuses on trade studies that informed the formulation of the mission in its pre-Phase A (2014-2016; pre-formulation: define a viable and affordable concept) and Phase A (2016-2017; concept and technology development). With that in mind, this chapter serves to introduce the mission by providing: a brief summary of the science drivers for the mission; a history of the direction of the mission to NASA's Goddard Space Flight Center (GSFC); a synopsis of the mission's and instruments' management and development structures; and a brief description of the primary components and elements that form the foundation of the mission, encompassing the major mission segments (space, ground, and science data processing) and their roles in integration, testing, and operations.

    Towards a Robust Sensor Fusion Step for 3D Object Detection on Corrupted Data

    Multimodal sensor fusion methods for 3D object detection have been revolutionizing the autonomous driving research field. Nevertheless, most of these methods rely heavily on dense LiDAR data and accurately calibrated sensors, which are often not available in real-world scenarios. Data from LiDAR and cameras often arrive misaligned owing to miscalibration, decalibration, or differing sensor frequencies. Additionally, parts of the LiDAR data may be occluded or missing because of hardware malfunction or weather conditions. This work presents a novel fusion step that addresses data corruptions and makes sensor fusion for 3D object detection more robust. Through extensive experiments, we demonstrate that our method performs on par with state-of-the-art approaches on normal data and outperforms them on misaligned data.

    Learning to Calibrate - Estimating the Hand-eye Transformation without Calibration Objects

    Hand-eye calibration is a method to determine the transformation linking the robot and camera coordinate systems. Conventional calibration algorithms use a calibration grid to determine camera poses corresponding to the robot poses, both of which are used in the main calibration procedure. Although such methods yield good calibration accuracy and are suitable for offline applications, they are not applicable in a dynamic environment such as robotic-assisted minimally invasive surgery (RMIS), because changes in the setup are disruptive and time-consuming to the workflow, requiring yet another calibration procedure. In this paper, we propose a neural network-based hand-eye calibration method that does not require camera poses from a calibration grid, but only uses the motion of surgical instruments in the camera frame and their corresponding robot poses as input to recover the hand-eye matrix. The advantages of using a neural network are that the method is not limited to a single rigid transformation alignment and can learn dynamic changes correlated with kinematics and tool motion/interactions. Its loss function is derived from the original hand-eye transformation, the re-projection error, and the pose error with respect to the remote centre of motion. The proposed method is validated with data from a da Vinci Si, and the results indicate that the designed network architecture can extract the relevant information and estimate the hand-eye matrix. Unlike conventional hand-eye approaches, it does not require camera pose estimation, which significantly simplifies the hand-eye problem in the RMIS context, as updating the hand-eye relationship can be done with a trained network and a sequence of images. This introduces the potential of creating a hand-eye calibration.
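
    For context, the conventional pipelines that the learned method above replaces solve the classical hand-eye constraint AX = XB in closed form. Below is a minimal least-squares sketch in the style of Park and Martin; it is a generic reference implementation, not the network described in the abstract, and all names are illustrative:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def solve_hand_eye(A_list, B_list):
    """Classical AX = XB hand-eye solver (Park-Martin-style least squares).
    A_list: relative robot motions (4x4), B_list: relative camera motions (4x4).
    Requires at least two motions with non-parallel rotation axes.
    Returns the 4x4 hand-eye transform X."""
    # Rotation part: the rotation log-vectors satisfy alpha_i = R_X beta_i.
    M = np.zeros((3, 3))
    for A, B in zip(A_list, B_list):
        alpha = Rotation.from_matrix(A[:3, :3]).as_rotvec()
        beta = Rotation.from_matrix(B[:3, :3]).as_rotvec()
        M += np.outer(beta, alpha)
    # Closed-form solution: R_X = (M^T M)^(-1/2) M^T.
    w, V = np.linalg.eigh(M.T @ M)
    R_X = (V @ np.diag(w ** -0.5) @ V.T) @ M.T
    # Translation part: (R_Ai - I) t_X = R_X t_Bi - t_Ai, stacked least squares.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in A_list])
    d = np.concatenate([R_X @ B[:3, 3] - A[:3, 3]
                        for A, B in zip(A_list, B_list)])
    t_X, *_ = np.linalg.lstsq(C, d, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R_X, t_X
    return X
```

    Note that this classical formulation still needs camera poses (the B motions), typically from a calibration grid; avoiding exactly that requirement is the point of the learned approach above.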

    Targetless Camera-LiDAR Calibration in Unstructured Environments

    Camera-LiDAR sensor fusion plays an important role in autonomous navigation research. Nowadays, the automatic calibration of these sensors remains a significant challenge in mobile robotics. In this article, we present a novel calibration method that achieves an accurate six-degree-of-freedom (6-DOF) rigid-body transformation estimation (i.e., the extrinsic parameters) between the camera and LiDAR sensors. The method consists of a novel co-registration approach that uses local edge features in arbitrary environments to obtain 3D-to-2D errors between the data of both camera and LiDAR. Once we have the 3D-to-2D errors, we estimate the relative transform, i.e., the extrinsic parameters, that minimizes them. To find the best transform solution, we use the perspective-three-point (P3P) algorithm. To refine the final calibration, we use a Kalman filter, which gives the system high stability against noise disturbances. The presented method requires neither an artificial target nor a structured environment, and is therefore a targetless calibration. Furthermore, the method does not require a dense point cloud, which holds the advantage of not needing scan accumulation. To test our approach, we use the state-of-the-art KITTI dataset, taking the calibration provided by the dataset as the ground truth. In this way, we achieve accurate results and demonstrate the robustness of the system against very noisy observations. This work was supported by the Regional Valencian Community Government and the European Regional Development Fund (ERDF) through grants ACIF/2019/088 and AICO/2019/020.
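
    The co-registration step above minimizes 3D-to-2D errors between LiDAR edge points and camera edge pixels. A minimal sketch of such a residual computation, assuming a pinhole camera with intrinsics K and a 4x4 extrinsic transform T (a generic illustration with hypothetical names, not the authors' exact formulation):

```python
import numpy as np

def reprojection_errors(pts_lidar, pix_cam, K, T):
    """Project LiDAR 3D edge points into the image and return the 2D
    residuals against their matched camera edge pixels.
    pts_lidar: (N, 3) points in the LiDAR frame.
    pix_cam:   (N, 2) matched edge pixels in the image.
    K:         (3, 3) camera intrinsic matrix.
    T:         (4, 4) extrinsic transform (LiDAR frame -> camera frame)."""
    pts_h = np.hstack([pts_lidar, np.ones((len(pts_lidar), 1))])
    pts_cam = (T @ pts_h.T).T[:, :3]      # points in the camera frame
    proj = (K @ pts_cam.T).T
    pix = proj[:, :2] / proj[:, 2:3]      # perspective division
    return pix - pix_cam                  # per-point 2D residuals
```

    The extrinsic estimation then amounts to searching for the T that drives these residuals toward zero, with P3P supplying candidate transforms and the Kalman filter smoothing the estimate over time.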

    Visually-Guided Manipulation Techniques for Robotic Autonomous Underwater Panel Interventions

    The long-term goal of this ongoing research is to increase the autonomy levels of underwater intervention missions. Bearing in mind that the specific mission faced has been an intervention on a panel, this paper presents results from different development stages obtained using the real mechatronics and the panel mockup. Furthermore, some details are highlighted describing two methodologies implemented for the required visually-guided manipulation algorithms, and a roadmap explaining the different testbeds used for experimental validation, in order of increasing complexity, is presented. It is worth mentioning that the aforementioned results would have been impossible without the previously generated know-how for both the complete mechatronics developed for the autonomous underwater intervention vehicle and the required 3D simulation tool. In summary, thanks to the implemented approach, the intervention system is able to control the way in which the gripper approaches and manipulates the two panel devices (i.e. a valve and a connector) in an autonomous manner, and results in different scenarios demonstrate the reliability and feasibility of this autonomous intervention system in water tank and pool conditions. This work was partly supported by the Spanish Ministry of Research and Innovation, DPI2011-27977-C03 (TRITON Project) and DPI2014-57746-C3 (MERBOTS Project), by Fundación Caixa Castelló-Bancaixa and Universitat Jaume I grant PID2010-12, by Universitat Jaume I PhD grants PREDOC/2012/47 and PREDOC/2013/46, and by Generalitat Valenciana PhD grant ACIF/2014/298. We would also like to acknowledge the support of our partners within the Spanish Coordinated Projects TRITON and MERBOTS: Universitat de les Illes Balears, UIB (subprojects VISUAL2 and SUPERION) and Universitat de Girona, UdG (subprojects COMAROB and ARCHROV)