    Medical image computing and computer-aided medical interventions applied to soft tissues. Work in progress in urology

    Until recently, Computer-Aided Medical Interventions (CAMI) and medical robotics have focused on rigid, non-deformable anatomical structures. Nowadays, special attention is paid to soft tissues, which raise complex issues due to their mobility and deformation. Minimally invasive digestive surgery was probably one of the first fields in which soft tissues were handled, through the development of simulators, tracking of anatomical structures, and dedicated assistance robots. Other clinical domains, such as urology, are also concerned. Indeed, laparoscopic surgery, new tumour destruction techniques (e.g. HIFU, radiofrequency, or cryoablation), increasingly early detection of cancer, and the use of interventional and diagnostic imaging modalities have recently opened new challenges for urologists and for the scientists involved in CAMI. Over the last five years, this has resulted in a very significant increase in research and development of computer-aided urology systems. In this paper, we describe the main problems related to computer-aided diagnosis and therapy of soft tissues and survey the different types of assistance offered to the urologist: robotization, image fusion, and surgical navigation. Both research projects and operational industrial systems are discussed.

    An annotated bibliography of multisensor integration

    Technical report. In this paper we give an annotated bibliography of the multisensor integration literature.

    Unifying terrain awareness for the visually impaired through real-time semantic segmentation

    Navigational assistance aims to help visually-impaired people move through their environment safely and independently. The task is challenging because it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies using monocular detectors or depth sensors have sprung up over several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have greatly improved the mobility of impaired people. However, running all detectors jointly increases latency and burdens the computational resources. In this paper, we propose pixel-wise semantic segmentation as a way to cover navigation-related perception needs in a unified manner. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs, and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians, and vehicles. The core of our unification proposal is a deep architecture aimed at efficient semantic understanding. We have integrated the approach into a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments demonstrates accuracy surpassing state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually-impaired users, demonstrating the effectiveness and versatility of the assistive framework.
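
    As a concrete illustration of how a single pixel-wise segmentation output can replace several separate detectors, the sketch below reduces a per-pixel class map to the coarse assistive categories the abstract mentions (traversable terrain, hazards, moving agents). The class list, grouping, and function names are hypothetical stand-ins for illustration, not the label set used in the paper.

```python
import numpy as np

# Hypothetical class indices for a terrain-aware segmentation model;
# the paper's actual label set may differ.
CLASSES = {0: "road", 1: "sidewalk", 2: "stairs", 3: "water",
           4: "pedestrian", 5: "vehicle", 6: "obstacle"}

# Grouping of classes into assistive feedback categories.
TRAVERSABLE = {0, 1}
HAZARD = {2, 3}
DYNAMIC = {4, 5, 6}

def assistive_summary(scores: np.ndarray) -> dict:
    """Reduce an (H, W, C) map of per-pixel class scores to coarse
    occupancy fractions that could drive audio/haptic feedback."""
    labels = scores.argmax(axis=-1)  # per-pixel argmax -> (H, W)
    frac = lambda ids: np.isin(labels, list(ids)).mean()
    return {"traversable": frac(TRAVERSABLE),
            "hazard": frac(HAZARD),
            "dynamic": frac(DYNAMIC)}

# Example: random scores standing in for a real network's output.
print(assistive_summary(np.random.rand(480, 640, len(CLASSES))))
```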

    Visibility in underwater robotics: Benchmarking and single image dehazing

    Dealing with underwater visibility is one of the most important challenges in autonomous underwater robotics. Light transmission in the water medium degrades images, making interpretation of the scene difficult and consequently compromising the whole intervention. This thesis contributes an analysis, through benchmarking, of the impact of underwater image degradation on commonly used vision algorithms. An online framework for underwater research that makes it possible to analyse results under different conditions is presented. Finally, motivated by the results of experimentation with the developed framework, a deep learning solution is proposed that is capable of dehazing a degraded image in real time, restoring the original colors of the image.
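
    For context, the real-time dehazing the thesis proposes is learned end to end; the classical image-formation model that dehazing methods commonly invert is sketched below, with the transmission map and background light treated as given (in a learned approach they would come from the trained network). The function name, parameters, and toy values are illustrative assumptions.

```python
import numpy as np

def dehaze(image: np.ndarray, transmission: np.ndarray,
           background_light: np.ndarray, t_min: float = 0.1) -> np.ndarray:
    """Invert the classical haze image-formation model
        I(x) = J(x) * t(x) + A * (1 - t(x))
    to recover the scene radiance J, given the observed image I (H, W, 3),
    a transmission map t (H, W), and background light A (3,)."""
    t = np.clip(transmission, t_min, 1.0)[..., None]  # avoid division blow-up
    J = (image - background_light) / t + background_light
    return np.clip(J, 0.0, 1.0)

# Toy example: a uniform haze veil standing in for real estimates.
hazy = np.random.rand(480, 640, 3)
restored = dehaze(hazy, transmission=np.full((480, 640), 0.6),
                  background_light=np.array([0.7, 0.8, 0.9]))
```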

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper serves simultaneously as a position paper and as a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions at robotics conferences: Do robots need SLAM? And is SLAM solved?
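
    The de-facto standard formulation referred to above is maximum a posteriori (MAP) estimation over a factor graph. In conventional notation (symbols follow common usage rather than being quoted from the paper), the state X stacks robot poses and landmarks, each measurement z_k is predicted by a model h_k applied to the subset X_k of variables it involves, and under Gaussian noise MAP estimation reduces to nonlinear least squares:

\[
X^{\star} \;=\; \operatorname*{arg\,max}_{X}\, p(X \mid Z)
\;=\; \operatorname*{arg\,min}_{X}\, \sum_{k} \lVert h_k(X_k) - z_k \rVert^{2}_{\Omega_k},
\]

    where \Omega_k is the information matrix of measurement k and \lVert e \rVert^{2}_{\Omega} = e^{\top} \Omega\, e. Solvers typically linearize the h_k and iterate (Gauss-Newton or Levenberg-Marquardt), exploiting the sparsity of the factor graph.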

    Embedded System Design of Robot Control Architectures for Unmanned Agricultural Ground Vehicles

    Engineering technology has matured to the point where unmanned field management is becoming a technologically achievable and economically viable solution to agricultural tasks traditionally performed by humans or human-operated machines. Moreover, the rapidly increasing world population, and the burden it places on farmers with respect to food production and crop yield, makes such advances in the agriculture industry all the more imperative. Consequently, the sector is beginning to see a noticeable shift, with a number of scalable infrastructural changes slowly being incorporated into the modular machinery design of agricultural equipment. This work provides firmware descriptions and hardware architectures that integrate cutting-edge technology into the embedded control architectures of agricultural machinery, in support of the end goal of complete and reliable unmanned agricultural automation. In this thesis, autonomous control algorithms integrated with obstacle-avoidance or guidance schemes were implemented on controller area network (CAN) based distributed real-time systems (DRTSs) in the form of two unmanned agricultural ground vehicles (UAGVs). The two vehicles are tailored to different applications in the agriculture domain, and both leverage state-of-the-art sensors and modules to attain complete autonomy for various agricultural tasks. Because each robot employed its own embedded control scheme, the developed firmware and hardware were implemented on both an event-triggered and a time-triggered CAN bus control architecture. For the first UAGV, a multiple-GPS-waypoint navigation scheme is derived, developed, and evaluated, yielding a fully controllable GPS-driven vehicle. Obstacle detection and avoidance capabilities were also implemented as a safety layer of the robot control architecture, giving the ground vehicle the ability to reliably detect and navigate around any obstacles in the vicinity of the assigned path. The second UAGV is a smaller robot designed for field navigation applications. For this robot, a fully autonomous sensor-based algorithm was proposed and implemented on the machine. It is demonstrated that integrating laser, LIDAR, and IMU sensors on a mobile robot platform enables a fully autonomous, non-GPS, sensor-based algorithm for field navigation. The developed algorithm can serve as a viable solution for microclimate sensing in a field. Advisors: A. John Boye and Santosh Pitl
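
    As a rough illustration of the first vehicle's multiple-GPS-waypoint scheme, the sketch below computes the great-circle bearing and distance to the active waypoint and turns the heading error into a proportional steering command. The gain, capture radius, and function names are assumptions for illustration, not values taken from the thesis.

```python
import math

EARTH_R = 6371000.0  # mean Earth radius, meters

def bearing_and_distance(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing (rad) and haversine distance (m)
    from (lat1, lon1) to (lat2, lon2), all given in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = math.atan2(y, x)
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    return bearing, 2 * EARTH_R * math.asin(math.sqrt(a))

def steer_to_waypoint(pose, waypoint, k_p=1.0, capture_radius=2.0):
    """Proportional steering toward a waypoint.
    pose = (lat, lon, heading_rad); returns (steer_cmd_rad, reached)."""
    lat, lon, heading = pose
    bearing, dist = bearing_and_distance(lat, lon, *waypoint)
    # Wrap the heading error into (-pi, pi] before applying the gain.
    err = math.atan2(math.sin(bearing - heading), math.cos(bearing - heading))
    return k_p * err, dist < capture_radius

# Example: vehicle slightly south-west of the waypoint, facing north.
cmd, reached = steer_to_waypoint((40.8205, -96.7005, 0.0), (40.8210, -96.7000))
```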

    NASA space station automation: AI-based technology review

    Research and development projects in automation for the Space Station are discussed. Artificial Intelligence (AI) based automation technologies are planned to enhance crew safety through a reduced need for EVA (extravehicular activity), to increase crew productivity through the reduction of routine operations, to increase Space Station autonomy, and to augment Space Station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures.

    Situation Assessment for Mobile Robots
