    Underwater SLAM: Challenges, state of the art, algorithms and a new biologically-inspired approach

    The unstructured scenario, the extraction of significant features, the imprecision of sensors, and the impossibility of using GPS signals are some of the challenges encountered in underwater environments. Given this adverse context, Simultaneous Localization and Mapping (SLAM) techniques attempt to localize the robot efficiently in an unknown underwater environment while, at the same time, generating a representative model of that environment. In this paper, we focus on key topics related to SLAM applications in underwater environments. Moreover, a review of major studies in the literature and proposed solutions for addressing the problem are presented. Given the limitations of probabilistic approaches, a new alternative based on a bio-inspired model is highlighted.
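
    The probabilistic approaches referred to above are, in most surveyed systems, variants of Bayesian filtering over the joint robot-and-map state. As a minimal, purely illustrative sketch (not taken from the paper), the following range-bearing EKF-SLAM step with a single landmark shows the predict/update structure those approaches share; the motion model, measurement model, and noise values are assumptions chosen for the example.

```python
import numpy as np

# State: [x, y, heading, lx, ly] -- 2D robot pose plus one landmark position.
x = np.array([0.0, 0.0, 0.0, 5.0, 2.0])
P = np.diag([0.01, 0.01, 0.01, 10.0, 10.0])   # large initial landmark uncertainty
Q = np.diag([0.05, 0.05, 0.01, 0.0, 0.0])     # process noise (pose only)
R = np.diag([0.10, 0.02])                     # range / bearing measurement noise

def predict(x, P, dx, dy, dth):
    """EKF prediction with an odometry increment expressed in the robot frame."""
    th = x[2]
    x = x.copy()
    x[0] += dx * np.cos(th) - dy * np.sin(th)
    x[1] += dx * np.sin(th) + dy * np.cos(th)
    x[2] += dth
    F = np.eye(5)
    F[0, 2] = -dx * np.sin(th) - dy * np.cos(th)
    F[1, 2] =  dx * np.cos(th) - dy * np.sin(th)
    return x, F @ P @ F.T + Q

def update(x, P, z):
    """EKF update with a range/bearing observation of the landmark."""
    dxl, dyl = x[3] - x[0], x[4] - x[1]
    r2 = dxl**2 + dyl**2
    r = np.sqrt(r2)
    h = np.array([r, np.arctan2(dyl, dxl) - x[2]])
    H = np.array([
        [-dxl / r,  -dyl / r,   0.0,  dxl / r,   dyl / r],
        [ dyl / r2, -dxl / r2, -1.0, -dyl / r2,  dxl / r2],
    ])
    y = z - h
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi   # wrap bearing innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(5) - K @ H) @ P

# One cycle: move forward 1 m, then fuse a noisy range/bearing observation.
x, P = predict(x, P, dx=1.0, dy=0.0, dth=0.0)
x, P = update(x, P, z=np.array([4.5, 0.45]))
print(x)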

    Monocular-Based Pose Determination of Uncooperative Known and Unknown Space Objects

    In order to support spacecraft proximity operations, such as on-orbit servicing and spacecraft formation flying, several vision-based techniques exist to determine the relative pose of an uncooperative orbiting object with respect to the spacecraft. Depending on whether the object is known or unknown, a shape model of the orbiting target object may have to be constructed autonomously by making use of only optical measurements. In this paper, we investigate two vision-based approaches for pose estimation of uncooperative orbiting targets: one that is general and versatile in that it does not require any a priori knowledge of the target, and one that requires knowledge of the target's shape geometry. The former uses an estimation algorithm of the translational and rotational dynamics to sequentially perform simultaneous pose determination and 3D shape reconstruction of the unknown target, while the latter relies on a known 3D model of the target's geometry to provide a point-by-point pose solution. The architecture and implementation of both methods are presented and their achievable performance is evaluated through numerical simulations. In addition, a computer vision processing strategy for feature detection and matching and the Structure from Motion (SfM) algorithm for on-board 3D reconstruction are also discussed and validated by using a dataset of images that are synthetically generated according to a chaser/target relative motion in Geosynchronous Orbit (GEO).
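
    For the feature detection and matching stage mentioned above, a generic monocular relative-pose pipeline can be sketched as follows. This is not the authors' implementation: the ORB detector, brute-force matcher, and the intrinsic matrix K are placeholder assumptions. The recovered translation is only defined up to scale, which is precisely why an SfM-style 3D reconstruction step is needed for the unknown-target case.

```python
import cv2
import numpy as np

# Assumed pinhole intrinsics for a synthetic camera; replace with the real
# calibration when available.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def relative_pose(img1, img2):
    """Estimate the relative rotation and unit-scale translation between
    two monocular views of the target using ORB features."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC to reject outlier correspondences.
    E, _ = cv2.findEssentialMat(pts1, pts2, K,
                                method=cv2.RANSAC, prob=0.999, threshold=1.0)
    # The translation t returned here has unit norm (scale is unobservable
    # from a single monocular pair).
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t

# Example usage with two grayscale frames of the target:
# R, t = relative_pose(cv2.imread("frame_000.png", 0), cv2.imread("frame_001.png", 0))
```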

    Towards autonomous localization and mapping of AUVs: a survey

    Purpose: The main purpose of this paper is to investigate two key elements of localization and mapping for Autonomous Underwater Vehicles (AUVs), i.e. to review the various sensors and algorithms used for underwater localization and mapping, and to make suggestions for future research. Design/methodology/approach: The authors first review the various sensors and algorithms used on AUVs in terms of their basic working principles, characteristics, advantages and disadvantages. A statistical analysis is then carried out by studying 35 AUV platforms according to the application circumstances of their sensors and algorithms. Findings: As real-world applications have different requirements and specifications, it is necessary to select the most appropriate solution by balancing factors such as accuracy, cost and size. Although highly accurate localization and mapping in an underwater environment is very difficult, increasingly accurate and robust navigation solutions will be achieved as both sensors and algorithms develop. Research limitations/implications: This paper provides an overview of state-of-the-art underwater localization and mapping algorithms and systems; no experiments are conducted for verification. Practical implications: The paper gives readers a clear guideline for finding suitable underwater localization and mapping algorithms and systems for the practical applications at hand. Social implications: A wide range of audiences will benefit from reading this comprehensive survey of autonomous localization and mapping of AUVs. Originality/value: The paper provides useful information and suggestions to research students, engineers and scientists who work in the field of autonomous underwater vehicles.

    Monocular-Based Pose Determination of Uncooperative Space Objects

    Vision-based methods to determine the relative pose of an uncooperative orbiting object are investigated in applications to spacecraft proximity operations, such as on-orbit servicing, spacecraft formation flying, and small bodies exploration. Depending on whether the object is known or unknown, a shape model of the orbiting target object may have to be constructed autonomously in real-time by making use of only optical measurements. The Simultaneous Estimation of Pose and Shape (SEPS) algorithm, which does not require a priori knowledge of the pose and shape of the target, is presented. It makes use of a novel measurement equation and filter that can efficiently use optical flow information, along with a star tracker, to estimate the target's relative rotational and translational velocity as well as its center of gravity. Depending on the mission constraints, SEPS can be augmented by a more accurate offline, on-board 3D reconstruction of the target shape, which allows for the estimation of the pose as a known target. The use of Structure from Motion (SfM) for this purpose is discussed. A model-based approach for pose estimation of known targets is also presented. The architecture and implementation of both the proposed approaches are elucidated and their performance metrics are evaluated through numerical simulations by using a dataset of images that are synthetically generated according to a chaser/target relative motion in Geosynchronous Orbit (GEO).
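
    A hedged sketch of the kind of process model such a filter typically relies on (not the SEPS measurement equation itself, which is specific to the paper): propagate the relative attitude under a constant angular-velocity estimate using quaternion kinematics. The scalar-first Hamilton convention and the numbers below are illustrative assumptions.

```python
import numpy as np

def quat_mult(q, r):
    # Hamilton product, scalar-first convention (an assumption of this sketch).
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([
        w0*w1 - x0*x1 - y0*y1 - z0*z1,
        w0*x1 + x0*w1 + y0*z1 - z0*y1,
        w0*y1 - x0*z1 + y0*w1 + z0*x1,
        w0*z1 + x0*y1 - y0*x1 + z0*w1,
    ])

def quat_from_rotvec(phi):
    # Unit quaternion for a rotation vector phi (axis * angle).
    angle = np.linalg.norm(phi)
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = phi / angle
    return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))

def propagate(q, omega, dt):
    # Constant angular velocity over dt: q_{k+1} = q_k (x) dq(omega * dt).
    return quat_mult(q, quat_from_rotvec(omega * dt))

# Toy prediction: target tumbling at roughly 5 deg/s about an arbitrary axis.
q = np.array([1.0, 0.0, 0.0, 0.0])     # relative attitude quaternion
omega = np.deg2rad([2.0, 4.0, 1.0])    # estimated body angular velocity [rad/s]
for _ in range(100):                   # predict 10 s ahead at 10 Hz
    q = propagate(q, omega, dt=0.1)
print(q)
```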

    Augmented reality in surgery: a semantic approach using deep learning

    This project develops a tool that combines classical computer vision techniques for taking precise measurements with deep learning techniques. The result is a system capable of understanding the scene it observes while accurately localizing it in space, even allowing measurements at true scale. The fusion of these two technologies opens a new line of work, laying the foundations for semantic extraction of a patient's interior in a surgical environment. The main challenge addressed in the project is the identification of the patient's internal regions (the liver in this case) from flat images taken with standard monocular cameras, such as an endoscope. The final goal is the estimation of the pose (position with respect to the camera, composed of translation and rotation) of the liver, which will be used to localize non-visible internal structures, such as blood vessels or tumors, on the organ in augmented reality during a surgical intervention. To train the neural network, a synthetic model of the liver was used, obtained from a surgical simulator developed by the AMB Group at the Instituto de Investigación en Ingeniería de Aragón (I3A). The main difficulty of the project lies in training the network to obtain the pose accurately. To this end, a neural network model was retrained with liver images, both on homogeneous backgrounds and on backgrounds simulating a laparoscopic intervention, with the aim of making predictions under conditions as realistic as possible. Finally, the information obtained from the neural network was incorporated into ORB-SLAM to obtain results in real time. The main novelty introduced in this project is the joint use of neural networks with ORB-SLAM, which makes it possible to estimate pose and scale automatically without additional information, so the tool can be used directly with laparoscopic cameras, without resorting to additional sensors such as accelerometers or LIDAR.
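
    A minimal sketch of the kind of pose-regression fine-tuning described above, assuming a PyTorch/torchvision setup (torchvision >= 0.13 for the weights argument); the ResNet-18 backbone, the 3+4 translation-plus-quaternion output, and the loss weighting are illustrative assumptions, not the project's actual network. In practice the regressed pose would then be handed to ORB-SLAM to fix the scale of the monocular map, as the abstract describes.

```python
import torch
import torch.nn as nn
import torchvision

# Backbone: ImageNet-pretrained ResNet-18 with its classifier head replaced
# by a 7-dim pose output: 3 for translation, 4 for a unit quaternion.
backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = nn.Linear(backbone.fc.in_features, 7)

def pose_loss(pred, t_gt, q_gt, beta=1.0):
    """Translation MSE plus a sign-invariant quaternion distance term."""
    t_pred = pred[:, :3]
    q_pred = nn.functional.normalize(pred[:, 3:], dim=1)
    t_err = nn.functional.mse_loss(t_pred, t_gt)
    # 1 - |<q_pred, q_gt>| is zero when the two rotations coincide.
    q_err = (1.0 - torch.abs(torch.sum(q_pred * q_gt, dim=1))).mean()
    return t_err + beta * q_err

optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

# One illustrative training step on a random batch (a stand-in for the
# synthetic renders produced by the surgical simulator mentioned above).
images = torch.randn(8, 3, 224, 224)
t_gt = torch.randn(8, 3)
q_gt = nn.functional.normalize(torch.randn(8, 4), dim=1)

pred = backbone(images)
loss = pose_loss(pred, t_gt, q_gt)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```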

    Computer vision-based localization and mapping of an unknown, uncooperative and spinning target for spacecraft proximity operations

    Thesis: Ph.D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2013, by Brent Edward Tweddle. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 399-410).
    Prior studies have estimated that there are over 100 potential target objects near the Geostationary Orbit belt that are spinning at rates of over 20 rotations per minute. For a number of reasons, it may be desirable to operate in close proximity to these objects for the purposes of inspection, docking and repair. Many of them have an unknown geometric appearance, and are uncooperative and non-communicative. These characteristics are also shared by a number of asteroid rendezvous missions. In order to safely operate in close proximity to an object in space, it is important to know the target object's position and orientation relative to the inspector satellite, as well as to build a three-dimensional geometric map of the object for relative navigation in future stages of the mission. This type of problem can be solved with many of the typical Simultaneous Localization and Mapping (SLAM) algorithms found in the literature. However, if the target object is spinning with significant angular velocity, it is also important to know the linear and angular velocity of the target object as well as its center of mass, principal axes of inertia and its inertia matrix. This information is essential to being able to propagate the state of the target object to a future time, which is a key capability for any type of proximity operations mission. Most of the typical SLAM algorithms cannot easily provide these types of estimates for high-speed spinning objects. This thesis describes a new approach to solving a SLAM problem for unknown and uncooperative objects that are spinning about an arbitrary axis. It is capable of estimating a geometric map of the target object, as well as its position, orientation, linear velocity, angular velocity, center of mass, principal axes and ratios of inertia. This allows the state of the target object to be propagated to a future time step using Newton's Second Law and Euler's Equation of Rotational Motion, thereby allowing this future state to be used by the planning and control algorithms of the inspector spacecraft. In order to properly evaluate this new approach, it is necessary to gather experimental data.
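
    The propagation capability described above can be sketched as follows under the usual torque-free, force-free assumptions; the inertia ratios, angular rates, time step, and integration scheme are illustrative choices (assuming NumPy and SciPy), not the thesis's actual estimator. Newton's Second Law with zero net force reduces to constant linear velocity, while Euler's equation governs the body-frame angular velocity.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def omega_dot(omega, J):
    """Torque-free Euler equations in the principal body frame.
    J holds the principal moments of inertia; only their ratios matter here."""
    w1, w2, w3 = omega
    J1, J2, J3 = J
    return np.array([
        (J2 - J3) / J1 * w2 * w3,
        (J3 - J1) / J2 * w3 * w1,
        (J1 - J2) / J3 * w1 * w2,
    ])

def propagate(r, v, R, omega, J, dt, steps):
    """Propagate translation (no external force) and rotation of the target."""
    for _ in range(steps):
        # Newton's Second Law with zero net force: constant linear velocity.
        r = r + v * dt
        # RK4 on the angular velocity, then apply a body-frame attitude increment.
        k1 = omega_dot(omega, J)
        k2 = omega_dot(omega + 0.5 * dt * k1, J)
        k3 = omega_dot(omega + 0.5 * dt * k2, J)
        k4 = omega_dot(omega + dt * k3, J)
        omega = omega + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        R = R * Rotation.from_rotvec(omega * dt)
    return r, v, R, omega

# Toy example: a target spinning at roughly 20 rpm mostly about its third axis.
r0 = np.zeros(3)
v0 = np.array([0.01, 0.0, 0.0])            # m/s
R0 = Rotation.identity()
omega0 = np.array([0.05, 0.02, 2.0])       # rad/s (~20 rpm about the spin axis)
J = np.array([1.0, 1.2, 1.8])              # inertia ratios (illustrative)

r, v, R, omega = propagate(r0, v0, R0, omega0, J, dt=0.05, steps=200)
print(r, omega, R.as_quat())
```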